hexsha (stringlengths 40) | size (int64, 6-14.9M) | ext (stringclasses, 1 value) | lang (stringclasses, 1 value) | max_stars_repo_path (stringlengths 6-260) | max_stars_repo_name (stringlengths 6-119) | max_stars_repo_head_hexsha (stringlengths 40-41) | max_stars_repo_licenses (sequence) | max_stars_count (int64, 1-191k, nullable) | max_stars_repo_stars_event_min_datetime (stringlengths 24, nullable) | max_stars_repo_stars_event_max_datetime (stringlengths 24, nullable) | max_issues_repo_path (stringlengths 6-260) | max_issues_repo_name (stringlengths 6-119) | max_issues_repo_head_hexsha (stringlengths 40-41) | max_issues_repo_licenses (sequence) | max_issues_count (int64, 1-67k, nullable) | max_issues_repo_issues_event_min_datetime (stringlengths 24, nullable) | max_issues_repo_issues_event_max_datetime (stringlengths 24, nullable) | max_forks_repo_path (stringlengths 6-260) | max_forks_repo_name (stringlengths 6-119) | max_forks_repo_head_hexsha (stringlengths 40-41) | max_forks_repo_licenses (sequence) | max_forks_count (int64, 1-105k, nullable) | max_forks_repo_forks_event_min_datetime (stringlengths 24, nullable) | max_forks_repo_forks_event_max_datetime (stringlengths 24, nullable) | avg_line_length (float64, 2-1.04M) | max_line_length (int64, 2-11.2M) | alphanum_fraction (float64, 0-1) | cells (sequence) | cell_types (sequence) | cell_type_groups (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e7ed0ea373d60abf2b4fe32372441e2068d72175 | 11,040 | ipynb | Jupyter Notebook | ee_api_colab_setup.ipynb | pahdsn/SENSE_2020_GEE | f550999972cc7ba8a37cc06a33eb7683de724760 | [
"CC0-1.0"
] | null | null | null | ee_api_colab_setup.ipynb | pahdsn/SENSE_2020_GEE | f550999972cc7ba8a37cc06a33eb7683de724760 | [
"CC0-1.0"
] | null | null | null | ee_api_colab_setup.ipynb | pahdsn/SENSE_2020_GEE | f550999972cc7ba8a37cc06a33eb7683de724760 | [
"CC0-1.0"
] | null | null | null | 35.844156 | 284 | 0.522192 | [
[
[
"#@title Copyright 2019 Google LLC. { display-mode: \"form\" }\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"<table class=\"ee-notebook-buttons\" align=\"left\"><td>\n<a target=\"_blank\" href=\"http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/ee-api-colab-setup.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /> Run in Google Colab</a>\n</td><td>\n<a target=\"_blank\" href=\"https://github.com/google/earthengine-api/blob/master/python/examples/ipynb/ee-api-colab-setup.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /> View source on GitHub</a></td></table>",
"_____no_output_____"
],
[
"# Earth Engine Python API Colab Setup\n\nThis notebook demonstrates how to setup the Earth Engine Python API in Colab and provides several examples of how to print and visualize Earth Engine processed data.\n\n## Import API and get credentials\n\nThe Earth Engine API is installed by default in Google Colaboratory so requires only importing and authenticating. These steps must be completed for each new Colab session, if you restart your Colab kernel, or if your Colab virtual machine is recycled due to inactivity.\n\n### Import the API\n\nRun the following cell to import the API into your session.",
"_____no_output_____"
]
],
[
[
"import ee",
"_____no_output_____"
]
],
[
[
"### Authenticate and initialize\n\nRun the `ee.Authenticate` function to authenticate your access to Earth Engine servers and `ee.Initialize` to initialize it. Upon running the following cell you'll be asked to grant Earth Engine access to your Google account. Follow the instructions printed to the cell.",
"_____no_output_____"
]
],
[
[
"# Trigger the authentication flow.\nee.Authenticate()\n\n# Initialize the library.\nee.Initialize()",
"_____no_output_____"
]
],
[
[
"### Test the API\n\nTest the API by printing the elevation of Mount Everest.",
"_____no_output_____"
]
],
[
[
"# Print the elevation of Mount Everest.\ndem = ee.Image('USGS/SRTMGL1_003')\nxy = ee.Geometry.Point([86.9250, 27.9881])\nelev = dem.sample(xy, 30).first().get('elevation').getInfo()\nprint('Mount Everest elevation (m):', elev)",
"_____no_output_____"
]
],
[
[
"## Map visualization\n\n`ee.Image` objects can be displayed to notebook output cells. The following two\nexamples demonstrate displaying a static image and an interactive map.\n",
"_____no_output_____"
],
[
"### Static image\n\nThe `IPython.display` module contains the `Image` function, which can display\nthe results of a URL representing an image generated from a call to the Earth\nEngine `getThumbUrl` function. The following cell will display a thumbnail\nof the global elevation model.",
"_____no_output_____"
]
],
[
[
"# Import the Image function from the IPython.display module. \nfrom IPython.display import Image\n\n# Display a thumbnail of global elevation.\nImage(url = dem.updateMask(dem.gt(0))\n .getThumbURL({'min': 0, 'max': 4000, 'dimensions': 512,\n 'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}))",
"_____no_output_____"
]
],
[
[
"### Interactive map\n\nThe [`folium`](https://python-visualization.github.io/folium/)\nlibrary can be used to display `ee.Image` objects on an interactive\n[Leaflet](https://leafletjs.com/) map. Folium has no default\nmethod for handling tiles from Earth Engine, so one must be defined\nand added to the `folium.Map` module before use.\n\nThe following cell provides an example of adding a method for handing Earth Engine\ntiles and using it to display an elevation model to a Leaflet map.",
"_____no_output_____"
]
],
[
[
"# Import the Folium library.\nimport folium\n\n# Define a method for displaying Earth Engine image tiles to folium map.\ndef add_ee_layer(self, ee_image_object, vis_params, name):\n map_id_dict = ee.Image(ee_image_object).getMapId(vis_params)\n folium.raster_layers.TileLayer(\n tiles = map_id_dict['tile_fetcher'].url_format,\n attr = 'Map Data © <a href=\"https://earthengine.google.com/\">Google Earth Engine</a>',\n name = name,\n overlay = True,\n control = True\n ).add_to(self)\n\n# Add EE drawing method to folium.\nfolium.Map.add_ee_layer = add_ee_layer\n\n# Set visualization parameters.\nvis_params = {\n 'min': 0,\n 'max': 4000,\n 'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}\n\n# Create a folium map object.\nmy_map = folium.Map(location=[20, 0], zoom_start=3, height=500)\n\n# Add the elevation model to the map object.\nmy_map.add_ee_layer(dem.updateMask(dem.gt(0)), vis_params, 'DEM')\n\n# Add a layer control panel to the map.\nmy_map.add_child(folium.LayerControl())\n\n# Display the map.\ndisplay(my_map)",
"_____no_output_____"
]
],
[
[
"## Chart visualization\n\nSome Earth Engine functions produce tabular data that can be plotted by\ndata visualization packages such as `matplotlib`. The following example\ndemonstrates the display of tabular data from Earth Engine as a scatter\nplot. See [Charting in Colaboratory](https://colab.sandbox.google.com/notebooks/charts.ipynb)\nfor more information.",
"_____no_output_____"
]
],
[
[
"# Import the matplotlib.pyplot module.\nimport matplotlib.pyplot as plt\n\n# Fetch a Landsat image.\nimg = ee.Image('LANDSAT/LT05/C01/T1_SR/LT05_034033_20000913')\n\n# Select Red and NIR bands, scale them, and sample 500 points.\nsamp_fc = img.select(['B3','B4']).divide(10000).sample(scale=30, numPixels=500)\n\n# Arrange the sample as a list of lists.\nsamp_dict = samp_fc.reduceColumns(ee.Reducer.toList().repeat(2), ['B3', 'B4'])\nsamp_list = ee.List(samp_dict.get('list'))\n\n# Save server-side ee.List as a client-side Python list.\nsamp_data = samp_list.getInfo()\n\n# Display a scatter plot of Red-NIR sample pairs using matplotlib.\nplt.scatter(samp_data[0], samp_data[1], alpha=0.2)\nplt.xlabel('Red', fontsize=12)\nplt.ylabel('NIR', fontsize=12)\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7ed0f77f87114102e80cb17798924a0b46a64dd | 117,352 | ipynb | Jupyter Notebook | 1_mosaic_data_attention_experiments/3_stage_wise_training/Attention_weights_for_every_data/type4_data/init_1/exp_kernel/both_random_what/lr_0.1/type4_attn_ewts_gaussian_kernel.ipynb | lnpandey/DL_explore_synth_data | 0a5d8b417091897f4c7f358377d5198a155f3f24 | [
"MIT"
] | 2 | 2019-08-24T07:20:35.000Z | 2020-03-27T08:16:59.000Z | 1_mosaic_data_attention_experiments/3_stage_wise_training/Attention_weights_for_every_data/type4_data/init_1/exp_kernel/both_random_what/lr_0.1/type4_attn_ewts_gaussian_kernel.ipynb | lnpandey/DL_explore_synth_data | 0a5d8b417091897f4c7f358377d5198a155f3f24 | [
"MIT"
] | null | null | null | 1_mosaic_data_attention_experiments/3_stage_wise_training/Attention_weights_for_every_data/type4_data/init_1/exp_kernel/both_random_what/lr_0.1/type4_attn_ewts_gaussian_kernel.ipynb | lnpandey/DL_explore_synth_data | 0a5d8b417091897f4c7f358377d5198a155f3f24 | [
"MIT"
] | 3 | 2019-06-21T09:34:32.000Z | 2019-09-19T10:43:07.000Z | 103.943313 | 46,518 | 0.758973 | [
[
[
"import numpy as np\nimport pandas as pd\n\nimport torch\nimport torchvision\nfrom torch.utils.data import Dataset, DataLoader\nfrom torchvision import transforms, utils\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\nfrom matplotlib import pyplot as plt\nfrom myrmsprop import MyRmsprop\n%matplotlib inline\ntorch.backends.cudnn.deterministic = True\ntorch.backends.cudnn.benchmark = False",
"_____no_output_____"
],
[
"train_data = np.load(\"train_type4_data.npy\",allow_pickle=True)\n\ntest_data = np.load(\"test_type4_data.npy\",allow_pickle=True)",
"_____no_output_____"
],
[
"mosaic_list_of_images = train_data[0][\"mosaic_list\"]\nmosaic_label = train_data[0][\"mosaic_label\"]\nfore_idx = train_data[0][\"fore_idx\"]\n\n\ntest_mosaic_list_of_images = test_data[0][\"mosaic_list\"]\ntest_mosaic_label = test_data[0][\"mosaic_label\"]\ntest_fore_idx = test_data[0][\"fore_idx\"]",
"_____no_output_____"
],
[
"class MosaicDataset1(Dataset):\n \"\"\"MosaicDataset dataset.\"\"\"\n\n def __init__(self, mosaic_list, mosaic_label,fore_idx):\n \"\"\"\n Args:\n csv_file (string): Path to the csv file with annotations.\n root_dir (string): Directory with all the images.\n transform (callable, optional): Optional transform to be applied\n on a sample.\n \"\"\"\n self.mosaic = mosaic_list\n self.label = mosaic_label\n self.fore_idx = fore_idx\n \n def __len__(self):\n return len(self.label)\n\n def __getitem__(self, idx):\n return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx]",
"_____no_output_____"
],
[
"batch = 3000\ntrain_dataset = MosaicDataset1(mosaic_list_of_images, mosaic_label, fore_idx)\ntrain_loader = DataLoader( train_dataset,batch_size= batch ,shuffle=False)\n#batch = 2000\n#test_dataset = MosaicDataset1(test_mosaic_list_of_images, test_mosaic_label, test_fore_idx)\n#test_loader = DataLoader(test_dataset,batch_size= batch ,shuffle=False)",
"_____no_output_____"
],
[
"n_batches = 3000//batch\nbg = []\nfor i in range(n_batches):\n torch.manual_seed(i)\n betag = torch.randn(3000,9)#torch.ones((250,9))/9\n bg.append( betag.requires_grad_() )",
"_____no_output_____"
],
[
"len(bg)",
"_____no_output_____"
],
[
"#H = np.zeros((27000,27000))\nfor i, data in enumerate(train_loader, 0):\n print(i) # only one batch\n inputs,_,_ = data\n inputs = torch.reshape(inputs,(27000,2))\n dis = torch.cdist(inputs,inputs)**2\n gamma = -1/torch.median(dis)\n print(gamma)\n H = torch.exp(gamma*dis)\n",
"0\ntensor(-0.0948)\n"
],
[
"H.shape",
"_____no_output_____"
],
[
"class Module2(nn.Module):\n def __init__(self):\n super(Module2, self).__init__()\n self.linear1 = nn.Linear(2,100)\n self.linear2 = nn.Linear(100,3)\n\n def forward(self,x):\n x = F.relu(self.linear1(x))\n x = self.linear2(x)\n return x",
"_____no_output_____"
],
[
"torch.manual_seed(1234)\nwhat_net = Module2().double()\n\n#what_net.load_state_dict(torch.load(\"type4_what_net.pt\"))\nwhat_net = what_net.to(\"cuda\")",
"_____no_output_____"
],
[
"def attn_avg(x,beta):\n y = torch.zeros([batch,2], dtype=torch.float64)\n y = y.to(\"cuda\")\n alpha = F.softmax(beta,dim=1) # alphas\n #print(alpha[0],x[0,:])\n for i in range(9): \n alpha1 = alpha[:,i] \n y = y + torch.mul(alpha1[:,None],x[:,i])\n return y,alpha\n",
"_____no_output_____"
],
[
"def calculate_attn_loss(dataloader,what,criter):\n what.eval()\n r_loss = 0\n alphas = []\n lbls = []\n pred = []\n fidices = []\n correct = 0\n tot = 0\n with torch.no_grad():\n for i, data in enumerate(dataloader, 0):\n inputs, labels,fidx= data\n lbls.append(labels)\n fidices.append(fidx)\n inputs = inputs.double()\n beta = bg[i] # beta for ith batch\n inputs, labels,beta = inputs.to(\"cuda\"),labels.to(\"cuda\"),beta.to(\"cuda\")\n avg,alpha = attn_avg(inputs,beta)\n alpha = alpha.to(\"cuda\")\n outputs = what(avg)\n _, predicted = torch.max(outputs.data, 1)\n correct += sum(predicted == labels)\n tot += len(predicted)\n pred.append(predicted.cpu().numpy())\n alphas.append(alpha.cpu().numpy())\n loss = criter(outputs, labels)\n r_loss += loss.item()\n alphas = np.concatenate(alphas,axis=0)\n pred = np.concatenate(pred,axis=0)\n lbls = np.concatenate(lbls,axis=0)\n fidices = np.concatenate(fidices,axis=0)\n #print(alphas.shape,pred.shape,lbls.shape,fidices.shape) \n analysis = analyse_data(alphas,lbls,pred,fidices)\n return r_loss/(i+1),analysis,correct.item(),tot,correct.item()/tot",
"_____no_output_____"
],
[
"# for param in what_net.parameters():\n# param.requires_grad = False",
"_____no_output_____"
],
[
"\ndef analyse_data(alphas,lbls,predicted,f_idx):\n '''\n analysis data is created here\n '''\n batch = len(predicted)\n amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0\n for j in range (batch):\n focus = np.argmax(alphas[j])\n if(alphas[j][focus] >= 0.5):\n amth +=1\n else:\n alth +=1\n if(focus == f_idx[j] and predicted[j] == lbls[j]):\n ftpt += 1\n elif(focus != f_idx[j] and predicted[j] == lbls[j]):\n ffpt +=1\n elif(focus == f_idx[j] and predicted[j] != lbls[j]):\n ftpf +=1\n elif(focus != f_idx[j] and predicted[j] != lbls[j]):\n ffpf +=1\n #print(sum(predicted==lbls),ftpt+ffpt)\n return [ftpt,ffpt,ftpf,ffpf,amth,alth]",
"_____no_output_____"
],
[
"optim1 = []\nH= H.to(\"cpu\")\nfor i in range(n_batches):\n optim1.append(MyRmsprop([bg[i]],H=H,lr=0.1))\n #optim1.append(optim.RMSprop([bg[i]],lr=0.1))",
"_____no_output_____"
],
[
"# instantiate optimizer\noptimizer_what = optim.RMSprop(what_net.parameters(), lr=0.001)#, momentum=0.9)#,nesterov=True)\n\n\n \n\n\ncriterion = nn.CrossEntropyLoss()\nacti = []\nanalysis_data_tr = []\nanalysis_data_tst = []\nloss_curi_tr = []\nloss_curi_tst = []\nepochs = 200\n\n\n# calculate zeroth epoch loss and FTPT values\nrunning_loss,anlys_data,correct,total,accuracy = calculate_attn_loss(train_loader,what_net,criterion)\nprint('training epoch: [%d ] loss: %.3f correct: %.3f, total: %.3f, accuracy: %.3f' %(0,running_loss,correct,total,accuracy)) \nloss_curi_tr.append(running_loss)\nanalysis_data_tr.append(anlys_data)\n\n\n\n\n# training starts \nfor epoch in range(epochs): # loop over the dataset multiple times\n ep_lossi = []\n running_loss = 0.0\n what_net.train()\n for i, data in enumerate(train_loader, 0):\n # get the inputs\n inputs, labels,_ = data\n inputs = inputs.double()\n beta = bg[i] # alpha for ith batch\n #print(labels)\n inputs, labels,beta = inputs.to(\"cuda\"),labels.to(\"cuda\"),beta.to(\"cuda\")\n \n # zero the parameter gradients\n optimizer_what.zero_grad()\n optim1[i].zero_grad()\n \n # forward + backward + optimize\n avg,alpha = attn_avg(inputs,beta)\n outputs = what_net(avg) \n loss = criterion(outputs, labels)\n\n # print statistics\n running_loss += loss.item()\n #alpha.retain_grad()\n loss.backward(retain_graph=False)\n optimizer_what.step()\n optim1[i].step()\n\n\n running_loss_tr,anls_data,correct,total,accuracy = calculate_attn_loss(train_loader,what_net,criterion)\n analysis_data_tr.append(anls_data)\n loss_curi_tr.append(running_loss_tr) #loss per epoch\n print('training epoch: [%d ] loss: %.3f correct: %.3f, total: %.3f, accuracy: %.3f' %(epoch+1,running_loss_tr,correct,total,accuracy)) \n\n\n \n if running_loss_tr<=0.08:\n break\nprint('Finished Training run ')\nanalysis_data_tr = np.array(analysis_data_tr)\n\n",
"training epoch: [0 ] loss: 1.478 correct: 971.000, total: 3000.000, accuracy: 0.324\ntraining epoch: [1 ] loss: 1.145 correct: 991.000, total: 3000.000, accuracy: 0.330\ntraining epoch: [2 ] loss: 1.138 correct: 1047.000, total: 3000.000, accuracy: 0.349\ntraining epoch: [3 ] loss: 1.124 correct: 1013.000, total: 3000.000, accuracy: 0.338\ntraining epoch: [4 ] loss: 1.103 correct: 1050.000, total: 3000.000, accuracy: 0.350\ntraining epoch: [5 ] loss: 1.097 correct: 1012.000, total: 3000.000, accuracy: 0.337\ntraining epoch: [6 ] loss: 1.093 correct: 1130.000, total: 3000.000, accuracy: 0.377\ntraining epoch: [7 ] loss: 1.091 correct: 1148.000, total: 3000.000, accuracy: 0.383\ntraining epoch: [8 ] loss: 1.089 correct: 1196.000, total: 3000.000, accuracy: 0.399\ntraining epoch: [9 ] loss: 1.086 correct: 1210.000, total: 3000.000, accuracy: 0.403\ntraining epoch: [10 ] loss: 1.083 correct: 1220.000, total: 3000.000, accuracy: 0.407\ntraining epoch: [11 ] loss: 1.080 correct: 1257.000, total: 3000.000, accuracy: 0.419\ntraining epoch: [12 ] loss: 1.076 correct: 1281.000, total: 3000.000, accuracy: 0.427\ntraining epoch: [13 ] loss: 1.070 correct: 1327.000, total: 3000.000, accuracy: 0.442\ntraining epoch: [14 ] loss: 1.063 correct: 1357.000, total: 3000.000, accuracy: 0.452\ntraining epoch: [15 ] loss: 1.055 correct: 1402.000, total: 3000.000, accuracy: 0.467\ntraining epoch: [16 ] loss: 1.045 correct: 1425.000, total: 3000.000, accuracy: 0.475\ntraining epoch: [17 ] loss: 1.032 correct: 1491.000, total: 3000.000, accuracy: 0.497\ntraining epoch: [18 ] loss: 1.018 correct: 1536.000, total: 3000.000, accuracy: 0.512\ntraining epoch: [19 ] loss: 1.003 correct: 1512.000, total: 3000.000, accuracy: 0.504\ntraining epoch: [20 ] loss: 0.989 correct: 1592.000, total: 3000.000, accuracy: 0.531\ntraining epoch: [21 ] loss: 0.986 correct: 1298.000, total: 3000.000, accuracy: 0.433\ntraining epoch: [22 ] loss: 0.995 correct: 1526.000, total: 3000.000, accuracy: 0.509\ntraining epoch: [23 ] loss: 0.979 correct: 1322.000, total: 3000.000, accuracy: 0.441\ntraining epoch: [24 ] loss: 0.946 correct: 1650.000, total: 3000.000, accuracy: 0.550\ntraining epoch: [25 ] loss: 0.904 correct: 1513.000, total: 3000.000, accuracy: 0.504\ntraining epoch: [26 ] loss: 0.875 correct: 1854.000, total: 3000.000, accuracy: 0.618\ntraining epoch: [27 ] loss: 0.850 correct: 1712.000, total: 3000.000, accuracy: 0.571\ntraining epoch: [28 ] loss: 0.828 correct: 1951.000, total: 3000.000, accuracy: 0.650\ntraining epoch: [29 ] loss: 0.807 correct: 1932.000, total: 3000.000, accuracy: 0.644\ntraining epoch: [30 ] loss: 0.788 correct: 2091.000, total: 3000.000, accuracy: 0.697\ntraining epoch: [31 ] loss: 0.770 correct: 2067.000, total: 3000.000, accuracy: 0.689\ntraining epoch: [32 ] loss: 0.752 correct: 2182.000, total: 3000.000, accuracy: 0.727\ntraining epoch: [33 ] loss: 0.736 correct: 2159.000, total: 3000.000, accuracy: 0.720\ntraining epoch: [34 ] loss: 0.721 correct: 2227.000, total: 3000.000, accuracy: 0.742\ntraining epoch: [35 ] loss: 0.707 correct: 2227.000, total: 3000.000, accuracy: 0.742\ntraining epoch: [36 ] loss: 0.694 correct: 2276.000, total: 3000.000, accuracy: 0.759\ntraining epoch: [37 ] loss: 0.682 correct: 2278.000, total: 3000.000, accuracy: 0.759\ntraining epoch: [38 ] loss: 0.670 correct: 2327.000, total: 3000.000, accuracy: 0.776\ntraining epoch: [39 ] loss: 0.660 correct: 2341.000, total: 3000.000, accuracy: 0.780\ntraining epoch: [40 ] loss: 0.650 correct: 2377.000, total: 3000.000, accuracy: 
0.792\ntraining epoch: [41 ] loss: 0.640 correct: 2402.000, total: 3000.000, accuracy: 0.801\ntraining epoch: [42 ] loss: 0.631 correct: 2440.000, total: 3000.000, accuracy: 0.813\ntraining epoch: [43 ] loss: 0.623 correct: 2471.000, total: 3000.000, accuracy: 0.824\ntraining epoch: [44 ] loss: 0.615 correct: 2507.000, total: 3000.000, accuracy: 0.836\ntraining epoch: [45 ] loss: 0.607 correct: 2530.000, total: 3000.000, accuracy: 0.843\ntraining epoch: [46 ] loss: 0.600 correct: 2557.000, total: 3000.000, accuracy: 0.852\ntraining epoch: [47 ] loss: 0.592 correct: 2604.000, total: 3000.000, accuracy: 0.868\ntraining epoch: [48 ] loss: 0.583 correct: 2690.000, total: 3000.000, accuracy: 0.897\ntraining epoch: [49 ] loss: 0.576 correct: 2702.000, total: 3000.000, accuracy: 0.901\ntraining epoch: [50 ] loss: 0.570 correct: 2723.000, total: 3000.000, accuracy: 0.908\ntraining epoch: [51 ] loss: 0.564 correct: 2743.000, total: 3000.000, accuracy: 0.914\ntraining epoch: [52 ] loss: 0.558 correct: 2761.000, total: 3000.000, accuracy: 0.920\ntraining epoch: [53 ] loss: 0.552 correct: 2773.000, total: 3000.000, accuracy: 0.924\ntraining epoch: [54 ] loss: 0.547 correct: 2792.000, total: 3000.000, accuracy: 0.931\ntraining epoch: [55 ] loss: 0.542 correct: 2799.000, total: 3000.000, accuracy: 0.933\ntraining epoch: [56 ] loss: 0.537 correct: 2822.000, total: 3000.000, accuracy: 0.941\ntraining epoch: [57 ] loss: 0.533 correct: 2830.000, total: 3000.000, accuracy: 0.943\ntraining epoch: [58 ] loss: 0.528 correct: 2844.000, total: 3000.000, accuracy: 0.948\ntraining epoch: [59 ] loss: 0.524 correct: 2854.000, total: 3000.000, accuracy: 0.951\ntraining epoch: [60 ] loss: 0.519 correct: 2865.000, total: 3000.000, accuracy: 0.955\ntraining epoch: [61 ] loss: 0.515 correct: 2871.000, total: 3000.000, accuracy: 0.957\ntraining epoch: [62 ] loss: 0.511 correct: 2880.000, total: 3000.000, accuracy: 0.960\ntraining epoch: [63 ] loss: 0.507 correct: 2884.000, total: 3000.000, accuracy: 0.961\ntraining epoch: [64 ] loss: 0.503 correct: 2889.000, total: 3000.000, accuracy: 0.963\ntraining epoch: [65 ] loss: 0.499 correct: 2892.000, total: 3000.000, accuracy: 0.964\ntraining epoch: [66 ] loss: 0.496 correct: 2904.000, total: 3000.000, accuracy: 0.968\ntraining epoch: [67 ] loss: 0.492 correct: 2905.000, total: 3000.000, accuracy: 0.968\ntraining epoch: [68 ] loss: 0.488 correct: 2914.000, total: 3000.000, accuracy: 0.971\ntraining epoch: [69 ] loss: 0.485 correct: 2914.000, total: 3000.000, accuracy: 0.971\ntraining epoch: [70 ] loss: 0.482 correct: 2923.000, total: 3000.000, accuracy: 0.974\ntraining epoch: [71 ] loss: 0.478 correct: 2918.000, total: 3000.000, accuracy: 0.973\ntraining epoch: [72 ] loss: 0.475 correct: 2929.000, total: 3000.000, accuracy: 0.976\ntraining epoch: [73 ] loss: 0.472 correct: 2921.000, total: 3000.000, accuracy: 0.974\ntraining epoch: [74 ] loss: 0.469 correct: 2936.000, total: 3000.000, accuracy: 0.979\ntraining epoch: [75 ] loss: 0.465 correct: 2921.000, total: 3000.000, accuracy: 0.974\ntraining epoch: [76 ] loss: 0.462 correct: 2943.000, total: 3000.000, accuracy: 0.981\ntraining epoch: [77 ] loss: 0.459 correct: 2920.000, total: 3000.000, accuracy: 0.973\ntraining epoch: [78 ] loss: 0.456 correct: 2950.000, total: 3000.000, accuracy: 0.983\ntraining epoch: [79 ] loss: 0.454 correct: 2914.000, total: 3000.000, accuracy: 0.971\ntraining epoch: [80 ] loss: 0.451 correct: 2955.000, total: 3000.000, accuracy: 0.985\ntraining epoch: [81 ] loss: 0.448 correct: 2877.000, total: 
3000.000, accuracy: 0.959\ntraining epoch: [82 ] loss: 0.446 correct: 2947.000, total: 3000.000, accuracy: 0.982\ntraining epoch: [83 ] loss: 0.445 correct: 2742.000, total: 3000.000, accuracy: 0.914\ntraining epoch: [84 ] loss: 0.445 correct: 2825.000, total: 3000.000, accuracy: 0.942\ntraining epoch: [85 ] loss: 0.445 correct: 2429.000, total: 3000.000, accuracy: 0.810\ntraining epoch: [86 ] loss: 0.446 correct: 2488.000, total: 3000.000, accuracy: 0.829\ntraining epoch: [87 ] loss: 0.447 correct: 2191.000, total: 3000.000, accuracy: 0.730\ntraining epoch: [88 ] loss: 0.444 correct: 2397.000, total: 3000.000, accuracy: 0.799\ntraining epoch: [89 ] loss: 0.440 correct: 2298.000, total: 3000.000, accuracy: 0.766\ntraining epoch: [90 ] loss: 0.434 correct: 2677.000, total: 3000.000, accuracy: 0.892\ntraining epoch: [91 ] loss: 0.429 correct: 2613.000, total: 3000.000, accuracy: 0.871\ntraining epoch: [92 ] loss: 0.425 correct: 2889.000, total: 3000.000, accuracy: 0.963\ntraining epoch: [93 ] loss: 0.421 correct: 2830.000, total: 3000.000, accuracy: 0.943\ntraining epoch: [94 ] loss: 0.418 correct: 2959.000, total: 3000.000, accuracy: 0.986\ntraining epoch: [95 ] loss: 0.415 correct: 2907.000, total: 3000.000, accuracy: 0.969\ntraining epoch: [96 ] loss: 0.412 correct: 2970.000, total: 3000.000, accuracy: 0.990\ntraining epoch: [97 ] loss: 0.410 correct: 2937.000, total: 3000.000, accuracy: 0.979\ntraining epoch: [98 ] loss: 0.407 correct: 2972.000, total: 3000.000, accuracy: 0.991\ntraining epoch: [99 ] loss: 0.405 correct: 2947.000, total: 3000.000, accuracy: 0.982\ntraining epoch: [100 ] loss: 0.403 correct: 2975.000, total: 3000.000, accuracy: 0.992\ntraining epoch: [101 ] loss: 0.401 correct: 2962.000, total: 3000.000, accuracy: 0.987\ntraining epoch: [102 ] loss: 0.399 correct: 2976.000, total: 3000.000, accuracy: 0.992\ntraining epoch: [103 ] loss: 0.396 correct: 2965.000, total: 3000.000, accuracy: 0.988\ntraining epoch: [104 ] loss: 0.394 correct: 2978.000, total: 3000.000, accuracy: 0.993\ntraining epoch: [105 ] loss: 0.392 correct: 2968.000, total: 3000.000, accuracy: 0.989\ntraining epoch: [106 ] loss: 0.390 correct: 2980.000, total: 3000.000, accuracy: 0.993\ntraining epoch: [107 ] loss: 0.388 correct: 2969.000, total: 3000.000, accuracy: 0.990\ntraining epoch: [108 ] loss: 0.386 correct: 2982.000, total: 3000.000, accuracy: 0.994\ntraining epoch: [109 ] loss: 0.384 correct: 2970.000, total: 3000.000, accuracy: 0.990\ntraining epoch: [110 ] loss: 0.382 correct: 2983.000, total: 3000.000, accuracy: 0.994\ntraining epoch: [111 ] loss: 0.380 correct: 2973.000, total: 3000.000, accuracy: 0.991\ntraining epoch: [112 ] loss: 0.378 correct: 2985.000, total: 3000.000, accuracy: 0.995\ntraining epoch: [113 ] loss: 0.376 correct: 2974.000, total: 3000.000, accuracy: 0.991\ntraining epoch: [114 ] loss: 0.375 correct: 2987.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [115 ] loss: 0.373 correct: 2976.000, total: 3000.000, accuracy: 0.992\ntraining epoch: [116 ] loss: 0.371 correct: 2987.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [117 ] loss: 0.369 correct: 2977.000, total: 3000.000, accuracy: 0.992\ntraining epoch: [118 ] loss: 0.367 correct: 2987.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [119 ] loss: 0.365 correct: 2979.000, total: 3000.000, accuracy: 0.993\ntraining epoch: [120 ] loss: 0.363 correct: 2987.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [121 ] loss: 0.361 correct: 2980.000, total: 3000.000, accuracy: 0.993\ntraining epoch: [122 ] 
loss: 0.360 correct: 2987.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [123 ] loss: 0.358 correct: 2979.000, total: 3000.000, accuracy: 0.993\ntraining epoch: [124 ] loss: 0.356 correct: 2987.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [125 ] loss: 0.354 correct: 2981.000, total: 3000.000, accuracy: 0.994\ntraining epoch: [126 ] loss: 0.352 correct: 2987.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [127 ] loss: 0.351 correct: 2980.000, total: 3000.000, accuracy: 0.993\ntraining epoch: [128 ] loss: 0.349 correct: 2987.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [129 ] loss: 0.347 correct: 2979.000, total: 3000.000, accuracy: 0.993\ntraining epoch: [130 ] loss: 0.345 correct: 2987.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [131 ] loss: 0.344 correct: 2978.000, total: 3000.000, accuracy: 0.993\ntraining epoch: [132 ] loss: 0.342 correct: 2986.000, total: 3000.000, accuracy: 0.995\ntraining epoch: [133 ] loss: 0.341 correct: 2974.000, total: 3000.000, accuracy: 0.991\ntraining epoch: [134 ] loss: 0.339 correct: 2990.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [135 ] loss: 0.338 correct: 2959.000, total: 3000.000, accuracy: 0.986\ntraining epoch: [136 ] loss: 0.337 correct: 2986.000, total: 3000.000, accuracy: 0.995\ntraining epoch: [137 ] loss: 0.336 correct: 2946.000, total: 3000.000, accuracy: 0.982\ntraining epoch: [138 ] loss: 0.335 correct: 2974.000, total: 3000.000, accuracy: 0.991\ntraining epoch: [139 ] loss: 0.335 correct: 2924.000, total: 3000.000, accuracy: 0.975\ntraining epoch: [140 ] loss: 0.335 correct: 2956.000, total: 3000.000, accuracy: 0.985\ntraining epoch: [141 ] loss: 0.335 correct: 2900.000, total: 3000.000, accuracy: 0.967\ntraining epoch: [142 ] loss: 0.333 correct: 2941.000, total: 3000.000, accuracy: 0.980\ntraining epoch: [143 ] loss: 0.332 correct: 2896.000, total: 3000.000, accuracy: 0.965\ntraining epoch: [144 ] loss: 0.329 correct: 2955.000, total: 3000.000, accuracy: 0.985\ntraining epoch: [145 ] loss: 0.327 correct: 2924.000, total: 3000.000, accuracy: 0.975\ntraining epoch: [146 ] loss: 0.324 correct: 2973.000, total: 3000.000, accuracy: 0.991\ntraining epoch: [147 ] loss: 0.322 correct: 2940.000, total: 3000.000, accuracy: 0.980\ntraining epoch: [148 ] loss: 0.319 correct: 2985.000, total: 3000.000, accuracy: 0.995\ntraining epoch: [149 ] loss: 0.317 correct: 2960.000, total: 3000.000, accuracy: 0.987\ntraining epoch: [150 ] loss: 0.315 correct: 2990.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [151 ] loss: 0.313 correct: 2972.000, total: 3000.000, accuracy: 0.991\ntraining epoch: [152 ] loss: 0.311 correct: 2992.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [153 ] loss: 0.309 correct: 2977.000, total: 3000.000, accuracy: 0.992\ntraining epoch: [154 ] loss: 0.308 correct: 2993.000, total: 3000.000, accuracy: 0.998\ntraining epoch: [155 ] loss: 0.306 correct: 2981.000, total: 3000.000, accuracy: 0.994\ntraining epoch: [156 ] loss: 0.305 correct: 2991.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [157 ] loss: 0.303 correct: 2983.000, total: 3000.000, accuracy: 0.994\ntraining epoch: [158 ] loss: 0.301 correct: 2991.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [159 ] loss: 0.300 correct: 2986.000, total: 3000.000, accuracy: 0.995\ntraining epoch: [160 ] loss: 0.299 correct: 2991.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [161 ] loss: 0.297 correct: 2987.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [162 ] loss: 0.296 correct: 2991.000, 
total: 3000.000, accuracy: 0.997\ntraining epoch: [163 ] loss: 0.294 correct: 2989.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [164 ] loss: 0.293 correct: 2991.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [165 ] loss: 0.291 correct: 2990.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [166 ] loss: 0.290 correct: 2991.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [167 ] loss: 0.289 correct: 2991.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [168 ] loss: 0.287 correct: 2991.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [169 ] loss: 0.286 correct: 2991.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [170 ] loss: 0.285 correct: 2991.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [171 ] loss: 0.283 correct: 2991.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [172 ] loss: 0.282 correct: 2991.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [173 ] loss: 0.281 correct: 2990.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [174 ] loss: 0.279 correct: 2991.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [175 ] loss: 0.278 correct: 2989.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [176 ] loss: 0.277 correct: 2991.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [177 ] loss: 0.275 correct: 2988.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [178 ] loss: 0.274 correct: 2992.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [179 ] loss: 0.273 correct: 2988.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [180 ] loss: 0.272 correct: 2992.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [181 ] loss: 0.270 correct: 2985.000, total: 3000.000, accuracy: 0.995\ntraining epoch: [182 ] loss: 0.269 correct: 2995.000, total: 3000.000, accuracy: 0.998\ntraining epoch: [183 ] loss: 0.268 correct: 2982.000, total: 3000.000, accuracy: 0.994\ntraining epoch: [184 ] loss: 0.267 correct: 2996.000, total: 3000.000, accuracy: 0.999\ntraining epoch: [185 ] loss: 0.266 correct: 2979.000, total: 3000.000, accuracy: 0.993\ntraining epoch: [186 ] loss: 0.265 correct: 2995.000, total: 3000.000, accuracy: 0.998\ntraining epoch: [187 ] loss: 0.264 correct: 2977.000, total: 3000.000, accuracy: 0.992\ntraining epoch: [188 ] loss: 0.264 correct: 2991.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [189 ] loss: 0.263 correct: 2965.000, total: 3000.000, accuracy: 0.988\ntraining epoch: [190 ] loss: 0.262 correct: 2988.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [191 ] loss: 0.262 correct: 2962.000, total: 3000.000, accuracy: 0.987\ntraining epoch: [192 ] loss: 0.261 correct: 2987.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [193 ] loss: 0.260 correct: 2962.000, total: 3000.000, accuracy: 0.987\ntraining epoch: [194 ] loss: 0.258 correct: 2988.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [195 ] loss: 0.257 correct: 2963.000, total: 3000.000, accuracy: 0.988\ntraining epoch: [196 ] loss: 0.255 correct: 2989.000, total: 3000.000, accuracy: 0.996\ntraining epoch: [197 ] loss: 0.254 correct: 2967.000, total: 3000.000, accuracy: 0.989\ntraining epoch: [198 ] loss: 0.252 correct: 2991.000, total: 3000.000, accuracy: 0.997\ntraining epoch: [199 ] loss: 0.250 correct: 2979.000, total: 3000.000, accuracy: 0.993\ntraining epoch: [200 ] loss: 0.249 correct: 2996.000, total: 3000.000, accuracy: 0.999\nFinished Training run \n"
],
[
"columns = [\"epochs\", \"argmax > 0.5\" ,\"argmax < 0.5\", \"focus_true_pred_true\", \"focus_false_pred_true\", \"focus_true_pred_false\", \"focus_false_pred_false\" ]\ndf_train = pd.DataFrame()\ndf_test = pd.DataFrame()\ndf_train[columns[0]] = np.arange(0,epoch+2)\ndf_train[columns[1]] = analysis_data_tr[:,-2]/30\ndf_train[columns[2]] = analysis_data_tr[:,-1]/30\ndf_train[columns[3]] = analysis_data_tr[:,0]/30\ndf_train[columns[4]] = analysis_data_tr[:,1]/30\ndf_train[columns[5]] = analysis_data_tr[:,2]/30\ndf_train[columns[6]] = analysis_data_tr[:,3]/30",
"_____no_output_____"
],
[
"df_train",
"_____no_output_____"
],
[
"fig= plt.figure(figsize=(6,6))\nplt.plot(df_train[columns[0]],df_train[columns[3]], label =\"focus_true_pred_true \")\nplt.plot(df_train[columns[0]],df_train[columns[4]], label =\"focus_false_pred_true \")\nplt.plot(df_train[columns[0]],df_train[columns[5]], label =\"focus_true_pred_false \")\nplt.plot(df_train[columns[0]],df_train[columns[6]], label =\"focus_false_pred_false \")\nplt.title(\"On Train set\")\nplt.legend(loc='center left', bbox_to_anchor=(1, 0.5))\nplt.xlabel(\"epochs\")\nplt.ylabel(\"percentage of data\")\nplt.xticks([0,50,100,150,200])\n#plt.vlines(vline_list,min(min(df_train[columns[3]]/300),min(df_train[columns[4]]/300),min(df_train[columns[5]]/300),min(df_train[columns[6]]/300)), max(max(df_train[columns[3]]/300),max(df_train[columns[4]]/300),max(df_train[columns[5]]/300),max(df_train[columns[6]]/300)),linestyles='dotted')\nplt.show()\nfig.savefig(\"train_analysis.pdf\")\nfig.savefig(\"train_analysis.png\")",
"_____no_output_____"
],
[
"aph = []\nfor i in bg:\n aph.append(F.softmax(i,dim=1).detach().numpy())\naph = np.concatenate(aph,axis=0)\n# torch.save({\n# 'epoch': 500,\n# 'model_state_dict': what_net.state_dict(),\n# #'optimizer_state_dict': optimizer_what.state_dict(),\n# \"optimizer_alpha\":optim1,\n# \"FTPT_analysis\":analysis_data_tr,\n# \"alpha\":aph\n\n# }, \"type4_what_net_500.pt\")",
"_____no_output_____"
],
[
"aph[0]",
"_____no_output_____"
],
[
"xx,yy= np.meshgrid(np.arange(1,8,0.01),np.arange(2,9,0.01))\nX = np.concatenate((xx.reshape(-1,1),yy.reshape(-1,1)),axis=1)\nX = torch.Tensor(X).double().to(\"cuda\")\nY1 = what_net(X)",
"_____no_output_____"
],
[
"Y1 = Y1.to(\"cpu\")\nY1 = Y1.detach().numpy()\nY1 = torch.softmax(torch.Tensor(Y1),dim=1)\n_,Z4= torch.max(Y1,1)\nZ1 = Y1[:,0]\nZ2 = Y1[:,1]\nZ3 = Y1[:,2]",
"_____no_output_____"
],
[
"X = X.to(\"cpu\")",
"_____no_output_____"
],
[
"data = np.load(\"type_4_data.npy\",allow_pickle=True)\nx = data[0][\"X\"]\ny = data[0][\"Y\"]\nidx= []\nfor i in range(10):\n print(i,sum(y==i))\n idx.append(y==i)",
"0 482\n1 485\n2 536\n3 504\n4 493\n5 513\n6 497\n7 486\n8 522\n9 482\n"
],
[
"avrg = []\nwith torch.no_grad():\n for i, data in enumerate(train_loader):\n inputs , labels , fore_idx = data\n inputs = inputs.double()\n inputs, labels = inputs.to(\"cuda\"),labels.to(\"cuda\")\n beta = bg[i]\n beta = beta.to(\"cuda\")\n avg,alpha = attn_avg(inputs,beta)\n \n avrg.append(avg.detach().cpu().numpy())\navrg= np.concatenate(avrg,axis=0)",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(6,6))\n#plt.scatter(X[:,0],X[:,1],c=Z4)\nZ4 = Z4.reshape(xx.shape)\nplt.contourf(xx, yy, Z4, alpha=0.4)\nfor i in range(3):\n plt.scatter(x[idx[i],0],x[idx[i],1],label=\"class_\"+str(i),alpha=0.8)\nplt.legend(loc='center left', bbox_to_anchor=(1, 0.5))\nplt.scatter(avrg[:,0],avrg[:,1],alpha=0.2)\nplt.savefig(\"decision_boundary.png\",bbox_inches=\"tight\")\n",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7ed1276a1061fd8cba4ebdc42eb146ee083447a | 782,025 | ipynb | Jupyter Notebook | nbs/dl1/lesson2-download.ipynb | piggybox/course-v3 | 78375a996e5a2b7befa19a4d231f2d082d9fb1ab | [
"Apache-2.0"
] | null | null | null | nbs/dl1/lesson2-download.ipynb | piggybox/course-v3 | 78375a996e5a2b7befa19a4d231f2d082d9fb1ab | [
"Apache-2.0"
] | 3 | 2020-02-25T23:31:17.000Z | 2022-02-26T05:13:20.000Z | nbs/dl1/lesson2-download.ipynb | piggybox/course-v3 | 78375a996e5a2b7befa19a4d231f2d082d9fb1ab | [
"Apache-2.0"
] | null | null | null | 542.319695 | 446,052 | 0.930193 | [
[
[
"# Creating your own dataset from Google Images\n\n*by: Francisco Ingham and Jeremy Howard. Inspired by [Adrian Rosebrock](https://www.pyimagesearch.com/2017/12/04/how-to-create-a-deep-learning-dataset-using-google-images/)*",
"_____no_output_____"
],
[
"In this tutorial we will see how to easily create an image dataset through Google Images. **Note**: You will have to repeat these steps for any new category you want to Google (e.g once for dogs and once for cats).",
"_____no_output_____"
]
],
[
[
"from fastai.vision import *",
"_____no_output_____"
]
],
[
[
"## Get a list of URLs",
"_____no_output_____"
],
[
"### Search and scroll",
"_____no_output_____"
],
[
"Go to [Google Images](http://images.google.com) and search for the images you are interested in. The more specific you are in your Google Search, the better the results and the less manual pruning you will have to do.\n\nScroll down until you've seen all the images you want to download, or until you see a button that says 'Show more results'. All the images you scrolled past are now available to download. To get more, click on the button, and continue scrolling. The maximum number of images Google Images shows is 700.\n\nIt is a good idea to put things you want to exclude into the search query, for instance if you are searching for the Eurasian wolf, \"canis lupus lupus\", it might be a good idea to exclude other variants:\n\n \"canis lupus lupus\" -dog -arctos -familiaris -baileyi -occidentalis\n\nYou can also limit your results to show only photos by clicking on Tools and selecting Photos from the Type dropdown.",
"_____no_output_____"
],
[
"### Download into file",
"_____no_output_____"
],
[
"Now you must run some Javascript code in your browser which will save the URLs of all the images you want for you dataset.\n\nPress <kbd>Ctrl</kbd><kbd>Shift</kbd><kbd>J</kbd> in Windows/Linux and <kbd>Cmd</kbd><kbd>Opt</kbd><kbd>J</kbd> in Mac, and a small window the javascript 'Console' will appear. That is where you will paste the JavaScript commands.\n\nYou will need to get the urls of each of the images. Before running the following commands, you may want to disable ad blocking extensions (uBlock, AdBlockPlus etc.) in Chrome. Otherwise window.open() coomand doesn't work. Then you can run the following commands:\n\n```javascript\nurls = Array.from(document.querySelectorAll('.rg_di .rg_meta')).map(el=>JSON.parse(el.textContent).ou);\nwindow.open('data:text/csv;charset=utf-8,' + escape(urls.join('\\n')));\n```",
"_____no_output_____"
],
[
"### Create directory and upload urls file into your server",
"_____no_output_____"
],
[
"Choose an appropriate name for your labeled images. You can run these steps multiple times to create different labels.",
"_____no_output_____"
]
],
[
[
"folder = 'black'\nfile = 'urls_black.csv'",
"_____no_output_____"
],
[
"folder = 'teddys'\nfile = 'urls_teddys.csv'",
"_____no_output_____"
],
[
"folder = 'grizzly'\nfile = 'urls_grizzly.csv'",
"_____no_output_____"
]
],
[
[
"You will need to run this cell once per each category.",
"_____no_output_____"
]
],
[
[
"path = Path('data/bears')\ndest = path/folder\ndest.mkdir(parents=True, exist_ok=True)",
"_____no_output_____"
],
[
"path.ls()",
"_____no_output_____"
]
],
[
[
"Finally, upload your urls file. You just need to press 'Upload' in your working directory and select your file, then click 'Upload' for each of the displayed files.\n\n",
"_____no_output_____"
],
[
"## Download images",
"_____no_output_____"
],
[
"Now you will need to download your images from their respective urls.\n\nfast.ai has a function that allows you to do just that. You just have to specify the urls filename as well as the destination folder and this function will download and save all images that can be opened. If they have some problem in being opened, they will not be saved.\n\nLet's download our images! Notice you can choose a maximum number of images to be downloaded. In this case we will not download all the urls.\n\nYou will need to run this line once for every category.",
"_____no_output_____"
]
],
[
[
"classes = ['teddys','grizzly','black']",
"_____no_output_____"
],
[
"download_images(path/file, dest, max_pics=200)",
"_____no_output_____"
],
[
"# If you have problems download, try with `max_workers=0` to see exceptions:\ndownload_images(path/file, dest, max_pics=20, max_workers=0)",
"_____no_output_____"
]
],
[
[
"Then we can remove any images that can't be opened:",
"_____no_output_____"
]
],
[
[
"for c in classes:\n print(c)\n verify_images(path/c, delete=True, max_size=500)",
"teddys\n"
]
],
[
[
"## View data",
"_____no_output_____"
]
],
[
[
"np.random.seed(42)\ndata = ImageDataBunch.from_folder(path, train=\".\", valid_pct=0.2,\n ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)",
"_____no_output_____"
],
[
"# If you already cleaned your data, run this cell instead of the one before\n# np.random.seed(42)\n# data = ImageDataBunch.from_csv(path, folder=\".\", valid_pct=0.2, csv_labels='cleaned.csv',\n# ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)",
"_____no_output_____"
]
],
[
[
"Good! Let's take a look at some of our pictures then.",
"_____no_output_____"
]
],
[
[
"data.classes",
"_____no_output_____"
],
[
"data.show_batch(rows=3, figsize=(7,8))",
"_____no_output_____"
],
[
"data.classes, data.c, len(data.train_ds), len(data.valid_ds)",
"_____no_output_____"
]
],
[
[
"## Train model",
"_____no_output_____"
]
],
[
[
"learn = cnn_learner(data, models.resnet34, metrics=error_rate)",
"_____no_output_____"
],
[
"learn.fit_one_cycle(4)",
"_____no_output_____"
],
[
"learn.save('stage-1')",
"_____no_output_____"
],
[
"learn.unfreeze()",
"_____no_output_____"
],
[
"learn.lr_find()",
"_____no_output_____"
],
[
"learn.recorder.plot()",
"_____no_output_____"
],
[
"learn.fit_one_cycle(2, max_lr=slice(3e-5,3e-4))",
"_____no_output_____"
],
[
"learn.save('stage-2')",
"_____no_output_____"
]
],
[
[
"## Interpretation",
"_____no_output_____"
]
],
[
[
"learn.load('stage-2');",
"_____no_output_____"
],
[
"interp = ClassificationInterpretation.from_learner(learn)",
"_____no_output_____"
],
[
"interp.plot_confusion_matrix()",
"_____no_output_____"
]
],
[
[
"## Cleaning Up\n\nSome of our top losses aren't due to bad performance by our model. There are images in our data set that shouldn't be.\n\nUsing the `ImageCleaner` widget from `fastai.widgets` we can prune our top losses, removing photos that don't belong.",
"_____no_output_____"
]
],
[
[
"from fastai.widgets import *",
"_____no_output_____"
]
],
[
[
"First we need to get the file paths from our top_losses. We can do this with `.from_toplosses`. We then feed the top losses indexes and corresponding dataset to `ImageCleaner`.\n\nNotice that the widget will not delete images directly from disk but it will create a new csv file `cleaned.csv` from where you can create a new ImageDataBunch with the corrected labels to continue training your model.",
"_____no_output_____"
],
[
"In order to clean the entire set of images, we need to create a new dataset without the split. The video lecture demostrated the use of the `ds_type` param which no longer has any effect. See [the thread](https://forums.fast.ai/t/duplicate-widget/30975/10) for more details.",
"_____no_output_____"
]
],
[
[
"db = (ImageList.from_folder(path)\n .no_split()\n .label_from_folder()\n .transform(get_transforms(), size=224)\n .databunch()\n )",
"_____no_output_____"
],
[
"# If you already cleaned your data using indexes from `from_toplosses`,\n# run this cell instead of the one before to proceed with removing duplicates.\n# Otherwise all the results of the previous step would be overwritten by\n# the new run of `ImageCleaner`.\n\n# db = (ImageList.from_csv(path, 'cleaned.csv', folder='.')\n# .no_split()\n# .label_from_df()\n# .transform(get_transforms(), size=224)\n# .databunch()\n# )",
"_____no_output_____"
]
],
[
[
"Then we create a new learner to use our new databunch with all the images.",
"_____no_output_____"
]
],
[
[
"learn_cln = cnn_learner(db, models.resnet34, metrics=error_rate)\n\nlearn_cln.load('stage-2');",
"_____no_output_____"
],
[
"ds, idxs = DatasetFormatter().from_toplosses(learn_cln)",
"_____no_output_____"
]
],
[
[
"Make sure you're running this notebook in Jupyter Notebook, not Jupyter Lab. That is accessible via [/tree](/tree), not [/lab](/lab). Running the `ImageCleaner` widget in Jupyter Lab is [not currently supported](https://github.com/fastai/fastai/issues/1539).",
"_____no_output_____"
]
],
[
[
"ImageCleaner(ds, idxs, path)",
"_____no_output_____"
]
],
[
[
"Flag photos for deletion by clicking 'Delete'. Then click 'Next Batch' to delete flagged photos and keep the rest in that row. `ImageCleaner` will show you a new row of images until there are no more to show. In this case, the widget will show you images until there are none left from `top_losses.ImageCleaner(ds, idxs)`",
"_____no_output_____"
],
[
"You can also find duplicates in your dataset and delete them! To do this, you need to run `.from_similars` to get the potential duplicates' ids and then run `ImageCleaner` with `duplicates=True`. The API works in a similar way as with misclassified images: just choose the ones you want to delete and click 'Next Batch' until there are no more images left.",
"_____no_output_____"
],
[
"Make sure to recreate the databunch and `learn_cln` from the `cleaned.csv` file. Otherwise the file would be overwritten from scratch, loosing all the results from cleaning the data from toplosses.",
"_____no_output_____"
]
],
[
[
"ds, idxs = DatasetFormatter().from_similars(learn_cln)",
"Getting activations...\n"
],
[
"ImageCleaner(ds, idxs, path, duplicates=True)",
"_____no_output_____"
]
],
[
[
"Remember to recreate your ImageDataBunch from your `cleaned.csv` to include the changes you made in your data!",
"_____no_output_____"
],
[
"## Putting your model in production",
"_____no_output_____"
],
[
"First thing first, let's export the content of our `Learner` object for production:",
"_____no_output_____"
]
],
[
[
"learn.export()",
"_____no_output_____"
]
],
[
[
"This will create a file named 'export.pkl' in the directory where we were working that contains everything we need to deploy our model (the model, the weights but also some metadata like the classes or the transforms/normalization used).",
"_____no_output_____"
],
[
"You probably want to use CPU for inference, except at massive scale (and you almost certainly don't need to train in real-time). If you don't have a GPU that happens automatically. You can test your model on CPU like so:",
"_____no_output_____"
]
],
[
[
"defaults.device = torch.device('cpu')",
"_____no_output_____"
],
[
"img = open_image(path/'black'/'00000021.jpg')\nimg",
"_____no_output_____"
]
],
[
[
"We create our `Learner` in production enviromnent like this, jsut make sure that `path` contains the file 'export.pkl' from before.",
"_____no_output_____"
]
],
[
[
"learn = load_learner(path)",
"_____no_output_____"
],
[
"pred_class,pred_idx,outputs = learn.predict(img)\npred_class",
"_____no_output_____"
]
],
[
[
"So you might create a route something like this ([thanks](https://github.com/simonw/cougar-or-not) to Simon Willison for the structure of this code):\n\n```python\[email protected](\"/classify-url\", methods=[\"GET\"])\nasync def classify_url(request):\n bytes = await get_bytes(request.query_params[\"url\"])\n img = open_image(BytesIO(bytes))\n _,_,losses = learner.predict(img)\n return JSONResponse({\n \"predictions\": sorted(\n zip(cat_learner.data.classes, map(float, losses)),\n key=lambda p: p[1],\n reverse=True\n )\n })\n```\n\n(This example is for the [Starlette](https://www.starlette.io/) web app toolkit.)",
"_____no_output_____"
],
[
"## Things that can go wrong",
"_____no_output_____"
],
[
"- Most of the time things will train fine with the defaults\n- There's not much you really need to tune (despite what you've heard!)\n- Most likely are\n - Learning rate\n - Number of epochs",
"_____no_output_____"
],
[
"### Learning rate (LR) too high",
"_____no_output_____"
]
],
[
[
"learn = cnn_learner(data, models.resnet34, metrics=error_rate)",
"_____no_output_____"
],
[
"learn.fit_one_cycle(1, max_lr=0.5)",
"Total time: 00:13\nepoch train_loss valid_loss error_rate \n1 12.220007 1144188288.000000 0.765957 (00:13)\n\n"
]
],
[
[
"### Learning rate (LR) too low",
"_____no_output_____"
]
],
[
[
"learn = cnn_learner(data, models.resnet34, metrics=error_rate)",
"_____no_output_____"
]
],
[
[
"Previously we had this result:\n\n```\nTotal time: 00:57\nepoch train_loss valid_loss error_rate\n1 1.030236 0.179226 0.028369 (00:14)\n2 0.561508 0.055464 0.014184 (00:13)\n3 0.396103 0.053801 0.014184 (00:13)\n4 0.316883 0.050197 0.021277 (00:15)\n```",
"_____no_output_____"
]
],
[
[
"learn.fit_one_cycle(5, max_lr=1e-5)",
"Total time: 01:07\nepoch train_loss valid_loss error_rate\n1 1.349151 1.062807 0.609929 (00:13)\n2 1.373262 1.045115 0.546099 (00:13)\n3 1.346169 1.006288 0.468085 (00:13)\n4 1.334486 0.978713 0.453901 (00:13)\n5 1.320978 0.978108 0.446809 (00:13)\n\n"
],
[
"learn.recorder.plot_losses()",
"_____no_output_____"
]
],
[
[
"As well as taking a really long time, it's getting too many looks at each image, so may overfit.",
"_____no_output_____"
],
[
"### Too few epochs",
"_____no_output_____"
]
],
[
[
"learn = cnn_learner(data, models.resnet34, metrics=error_rate, pretrained=False)",
"_____no_output_____"
],
[
"learn.fit_one_cycle(1)",
"Total time: 00:14\nepoch train_loss valid_loss error_rate\n1 0.602823 0.119616 0.049645 (00:14)\n\n"
]
],
[
[
"### Too many epochs",
"_____no_output_____"
]
],
[
[
"np.random.seed(42)\ndata = ImageDataBunch.from_folder(path, train=\".\", valid_pct=0.9, bs=32, \n ds_tfms=get_transforms(do_flip=False, max_rotate=0, max_zoom=1, max_lighting=0, max_warp=0\n ),size=224, num_workers=4).normalize(imagenet_stats)",
"_____no_output_____"
],
[
"learn = cnn_learner(data, models.resnet50, metrics=error_rate, ps=0, wd=0)\nlearn.unfreeze()",
"_____no_output_____"
],
[
"learn.fit_one_cycle(40, slice(1e-6,1e-4))",
"Total time: 06:39\nepoch train_loss valid_loss error_rate\n1 1.513021 1.041628 0.507326 (00:13)\n2 1.290093 0.994758 0.443223 (00:09)\n3 1.185764 0.936145 0.410256 (00:09)\n4 1.117229 0.838402 0.322344 (00:09)\n5 1.022635 0.734872 0.252747 (00:09)\n6 0.951374 0.627288 0.192308 (00:10)\n7 0.916111 0.558621 0.184982 (00:09)\n8 0.839068 0.503755 0.177656 (00:09)\n9 0.749610 0.433475 0.144689 (00:09)\n10 0.678583 0.367560 0.124542 (00:09)\n11 0.615280 0.327029 0.100733 (00:10)\n12 0.558776 0.298989 0.095238 (00:09)\n13 0.518109 0.266998 0.084249 (00:09)\n14 0.476290 0.257858 0.084249 (00:09)\n15 0.436865 0.227299 0.067766 (00:09)\n16 0.457189 0.236593 0.078755 (00:10)\n17 0.420905 0.240185 0.080586 (00:10)\n18 0.395686 0.255465 0.082418 (00:09)\n19 0.373232 0.263469 0.080586 (00:09)\n20 0.348988 0.258300 0.080586 (00:10)\n21 0.324616 0.261346 0.080586 (00:09)\n22 0.311310 0.236431 0.071429 (00:09)\n23 0.328342 0.245841 0.069597 (00:10)\n24 0.306411 0.235111 0.064103 (00:10)\n25 0.289134 0.227465 0.069597 (00:09)\n26 0.284814 0.226022 0.064103 (00:09)\n27 0.268398 0.222791 0.067766 (00:09)\n28 0.255431 0.227751 0.073260 (00:10)\n29 0.240742 0.235949 0.071429 (00:09)\n30 0.227140 0.225221 0.075092 (00:09)\n31 0.213877 0.214789 0.069597 (00:09)\n32 0.201631 0.209382 0.062271 (00:10)\n33 0.189988 0.210684 0.065934 (00:09)\n34 0.181293 0.214666 0.073260 (00:09)\n35 0.184095 0.222575 0.073260 (00:09)\n36 0.194615 0.229198 0.076923 (00:10)\n37 0.186165 0.218206 0.075092 (00:09)\n38 0.176623 0.207198 0.062271 (00:10)\n39 0.166854 0.207256 0.065934 (00:10)\n40 0.162692 0.206044 0.062271 (00:09)\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7ed284d1f2728372b200cfc602c174d60d7dcb4 | 146,163 | ipynb | Jupyter Notebook | logs/xlm_1e-5_xnli_ende/statistics.ipynb | 28Smiles/SAS-AIED2020 | 13eb600fedb6130f2a353d1a66e7eace774345e3 | [
"MIT"
] | null | null | null | logs/xlm_1e-5_xnli_ende/statistics.ipynb | 28Smiles/SAS-AIED2020 | 13eb600fedb6130f2a353d1a66e7eace774345e3 | [
"MIT"
] | null | null | null | logs/xlm_1e-5_xnli_ende/statistics.ipynb | 28Smiles/SAS-AIED2020 | 13eb600fedb6130f2a353d1a66e7eace774345e3 | [
"MIT"
] | null | null | null | 466.974441 | 17,668 | 0.941825 | [
[
[
"import os\nimport pandas as pd\nfrom xml.dom import minidom\nimport numpy as np\nfrom mpl_toolkits import mplot3d\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom sklearn.metrics import confusion_matrix, f1_score, accuracy_score, precision_score, recall_score, matthews_corrcoef, r2_score, roc_auc_score, auc, roc_curve",
"_____no_output_____"
],
[
"def load_semeval(t = 'train', lang = 'en'):\n semeval_keys = {\n 'correct': 2,\n 'incorrect': 1,\n 'contradictory': 0\n }\n \n file = minidom.parse('../../datasets/semeval2013-3way-' + lang + '/' + t + '.xml')\n \n for exercise in file.getElementsByTagName('exercise'):\n \n yield (\n [ reference.firstChild.data for reference in exercise.getElementsByTagName('reference') ],\n [ (answer.firstChild.data, answer.attributes['accuracy'].value) for answer in exercise.getElementsByTagName('answer') ]\n )",
"_____no_output_____"
],
[
"semeval = pd.DataFrame([\n (t, len(r), len(a))\n for t in [ 'train', 'unseen_answers', 'unseen_questions', 'unseen_domains' ]\n for r, a in load_semeval(t, 'en') \n], columns = [ 'dataset', 'references', 'answers' ])",
"_____no_output_____"
],
[
"plt.axis('equal');\nplt.pie(semeval.groupby([ 'dataset' ]).sum().reset_index()['answers'], \n labels = semeval.groupby([ 'dataset' ]).count().reset_index()['dataset'].map(str) + \n \" (\" + \n semeval.groupby([ 'dataset' ]).sum().reset_index()['answers'].map(str) +\n \")\"\n)\nplt.title('Answer Count')\nplt.show()",
"_____no_output_____"
],
[
"plt.axis('equal');\nplt.pie(semeval.groupby([ 'dataset' ]).sum().reset_index()['references'], \n labels = semeval.groupby([ 'dataset' ]).count().reset_index()['dataset'].map(str) + \n \" (\" + \n semeval.groupby([ 'dataset' ]).sum().reset_index()['references'].map(str) +\n \")\"\n)\nplt.title('Reference Count')\nplt.show()",
"_____no_output_____"
],
[
"pds = semeval.groupby([ 'dataset' ]).count().reset_index()\npds['references'] = semeval.groupby([ 'dataset' ]).sum().reset_index()['references'] / semeval.groupby([ 'dataset' ]).count().reset_index()['references']\n\nplt.figure(figsize=(8,4))\nsns.barplot(x = 'dataset', y = 'references', data=pds)\nplt.title('Avg. Refences Per Exercise')\nplt.show()",
"_____no_output_____"
],
[
"sns.barplot(\n x = 'references',\n y = 'answers',\n hue = 'dataset',\n ci= None,\n data = semeval\n)\nplt.title('Distribution of Answer Count to Reference Count')\nplt.show()",
"_____no_output_____"
],
[
"sns.scatterplot(\n x = 'references',\n y = 'answers',\n hue = 'dataset',\n s = 50,\n data = semeval[(semeval['dataset'] == 'unseen_questions') | (semeval['dataset'] == 'unseen_domains')]\n)\nplt.legend(loc='lower right')\nplt.title('Distribution of Answer Count to Reference Count')\nplt.show()",
"_____no_output_____"
],
[
"df = pd.DataFrame([\n (t, len(r), ans[1], 1)\n for t in [ 'train', 'unseen_answers', 'unseen_questions', 'unseen_domains' ]\n for r, a in load_semeval(t, 'en')\n for ans in a\n], columns = [ 'dataset', 'references', 'label', 'labels' ]).groupby([ 'dataset', 'label', 'references' ]).count().reset_index()\n\nfor t in [ 'train', 'unseen_answers', 'unseen_questions', 'unseen_domains' ]:\n sns.barplot(\n x = 'references',\n y = 'labels',\n hue = 'label',\n ci = None,\n data = df[df['dataset'] == t]\n )\n plt.title('Distribution of Answer Count to Reference Count (%s)' % t)\n plt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7ed3f165042ecf082e821a57f89ecbabc8ce1c8 | 255,506 | ipynb | Jupyter Notebook | tutorials_and_examples/UV_vector.ipynb | shaunwbell/ipythonnb | c2f35b1524dc14fb0f12a8846a794af1bd3b3d3a | [
"MIT"
] | 3 | 2017-03-23T16:52:44.000Z | 2022-03-08T16:53:29.000Z | tutorials_and_examples/UV_vector.ipynb | shaunwbell/ipythonnb | c2f35b1524dc14fb0f12a8846a794af1bd3b3d3a | [
"MIT"
] | null | null | null | tutorials_and_examples/UV_vector.ipynb | shaunwbell/ipythonnb | c2f35b1524dc14fb0f12a8846a794af1bd3b3d3a | [
"MIT"
] | 2 | 2017-03-30T22:01:25.000Z | 2019-10-17T17:30:29.000Z | 1,264.881188 | 249,252 | 0.95759 | [
[
[
"import xarray as xa\n",
"/Users/bell/anaconda2/envs/py37/lib/python3.7/site-packages/dask/config.py:168: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.\n data = yaml.load(f.read()) or {}\n/Users/bell/anaconda2/envs/py37/lib/python3.7/site-packages/distributed/config.py:20: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.\n defaults = yaml.load(f)\n"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport cartopy\nimport cartopy.crs as ccrs\nfrom cartopy.feature import NaturalEarthFeature\n\n\ndef sample_data(shape=(20, 30)):\n \"\"\"\n Returns ``(x, y, u, v, crs)`` of some vector data\n computed mathematically. The returned crs will be a rotated\n pole CRS, meaning that the vectors will be unevenly spaced in\n regular PlateCarree space.\n\n \"\"\"\n crs = ccrs.RotatedPole(pole_longitude=177.5, pole_latitude=37.5)\n\n x = np.linspace(311.9, 391.1, shape[1])\n y = np.linspace(-23.6, 24.8, shape[0])\n\n x2d, y2d = np.meshgrid(x, y)\n u = 10 * (2 * np.cos(2 * np.deg2rad(x2d) + 3 * np.deg2rad(y2d + 30)) ** 2)\n v = 20 * np.cos(6 * np.deg2rad(x2d))\n\n return x, y, u, v, crs",
"_____no_output_____"
],
[
"xdf = xa.open_dataset('/Users/bell/Downloads/for_Phyllis/uo_168_180_10_mon2014.nc')\nxdf.info",
"_____no_output_____"
],
[
"u = xdf.uo.values[1,1,:,:]\nu[u>999]=np.nan\n\nv = xdf.vo.values[1,1,:,:]\nv[v>999]=np.nan",
"/Users/bell/anaconda2/envs/py37/lib/python3.7/site-packages/ipykernel_launcher.py:2: RuntimeWarning: invalid value encountered in greater\n \n/Users/bell/anaconda2/envs/py37/lib/python3.7/site-packages/ipykernel_launcher.py:5: RuntimeWarning: invalid value encountered in greater\n \"\"\"\n"
],
[
"fig = plt.figure(1,figsize=(9,9))\nax = plt.axes(projection=ccrs.Orthographic(-135, 90))\n\ncoast = NaturalEarthFeature(category='physical', scale='50m',\n facecolor='grey', name='land')\n\nax.set_global()\nax.gridlines()\n\nax.quiver(xdf.lon.values, xdf.lat.values, u, v, transform=ccrs.PlateCarree())\nax.set_extent([-180, -130, 55, 90], crs=ccrs.PlateCarree())\nfeature = ax.add_feature(coast, edgecolor='black')\n\nfig.savefig('demo.png',dpi=300)",
"/Users/bell/anaconda2/envs/py37/lib/python3.7/site-packages/cartopy/mpl/geoaxes.py:1752: RuntimeWarning: invalid value encountered in less\n u, v = self.projection.transform_vectors(t, x, y, u, v)\n/Users/bell/anaconda2/envs/py37/lib/python3.7/site-packages/cartopy/mpl/geoaxes.py:1752: RuntimeWarning: invalid value encountered in greater\n u, v = self.projection.transform_vectors(t, x, y, u, v)\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7ed5748510df8b9d4ee207e554f1565fba6580c | 191,666 | ipynb | Jupyter Notebook | Pymaceuticals/.ipynb_checkpoints/keely_pymaceuticals_starter-checkpoint.ipynb | keelywright1/matplotlib-challenge | 1497f907fb675dd4a6af93b8814d606fe531d9ef | [
"ADSL"
] | null | null | null | Pymaceuticals/.ipynb_checkpoints/keely_pymaceuticals_starter-checkpoint.ipynb | keelywright1/matplotlib-challenge | 1497f907fb675dd4a6af93b8814d606fe531d9ef | [
"ADSL"
] | null | null | null | Pymaceuticals/.ipynb_checkpoints/keely_pymaceuticals_starter-checkpoint.ipynb | keelywright1/matplotlib-challenge | 1497f907fb675dd4a6af93b8814d606fe531d9ef | [
"ADSL"
] | null | null | null | 123.178663 | 25,580 | 0.824784 | [
[
[
"## Observations and Insights ",
"_____no_output_____"
]
],
[
[
"# Dependencies and Setup\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport scipy.stats as stats\nimport numpy as np\nfrom scipy.stats import linregress\n\n# Study data files\nmouse_metadata_path = \"data/Mouse_metadata.csv\"\nstudy_results_path = \"data/Study_results.csv\"\n\n# Read the mouse data and the study results\nmouse_metadata = pd.read_csv(mouse_metadata_path)\nstudy_results = pd.read_csv(study_results_path)\n\n# Combine the data into a single dataset\nstudy_data_complete = pd.merge(study_results, mouse_metadata, how=\"left\", on=\"Mouse ID\")\n\n# Display the data table for preview\nstudy_data_complete.head()",
"_____no_output_____"
],
[
"# Checking the number of mice.\nstudy_data_complete['Mouse ID'].nunique()",
"_____no_output_____"
],
[
"# Optional: Get all the data for the duplicate mouse ID. \nstudy_data_complete[study_data_complete[\"Mouse ID\"] == \"g989\"]\n",
"_____no_output_____"
],
[
"# Create a clean DataFrame by dropping the duplicate mouse by its ID.\nclean = study_data_complete[study_data_complete[\"Mouse ID\"] != \"g989\"]\nclean.head()",
"_____no_output_____"
],
[
"# Checking the number of mice in the clean DataFrame.\nclean['Mouse ID'].nunique()\n",
"_____no_output_____"
]
],
[
[
"## Summary Statistics",
"_____no_output_____"
]
],
[
[
"clean",
"_____no_output_____"
],
[
"# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume \n# for each regimen\n\ngrp = clean.groupby('Drug Regimen')['Tumor Volume (mm3)']\npd.DataFrame({'mean':grp.mean(),'median':grp.median(),'var':grp.var(),'std':grp.std(),'sem':grp.sem()})",
"_____no_output_____"
],
[
"# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen\n# Using the aggregation method, produce the same summary statistics in a single line\n\ngrp.agg(['mean','median','var','std','sem'])",
"_____no_output_____"
]
],
[
[
"## Bar and Pie Charts",
"_____no_output_____"
]
],
[
[
"# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.\n# plot the mouse counts for each drug using pandas\n\nplt.figure(figsize=[15,6])\nmeasurements = clean.groupby('Drug Regimen').Sex.count()\nmeasurements.plot(kind='bar',rot=45,title='Total Measurements per Drug')\nplt.ylabel('Measurements')\nplt.show()",
"_____no_output_____"
],
[
"measurements",
"_____no_output_____"
],
[
"measurements.values",
"_____no_output_____"
],
[
"# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.\n# plot the bar graph of mice count per drug regimen\n\nplt.figure(figsize=[15,6])\nplt.bar(measurements.index,measurements.values)\nplt.title('Total Measurements per Drug')\nplt.ylabel('Measurements')\nplt.xlabel('Drug regimen')\nplt.show()",
"_____no_output_____"
],
[
"pd.DataFrame.plot()",
"_____no_output_____"
],
[
"clean.Sex.value_counts().index",
"_____no_output_____"
],
[
"# Generate a pie plot showing the distribution of female versus male mice using pandas\nclean.Sex.value_counts().plot.pie(autopct='%1.1f%%', explode=[.1,0],shadow=True)\nplt.show()",
"_____no_output_____"
],
[
"# Generate a pie plot showing the distribution of female versus male mice using pyplot\nplt.pie(clean.Sex.value_counts(), autopct='%1.1f%%',\n labels=clean.Sex.value_counts().index,explode=[.1,0],shadow=True)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Quartiles, Outliers and Boxplots",
"_____no_output_____"
]
],
[
[
"# Reset index so drug regimen column persists after inner merge\n# Start by getting the last (greatest) timepoint for each mouse\ntimemax = clean.groupby('Mouse ID').max().Timepoint.reset_index()\n\n# Merge this group df with the original dataframe to get the tumor volume at the last timepoint\ntumormax = timemax.merge(clean,on=['Mouse ID','Timepoint'])\n# show all rows of data\ntumormax",
"_____no_output_____"
],
[
"# get mouse count per drug\ntumormax.groupby('Drug Regimen').Timepoint.count()",
"_____no_output_____"
],
[
"# Calculate the final tumor volume of each mouse across four of the treatment regimens: \n# Capomulin, Ramicane, Infubinol, and Ceftamin\n\n# Put treatments into a list for for loop (and later for plot labels)\ndrugs = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']\n\n# Create empty list to fill with tumor vol data (for plotting)\ntumor_list = []\n\n# set drug regimen as index and drop associated regimens while only keeping Capomulin, Ramicane, Infubinol, and Ceftamin\nfor drug in drugs:\n # add subset \n # tumor volumes for each Drug Regimen\n # Locate the rows which contain mice on each drug and get the tumor volumes\n tumor_data = tumormax[tumormax['Drug Regimen'] == drug]['Tumor Volume (mm3)']\n # Calculate the IQR and quantitatively determine if there are any potential outliers. \n iqr = tumor_data.quantile(.75) - tumor_data.quantile(.25)\n # Determine outliers using upper and lower bounds\n lower_bound = tumor_data.quantile(.25) - (1.5*iqr)\n upper_bound = tumor_data.quantile(.75) + (1.5*iqr)\n tumor_list.append(tumor_data)\n \n # isolated view of just capomulin for later use\n print(f'{drug} potential outliers: {tumor_data[(tumor_data<lower_bound)|(tumor_data>upper_bound)]}')",
"Capomulin potential outliers: Series([], Name: Tumor Volume (mm3), dtype: float64)\nRamicane potential outliers: Series([], Name: Tumor Volume (mm3), dtype: float64)\nInfubinol potential outliers: 31 36.321346\nName: Tumor Volume (mm3), dtype: float64\nCeftamin potential outliers: Series([], Name: Tumor Volume (mm3), dtype: float64)\n"
],
[
"# Generate a box plot of the final tumor volume of each mouse across four regimens of interest\nplt.figure(figsize=[10,5])\n#set drugs to be analyzed, colors for the plots, and markers\nplt.boxplot(tumor_list,labels=drugs, flierprops={'markerfacecolor':'red','markersize':30})\nplt.ylabel('Final Tumor Valume (mm3)')\nplt.xticks(fontsize=18)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Line and Scatter Plots",
"_____no_output_____"
]
],
[
[
"# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin\n#change index to mouse ID \n#remove other mouse IDs so only s185 shows\n#set the x-axis equal to the Timepoint and y-axis to Tumor Volume\n\nplt.figure(figsize=[15,6])\nclean[(clean['Drug Regimen']=='Capomulin')&(clean['Mouse ID']=='s185')]\\\n.set_index('Timepoint')['Tumor Volume (mm3)'].plot()\nplt.ylabel('Tumor Volume (mm3)')\nplt.title('Tumor Volume vs. Timepoint for Mouse s185')\nplt.grid()\nplt.show()",
"_____no_output_____"
],
[
"# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen\n# group by mouse ID to find average tumor volume\n\ntumor_weight = clean[clean['Drug Regimen']=='Capomulin'].groupby('Mouse ID').mean()\\\n .set_index('Weight (g)')['Tumor Volume (mm3)']\n\n# establish x-axis value for the weight of the mice\n# produce scatter plot of the data\n\nplt.figure(figsize=[15,6])\nplt.scatter(tumor_weight.index,tumor_weight.values)\nplt.xlabel('Weight (g) Average')\nplt.ylabel('Tumor Volume (mm3) Average')\nplt.title('Capomulin Treatment Weight vs Tumor Volume Average')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Correlation and Regression",
"_____no_output_____"
]
],
[
[
"tumor_weight.head()",
"_____no_output_____"
],
[
"# Calculate the correlation coefficient and linear regression model \n# for mouse weight and average tumor volume for the Capomulin regimen\n#establish x and y values and find St. Pearson Correlation Coefficient for Mouse Weight and Tumor Volume Avg\nlinear_corr = stats.pearsonr(tumor_weight.index,tumor_weight.values)\n\n# establish linear regression values\nmodel = linregress(tumor_weight.index,tumor_weight.values)\n\n# linear regression line \ny_values=tumor_weight.index*model[0]+model[1]\n# scatter plot of the data\nplt.figure(figsize=[15,6])\nplt.plot(tumor_weight.index,y_values,color='red')\nplt.xlabel('Weight (g) Average')\nplt.ylabel('Tumor Volume (mm3) Average')\nplt.title('Capomulin Treatment Weight vs Tumor Volume Average')\nplt.scatter(tumor_weight.index,tumor_weight.values)\nplt.show()\n#print St. Pearson Correlation Coefficient\nprint(f'The correlation between mouse weight and average tumor volume is {linear_corr[0]:.2f}')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7ed59c7b1acd5830155810c6846a86e1a4e821b | 22,350 | ipynb | Jupyter Notebook | tasks/question_answering/Question_Answering.ipynb | ARBML/tkseem | dd771b46c1ef948451ef7ff82c71f70df02a3b19 | [
"MIT"
] | 32 | 2020-07-30T20:55:59.000Z | 2022-03-02T23:07:57.000Z | tasks/question_answering/Question_Answering.ipynb | abdullahmuaad9/tkseem | dd771b46c1ef948451ef7ff82c71f70df02a3b19 | [
"MIT"
] | 2 | 2020-08-12T16:21:57.000Z | 2021-01-05T04:03:24.000Z | tasks/question_answering/Question_Answering.ipynb | abdullahmuaad9/tkseem | dd771b46c1ef948451ef7ff82c71f70df02a3b19 | [
"MIT"
] | 3 | 2020-08-03T09:09:35.000Z | 2020-12-19T01:39:44.000Z | 36.759868 | 546 | 0.500224 | [
[
[
"!pip install tkseem",
"_____no_output_____"
]
],
[
[
"## Question Answering",
"_____no_output_____"
],
[
"### Download and Prepare Data",
"_____no_output_____"
]
],
[
[
"!wget https://dl.fbaipublicfiles.com/MLQA/MLQA_V1.zip\n!unzip MLQA_V1.zip",
"_____no_output_____"
]
],
[
[
"### Prepare Data",
"_____no_output_____"
]
],
[
[
"import json \n\ndef read_data(file_path, max_context_size = 100):\n # Read dataset\n with open(file_path) as f:\n data = json.load(f)\n\n contexts = []\n questions = []\n answers = []\n labels = []\n \n for i in range(len(data['data'])):\n\n paragraph_object = data['data'][i][\"paragraphs\"]\n \n for j in range(len(paragraph_object)):\n\n context_object = paragraph_object[j]\n context_text = context_object['context']\n\n if len(context_text.split()) > max_context_size:\n continue\n for k in range(len(context_object['qas'])):\n\n question_object = context_object['qas'][k]\n question_text = question_object['question']\n \n answer_object = question_object['answers'][0]\n answer_text = answer_object['text']\n answer_start = answer_object['answer_start']\n answer_end = answer_start + len(answer_text)\n\n answer_start = len(context_text[:answer_start].split())\n answer_end = answer_start + len(answer_text.split())\n if answer_end >= max_context_size:\n answer_end = max_context_size -1\n labels.append([answer_start, answer_end])\n questions.append(question_text)\n contexts.append(context_text)\n answers.append(answer_text)\n \n with open('train_contexts.txt', 'w') as f:\n f.write(('\\n').join(contexts))\n with open('train_questions.txt', 'w') as f:\n f.write(('\\n').join(questions))\n return {'qas':questions, 'ctx':contexts, 'ans':answers, 'lbl':labels}",
"_____no_output_____"
],
[
"train_data = read_data('MLQA_V1/test/test-context-ar-question-ar.json')",
"_____no_output_____"
],
[
"for i in range(10):\n print(train_data['qas'][i])\n print(train_data['ctx'][i])\n print(train_data['ans'][i])\n print(\"==============\")",
"ما الذي جعل شريط الاختبار للطائرة؟\nبحيرة جرووم كانت تستخدم للقصف المدفعي والتدريب علي المدفعية خلال الحرب العالمية الثانية، ولكن تم التخلي عنها بعد ذلك حتى نيسان / أبريل 1955، عندما تم اختياره من قبل فريق لوكهيد اسكنك كموقع مثالي لاختبار لوكهيد يو-2 - 2 طائرة التجسس. قاع البحيرة قدم الشريط المثالية التي يمكن عمل اختبارات الطائرات المزعجة، ارتفاع سلسلة جبال وادي الإيمجرانت ومحيط NTS يحمي موقع الاختبار من أعين المتطفلين والتدخل الخارجي.\nقاع البحيرة\n==============\nمن كان يرافق طائرة يو -2 عند التسليم؟\nشيدت لوكهيد قاعدة مؤقتة في الموقع، ثم عرفت باسم الموقع الثاني أو \"المزرعة\"، التي تتألف من أكثر بقليل من بضعة مخابئ، وحلقات عمل ومنازل متنقلة لفريقها الصغير. في ثلاثة أشهر فقط شيد مدرج طوله 5000 ودخل الخدمة بحلول تموز / يوليو 1955. حصلت المزرعة على تسليم أول يو 2 في 24 يوليو، 1955 من بوربانك على سي 124 جلوب ماستر الثاني طائرة شحن، يرافقه فنيي وكهيد على دي سي 3. انطلق أول يو - 2 من الجرووم في 4 أغسطس، 1955. بدأت عمليات تحليق أسطول يو 2 تحت سيطرة وكالة المخابرات المركزية الأمريكية في الأجواء السوفياتية بحلول منتصف عام 1956.\nفنيي وكهيد\n==============\n ما هو نوع العمل الذي يواجهه الطيارون العسكريون إذا انتقلوا إلى \\n مناطق محظورة؟ \nعلى عكس الكثير من حدود نيليس، والمنطقة المحيطة بها في البحيرة بشكل دائم خارج الحدود سواء على المدنيين وطبيعية حركة الطيران العسكري. محطات الرادار لحماية المنطقة، والأفراد غير مصرح بها سرعان ما تطرد. حتى طيارين التدريب العسكري في خطر NAFR إجراءات التأديبية إذا تواجدوا بطريق الخطأ في \"المربع\"الحظور للجرووم والأجواء المحيطة بها.\nإجراءات التأديبية\n==============\nمتى تم نشر مقال مجلة الطيران؟\nفي كانون الثاني 2006، نشر مؤرخ الفضاء دواين أ يوم مقال نشر في المجلة الإلكترونية الطيران والفضاء استعراض بعنوان \"رواد الفضاء والمنطقة 51 : حادث سكايلاب\". المقال كان مبنيا على مذكرة مكتوبة في عام 1974 إلى مديروكالة المخابرات المركزية يام كولبي من قبل عملاء مجهولين لوكالة الاستخبارات المركزية. وذكرت المذكرة أن رواد الفضاء على متن سكايلاب 4، وذلك كجزء من برنامج أوسع نطاقا، عن غير قصد بالتقاط صور لموقع الذي قالت المذكرة :\nكانون الثاني 2006\n==============\nما هو الموقع الذي أصبح مركزاً للأطباق الطائرة ونظريات المؤامرة؟\nلطبيعتها السرية وفيما لا شك فيه بحوث تصنيف الطائرات، إلى جانب تقارير عن الظواهر غير العادية، قد أدت الي ان تصبح منطقة 51 مركزا للاطباق الطائرة الحديثة ونظريات المؤامرة. بعض الأنشطة المذكورة في مثل هذه النظريات في منطقة 51 تشمل ما يلي :\nمنطقة 51\n==============\nما كان محور مؤامرة الجسم الغريب الحديثة؟\\n\nلطبيعتها السرية وفيما لا شك فيه بحوث تصنيف الطائرات، إلى جانب تقارير عن الظواهر غير العادية، قد أدت الي ان تصبح منطقة 51 مركزا للاطباق الطائرة الحديثة ونظريات المؤامرة. بعض الأنشطة المذكورة في مثل هذه النظريات في منطقة 51 تشمل ما يلي :\nمنطقة 51\n==============\nمالذي يُظن بأنه قد تم بنائه في روزويل؟\nالتخزين، والفحص، والهندسة العكسية للمركبة الفضائية الغريبة المحطمة (بما في ذلك مواد يفترض ان تعافى في روزويل)، ودراسة شاغليها (حية أو ميتة)، وصناعة الطائرات على أساس التكنولوجيا الغريبة.\nصناعة الطائرات على أساس التكنولوجيا الغريبة\n==============\nمتى يقوم Qos بالتفاوض على كيفية عمل الشبكة؟\nويمكن أن تتوافق الشبكة أو البروتوكول الذي يدعم جودة الخدمات على عقد المرور مع تطبيق البرمجيات والقدرة الاحتياطية في عقد الشبكة، على سبيل المثال خلال مرحلة إقامة الدورات. وهي يمكن أن تحقق رصدا لمستوى الأداء خلال الدورة، على سبيل المثال معدل البيانات والتأخير، والتحكم ديناميكيا عن طريق جدولة الأولويات في عقد الشبكة. 
وقد تفرج عن القدرة الاحتياطية خلال مرحلة الهدم.\nمرحلة إقامة الدورات\n==============\nما هو أحد الشروط للتجارة الشبكية المتنوعة؟\nجودة الخدمة قد تكون مطلوبة لأنواع معينة من حركة مرور الشبكة، على سبيل المثال :\nجودة الخدمة\n==============\nكم عدد قوائم الانتظار الموجودة على أجهزة توجيه المختلفة؟\\n\nالموجهات لدعم DiffServ استخدام قوائم متعددة للحزم في انتظار انتقال من عرض النطاق الترددي مقيدة (على سبيل المثال، منطقة واسعة) واجهات. راوتر الباعة يوفر قدرات مختلفة لتكوين هذا السلوك، لتشمل عددا من قوائم معتمدة، والأولويات النسبية لقوائم الانتظار، وعرض النطاق الترددي المخصصة لكل قائمة انتظار.\nمتعددة\n==============\n"
]
],
[
[
"### Imports",
"_____no_output_____"
]
],
[
[
"import re\nimport nltk\nimport time\nimport numpy as np\nimport tkseem as tk\nimport tensorflow as tf\nimport matplotlib.ticker as ticker\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"### Tokenization",
"_____no_output_____"
]
],
[
[
"qa_tokenizer = tk.WordTokenizer()\nqa_tokenizer.train('train_questions.txt')\nprint('Vocab size ', qa_tokenizer.vocab_size)\n\ncx_tokenizer = tk.WordTokenizer()\ncx_tokenizer.train('train_contexts.txt')\nprint('Vocab size ', cx_tokenizer.vocab_size)\n\ntrain_inp_data = qa_tokenizer.encode_sentences(train_data['qas'])\ntrain_tar_data = cx_tokenizer.encode_sentences(train_data['ctx'])\ntrain_tar_lbls = train_data['lbl']\ntrain_inp_data.shape, train_tar_data.shape",
"Training WordTokenizer ...\nVocab size 8883\nTraining WordTokenizer ...\nVocab size 10000\n"
]
],
[
[
"### Create Dataset",
"_____no_output_____"
]
],
[
[
"BATCH_SIZE = 64\nBUFFER_SIZE = len(train_inp_data)\n\ndataset = tf.data.Dataset.from_tensor_slices((train_inp_data, train_tar_data, train_tar_lbls)).shuffle(BUFFER_SIZE)\ndataset = dataset.batch(BATCH_SIZE, drop_remainder=True)",
"_____no_output_____"
]
],
[
[
"### Create Encoder and Decoder",
"_____no_output_____"
]
],
[
[
"class Encoder(tf.keras.Model):\n def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):\n super(Encoder, self).__init__()\n self.batch_sz = batch_sz\n self.enc_units = enc_units\n self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)\n self.gru = tf.keras.layers.GRU(self.enc_units,\n recurrent_initializer='glorot_uniform')\n\n def call(self, x, hidden):\n x = self.embedding(x)\n output = self.gru(x, initial_state = hidden)\n return output\n\n def initialize_hidden_state(self):\n return tf.zeros((self.batch_sz, self.enc_units))\n \nclass Decoder(tf.keras.Model):\n def __init__(self, vocab_size, embedding_dim, dec_units, output_sz):\n super(Decoder, self).__init__()\n self.dec_units = dec_units\n self.embedding_dim = embedding_dim\n self.output_sz = output_sz\n self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)\n self.gru = tf.keras.layers.GRU(self.dec_units,\n return_sequences=False,\n recurrent_initializer='glorot_uniform')\n self.fc11 = tf.keras.layers.Dense(embedding_dim)\n self.fc12 = tf.keras.layers.Dense(output_sz)\n\n self.fc21 = tf.keras.layers.Dense(embedding_dim)\n self.fc22 = tf.keras.layers.Dense(output_sz)\n \n def call(self, x, hidden):\n x = self.embedding(x)\n x = self.gru(x, initial_state = hidden)\n x1 = self.fc11(x)\n x2 = self.fc21(x)\n\n x1 = self.fc12(x1)\n x2 = self.fc22(x2)\n return [x1, x2]\n\ndef loss_fn(true, pred):\n cross_entropy = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n return (cross_entropy(true[:,0:1], pred[0]) + cross_entropy(true[:,1:2], pred[1]))/2",
"_____no_output_____"
]
],
[
[
"### Training",
"_____no_output_____"
]
],
[
[
"units = 1024\nembedding_dim = 256\nmax_length_inp = train_inp_data.shape[1]\nmax_length_tar = train_tar_data.shape[1]\nvocab_tar_size = cx_tokenizer.vocab_size\nvocab_inp_size = qa_tokenizer.vocab_size\nsteps_per_epoch = len(train_inp_data) // BATCH_SIZE\ndecoder = Decoder(vocab_tar_size, embedding_dim, units, max_length_tar)\nencoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)\n\noptim = tf.optimizers.Adam()",
"_____no_output_____"
],
[
"epochs = 25\nfor epoch in range(epochs):\n enc_hidden = encoder.initialize_hidden_state()\n epoch_loss = 0\n \n for idx, (inp, tar, true) in enumerate(dataset):\n with tf.GradientTape() as tape:\n hidden = encoder(inp, enc_hidden)\n pred = decoder(tar, hidden)\n loss = loss_fn(true, pred)\n variables = decoder.trainable_variables + encoder.trainable_variables\n gradients = tape.gradient(loss, variables)\n optim.apply_gradients(zip(gradients, variables))\n epoch_loss += loss.numpy()\n print(f\"Epoch {epoch} loss: {epoch_loss/steps_per_epoch:.3f}\")",
"Epoch 0 loss: 4.386\nEpoch 1 loss: 4.264\nEpoch 2 loss: 4.238\nEpoch 3 loss: 4.105\nEpoch 4 loss: 3.932\nEpoch 5 loss: 3.758\nEpoch 6 loss: 3.643\nEpoch 7 loss: 3.548\nEpoch 8 loss: 3.456\nEpoch 9 loss: 3.382\nEpoch 10 loss: 3.285\nEpoch 11 loss: 3.215\nEpoch 12 loss: 3.141\nEpoch 13 loss: 3.047\nEpoch 14 loss: 2.916\nEpoch 15 loss: 2.831\nEpoch 16 loss: 2.748\nEpoch 17 loss: 2.614\nEpoch 18 loss: 2.462\nEpoch 19 loss: 2.306\nEpoch 20 loss: 2.126\nEpoch 21 loss: 1.944\nEpoch 22 loss: 1.770\nEpoch 23 loss: 1.637\nEpoch 24 loss: 1.414\n"
]
],
[
[
"### evaluation",
"_____no_output_____"
]
],
[
[
"def answer(question_txt, context_txt, answer_txt_tru):\n question = qa_tokenizer.encode_sentences([question_txt], out_length = max_length_inp)\n context = cx_tokenizer.encode_sentences([context_txt], out_length = max_length_tar)\n question = tf.convert_to_tensor(question)\n context = tf.convert_to_tensor(context)\n result = ''\n\n hidden = [tf.zeros((1, units))]\n enc_hidden = encoder(question, hidden)\n pred = decoder(context, enc_hidden)\n\n start = tf.argmax(pred[0], axis = -1).numpy()[0]\n end = tf.argmax(pred[1], axis = -1).numpy()[0]\n \n if start >= len(context_txt.split()):\n start = len(context_txt.split()) - 1\n if end >= len(context_txt.split()):\n end = len(context_txt.split()) - 1\n \n # if one word prediction\n if end == start:\n end += 1\n answer_txt = (' ').join(context_txt.split()[start:end])\n \n print(\"Question : \", question_txt)\n print(\"Context : \",context_txt)\n print(\"Pred Answer : \",answer_txt)\n print(\"True Answer : \", answer_txt_tru)\n print(\"======================\")",
"_____no_output_____"
],
[
"answer(\"في أي عام توفي وليام ؟\", \"توفي وليام في عام 1990\", \"1990\")\nanswer(\"ماهي عاصمة البحرين ؟\", \"عاصمة البحرين هي المنامة\", \"المنامة\")\nanswer(\"في أي دولة ولد جون ؟\", \"ولد في فرنسا عام 1988\", \"فرنسا\")\nanswer(\"أين تركت الهاتف ؟\", \"تركت الهاتف فوق الطاولة\", \"فوق الطاولة\")",
"Question : في أي عام توفي وليام ؟\nContext : توفي وليام في عام 1990\nPred Answer : 1990\nTrue Answer : 1990\n======================\nQuestion : ماهي عاصمة البحرين ؟\nContext : عاصمة البحرين هي المنامة\nPred Answer : المنامة\nTrue Answer : المنامة\n======================\nQuestion : في أي دولة ولد جون ؟\nContext : ولد في فرنسا عام 1988\nPred Answer : 1988\nTrue Answer : فرنسا\n======================\nQuestion : أين تركت الهاتف ؟\nContext : تركت الهاتف فوق الطاولة\nPred Answer : الطاولة\nTrue Answer : فوق الطاولة\n======================\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7ed5c404e014ef45af48a741e254ba22697401b | 11,392 | ipynb | Jupyter Notebook | example.ipynb | kaspermunch/geneinfo | e9c4622a6131913902ec4a348ca4e5245bfe3cdd | [
"MIT"
] | 2 | 2021-09-13T16:27:46.000Z | 2022-01-10T13:47:45.000Z | example.ipynb | kaspermunch/geneinfo | e9c4622a6131913902ec4a348ca4e5245bfe3cdd | [
"MIT"
] | null | null | null | example.ipynb | kaspermunch/geneinfo | e9c4622a6131913902ec4a348ca4e5245bfe3cdd | [
"MIT"
] | null | null | null | 34.731707 | 972 | 0.591731 | [
[
[
"import matplotlib.pyplot as plt\nfrom IPython.display import set_matplotlib_formats\nset_matplotlib_formats('retina', 'png')\n\nimport numpy as np\n\nimport seaborn as sns\nsns.set_style('white')\n\nimport mpld3\n\nimport mygene\nfrom geneinfo import geneinfo, geneplot, connect_mygene\n\nmg = mygene.MyGeneInfo()\nconnect_mygene(mg)",
"/Users/kmt/anaconda3/envs/simons/lib/python3.6/site-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n import pandas.util.testing as tm\n"
]
],
[
[
"## Single gene name",
"_____no_output_____"
]
],
[
[
"geneinfo('USP4')",
"_____no_output_____"
]
],
[
[
"## List of names",
"_____no_output_____"
]
],
[
[
"geneinfo(['LARS2', 'XCR1'])",
"_____no_output_____"
]
],
[
[
"## Get all protein coding genes in a (hg38) region",
"_____no_output_____"
]
],
[
[
"for gene in mg.query('q=chr2:49500000-50000000 AND type_of_gene:protein-coding', species='human', fetch_all=True):\n geneinfo(gene['symbol'])",
"Fetching 4 gene(s) . . .\n"
]
],
[
[
"## Plot data over gene annotation",
"_____no_output_____"
]
],
[
[
"chrom, start, end = 'chr3', 49500000, 50600000\nax = geneplot(chrom, start, end, figsize=(10, 5))\n\nax.plot(np.linspace(start, end, 1000), np.random.random(1000), 'o') ;\n\nmpld3.display()",
"Fetching 61 gene(s) . . .\n"
],
[
"geneinfo(['HYAL3', 'IFRD2'])",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7ed61fc69a7076e345c131ce7bb68acd261f7d4 | 26,579 | ipynb | Jupyter Notebook | Neural Networks/Convolutional Neural Networks.ipynb | PeterJamesNee/Examples | b390de887d238508d7e7ee7245404a85bd2eab6d | [
"MIT"
] | null | null | null | Neural Networks/Convolutional Neural Networks.ipynb | PeterJamesNee/Examples | b390de887d238508d7e7ee7245404a85bd2eab6d | [
"MIT"
] | null | null | null | Neural Networks/Convolutional Neural Networks.ipynb | PeterJamesNee/Examples | b390de887d238508d7e7ee7245404a85bd2eab6d | [
"MIT"
] | null | null | null | 92.933566 | 1,932 | 0.680951 | [
[
[
"# Convolutional Neural Networks\n\nIn this notebook we will implement a convolutional neural network. Rather than doing everything from scratch we will make use of [TensorFlow 2](https://www.tensorflow.org/) and the [Keras](https://keras.io) high level interface.",
"_____no_output_____"
],
[
"## Installing TensorFlow and Keras\n\nTensorFlow and Keras are not included with the base Anaconda install, but can be easily installed by running the following commands on the Anaconda Command Prompt/terminal window:\n```\nconda install notebook jupyterlab nb_conda_kernels\nconda create -n tf tensorflow ipykernel mkl\n```\nOnce this has been done, you should be able to select the `Python [conda env:tf]` kernel from the Kernel->Change Kernel menu item at the top of this notebook. Then, we import TensorFlow package:",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf",
"_____no_output_____"
]
],
[
[
"## Creating a simple network with TensorFlow\n\nWe will start by creating a very simple fully connected feedforward network using TensorFlow/Keras. The network will mimic the one we implemented previously, but TensorFlow/Keras will take care of most of the details for us.\n\n### MNIST Dataset\n\nFirst, let us load the MNIST digits dataset that we will be using to train our network. This is available directly within Keras:",
"_____no_output_____"
]
],
[
[
"(x_train, y_train),(x_test, y_test) = tf.keras.datasets.mnist.load_data()",
"_____no_output_____"
]
],
[
[
"The data comes as a set of integers in the range [0,255] representing the shade of gray of a given pixel. Let's first rescale them to be in the range [0,1]:",
"_____no_output_____"
]
],
[
[
"x_train, x_test = x_train / 255.0, x_test / 255.0",
"_____no_output_____"
]
],
[
[
"Now we can build a neural network model using Keras. This uses a very simple high-level modular structure where we only have the specify the layers in our model and the properties of each layer. The layers we will have are as follows:\n1. Input layer: This will be a 28x28 matrix of numbers.\n2. `Flatten` layer: Convert our 28x28 pixel image into an array of size 784.\n3. `Dense` layer: a fully-connected layer of the type we have been using up to now. We will use 30 neurons and the sigmoid activation function.\n4. `Dense` layer: fully-connected output layer. ",
"_____no_output_____"
]
],
[
[
"model = tf.keras.models.Sequential([\n tf.keras.layers.Flatten(input_shape=(28, 28)),\n tf.keras.layers.Dense(30, activation='sigmoid'),\n tf.keras.layers.Dense(10, activation='softsign')\n])",
"_____no_output_____"
],
[
"model.compile(optimizer='adam',\n loss='mean_squared_logarithmic_error',\n metrics=['accuracy'])",
"_____no_output_____"
],
[
"model.fit(x_train, y_train, epochs=5)",
"Epoch 1/5\n1875/1875 [==============================] - 1s 776us/step - loss: 1.7309 - accuracy: 0.0955\nEpoch 2/5\n1875/1875 [==============================] - 2s 840us/step - loss: 1.6526 - accuracy: 0.0987\nEpoch 3/5\n1875/1875 [==============================] - 2s 892us/step - loss: 1.6396 - accuracy: 0.09870s - loss: 1.6406 - accuracy: 0.09\nEpoch 4/5\n1875/1875 [==============================] - 2s 1ms/step - loss: 1.6327 - accuracy: 0.0987: 0s - los\nEpoch 5/5\n1875/1875 [==============================] - 2s 1ms/step - loss: 1.6282 - accuracy: 0.0987\n"
],
[
"model.evaluate(x_test, y_test)",
"313/313 [==============================] - 0s 719us/step - loss: 0.1253 - accuracy: 0.9629\n"
]
],
[
[
"#### Exercises\nExperiment with this network:\n1. Change the number of neurons in the hidden layer.\n2. Add more hidden layers.\n3. Change the activation function in the hidden layer to `relu`.\n4. Change the activation in the output layer to `softmax`.\nHow does the performance of your network change with these modifications?\n\n#### Task\nImplement the neural network in \"[Gradient-based learning applied to document recognition](http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf)\", by Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner.",
"_____no_output_____"
]
],
[
[
"model = tf.keras.models.Sequential([\n tf.keras.layers.Conv2D(6,5, activation='sigmoid',input_shape=(28, 28,1)),\n tf.keras.layers.MaxPooling2D(pool_size=(2,2)),\n tf.keras.layers.Conv2D(16,5, activation='sigmoid'),\n tf.keras.layers.MaxPooling2D(pool_size=(2,2)),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(84, activation='sigmoid'),\n tf.keras.layers.Dense(10, activation='softmax')\n])",
"_____no_output_____"
],
[
"model.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])",
"_____no_output_____"
],
[
"model.fit(x_train, y_train, epochs=5)",
"Epoch 1/5\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7ed6fbd07aab9829aa5ee4cbb1712ad832cef27 | 72,671 | ipynb | Jupyter Notebook | Abundances.ipynb | ErikHogenbirk/DMPlots | e2ccbe08732db79cb2f37f5f8c7099c7d55202a5 | [
"Unlicense"
] | 3 | 2018-07-30T09:09:03.000Z | 2020-12-02T15:45:02.000Z | Abundances.ipynb | ErikHogenbirk/DMPlots | e2ccbe08732db79cb2f37f5f8c7099c7d55202a5 | [
"Unlicense"
] | 1 | 2018-05-08T12:06:58.000Z | 2018-05-14T12:44:28.000Z | Abundances.ipynb | ErikHogenbirk/DMPlots | e2ccbe08732db79cb2f37f5f8c7099c7d55202a5 | [
"Unlicense"
] | null | null | null | 266.194139 | 37,664 | 0.910308 | [
[
[
"## Introduction",
"_____no_output_____"
],
[
"This notebook was intended to show the nuclear isotope abundance dependent on baryon density. I abandoned it later though. If you want to pick it up, go ahead. No guarantees all rights reserved and you are responsible for your interpretation of all this, etc.",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rc('font', size=18)\nplt.rcParams['figure.figsize'] = (10.0, 7.0)",
"_____no_output_____"
]
],
[
[
"## Read data",
"_____no_output_____"
]
],
[
[
"def read_datathief(fn):\n data = np.loadtxt(fn, converters={0: lambda x: x[:-1]})\n return data[:, 0], data[:, 1]",
"_____no_output_____"
],
[
"ab = {}\nelems = ['he4', 'd2', 'he3', 'li3']\nab['x_he4'], ab['he4'] = read_datathief('data/abundances/He4.txt')\nab['x_d2'], ab['d2'] = read_datathief('data/abundances/D2.txt')\nab['x_he3'], ab['he3'] = read_datathief('data/abundances/He3.txt')\nab['x_li3'], ab['li3'] = read_datathief('data/abundances/Li.txt')",
"_____no_output_____"
],
[
"ab['li3_c'] = 1.58e-10\nab['li3_c_d'] = 0.3e-10\nab['d2_c'] = 2.53e-5\nab['d2_c_d'] = 0.04e-5\nab['he3_c'] = 1.1e-5\nab['he3_c_d'] = 0.2e-5\nab['he4_c'] = 0.2449\nab['he4_c_d'] = 0.004\n",
"_____no_output_____"
]
],
[
[
"## Plots",
"_____no_output_____"
],
[
"### All on one",
"_____no_output_____"
]
],
[
[
"for el in elems:\n plt.plot(ab['x_' + el], ab[el])\nplt.xscale('log')\nplt.yscale('log')\nplt.xlabel('Baryon fraction')\nplt.ylabel('Abundance')\n# plt.ylim(1e-10, 1)",
"_____no_output_____"
]
],
[
[
"### Fancy plot",
"_____no_output_____"
]
],
[
[
"f, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True)\naxs = [ax1, ax2, ax3]\nc = {el: 'C%d' % i for i, el in enumerate(elems) }\nplanck_ab = 0.02230\nfor el in elems:\n for ax in axs:\n ax.plot(ab['x_' + el], ab[el], color=c[el], lw=3)\n ax.axhline(ab[el+'_c'], color=c[el])\n ax.axvline(planck_ab)\nax1.set_ylim(0.22, 0.26)\nax2.set_ylim(1e-6, 1e-3)\nax3.set_ylim(1e-10, 1e-8)\nfor ax in axs:\n ax.set_yscale('log')\n ax.set_xscale('log')\n\nfor ax in (ax1, ax2):\n ax.spines['bottom'].set_visible(False)\n\nax = ax1\nax.xaxis.tick_bottom()\nax.xaxis.tick_top()\nax.tick_params(labeltop='off')\n\nplt.sca(ax2)\nplt.tick_params(\n axis='x', # changes apply to the x-axis\n which='both', # both major and minor ticks are affected\n bottom='off', # ticks along the bottom edge are off\n top='off', # ticks along the top edge are off\n labelbottom='off')\n\nfor ax in (ax2, ax3):\n ax.spines['top'].set_visible(False)\n# ax.tick_params(labeltop='off') # don't put tick labels at the top\n\n\n\nd = .01 # how big to make the diagonal lines in axes coordinates\n# arguments to pass to plot, just so we don't keep repeating them\nax = ax1\nkwargs = dict(transform=ax.transAxes, color='k', clip_on=False)\nax.plot((-d, +d), (-d, +d), **kwargs) # top-left diagonal\nax.plot((1 - d, 1 + d), (-d, +d), **kwargs) # top-right diagonal\n\nax = ax2\nkwargs = dict(transform=ax.transAxes, color='k', clip_on=False)\nax.plot((-d, +d), (-d, +d), **kwargs) # top-left diagonal\nax.plot((1 - d, 1 + d), (-d, +d), **kwargs) # top-right diagonal\n\nkwargs.update(transform=ax.transAxes) # switch to the bottom axes\nax.plot((-d, +d), (1 - d, 1 + d), **kwargs) # bottom-left diagonal\nax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs) # bottom-right diagonal\n\nax = ax3\nkwargs.update(transform=ax.transAxes) # switch to the bottom axes\nax.plot((-d, +d), (1 - d, 1 + d), **kwargs) # bottom-left diagonal\nax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs) # bottom-right diagonal\n\nplt.xlim(4e-3, 3e-2)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7ed763b1d3c2c186119b1b31db195bd59c425cd | 21,131 | ipynb | Jupyter Notebook | Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb | ahouseholder/machine-learning-for-security-professionals | adc5b8ab29965857cb2cc606e15110269b5fb8e5 | [
"BSD-3-Clause"
] | 1 | 2017-12-30T18:03:29.000Z | 2017-12-30T18:03:29.000Z | Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb | ahouseholder/machine-learning-for-security-professionals | adc5b8ab29965857cb2cc606e15110269b5fb8e5 | [
"BSD-3-Clause"
] | null | null | null | Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb | ahouseholder/machine-learning-for-security-professionals | adc5b8ab29965857cb2cc606e15110269b5fb8e5 | [
"BSD-3-Clause"
] | 1 | 2021-11-03T13:29:50.000Z | 2021-11-03T13:29:50.000Z | 39.350093 | 452 | 0.605556 | [
[
[
"<img src=\"../../img/logo_white_bkg_small.png\" align=\"left\" /> \n# Supervised Learning\nThis worksheet covers concepts covered in the second part of day 2 - Feature Engineering. It should take no more than 40-60 minutes to complete. Please raise your hand if you get stuck. \n\n## Import the Libraries\nFor this exercise, we will be using:\n* Pandas (http://pandas.pydata.org/pandas-docs/stable/)\n* Numpy (https://docs.scipy.org/doc/numpy/reference/)\n* Matplotlib (http://matplotlib.org/api/pyplot_api.html)\n* Scikit-learn (http://scikit-learn.org/stable/documentation.html)\n* YellowBrick (http://www.scikit-yb.org/en/latest/)\n* Seaborn (https://seaborn.pydata.org)\n* Lime (https://github.com/marcotcr/lime)",
"_____no_output_____"
]
],
[
[
"# Load Libraries - Make sure to run this cell!\nimport pandas as pd\nimport numpy as np\nimport re\nfrom collections import Counter\nfrom sklearn import feature_extraction, tree, model_selection, metrics\nfrom yellowbrick.classifier import ClassificationReport\nfrom yellowbrick.classifier import ConfusionMatrix\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport lime\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Worksheet - DGA Detection using Machine Learning\n\nThis worksheet is a step-by-step guide on how to detect domains that were generated using \"Domain Generation Algorithm\" (DGA). We will walk you through the process of transforming raw domain strings to Machine Learning features and creating a decision tree classifer which you will use to determine whether a given domain is legit or not. Once you have implemented the classifier, the worksheet will walk you through evaluating your model. \n\nOverview 2 main steps:\n\n1. **Feature Engineering** - from raw domain strings to numeric Machine Learning features using DataFrame manipulations\n2. **Machine Learning Classification** - predict whether a domain is legit or not using a Decision Tree Classifier\n\n\n \n\n**DGA - Background**\n\n\"Various families of malware use domain generation\nalgorithms (DGAs) to generate a large number of pseudo-random\ndomain names to connect to a command and control (C2) server.\nIn order to block DGA C2 traffic, security organizations must\nfirst discover the algorithm by reverse engineering malware\nsamples, then generate a list of domains for a given seed. The\ndomains are then either preregistered, sink-holed or published\nin a DNS blacklist. This process is not only tedious, but can\nbe readily circumvented by malware authors. An alternative\napproach to stop malware from using DGAs is to intercept DNS\nqueries on a network and predict whether domains are DGA\ngenerated. Much of the previous work in DGA detection is based\non finding groupings of like domains and using their statistical\nproperties to determine if they are DGA generated. However,\nthese techniques are run over large time windows and cannot be\nused for real-time detection and prevention. In addition, many of\nthese techniques also use contextual information such as passive\nDNS and aggregations of all NXDomains throughout a network.\nSuch requirements are not only costly to integrate, they may not\nbe possible due to real-world constraints of many systems (such\nas endpoint detection). An alternative to these systems is a much\nharder problem: detect DGA generation on a per domain basis\nwith no information except for the domain name. Previous work\nto solve this harder problem exhibits poor performance and many\nof these systems rely heavily on manual creation of features;\na time consuming process that can easily be circumvented by\nmalware authors...\" \n[Citation: Woodbridge et. al 2016: \"Predicting Domain Generation Algorithms with Long Short-Term Memory Networks\"]\n\nA better alternative for real-world deployment would be to use \"featureless deep learning\" - We have a separate notebook where you can see how this can be implemented!\n\n**However, let's learn the basics first!!!**\n",
"_____no_output_____"
],
[
"## Feature Engineering",
"_____no_output_____"
],
[
"#### Breakpoint: Load Features and Labels\n\nIf you got stuck in Part 1, please simply load the feature matrix we prepared for you, so you can move on to Part 2 and train a Decision Tree Classifier.",
"_____no_output_____"
]
],
[
[
"df_final = pd.read_csv('../../Data/dga_features_final_df.csv')\nprint(df_final.isDGA.value_counts())\ndf_final.head()",
"_____no_output_____"
],
[
"# Load dictionary of common english words from part 1\nfrom six.moves import cPickle as pickle\nwith open('../../Data/d_common_en_words' + '.pickle', 'rb') as f:\n d = pickle.load(f)",
"_____no_output_____"
]
],
[
[
"## Part 2 - Machine Learning\n\nTo learn simple classification procedures using [sklearn](http://scikit-learn.org/stable/) we have split the work flow into 5 steps.",
"_____no_output_____"
],
[
"### Step 1: Prepare Feature matrix and ```target``` vector containing the URL labels\n\n- In statistics, the feature matrix is often referred to as ```X```\n- target is a vector containing the labels for each URL (often also called *y* in statistics)\n- In sklearn both the input and target can either be a pandas DataFrame/Series or numpy array/vector respectively (can't be lists!)\n\nTasks:\n- assign 'isDGA' column to a pandas Series named 'target'\n- drop 'isDGA' column from ```dga``` DataFrame and name the resulting pandas DataFrame 'feature_matrix'",
"_____no_output_____"
]
],
[
[
"#Your code here ...",
"_____no_output_____"
]
],
[
[
"### Step 2: Simple Cross-Validation\n\nTasks:\n- split your feature matrix X and target vector into train and test subsets using sklearn [model_selection.train_test_split](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html)",
"_____no_output_____"
]
],
[
[
"# Simple Cross-Validation: Split the data set into training and test data\n#Your code here ...",
"_____no_output_____"
]
],
[
[
"### Step 3: Train the model and make a prediction\n\nFinally, we have prepared and segmented the data. Let's start classifying!! \n\nTasks:\n\n- Use the sklearn [tree.DecisionTreeClassfier()](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html), create a decision tree with standard parameters, and train it using the ```.fit()``` function with ```X_train``` and ```target_train``` data.\n- Next, pull a few random rows from the data and see if your classifier got it correct.\n\nIf you are interested in trying a real unknown domain, you'll have to create a function to generate the features for that domain before you run it through the classifier (see function ```is_dga``` a few cells below). ",
"_____no_output_____"
]
],
[
[
"# Train the decision tree based on the entropy criterion\n\n#Your code here ...",
"_____no_output_____"
],
[
"# For simplicity let's just copy the needed function in here again\n\ndef H_entropy (x):\n # Calculate Shannon Entropy\n prob = [ float(x.count(c)) / len(x) for c in dict.fromkeys(list(x)) ] \n H = - sum([ p * np.log2(p) for p in prob ]) \n return H\n\ndef vowel_consonant_ratio (x):\n # Calculate vowel to consonant ratio\n x = x.lower()\n vowels_pattern = re.compile('([aeiou])')\n consonants_pattern = re.compile('([b-df-hj-np-tv-z])')\n vowels = re.findall(vowels_pattern, x)\n consonants = re.findall(consonants_pattern, x)\n try:\n ratio = len(vowels) / len(consonants)\n except: # catch zero devision exception \n ratio = 0 \n return ratio\n\n# ngrams: Implementation according to Schiavoni 2014: \"Phoenix: DGA-based Botnet Tracking and Intelligence\"\n# http://s2lab.isg.rhul.ac.uk/papers/files/dimva2014.pdf\n\ndef ngrams(word, n):\n # Extract all ngrams and return a regular Python list\n # Input word: can be a simple string or a list of strings\n # Input n: Can be one integer or a list of integers \n # if you want to extract multipe ngrams and have them all in one list\n \n l_ngrams = []\n if isinstance(word, list):\n for w in word:\n if isinstance(n, list):\n for curr_n in n:\n ngrams = [w[i:i+curr_n] for i in range(0,len(w)-curr_n+1)]\n l_ngrams.extend(ngrams)\n else:\n ngrams = [w[i:i+n] for i in range(0,len(w)-n+1)]\n l_ngrams.extend(ngrams)\n else:\n if isinstance(n, list):\n for curr_n in n:\n ngrams = [word[i:i+curr_n] for i in range(0,len(word)-curr_n+1)]\n l_ngrams.extend(ngrams)\n else:\n ngrams = [word[i:i+n] for i in range(0,len(word)-n+1)]\n l_ngrams.extend(ngrams)\n# print(l_ngrams)\n return l_ngrams\n\ndef ngram_feature(domain, d, n):\n # Input is your domain string or list of domain strings\n # a dictionary object d that contains the count for most common english words\n # finally you n either as int list or simple int defining the ngram length\n \n # Core magic: Looks up domain ngrams in english dictionary ngrams and sums up the \n # respective english dictionary counts for the respective domain ngram\n # sum is normalized\n \n l_ngrams = ngrams(domain, n)\n# print(l_ngrams)\n count_sum=0\n for ngram in l_ngrams:\n if d[ngram]:\n count_sum+=d[ngram]\n try:\n feature = count_sum/(len(domain)-n+1)\n except:\n feature = 0\n return feature\n \ndef average_ngram_feature(l_ngram_feature):\n # input is a list of calls to ngram_feature(domain, d, n)\n # usually you would use various n values, like 1,2,3...\n return sum(l_ngram_feature)/len(l_ngram_feature)",
"_____no_output_____"
],
[
"def is_dga(domain, clf, d):\n # Function that takes new domain string, trained model 'clf' as input and\n # dictionary d of most common english words\n # returns prediction\n \n domain_features = np.empty([1,5])\n # order of features is ['length', 'digits', 'entropy', 'vowel-cons', 'ngrams']\n domain_features[0,0] = len(domain)\n pattern = re.compile('([0-9])')\n domain_features[0,1] = len(re.findall(pattern, domain))\n domain_features[0,2] = H_entropy(domain)\n domain_features[0,3] = vowel_consonant_ratio(domain)\n domain_features[0,4] = average_ngram_feature([ngram_feature(domain, d, 1), \n ngram_feature(domain, d, 2), \n ngram_feature(domain, d, 3)])\n pred = clf.predict(domain_features)\n return pred[0]\n\n\nprint('Predictions of domain %s is [0 means legit and 1 dga]: ' %('spardeingeld'), is_dga('spardeingeld', clf, d)) \nprint('Predictions of domain %s is [0 means legit and 1 dga]: ' %('google'), is_dga('google', clf, d)) \nprint('Predictions of domain %s is [0 means legit and 1 dga]: ' %('1vxznov16031kjxneqjk1rtofi6'), is_dga('1vxznov16031kjxneqjk1rtofi6', clf, d)) \nprint('Predictions of domain %s is [0 means legit and 1 dga]: ' %('lthmqglxwmrwex'), is_dga('lthmqglxwmrwex', clf, d)) \n",
"_____no_output_____"
]
],
[
[
"### Step 4: Assess model accuracy with simple cross-validation\n\nTasks:\n- Make predictions for all your data. Call the ```.predict()``` method on the clf with your training data ```X_train``` and store the results in a variable called ```target_pred```.\n- Use sklearn [metrics.accuracy_score](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html) to determine your models accuracy. Detailed Instruction:\n - Use your trained model to predict the labels of your test data ```X_test```. Run ```.predict()``` method on the clf with your test data ```X_test``` and store the results in a variable called ```target_pred```.. \n - Then calculate the accuracy using ```target_test``` (which are the true labels/groundtruth) AND your models predictions on the test portion ```target_pred``` as inputs. The advantage here is to see how your model performs on new data it has not been seen during the training phase. The fair approach here is a simple **cross-validation**!\n \n- Print out the confusion matrix using [metrics.confusion_matrix](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html)\n- Use Yellowbrick to visualize the classification report and confusion matrix. (http://www.scikit-yb.org/en/latest/examples/modelselect.html#common-metrics-for-evaluating-classifiers)",
"_____no_output_____"
]
],
[
[
"# fair approach: make prediction on test data portion\n#Your code here ...",
"_____no_output_____"
],
[
"# Classification Report...neat summary\n#Your code here ...",
"_____no_output_____"
]
],
[
[
"### Step 5: Assess model accuracy with k-fold cross-validation\n\nTasks:\n- Partition the dataset into *k* different subsets\n- Create *k* different models by training on *k-1* subsets and testing on the remaining subsets\n- Measure the performance on each of the models and take the average measure.\n\n*Short-Cut*\nAll of these steps can be easily achieved by simply using sklearn's [model_selection.KFold()](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html) and [model_selection.cross_val_score()](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) functions.",
"_____no_output_____"
]
],
[
[
"#Your code here ...",
"_____no_output_____"
]
],
[
[
"#### (Optional) Visualizing your Tree\nAs an optional step, you can actually visualize your tree. The following code will generate a graph of your decision tree. You will need graphviz (http://www.graphviz.org) and pydotplus (or pydot) installed for this to work.\nThe Griffon VM has this installed already, but if you try this on a Mac, or Linux machine you will need to install graphviz.",
"_____no_output_____"
]
],
[
[
"# These libraries are used to visualize the decision tree and require that you have GraphViz\n# and pydot or pydotplus installed on your computer.\n\nfrom sklearn.externals.six import StringIO \nfrom IPython.core.display import Image\nimport pydotplus as pydot\n\n\ndot_data = StringIO() \ntree.export_graphviz(clf, out_file=dot_data, \n feature_names=['length', 'digits', 'entropy', 'vowel-cons', 'ngrams'],\n filled=True, rounded=True, \n special_characters=True) \n\ngraph = pydot.graph_from_dot_data(dot_data.getvalue()) \nImage(graph.create_png())\n",
"_____no_output_____"
]
],
[
[
"## Other Models\nNow that you've built a Decision Tree, let's try out two other classifiers and see how they perform on this data. For this next exercise, create classifiers using:\n\n* Support Vector Machine\n* Random Forest\n* K-Nearest Neighbors (http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html) \n\nOnce you've done that, run the various performance metrics to determine which classifier works best.",
"_____no_output_____"
]
],
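One way the next three cells could be filled in. The split variable names (`feature_matrix_train`, `feature_matrix_test`, `target_train`, `target_test`) are guesses based on names used elsewhere in this notebook, and the hyperparameters are illustrative defaults, not tuned values. The random forest is named `random_forest_clf` because the LIME cell further below calls `random_forest_clf.predict_proba`:

```python
# Sketch only: the train/test variable names and hyperparameters are assumptions.
from sklearn import svm, metrics
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

random_forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
svm_clf = svm.SVC(kernel='rbf', probability=True)
knn_clf = KNeighborsClassifier(n_neighbors=5)

for name, model in [('random forest', random_forest_clf),
                    ('svm', svm_clf),
                    ('knn', knn_clf)]:
    model.fit(feature_matrix_train, target_train)
    target_pred = model.predict(feature_matrix_test)
    print(name, 'accuracy:', metrics.accuracy_score(target_test, target_pred))
```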
[
[
"from sklearn import svm\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.neighbors import KNeighborsClassifier",
"_____no_output_____"
],
[
"#Create the Random Forest Classifier\n",
"_____no_output_____"
],
[
"#Next, create the SVM classifier\n",
"_____no_output_____"
],
[
"#Finally the knn\n",
"_____no_output_____"
]
],
[
[
"## Explain a Prediction\nIn the example below, you can use LIME to explain how a classifier arrived at its prediction. Try running LIME with the various classifiers you've created and various rows to see how it functions. ",
"_____no_output_____"
]
],
[
[
"import lime.lime_tabular\nexplainer = lime.lime_tabular.LimeTabularExplainer(feature_matrix_train, \n feature_names=['length', 'digits', 'entropy', 'vowel-cons', 'ngrams'], \n class_names=['legit', 'isDGA'], \n discretize_continuous=False)",
"_____no_output_____"
],
[
"exp = explainer.explain_instance(feature_matrix_test.iloc[5], \n random_forest_clf.predict_proba, \n num_features=5)",
"_____no_output_____"
],
[
"exp.show_in_notebook(show_table=True, show_all=True)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7ed7f649cb4107fa715b12748612ccfb15c3c19 | 418,563 | ipynb | Jupyter Notebook | assignment1/knn.ipynb | kamikat/cs231n | 0346efaeddd4084d1e5370a8640a160a1cc77ea7 | [
"WTFPL"
] | null | null | null | assignment1/knn.ipynb | kamikat/cs231n | 0346efaeddd4084d1e5370a8640a160a1cc77ea7 | [
"WTFPL"
] | null | null | null | assignment1/knn.ipynb | kamikat/cs231n | 0346efaeddd4084d1e5370a8640a160a1cc77ea7 | [
"WTFPL"
] | null | null | null | 627.530735 | 291,414 | 0.933728 | [
[
[
"# k-Nearest Neighbor (kNN) exercise\n\n*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*\n\nThe kNN classifier consists of two stages:\n\n- During training, the classifier takes the training data and simply remembers it\n- During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples\n- The value of k is cross-validated\n\nIn this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.",
"_____no_output_____"
]
],
[
[
"# Run some setup code for this notebook.\n\nimport random\nimport numpy as np\nfrom cs231n.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\n\n# This is a bit of magic to make matplotlib figures appear inline in the notebook\n# rather than in a new window.\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# Some more magic so that the notebook will reload external python modules;\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"# Load the raw CIFAR-10 data.\ncifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\nX_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n\n# As a sanity check, we print out the size of the training and test data.\nprint 'Training data shape: ', X_train.shape\nprint 'Training labels shape: ', y_train.shape\nprint 'Test data shape: ', X_test.shape\nprint 'Test labels shape: ', y_test.shape",
"Training data shape: (50000, 32, 32, 3)\nTraining labels shape: (50000,)\nTest data shape: (10000, 32, 32, 3)\nTest labels shape: (10000,)\n"
],
[
"# Visualize some examples from the dataset.\n# We show a few examples of training images from each class.\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nnum_classes = len(classes)\nsamples_per_class = 7\nfor y, cls in enumerate(classes):\n idxs = np.flatnonzero(y_train == y)\n idxs = np.random.choice(idxs, samples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt_idx = i * num_classes + y + 1\n plt.subplot(samples_per_class, num_classes, plt_idx)\n plt.imshow(X_train[idx].astype('uint8'))\n plt.axis('off')\n if i == 0:\n plt.title(cls)\nplt.show()",
"_____no_output_____"
],
[
"# Subsample the data for more efficient code execution in this exercise\nnum_training = 5000\nmask = range(num_training)\nX_train = X_train[mask]\ny_train = y_train[mask]\n\nnum_test = 500\nmask = range(num_test)\nX_test = X_test[mask]\ny_test = y_test[mask]",
"_____no_output_____"
],
[
"X_train.shape, X_test.shape",
"_____no_output_____"
],
[
"# Reshape the image data into rows\nX_train = np.reshape(X_train, (X_train.shape[0], -1))\nX_test = np.reshape(X_test, (X_test.shape[0], -1))\nprint X_train.shape, X_test.shape",
"(5000, 3072) (500, 3072)\n"
],
[
"from cs231n.classifiers import KNearestNeighbor\n\n# Create a kNN classifier instance. \n# Remember that training a kNN classifier is a noop: \n# the Classifier simply remembers the data and does no further processing \nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)",
"_____no_output_____"
]
],
[
[
"We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: \n\n1. First we must compute the distances between all test examples and all train examples. \n2. Given these distances, for each test example we find the k nearest examples and have them vote for the label\n\nLets begin with computing the distance matrix between all training and test examples. For example, if there are **Ntr** training examples and **Nte** test examples, this stage should result in a **Nte x Ntr** matrix where each element (i,j) is the distance between the i-th test and j-th train example.\n\nFirst, open `cs231n/classifiers/k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.",
"_____no_output_____"
]
],
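A minimal sketch of what `compute_distances_two_loops` could look like inside the classifier class in `k_nearest_neighbor.py`. The attribute `self.X_train` follows the naming used by the assignment skeleton, and Euclidean (L2) distance is assumed as the metric:

```python
import numpy as np

def compute_distances_two_loops(self, X):
    # Sketch only: naive double loop over all (test, train) pairs.
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
        for j in range(num_train):
            # Euclidean (L2) distance between the i-th test and j-th train image
            dists[i, j] = np.sqrt(np.sum((X[i] - self.X_train[j]) ** 2))
    return dists
```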
[
[
"# Open cs231n/classifiers/k_nearest_neighbor.py and implement\n# compute_distances_two_loops.\n\n# Test your implementation:\ndists = classifier.compute_distances_two_loops(X_test)\nprint dists.shape",
"(500, 5000)\n"
],
[
"# We can visualize the distance matrix: each row is a single test example and\n# its distances to training examples\nplt.imshow(dists, interpolation='none')\nplt.show()",
"_____no_output_____"
]
],
[
[
"**Inline Question #1:** Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)\n\n- What in the data is the cause behind the distinctly bright rows?\n- What causes the columns?",
"_____no_output_____"
],
[
"**Your Answer**:\n\n- The test image is too bright or too dark\n- The training data is too bright or too dark\n\n",
"_____no_output_____"
]
],
[
[
"# Now implement the function predict_labels and run the code below:\n# We use k = 1 (which is Nearest Neighbor).\ny_test_pred = classifier.predict_labels(dists, k=1)\n\n# Compute and print the fraction of correctly predicted examples\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)",
"Got 137 / 500 correct => accuracy: 0.274000\n"
]
],
[
[
"You should expect to see approximately `27%` accuracy. Now lets try out a larger `k`, say `k = 5`:",
"_____no_output_____"
]
],
[
[
"y_test_pred = classifier.predict_labels(dists, k=5)\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)",
"Got 139 / 500 correct => accuracy: 0.278000\n"
]
],
[
[
"You should expect to see a slightly better performance than with `k = 1`.",
"_____no_output_____"
]
],
[
[
"# Now lets speed up distance matrix computation by using partial vectorization\n# with one loop. Implement the function compute_distances_one_loop and run the\n# code below:\ndists_one = classifier.compute_distances_one_loop(X_test)\n\n# To ensure that our vectorized implementation is correct, we make sure that it\n# agrees with the naive implementation. There are many ways to decide whether\n# two matrices are similar; one of the simplest is the Frobenius norm. In case\n# you haven't seen it before, the Frobenius norm of two matrices is the square\n# root of the squared sum of differences of all elements; in other words, reshape\n# the matrices into vectors and compute the Euclidean distance between them.\ndifference = np.linalg.norm(dists - dists_one, ord='fro')\nprint 'Difference was: %f' % (difference, )\nif difference < 0.001:\n print 'Good! The distance matrices are the same'\nelse:\n print 'Uh-oh! The distance matrices are different'",
"Difference was: 0.000000\nGood! The distance matrices are the same\n"
],
[
"# Now implement the fully vectorized version inside compute_distances_no_loops\n# and run the code\ndists_two = classifier.compute_distances_no_loops(X_test)\n\n# check that the distance matrix agrees with the one we computed before:\ndifference = np.linalg.norm(dists - dists_two, ord='fro')\nprint 'Difference was: %f' % (difference, )\nif difference < 0.001:\n print 'Good! The distance matrices are the same'\nelse:\n print 'Uh-oh! The distance matrices are different'",
"Difference was: 0.000000\nGood! The distance matrices are the same\n"
],
[
"# Let's compare how fast the implementations are\ndef time_function(f, *args):\n \"\"\"\n Call a function f with args and return the time (in seconds) that it took to execute.\n \"\"\"\n import time\n tic = time.time()\n f(*args)\n toc = time.time()\n return toc - tic\n\ntwo_loop_time = time_function(classifier.compute_distances_two_loops, X_test)\nprint 'Two loop version took %f seconds' % two_loop_time\n\none_loop_time = time_function(classifier.compute_distances_one_loop, X_test)\nprint 'One loop version took %f seconds' % one_loop_time\n\nno_loop_time = time_function(classifier.compute_distances_no_loops, X_test)\nprint 'No loop version took %f seconds' % no_loop_time\n\n# you should see significantly faster performance with the fully vectorized implementation",
"Two loop version took 27.158314 seconds\nOne loop version took 40.179075 seconds\nNo loop version took 0.529196 seconds\n"
]
],
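The fully vectorized version is usually built on the expansion ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, which lets NumPy broadcasting replace both Python loops. A sketch, again assuming the skeleton stores the training data in `self.X_train`:

```python
import numpy as np

def compute_distances_no_loops(self, X):
    # Sketch only: squared-norm expansion plus one matrix product.
    test_sq = np.sum(X ** 2, axis=1, keepdims=True)   # shape (num_test, 1)
    train_sq = np.sum(self.X_train ** 2, axis=1)      # shape (num_train,)
    cross = X.dot(self.X_train.T)                     # shape (num_test, num_train)
    # Broadcasting adds the squared norms to every entry; clip tiny negatives
    # caused by floating point error before taking the square root.
    dists = np.sqrt(np.maximum(test_sq + train_sq - 2 * cross, 0))
    return dists
```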
[
[
"### Cross-validation\n\nWe have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.",
"_____no_output_____"
]
],
[
[
"num_folds = 5\nk_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]\n\nX_train_folds = []\ny_train_folds = []\n################################################################################\n# TODO: #\n# Split up the training data into folds. After splitting, X_train_folds and #\n# y_train_folds should each be lists of length num_folds, where #\n# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #\n# Hint: Look up the numpy array_split function. #\n################################################################################\nX_train_folds = np.split(X_train, num_folds)\ny_train_folds = np.split(y_train, num_folds)\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# A dictionary holding the accuracies for different values of k that we find\n# when running cross-validation. After running cross-validation,\n# k_to_accuracies[k] should be a list of length num_folds giving the different\n# accuracy values that we found when using that value of k.\nk_to_accuracies = { k: [] for k in k_choices }\n\n################################################################################\n# TODO: #\n# Perform k-fold cross validation to find the best value of k. For each #\n# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #\n# where in each case you use all but one of the folds as training data and the #\n# last fold as a validation set. Store the accuracies for all fold and all #\n# values of k in the k_to_accuracies dictionary. #\n################################################################################\nfor i in range(num_folds):\n classifier = KNearestNeighbor()\n X_test = X_train_folds[i]\n y_test = y_train_folds[i]\n X_train = np.concatenate([ fold for j, fold in enumerate(X_train_folds) if j != i ])\n y_train = np.concatenate([ fold for j, fold in enumerate(y_train_folds) if j != i ])\n classifier.train(X_train, y_train)\n dists = classifier.compute_distances_no_loops(X_test)\n for k in k_choices:\n predict = classifier.predict_labels(dists, k=k)\n num_correct = np.sum(predict == y_test)\n accuracy = float(num_correct) / X_test.shape[0]\n k_to_accuracies[k] += [accuracy]\n \n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Print out the computed accuracies\nfor k in sorted(k_to_accuracies):\n for accuracy in k_to_accuracies[k]:\n print 'k = %d, accuracy = %f' % (k, accuracy)",
"k = 1, accuracy = 0.248438\nk = 1, accuracy = 0.237500\nk = 1, accuracy = 0.229687\nk = 1, accuracy = 0.285938\nk = 1, accuracy = 0.270313\nk = 3, accuracy = 0.228125\nk = 3, accuracy = 0.232813\nk = 3, accuracy = 0.240625\nk = 3, accuracy = 0.251563\nk = 3, accuracy = 0.248438\nk = 5, accuracy = 0.234375\nk = 5, accuracy = 0.245312\nk = 5, accuracy = 0.257812\nk = 5, accuracy = 0.271875\nk = 5, accuracy = 0.253125\nk = 8, accuracy = 0.231250\nk = 8, accuracy = 0.248438\nk = 8, accuracy = 0.281250\nk = 8, accuracy = 0.271875\nk = 8, accuracy = 0.260937\nk = 10, accuracy = 0.231250\nk = 10, accuracy = 0.257812\nk = 10, accuracy = 0.275000\nk = 10, accuracy = 0.276562\nk = 10, accuracy = 0.253125\nk = 12, accuracy = 0.225000\nk = 12, accuracy = 0.259375\nk = 12, accuracy = 0.279687\nk = 12, accuracy = 0.275000\nk = 12, accuracy = 0.242188\nk = 15, accuracy = 0.234375\nk = 15, accuracy = 0.268750\nk = 15, accuracy = 0.284375\nk = 15, accuracy = 0.257812\nk = 15, accuracy = 0.253125\nk = 20, accuracy = 0.245312\nk = 20, accuracy = 0.265625\nk = 20, accuracy = 0.290625\nk = 20, accuracy = 0.284375\nk = 20, accuracy = 0.253125\nk = 50, accuracy = 0.235937\nk = 50, accuracy = 0.243750\nk = 50, accuracy = 0.271875\nk = 50, accuracy = 0.242188\nk = 50, accuracy = 0.254688\nk = 100, accuracy = 0.240625\nk = 100, accuracy = 0.239063\nk = 100, accuracy = 0.254688\nk = 100, accuracy = 0.246875\nk = 100, accuracy = 0.246875\n"
],
[
"# plot the raw observations\nfor k in k_choices:\n accuracies = k_to_accuracies[k]\n plt.scatter([k] * len(accuracies), accuracies)\n\n# plot the trend line with error bars that correspond to standard deviation\naccuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])\naccuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])\nplt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)\nplt.title('Cross-validation on k')\nplt.xlabel('k')\nplt.ylabel('Cross-validation accuracy')\nplt.show()",
"_____no_output_____"
],
[
"# Based on the cross-validation results above, choose the best value for k, \n# retrain the classifier using all the training data, and test it on the test\n# data. You should be able to get above 28% accuracy on the test data.\nbest_k = 20\n\nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)\ny_test_pred = classifier.predict(X_test, k=best_k)\n\n# Compute and display the accuracy\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)",
"Got 162 / 500 correct => accuracy: 0.324000\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7ed8269495f35697473519737765357a3f794e3 | 77,501 | ipynb | Jupyter Notebook | Python for Finance - Code Files/103 Monte Carlo - Predicting Stock Prices - Part I/CSV/Python 2 CSV/MC Predicting Stock Prices - Part I - Solution_CSV.ipynb | siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics | f2f1e22f2d578c59f833f8f3c8b4523d91286e9e | [
"MIT"
] | 3 | 2020-03-24T12:58:37.000Z | 2020-08-03T17:22:35.000Z | Python for Finance - Code Files/103 Monte Carlo - Predicting Stock Prices - Part I/CSV/Python 2 CSV/MC Predicting Stock Prices - Part I - Solution_CSV.ipynb | siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics | f2f1e22f2d578c59f833f8f3c8b4523d91286e9e | [
"MIT"
] | null | null | null | Python for Finance - Code Files/103 Monte Carlo - Predicting Stock Prices - Part I/CSV/Python 2 CSV/MC Predicting Stock Prices - Part I - Solution_CSV.ipynb | siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics | f2f1e22f2d578c59f833f8f3c8b4523d91286e9e | [
"MIT"
] | 1 | 2021-10-19T23:59:37.000Z | 2021-10-19T23:59:37.000Z | 232.735736 | 36,292 | 0.909859 | [
[
[
"## Monte Carlo - Forecasting Stock Prices - Part I",
"_____no_output_____"
],
[
"*Suggested Answers follow (usually there are multiple ways to solve a problem in Python).*",
"_____no_output_____"
],
[
"Load the data for Microsoft (‘MSFT’) for the period ‘2000-1-1’ until today.",
"_____no_output_____"
]
],
[
[
"import numpy as np \nimport pandas as pd \nfrom pandas_datareader import data as wb \nimport matplotlib.pyplot as plt \nfrom scipy.stats import norm\n%matplotlib inline\n\ndata = pd.read_csv('D:/Python/MSFT_2000.csv', index_col = 'Date')",
"_____no_output_____"
]
],
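The CSV read above is a pre-exported snapshot, so the series stops at the export date rather than "today". A sketch of the live alternative with pandas-datareader (the 'yahoo' data source and the 'Adj Close' column name are assumptions and may not work with every pandas-datareader version):

```python
# Sketch only: pull prices up to today instead of reading the exported CSV.
import numpy as np
import pandas as pd
from pandas_datareader import data as wb

ticker = 'MSFT'
data = pd.DataFrame()
data[ticker] = wb.DataReader(ticker, data_source='yahoo', start='2000-1-1')['Adj Close']
```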
[
[
"Use the .pct_change() method to obtain the log returns of Microsoft for the designated period.",
"_____no_output_____"
]
],
[
[
"log_returns = np.log(1 + data.pct_change())",
"_____no_output_____"
],
[
"log_returns.tail()",
"_____no_output_____"
],
[
"data.plot(figsize=(10, 6));",
"_____no_output_____"
],
[
"log_returns.plot(figsize = (10, 6))",
"_____no_output_____"
]
],
[
[
"Assign the mean value of the log returns to a variable, called “U”, and their variance to a variable, called “var”. ",
"_____no_output_____"
]
],
[
[
"u = log_returns.mean()\nu",
"_____no_output_____"
],
[
"var = log_returns.var()\nvar",
"_____no_output_____"
]
],
[
[
"Calculate the drift, using the following formula: \n\n$$\ndrift = u - \\frac{1}{2} \\cdot var\n$$",
"_____no_output_____"
]
],
[
[
"drift = u - (0.5 * var)\ndrift",
"_____no_output_____"
]
],
[
[
"Store the standard deviation of the log returns in a variable, called “stdev”.",
"_____no_output_____"
]
],
[
[
"stdev = log_returns.std()\nstdev",
"_____no_output_____"
]
],
[
[
"******",
"_____no_output_____"
],
[
"Repeat this exercise for any stock of interest to you. :)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e7ed90553fbd9750c08b7565518807694c470c8d | 14,850 | ipynb | Jupyter Notebook | movie_recommendation_with_LightFM_friends_WEBAPP.ipynb | LukasSteffensen/movielens-imdb-exploration | a7de353b3a065a98820175ef58ac4621e72a9b2b | [
"MIT"
] | 30 | 2020-01-15T20:38:32.000Z | 2021-12-25T06:55:07.000Z | movie_recommendation_with_LightFM_friends_WEBAPP.ipynb | audreyanneguindon/movielens-imdb-exploration | a7de353b3a065a98820175ef58ac4621e72a9b2b | [
"MIT"
] | 5 | 2019-12-11T10:37:46.000Z | 2020-01-10T18:35:17.000Z | movie_recommendation_with_LightFM_friends_WEBAPP.ipynb | audreyanneguindon/movielens-imdb-exploration | a7de353b3a065a98820175ef58ac4621e72a9b2b | [
"MIT"
] | 15 | 2020-04-06T23:24:34.000Z | 2021-08-22T02:04:30.000Z | 30.121704 | 657 | 0.572593 | [
[
[
"# Siamese Neural Network Recommendation for Friends (for Website)",
"_____no_output_____"
],
[
"This notebook presents the final code that will be used for the Movinder [website](https://movinder.herokuapp.com/) when `Get recommendation with SiameseNN!` is selected by user.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport json\nimport datetime, time\nfrom sklearn.model_selection import train_test_split\nimport itertools\nimport os\nimport zipfile\nimport random\nimport numpy as np\n\nimport requests\nimport matplotlib.pyplot as plt\n\nimport scipy.sparse as sp\nfrom sklearn.metrics import roc_auc_score",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"## (1) Read data",
"_____no_output_____"
]
],
[
[
"movies = json.load(open('movies.json'))\nfriends = json.load(open('friends.json'))\nratings = json.load(open('ratings.json')) \nsoup_movie_features = sp.load_npz('soup_movie_features_11.npz')\nsoup_movie_features = soup_movie_features.toarray()",
"_____no_output_____"
]
],
[
[
"## (1.2) Simulate new friend's input",
"_____no_output_____"
],
[
"The new group of friends will need to provide information that will be later used for training the model and predicting the ratings they will give to other movies. The friends will have a new id `new_friend_id`. They will provide a rating specified in the dictionary with the following keys: `movie_id_ml` (id of the movie rated), `rating` (rating of that movie on the scale from 1 to 5), and `friend_id` that will be the friends id specified as `new_friend_id`. In addition to this rating information, the users will have to provide to the system the information that includes their average age in the group `friends_age` and gender `friends_gender`.",
"_____no_output_____"
]
],
[
[
"new_friend_id = len(friends)",
"_____no_output_____"
],
[
"new_ratings = [{'movie_id_ml': 302.0, 'rating': 4.0, 'friend_id': new_friend_id},\n {'movie_id_ml': 304.0, 'rating': 4.0, 'friend_id': new_friend_id},\n {'movie_id_ml': 307.0, 'rating': 4.0, 'friend_id': new_friend_id}]\nnew_ratings",
"_____no_output_____"
],
[
"new_friend = {'friend_id': new_friend_id, 'friends_age': 25.5, 'friends_gender': 0.375}\nnew_friend",
"_____no_output_____"
],
[
"# extend the existing data with this new information\nfriends.append(new_friend)\nratings.extend(new_ratings)",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"## (2) Train the LightFM Model",
"_____no_output_____"
],
[
"We will be using the [LightFM](http://lyst.github.io/lightfm/docs/index.html) implementation of SiameseNN to train our model using the user and item (i.e. movie) features. First, we create `scipy.sparse` matrices from raw data and they can be used to fit the LightFM model.",
"_____no_output_____"
]
],
[
[
"from lightfm.data import Dataset\nfrom lightfm import LightFM\nfrom lightfm.evaluation import precision_at_k\nfrom lightfm.evaluation import auc_score",
"_____no_output_____"
]
],
[
[
"## (2.1) Build ID mappings",
"_____no_output_____"
],
[
"We create a mapping between the user and item ids from our input data to indices that will be internally used by this model. This needs to be done since the LightFM works with user and items ids that are consecutive non-negative integers. Using `dataset.fit` we assign internal numerical id to every user and item we passed in.",
"_____no_output_____"
]
],
[
[
"dataset = Dataset()\n\nitem_str_for_eval = \"x['title'],x['release'], x['unknown'], x['action'], x['adventure'],x['animation'], x['childrens'], x['comedy'], x['crime'], x['documentary'], x['drama'], x['fantasy'], x['noir'], x['horror'], x['musical'],x['mystery'], x['romance'], x['scifi'], x['thriller'], x['war'], x['western'], *soup_movie_features[x['soup_id']]\"\nfriend_str_for_eval = \"x['friends_age'], x['friends_gender']\"\n\n",
"_____no_output_____"
],
[
"dataset.fit(users=(int(x['friend_id']) for x in friends),\n items=(int(x['movie_id_ml']) for x in movies),\n item_features=(eval(\"(\"+item_str_for_eval+\")\") for x in movies),\n user_features=((eval(friend_str_for_eval)) for x in friends))\n\nnum_friends, num_items = dataset.interactions_shape()\nprint(f'Mappings - Num friends: {num_friends}, num_items {num_items}.')",
"Mappings - Num friends: 192, num_items 1251.\n"
]
],
[
[
"## (2.2) Build the interactions and feature matrices",
"_____no_output_____"
],
[
"The `interactions` matrix contains interactions between `friend_id` and `movie_id_ml`. It puts 1 if friends `friend_id` rated movie `movie_id_ml`, and 0 otherwise.",
"_____no_output_____"
]
],
[
[
"(interactions, weights) = dataset.build_interactions(((int(x['friend_id']), int(x['movie_id_ml']))\n for x in ratings))\n\nprint(repr(interactions))",
"<192x1251 sparse matrix of type '<class 'numpy.int32'>'\n\twith 59123 stored elements in COOrdinate format>\n"
]
],
[
[
"The `item_features` is also a sparse matrix that contains movie ids with their corresponding features. In the item features, we include the following features: movie title, when it was released, all genres it belongs to, and vectorized representation of movie keywords, cast members, and countries it was released in.",
"_____no_output_____"
]
],
[
[
"item_features = dataset.build_item_features(((x['movie_id_ml'], \n [eval(\"(\"+item_str_for_eval+\")\")]) for x in movies) )\nprint(repr(item_features))",
"<1251x2487 sparse matrix of type '<class 'numpy.float32'>'\n\twith 2502 stored elements in Compressed Sparse Row format>\n"
]
],
[
[
"The `user_features` is also a sparse matrix that contains movie ids with their corresponding features. The user features include their age, and gender.",
"_____no_output_____"
]
],
[
[
"user_features = dataset.build_user_features(((x['friend_id'], \n [eval(friend_str_for_eval)]) for x in friends) )\nprint(repr(user_features))",
"<192x342 sparse matrix of type '<class 'numpy.float32'>'\n\twith 384 stored elements in Compressed Sparse Row format>\n"
]
],
[
[
"## (2.3) Building a model",
"_____no_output_____"
],
[
"After some hyperparameters tuning, we end up to having the best model performance with the following values:\n\n- Epocks = 150\n- Learning rate = 0.015\n- Max sampled = 11\n- Loss type = WARP\n\nReferences:\n- The WARP (Weighted Approximate-Rank Pairwise) lso for implicit feedback learning-rank. Originally implemented in [WSABIE paper](http://www.thespermwhale.com/jaseweston/papers/wsabie-ijcai.pdf).\n- Extension to apply to recommendation settings in the 2013 k-order statistic loss [paper](http://www.ee.columbia.edu/~ronw/pubs/recsys2013-kaos.pdf) in the form of the k-OS WARP loss, also implemented in LightFM.",
"_____no_output_____"
]
],
[
[
"epochs = 150\nlr = 0.015\nmax_sampled = 11\n\nloss_type = \"warp\" # \"bpr\"\n\n\nmodel = LightFM(learning_rate=lr, loss=loss_type, max_sampled=max_sampled)\n\nmodel.fit_partial(interactions, epochs=epochs, user_features=user_features, item_features=item_features)\ntrain_precision = precision_at_k(model, interactions, k=10, user_features=user_features, item_features=item_features).mean()\n\ntrain_auc = auc_score(model, interactions, user_features=user_features, item_features=item_features).mean()\n\nprint(f'Precision: {train_precision}, AUC: {train_auc}')\n",
"Precision: 0.9588541984558105, AUC: 0.9013209342956543\n"
],
[
"def predict_top_k_movies(model, friends_id, k):\n n_users, n_movies = train.shape\n if use_features:\n prediction = model.predict(friends_id, np.arange(n_movies), user_features=friends_features, item_features=item_features)#predict(model, user_id, np.arange(n_movies))\n else:\n prediction = model.predict(friends_id, np.arange(n_movies))#predict(model, user_id, np.arange(n_movies))\n \n movie_ids = np.arange(train.shape[1])\n return movie_ids[np.argsort(-prediction)][:k]",
"_____no_output_____"
],
[
"dfm = pd.DataFrame(movies)\ndfm = dfm.sort_values(by=\"movie_id_ml\")",
"_____no_output_____"
],
[
"k = 10\nfriends_id = new_friend_id\nmovie_ids = np.array(dfm.movie_id_ml.unique())#np.array(list(df_movies.movie_id_ml.unique())) #np.arange(interactions.shape[1])\nprint(movie_ids.shape)\n\nn_users, n_items = interactions.shape\n\nscores = model.predict(friends_id, np.arange(n_items), user_features=user_features, item_features=item_features)\n# scores = model.predict(friends_id, np.arange(n_items))\n\nknown_positives = movie_ids[interactions.tocsr()[friends_id].indices]\ntop_items = movie_ids[np.argsort(-scores)]\n\nprint(f\"Friends {friends_id}\")\nprint(\" Known positives:\")\n\nfor x in known_positives[:k]:\n print(f\" {x} | {dfm[dfm.movie_id_ml==x]['title'].iloc[0]}\" )\n \nprint(\" Recommended:\")\nfor x in top_items[:k]:\n print(f\" {x} | {dfm[dfm.movie_id_ml==x]['title'].iloc[0]}\" )",
"(1251,)\nFriends 191\n Known positives:\n 301 | in & out\n 302 | l.a. confidential\n 307 | the devil's advocate\n Recommended:\n 48 | hoop dreams\n 292 | rosewood\n 255 | my best friend's wedding\n 286 | the english patient\n 284 | tin cup\n 299 | hoodlum\n 125 | phenomenon\n 1 | toy story\n 315 | apt pupil\n 7 | twelve monkeys\n"
]
],
[
[
"This is an example of recommended movies output that will be used in the website to give users a movie recommendation based on the information they supplied to the model.\n\nMovinder website: [https://movinder.herokuapp.com/](https://movinder.herokuapp.com/).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7ed9409bb469bba7c74afd0c43a7ad0aa8e1e60 | 131,969 | ipynb | Jupyter Notebook | notebooks/curate_bouts-s_b1253_21-plotly-ephys.ipynb | zekearneodo/ceciestunepipe | 7e771783769816f37de44077177152175aecc2b7 | [
"MIT"
] | null | null | null | notebooks/curate_bouts-s_b1253_21-plotly-ephys.ipynb | zekearneodo/ceciestunepipe | 7e771783769816f37de44077177152175aecc2b7 | [
"MIT"
] | null | null | null | notebooks/curate_bouts-s_b1253_21-plotly-ephys.ipynb | zekearneodo/ceciestunepipe | 7e771783769816f37de44077177152175aecc2b7 | [
"MIT"
] | null | null | null | 45.272384 | 195 | 0.450136 | [
[
[
"## Searching for bouts for a day of ephys recording\n- microphone wav file is first exported in sglx_pipe-dev-sort-bouts-s_b1253_21-20210614\n- bouts are extracted in searchbout_s_b1253_21-ephys",
"_____no_output_____"
]
],
[
[
"import os\nimport glob\nimport socket\nimport logging\nimport pickle\nimport numpy as np\nimport pandas as pd\nfrom scipy.io import wavfile\nfrom scipy import signal\n\n### Fuck matplotlib, I'm using poltly now\nfrom plotly.subplots import make_subplots\nimport plotly.graph_objects as go\n\nfrom importlib import reload\n\nlogger = logging.getLogger()\nhandler = logging.StreamHandler()\nformatter = logging.Formatter(\n '%(asctime)s %(name)-12s %(levelname)-8s %(message)s')\nhandler.setFormatter(formatter)\nlogger.addHandler(handler)\nlogger.setLevel(logging.INFO)\n\nlogger.info('Running on {}'.format(socket.gethostname()))",
"2021-08-28 00:02:25,421 root INFO Running on pakhi\n"
],
[
"from ceciestunepipe.file import bcistructure as et\nfrom ceciestunepipe.util.sound import boutsearch as bs",
"_____no_output_____"
]
],
[
[
"### Get the file locations for a session (day) of recordings",
"_____no_output_____"
]
],
[
[
"reload(et)\nsess_par = {'bird': 's_b1253_21',\n 'sess': '2021-07-18',\n 'sort': 2}\n\n\nexp_struct = et.get_exp_struct(sess_par['bird'], sess_par['sess'], ephys_software='sglx')\n\nraw_folder = exp_struct['folders']['sglx']\n\nderived_folder = exp_struct['folders']['derived']\n\nbouts_folder = os.path.join(derived_folder, 'bouts_ceciestunepipe')\n\nsess_bouts_file = os.path.join(bouts_folder, 'bout_sess_auto.pickle')\nsess_bouts_curated_file = os.path.join(bouts_folder, 'bout_curated.pickle')\n\nos.makedirs(bouts_folder, exist_ok=True)",
"_____no_output_____"
],
[
"exp_struct['folders']",
"_____no_output_____"
]
],
[
[
"### load concatenated the files of the session",
"_____no_output_____"
]
],
[
[
"def read_session_auto_bouts(exp_struct):\n # list all files of the session\n # read into list of pandas dataframes and concatenate\n # read the search parameters of the first session\n # return the big pd and the search params\n derived_folder = exp_struct['folders']['derived']\n bout_pd_files = et.get_sgl_files_epochs(derived_folder, file_filter='bout_auto.pickle')\n search_params_files = et.get_sgl_files_epochs(derived_folder, file_filter='bout_search_params.pickle')\n print(bout_pd_files)\n hparams=None\n with open(search_params_files[0], 'rb') as fh:\n hparams = pickle.load(fh)\n \n bout_pd = pd.concat([pd.read_pickle(p) for p in bout_pd_files[:]])\n \n return bout_pd, hparams\n\nbout_pd, hparams = read_session_auto_bouts(exp_struct)",
"['/mnt/sphere/speech_bci/derived_data/s_b1253_21/2021-07-18/sglx/0610_g0/bout_auto.pickle', '/mnt/sphere/speech_bci/derived_data/s_b1253_21/2021-07-18/sglx/1615_g0/bout_auto.pickle']\n"
],
[
"bout_pd['file'].values",
"_____no_output_____"
]
],
[
[
"###### if it wasnt saved (which is a bad mistake), read the sampling rate from the first file in the session",
"_____no_output_____"
]
],
[
[
"def sample_rate_from_wav(wav_path):\n x, sample_rate = wavfile.read(wav_path)\n return sample_rate\n\nif hparams['sample_rate'] is None:\n one_wav_path = bpd.loc[0, 'file']\n logger.info('Sample rate not saved in parameters dict, searching it in ' + one_wav_path)\n hparams['sample_rate'] = sample_rate_from_wav(one_wav_path)",
"_____no_output_____"
],
[
"def cleanup(bout_pd: pd.DataFrame):\n ## check for empty waveforms (how woudld THAT happen???)\n bout_pd['valid_waveform'] = bout_pd['waveform'].apply(lambda x: (False if x.size==0 else True))\n \n # valid is & of all the validated criteria\n bout_pd['valid'] = bout_pd['valid_waveform']\n \n ## fill in the epoch\n bout_pd['epoch'] = bout_pd['file'].apply(lambda x: et.split_path(x)[-2])\n \n # drop not valid and reset index\n bout_pd.drop(bout_pd[bout_pd['valid']==False].index, inplace=True)\n bout_pd.reset_index(drop=True, inplace=True)\n \n # set all to 'confusing' (unchecked) and 'bout_check' false (not a bout)\n bout_pd['confusing'] = True\n bout_pd['bout_check'] = False\n\ncleanup(bout_pd)",
"2021-08-28 00:04:37,654 numexpr.utils INFO Note: NumExpr detected 32 cores but \"NUMEXPR_MAX_THREADS\" not set, so enforcing safe limit of 8.\n2021-08-28 00:04:37,655 numexpr.utils INFO NumExpr defaulting to 8 threads.\n"
],
[
"bout_pd",
"_____no_output_____"
],
[
"reload(et)",
"_____no_output_____"
]
],
[
[
"#### compute the spectrograms",
"_____no_output_____"
]
],
[
[
"bout_pd['spectrogram'] = bout_pd['waveform'].apply(lambda x: bs.gimmepower(x, hparams)[2])\nlogger.info('saving bout pandas with spectrogram to ' + sess_bouts_file)\nbout_pd.to_pickle(sess_bouts_file)",
"2021-08-28 00:05:31,806 root INFO saving bout pandas with spectrogram to /mnt/sphere/speech_bci/derived_data/s_b1253_21/2021-07-18/sglx/bouts_ceciestunepipe/bout_auto.pickle\n"
],
[
"bout_pd.head(2)",
"_____no_output_____"
],
[
"bout_pd['file'][0]",
"_____no_output_____"
]
],
[
[
"## inspect the bouts and curate them",
"_____no_output_____"
],
[
"#### visualize one bout",
"_____no_output_____"
]
],
[
[
"bout_pd.iloc[0]",
"_____no_output_____"
],
[
"import plotly.express as px\nimport plotly.graph_objects as go\n\nfrom ipywidgets import widgets",
"_____no_output_____"
],
[
"def viz_one_bout(df: pd.Series, sub_sample=1):\n # get the power and the spectrogram\n sxx = df['spectrogram'][:, ::sub_sample]\n x = df['waveform'][::sub_sample]\n \n # the trace\n tr_waveform = go.Scatter(y=x)\n figwidg_waveform = go.FigureWidget(data=[tr_waveform],\n layout= {'height': 300,'width':1000})\n\n # the spectrogram\n fig_spectrogram = px.imshow(sxx, \n labels={}, \n color_continuous_scale='Inferno',\n aspect='auto')\n\n fig_spectrogram.update_layout(width=1000, height=300, coloraxis_showscale=False)\n fig_spectrogram.update_xaxes(showticklabels=False)\n fig_spectrogram.update_yaxes(showticklabels=False)\n \n \n figwidg_spectrogram = go.FigureWidget(fig_spectrogram)\n \n display(widgets.VBox([figwidg_waveform,\n figwidg_spectrogram]))\n \n\nviz_one_bout(bout_pd.iloc[24])",
"_____no_output_____"
],
[
"bout_pd.head(2)",
"_____no_output_____"
]
],
[
[
"## use it in a widget\n",
"_____no_output_____"
],
[
"#### add a 'confusing' label, for not/sure/mixed.\nwe want to avoid having things we are not sure of in the training dataset",
"_____no_output_____"
]
],
[
[
"bout_pd.reset_index(drop=True, inplace=True)",
"_____no_output_____"
],
[
"## Set confusing by default, will only be False once asserted bout/or not\nbout_pd['confusing'] = True\nbout_pd['bout_check'] = False",
"_____no_output_____"
],
[
"### Create a counter object (count goes 1:1 to DataFrame index)\nfrom traitlets import CInt, link\n\nclass Counter(widgets.DOMWidget):\n value = CInt(0)\n value.tag(sync=True)",
"_____no_output_____"
],
[
"class VizBout():\n def __init__(self, hparams, bouts_pd):\n self.bout = None\n self.bouts_pd = bouts_pd\n self.bout_series = None\n self.is_bout = None\n self.is_confusing = None\n \n self.bout_counter = None\n self.bout_id = None\n \n self.buttons = {}\n self.m_pick = None\n \n \n self.fig_waveform = None\n self.fig_spectrogram = None\n \n self.figwidg_waveform = None\n self.figwidg_spectrogram = None\n \n self.fig_width = 2\n self.sub_sample = 10\n \n self.x = None\n self.sxx = None\n self.tr_waveform = None\n \n self.s_f = hparams['sample_rate']\n \n self.init_fig()\n self.init_widget()\n self.show()\n \n def init_fig(self):\n # the trace\n self.tr_waveform = go.Scatter(y=np.zeros(500))\n self.figwidg_waveform = go.FigureWidget(data=[self.tr_waveform],\n layout={'width': 1000, 'height':300})\n \n # the spectrogram\n self.fig_spectrogram = px.imshow(np.random.rand(500, 500), \n labels={}, \n color_continuous_scale='Inferno',\n aspect='auto')\n \n self.fig_spectrogram.update_layout(width=1000, height=300, coloraxis_showscale=False)\n self.fig_spectrogram.update_xaxes(showticklabels=False)\n self.fig_spectrogram.update_yaxes(showticklabels=False)\n self.figwidg_spectrogram = go.FigureWidget(self.fig_spectrogram)\n \n \n def init_widget(self):\n # declare elements\n # lay them out\n #\n \n self.bout_counter = Counter()\n self.is_bout = widgets.Checkbox(description='is bout')\n self.is_confusing = widgets.Checkbox(description='Not sure or mixed')\n \n self.buttons['Next'] = widgets.Button(description=\"Next\", button_style='info',\n icon='plus') \n self.buttons['Prev'] = widgets.Button(description=\"Prev\", button_style='warning',\n icon='minus')\n self.buttons['Check'] = widgets.Button(description=\"Check\", button_style='success', \n icon='check')\n self.buttons['Uncheck'] = widgets.Button(description=\"Uncheck\", button_style='danger',\n icon='wrong')\n \n [b.on_click(self.button_click) for b in self.buttons.values()]\n \n left_box = widgets.VBox([self.buttons['Prev'], self.buttons['Uncheck']])\n right_box = widgets.VBox([self.buttons['Next'], self.buttons['Check']])\n button_box = widgets.HBox([left_box, right_box])\n\n self.m_pick = widgets.IntSlider(value=0, min=0, max=self.bouts_pd.index.size-1,step=1, \n description=\"Bout candidate index\")\n \n \n control_box = widgets.HBox([button_box,\n widgets.VBox([self.is_bout, self.is_confusing]),\n self.m_pick])\n \n link((self.m_pick, 'value'), (self.bout_counter, 'value'))\n\n self.update_bout()\n \n self.is_bout.observe(self.bout_checked, names='value')\n self.is_confusing.observe(self.confusing_checked, names='value')\n self.m_pick.observe(self.slider_change, names='value')\n \n all_containers = widgets.VBox([control_box, \n self.figwidg_waveform, self.figwidg_spectrogram])\n display(all_containers)\n# display(button_box)\n# display(self.m_pick)\n# display(self.is_bout)\n# display(self.fig)\n \n def button_click(self, button): \n self.bout_id = self.bout_counter.value\n curr_bout = self.bout_counter\n \n if button.description == 'Next':\n curr_bout.value += 1\n elif button.description == 'Prev':\n curr_bout.value -= 1\n elif button.description == 'Check':\n self.bouts_pd.loc[self.bout_id, 'bout_check'] = True\n self.bouts_pd.loc[self.bout_id, 'confusing'] = False\n curr_bout.value += 1\n elif button.description == 'Uncheck':\n self.bouts_pd.loc[self.bout_id, 'bout_check'] = False\n self.bouts_pd.loc[self.bout_id, 'confusing'] = False\n curr_bout.value += 1\n \n # handle the edges of the counter\n if curr_bout.value > self.m_pick.max:\n 
curr_bout.value = 0\n \n if curr_bout.value < self.m_pick.min:\n curr_bout.value = self.m_pick.max\n \n def slider_change(self, change):\n #logger.info('slider changed')\n #self.bout_counter = change.new\n #clear_output(True)\n self.update_bout()\n self.show()\n \n def bout_checked(self, bc):\n# print \"bout checked\"\n# print bc['new']\n# print self.motiff\n self.bouts_pd.loc[self.bout_id, 'bout_check'] = bc['new']\n \n def confusing_checked(self, bc):\n# print \"bout checked\"\n# print bc['new']\n# print self.motiff\n self.bouts_pd.loc[self.bout_id, 'confusing'] = bc['new']\n \n def update_bout(self):\n self.bout_id = self.bout_counter.value\n self.bout_series = self.bouts_pd.iloc[self.bout_id]\n \n self.is_bout.value = bool(self.bout_series['bout_check'])\n self.is_confusing.value = bool(self.bout_series['confusing'])\n \n self.x = self.bout_series['waveform'][::self.sub_sample]\n self.sxx = self.bout_series['spectrogram'][::self.sub_sample]\n \n def show(self):\n #self.fig.clf()\n #self.init_fig()\n # update\n# self.update_bout()\n #plot\n #logger.info('showing')\n \n # Show the figures\n with self.figwidg_waveform.batch_update():\n self.figwidg_waveform.data[0].y = self.x\n self.figwidg_waveform.data[0].x = np.arange(self.x.size) * self.sub_sample / self.s_f \n \n with self.figwidg_spectrogram.batch_update():\n self.figwidg_spectrogram.data[0].z = np.sqrt(self.sxx[::-1])\n \n \n\nviz_bout = VizBout(hparams, bout_pd)",
"_____no_output_____"
],
[
"np.where(viz_bout.bouts_pd['bout_check']==True)[0].size",
"_____no_output_____"
]
],
[
[
"### save it",
"_____no_output_____"
]
],
[
[
"hparams",
"_____no_output_____"
],
[
"### get the curated file path\n##save to the curated file path\nviz_bout.bouts_pd.to_pickle(sess_bouts_curated_file)\nlogger.info('saved curated bout pandas to pickle {}'.format(sess_bouts_curated_file))",
"2021-08-28 00:18:56,404 root INFO saved curated bout pandas to pickle /mnt/sphere/speech_bci/derived_data/s_b1253_21/2021-07-18/sglx/bouts_ceciestunepipe/bout_curated.pickle\n"
],
[
"viz_bout.bouts_pd['file'][0]",
"_____no_output_____"
],
[
"viz_bout.bouts_pd.head(5)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e7ed95f7f4ec79a505d1003b6f0ab52e0ef62b99 | 2,896 | ipynb | Jupyter Notebook | examples/jupyter/demo221_lumped_bddc.ipynb | AdelekeBankole/LFAToolkit.jl | 2cd297b075ce6df5e89899a6cba977b63bcec61e | [
"BSD-2-Clause"
] | null | null | null | examples/jupyter/demo221_lumped_bddc.ipynb | AdelekeBankole/LFAToolkit.jl | 2cd297b075ce6df5e89899a6cba977b63bcec61e | [
"BSD-2-Clause"
] | null | null | null | examples/jupyter/demo221_lumped_bddc.ipynb | AdelekeBankole/LFAToolkit.jl | 2cd297b075ce6df5e89899a6cba977b63bcec61e | [
"BSD-2-Clause"
] | null | null | null | 24.133333 | 97 | 0.507597 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7ed98907122cf8e17b9b0913cdf9a4e5561c928 | 252,380 | ipynb | Jupyter Notebook | SimpleCNN-ShreyaRESNET50.ipynb | Daniel-Wu/HydraML | b94711f7370b5a984e47523a50cfc83eaad01edb | [
"MIT"
] | null | null | null | SimpleCNN-ShreyaRESNET50.ipynb | Daniel-Wu/HydraML | b94711f7370b5a984e47523a50cfc83eaad01edb | [
"MIT"
] | null | null | null | SimpleCNN-ShreyaRESNET50.ipynb | Daniel-Wu/HydraML | b94711f7370b5a984e47523a50cfc83eaad01edb | [
"MIT"
] | null | null | null | 260.454076 | 121,956 | 0.878172 | [
[
[
"# Simple CNN on dataset",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport keras\nimport matplotlib.pyplot as plt\nimport os \nos.environ['CUDA_VISIBLE_DEVICES'] = '1'\n\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras.models import Sequential, Model\nfrom keras.layers import Conv2D, MaxPooling2D\nfrom keras.layers import Activation, Dropout, Flatten, Dense, BatchNormalization, GlobalAveragePooling2D\nfrom keras import backend as K\nfrom keras.utils import multi_gpu_model\n",
"_____no_output_____"
],
[
"# dimensions of our images.\nimg_width, img_height = 224, 224\n\ntrain_data_dir = '../dataset/train'\nvalidation_data_dir = '../dataset/test'\nnb_train_samples = 194\nnb_validation_samples = 49\nepochs = 50\nbatch_size = 4\n\nif K.image_data_format() == 'channels_first':\n input_shape = (3, img_width, img_height)\nelse:\n input_shape = (img_width, img_height, 3)\n",
"_____no_output_____"
],
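The directories, image size and batch size defined above are typically consumed by Keras generators. A sketch of how they might be wired up (ImageDataGenerator is already imported in the first cell; the rescaling and augmentation settings here are assumptions, not values taken from later cells):

```python
# Sketch only: binary-classification generators built from the config above.
train_datagen = ImageDataGenerator(rescale=1. / 255, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')
```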
[
"from keras.applications.resnet import ResNet50\nimport numpy as np\n\nbase_model = ResNet50(weights='imagenet', input_shape=(img_width, img_height, 3), include_top = False, pooling='max')\n\n#adam = keras.optimizers.Adam(lr=1e-4)\n#model.compile(loss='binary_crossentropy',\n# optimizer=adam,\n# metrics=['accuracy'])\n\nx = base_model.output\n# x = Flatten()(x)\nx = Dense(512, activation='relu')(x)\nx = BatchNormalization()(x)\npredictions = Dense(1, activation='sigmoid')(x)\n\n# this is the model we will train\nmodel = Model(inputs=base_model.input, outputs=predictions)\n\n\nadam = keras.optimizers.Adam(lr=1e-4)\nmodel.compile(loss='binary_crossentropy',\n optimizer=adam,\n metrics=['accuracy'])\n\nmodel.summary()",
"WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py:2041: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.\n\nDownloading data from https://github.com/keras-team/keras-applications/releases/download/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5\n94773248/94765736 [==============================] - 11s 0us/step\nModel: \"model_3\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_3 (InputLayer) (None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\nconv1_pad (ZeroPadding2D) (None, 230, 230, 3) 0 input_3[0][0] \n__________________________________________________________________________________________________\nconv1_conv (Conv2D) (None, 112, 112, 64) 9472 conv1_pad[0][0] \n__________________________________________________________________________________________________\nconv1_bn (BatchNormalization) (None, 112, 112, 64) 256 conv1_conv[0][0] \n__________________________________________________________________________________________________\nconv1_relu (Activation) (None, 112, 112, 64) 0 conv1_bn[0][0] \n__________________________________________________________________________________________________\npool1_pad (ZeroPadding2D) (None, 114, 114, 64) 0 conv1_relu[0][0] \n__________________________________________________________________________________________________\npool1_pool (MaxPooling2D) (None, 56, 56, 64) 0 pool1_pad[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_conv (Conv2D) (None, 56, 56, 64) 4160 pool1_pool[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_relu (Activation (None, 56, 56, 64) 0 conv2_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block1_2_conv (Conv2D) (None, 56, 56, 64) 36928 conv2_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block1_2_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block1_2_relu (Activation (None, 56, 56, 64) 0 conv2_block1_2_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block1_0_conv (Conv2D) (None, 56, 56, 256) 16640 pool1_pool[0][0] \n__________________________________________________________________________________________________\nconv2_block1_3_conv (Conv2D) (None, 56, 56, 256) 16640 conv2_block1_2_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block1_0_bn (BatchNormali (None, 56, 56, 256) 1024 conv2_block1_0_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block1_3_bn (BatchNormali (None, 56, 56, 256) 1024 conv2_block1_3_conv[0][0] 
\n__________________________________________________________________________________________________\nconv2_block1_add (Add) (None, 56, 56, 256) 0 conv2_block1_0_bn[0][0] \n conv2_block1_3_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block1_out (Activation) (None, 56, 56, 256) 0 conv2_block1_add[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_conv (Conv2D) (None, 56, 56, 64) 16448 conv2_block1_out[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_relu (Activation (None, 56, 56, 64) 0 conv2_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block2_2_conv (Conv2D) (None, 56, 56, 64) 36928 conv2_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block2_2_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block2_2_relu (Activation (None, 56, 56, 64) 0 conv2_block2_2_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block2_3_conv (Conv2D) (None, 56, 56, 256) 16640 conv2_block2_2_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block2_3_bn (BatchNormali (None, 56, 56, 256) 1024 conv2_block2_3_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block2_add (Add) (None, 56, 56, 256) 0 conv2_block1_out[0][0] \n conv2_block2_3_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block2_out (Activation) (None, 56, 56, 256) 0 conv2_block2_add[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_conv (Conv2D) (None, 56, 56, 64) 16448 conv2_block2_out[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_relu (Activation (None, 56, 56, 64) 0 conv2_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block3_2_conv (Conv2D) (None, 56, 56, 64) 36928 conv2_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block3_2_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block3_2_relu (Activation (None, 56, 56, 64) 0 conv2_block3_2_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block3_3_conv (Conv2D) (None, 56, 56, 256) 16640 conv2_block3_2_relu[0][0] 
\n__________________________________________________________________________________________________\nconv2_block3_3_bn (BatchNormali (None, 56, 56, 256) 1024 conv2_block3_3_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block3_add (Add) (None, 56, 56, 256) 0 conv2_block2_out[0][0] \n conv2_block3_3_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block3_out (Activation) (None, 56, 56, 256) 0 conv2_block3_add[0][0] \n__________________________________________________________________________________________________\nconv3_block1_1_conv (Conv2D) (None, 28, 28, 128) 32896 conv2_block3_out[0][0] \n__________________________________________________________________________________________________\nconv3_block1_1_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block1_1_relu (Activation (None, 28, 28, 128) 0 conv3_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block1_2_conv (Conv2D) (None, 28, 28, 128) 147584 conv3_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block1_2_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block1_2_relu (Activation (None, 28, 28, 128) 0 conv3_block1_2_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block1_0_conv (Conv2D) (None, 28, 28, 512) 131584 conv2_block3_out[0][0] \n__________________________________________________________________________________________________\nconv3_block1_3_conv (Conv2D) (None, 28, 28, 512) 66048 conv3_block1_2_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block1_0_bn (BatchNormali (None, 28, 28, 512) 2048 conv3_block1_0_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block1_3_bn (BatchNormali (None, 28, 28, 512) 2048 conv3_block1_3_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block1_add (Add) (None, 28, 28, 512) 0 conv3_block1_0_bn[0][0] \n conv3_block1_3_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block1_out (Activation) (None, 28, 28, 512) 0 conv3_block1_add[0][0] \n__________________________________________________________________________________________________\nconv3_block2_1_conv (Conv2D) (None, 28, 28, 128) 65664 conv3_block1_out[0][0] \n__________________________________________________________________________________________________\nconv3_block2_1_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block2_1_relu (Activation (None, 28, 28, 128) 0 conv3_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block2_2_conv (Conv2D) (None, 28, 28, 128) 147584 conv3_block2_1_relu[0][0] 
\n__________________________________________________________________________________________________\nconv3_block2_2_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block2_2_relu (Activation (None, 28, 28, 128) 0 conv3_block2_2_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block2_3_conv (Conv2D) (None, 28, 28, 512) 66048 conv3_block2_2_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block2_3_bn (BatchNormali (None, 28, 28, 512) 2048 conv3_block2_3_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block2_add (Add) (None, 28, 28, 512) 0 conv3_block1_out[0][0] \n conv3_block2_3_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block2_out (Activation) (None, 28, 28, 512) 0 conv3_block2_add[0][0] \n__________________________________________________________________________________________________\nconv3_block3_1_conv (Conv2D) (None, 28, 28, 128) 65664 conv3_block2_out[0][0] \n__________________________________________________________________________________________________\nconv3_block3_1_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block3_1_relu (Activation (None, 28, 28, 128) 0 conv3_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block3_2_conv (Conv2D) (None, 28, 28, 128) 147584 conv3_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block3_2_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block3_2_relu (Activation (None, 28, 28, 128) 0 conv3_block3_2_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block3_3_conv (Conv2D) (None, 28, 28, 512) 66048 conv3_block3_2_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block3_3_bn (BatchNormali (None, 28, 28, 512) 2048 conv3_block3_3_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block3_add (Add) (None, 28, 28, 512) 0 conv3_block2_out[0][0] \n conv3_block3_3_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block3_out (Activation) (None, 28, 28, 512) 0 conv3_block3_add[0][0] \n__________________________________________________________________________________________________\nconv3_block4_1_conv (Conv2D) (None, 28, 28, 128) 65664 conv3_block3_out[0][0] \n__________________________________________________________________________________________________\nconv3_block4_1_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block4_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block4_1_relu (Activation (None, 28, 28, 128) 0 conv3_block4_1_bn[0][0] 
\n__________________________________________________________________________________________________\nconv3_block4_2_conv (Conv2D) (None, 28, 28, 128) 147584 conv3_block4_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block4_2_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block4_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block4_2_relu (Activation (None, 28, 28, 128) 0 conv3_block4_2_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block4_3_conv (Conv2D) (None, 28, 28, 512) 66048 conv3_block4_2_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block4_3_bn (BatchNormali (None, 28, 28, 512) 2048 conv3_block4_3_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block4_add (Add) (None, 28, 28, 512) 0 conv3_block3_out[0][0] \n conv3_block4_3_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block4_out (Activation) (None, 28, 28, 512) 0 conv3_block4_add[0][0] \n__________________________________________________________________________________________________\nconv4_block1_1_conv (Conv2D) (None, 14, 14, 256) 131328 conv3_block4_out[0][0] \n__________________________________________________________________________________________________\nconv4_block1_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block1_1_relu (Activation (None, 14, 14, 256) 0 conv4_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block1_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block1_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block1_2_relu (Activation (None, 14, 14, 256) 0 conv4_block1_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block1_0_conv (Conv2D) (None, 14, 14, 1024) 525312 conv3_block4_out[0][0] \n__________________________________________________________________________________________________\nconv4_block1_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block1_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block1_0_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block1_0_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block1_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block1_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block1_add (Add) (None, 14, 14, 1024) 0 conv4_block1_0_bn[0][0] \n conv4_block1_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block1_out (Activation) (None, 14, 14, 1024) 0 conv4_block1_add[0][0] 
\n__________________________________________________________________________________________________\nconv4_block2_1_conv (Conv2D) (None, 14, 14, 256) 262400 conv4_block1_out[0][0] \n__________________________________________________________________________________________________\nconv4_block2_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block2_1_relu (Activation (None, 14, 14, 256) 0 conv4_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block2_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block2_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block2_2_relu (Activation (None, 14, 14, 256) 0 conv4_block2_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block2_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block2_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block2_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block2_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block2_add (Add) (None, 14, 14, 1024) 0 conv4_block1_out[0][0] \n conv4_block2_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block2_out (Activation) (None, 14, 14, 1024) 0 conv4_block2_add[0][0] \n__________________________________________________________________________________________________\nconv4_block3_1_conv (Conv2D) (None, 14, 14, 256) 262400 conv4_block2_out[0][0] \n__________________________________________________________________________________________________\nconv4_block3_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block3_1_relu (Activation (None, 14, 14, 256) 0 conv4_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block3_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block3_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block3_2_relu (Activation (None, 14, 14, 256) 0 conv4_block3_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block3_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block3_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block3_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block3_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block3_add (Add) (None, 14, 14, 1024) 0 conv4_block2_out[0][0] \n conv4_block3_3_bn[0][0] 
\n__________________________________________________________________________________________________\nconv4_block3_out (Activation) (None, 14, 14, 1024) 0 conv4_block3_add[0][0] \n__________________________________________________________________________________________________\nconv4_block4_1_conv (Conv2D) (None, 14, 14, 256) 262400 conv4_block3_out[0][0] \n__________________________________________________________________________________________________\nconv4_block4_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block4_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block4_1_relu (Activation (None, 14, 14, 256) 0 conv4_block4_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block4_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block4_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block4_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block4_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block4_2_relu (Activation (None, 14, 14, 256) 0 conv4_block4_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block4_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block4_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block4_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block4_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block4_add (Add) (None, 14, 14, 1024) 0 conv4_block3_out[0][0] \n conv4_block4_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block4_out (Activation) (None, 14, 14, 1024) 0 conv4_block4_add[0][0] \n__________________________________________________________________________________________________\nconv4_block5_1_conv (Conv2D) (None, 14, 14, 256) 262400 conv4_block4_out[0][0] \n__________________________________________________________________________________________________\nconv4_block5_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block5_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block5_1_relu (Activation (None, 14, 14, 256) 0 conv4_block5_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block5_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block5_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block5_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block5_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block5_2_relu (Activation (None, 14, 14, 256) 0 conv4_block5_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block5_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block5_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block5_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block5_3_conv[0][0] 
\n__________________________________________________________________________________________________\nconv4_block5_add (Add) (None, 14, 14, 1024) 0 conv4_block4_out[0][0] \n conv4_block5_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block5_out (Activation) (None, 14, 14, 1024) 0 conv4_block5_add[0][0] \n__________________________________________________________________________________________________\nconv4_block6_1_conv (Conv2D) (None, 14, 14, 256) 262400 conv4_block5_out[0][0] \n__________________________________________________________________________________________________\nconv4_block6_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block6_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block6_1_relu (Activation (None, 14, 14, 256) 0 conv4_block6_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block6_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block6_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block6_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block6_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block6_2_relu (Activation (None, 14, 14, 256) 0 conv4_block6_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block6_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block6_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block6_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block6_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block6_add (Add) (None, 14, 14, 1024) 0 conv4_block5_out[0][0] \n conv4_block6_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block6_out (Activation) (None, 14, 14, 1024) 0 conv4_block6_add[0][0] \n__________________________________________________________________________________________________\nconv5_block1_1_conv (Conv2D) (None, 7, 7, 512) 524800 conv4_block6_out[0][0] \n__________________________________________________________________________________________________\nconv5_block1_1_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_1_relu (Activation (None, 7, 7, 512) 0 conv5_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block1_2_conv (Conv2D) (None, 7, 7, 512) 2359808 conv5_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block1_2_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_2_relu (Activation (None, 7, 7, 512) 0 conv5_block1_2_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block1_0_conv (Conv2D) (None, 7, 7, 2048) 2099200 conv4_block6_out[0][0] 
\n__________________________________________________________________________________________________\nconv5_block1_3_conv (Conv2D) (None, 7, 7, 2048) 1050624 conv5_block1_2_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block1_0_bn (BatchNormali (None, 7, 7, 2048) 8192 conv5_block1_0_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_3_bn (BatchNormali (None, 7, 7, 2048) 8192 conv5_block1_3_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_add (Add) (None, 7, 7, 2048) 0 conv5_block1_0_bn[0][0] \n conv5_block1_3_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block1_out (Activation) (None, 7, 7, 2048) 0 conv5_block1_add[0][0] \n__________________________________________________________________________________________________\nconv5_block2_1_conv (Conv2D) (None, 7, 7, 512) 1049088 conv5_block1_out[0][0] \n__________________________________________________________________________________________________\nconv5_block2_1_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block2_1_relu (Activation (None, 7, 7, 512) 0 conv5_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block2_2_conv (Conv2D) (None, 7, 7, 512) 2359808 conv5_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block2_2_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block2_2_relu (Activation (None, 7, 7, 512) 0 conv5_block2_2_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block2_3_conv (Conv2D) (None, 7, 7, 2048) 1050624 conv5_block2_2_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block2_3_bn (BatchNormali (None, 7, 7, 2048) 8192 conv5_block2_3_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block2_add (Add) (None, 7, 7, 2048) 0 conv5_block1_out[0][0] \n conv5_block2_3_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block2_out (Activation) (None, 7, 7, 2048) 0 conv5_block2_add[0][0] \n__________________________________________________________________________________________________\nconv5_block3_1_conv (Conv2D) (None, 7, 7, 512) 1049088 conv5_block2_out[0][0] \n__________________________________________________________________________________________________\nconv5_block3_1_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block3_1_relu (Activation (None, 7, 7, 512) 0 conv5_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block3_2_conv (Conv2D) (None, 7, 7, 512) 2359808 conv5_block3_1_relu[0][0] 
\n__________________________________________________________________________________________________\nconv5_block3_2_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block3_2_relu (Activation (None, 7, 7, 512) 0 conv5_block3_2_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block3_3_conv (Conv2D) (None, 7, 7, 2048) 1050624 conv5_block3_2_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block3_3_bn (BatchNormali (None, 7, 7, 2048) 8192 conv5_block3_3_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block3_add (Add) (None, 7, 7, 2048) 0 conv5_block2_out[0][0] \n conv5_block3_3_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block3_out (Activation) (None, 7, 7, 2048) 0 conv5_block3_add[0][0] \n__________________________________________________________________________________________________\nmax_pool (GlobalMaxPooling2D) (None, 2048) 0 conv5_block3_out[0][0] \n__________________________________________________________________________________________________\ndense_5 (Dense) (None, 512) 1049088 max_pool[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_3 (BatchNor (None, 512) 2048 dense_5[0][0] \n__________________________________________________________________________________________________\ndense_6 (Dense) (None, 1) 513 batch_normalization_3[0][0] \n==================================================================================================\nTotal params: 24,639,361\nTrainable params: 24,585,217\nNon-trainable params: 54,144\n__________________________________________________________________________________________________\n"
],
[
"# this is the augmentation configuration we will use for training\ntrain_datagen = ImageDataGenerator(\n rescale=1. / 255,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True,\n samplewise_center=True,\n samplewise_std_normalization=True,\n brightness_range=(0.1, 0.9))\n\n# this is the augmentation configuration we will use for testing:\n# only rescaling\ntest_datagen = ImageDataGenerator(rescale=1. / 255, samplewise_center=True,\n samplewise_std_normalization=True,)\n\ntrain_generator = train_datagen.flow_from_directory(\n train_data_dir,\n target_size=(img_width, img_height),\n batch_size=batch_size,\n class_mode='binary')\n\nvalidation_generator = test_datagen.flow_from_directory(\n validation_data_dir,\n target_size=(img_width, img_height),\n batch_size=batch_size,\n class_mode='binary')\n\n",
"Found 193 images belonging to 2 classes.\nFound 49 images belonging to 2 classes.\n"
],
[
"history = model.fit_generator(\n train_generator,\n steps_per_epoch=nb_train_samples // batch_size,\n epochs=epochs,\n validation_data=validation_generator,\n validation_steps=nb_validation_samples // batch_size)",
"Epoch 1/50\n48/48 [==============================] - 40s 829ms/step - loss: 0.7163 - acc: 0.5988 - val_loss: 1.1285 - val_acc: 0.6042\nEpoch 2/50\n48/48 [==============================] - 29s 598ms/step - loss: 0.6657 - acc: 0.6718 - val_loss: 1.1587 - val_acc: 0.5778\nEpoch 3/50\n48/48 [==============================] - 29s 596ms/step - loss: 0.5581 - acc: 0.7291 - val_loss: 0.6846 - val_acc: 0.7333\nEpoch 4/50\n48/48 [==============================] - 28s 590ms/step - loss: 0.4439 - acc: 0.7659 - val_loss: 0.8754 - val_acc: 0.6889\nEpoch 5/50\n48/48 [==============================] - 28s 580ms/step - loss: 0.3797 - acc: 0.8023 - val_loss: 1.7015 - val_acc: 0.6444\nEpoch 6/50\n48/48 [==============================] - 28s 577ms/step - loss: 0.4168 - acc: 0.8073 - val_loss: 1.0582 - val_acc: 0.6889\nEpoch 7/50\n48/48 [==============================] - 27s 561ms/step - loss: 0.2777 - acc: 0.8854 - val_loss: 0.8164 - val_acc: 0.6889\nEpoch 8/50\n48/48 [==============================] - 27s 552ms/step - loss: 0.3054 - acc: 0.8749 - val_loss: 1.0977 - val_acc: 0.6222\nEpoch 9/50\n48/48 [==============================] - 26s 537ms/step - loss: 0.3164 - acc: 0.8593 - val_loss: 0.7639 - val_acc: 0.6667\nEpoch 10/50\n48/48 [==============================] - 26s 547ms/step - loss: 0.3216 - acc: 0.8492 - val_loss: 1.0854 - val_acc: 0.6000\nEpoch 11/50\n48/48 [==============================] - 25s 524ms/step - loss: 0.2744 - acc: 0.9062 - val_loss: 0.6132 - val_acc: 0.8000\nEpoch 12/50\n48/48 [==============================] - 27s 552ms/step - loss: 0.3335 - acc: 0.8437 - val_loss: 1.1778 - val_acc: 0.6667\nEpoch 13/50\n48/48 [==============================] - 26s 532ms/step - loss: 0.2455 - acc: 0.8649 - val_loss: 1.0283 - val_acc: 0.6222\nEpoch 14/50\n48/48 [==============================] - 27s 554ms/step - loss: 0.2898 - acc: 0.8906 - val_loss: 0.8862 - val_acc: 0.7083\nEpoch 15/50\n48/48 [==============================] - 29s 614ms/step - loss: 0.2921 - acc: 0.8701 - val_loss: 0.8799 - val_acc: 0.7111\nEpoch 16/50\n48/48 [==============================] - 26s 549ms/step - loss: 0.2659 - acc: 0.8958 - val_loss: 1.0852 - val_acc: 0.6000\nEpoch 17/50\n48/48 [==============================] - 27s 553ms/step - loss: 0.3294 - acc: 0.8958 - val_loss: 0.8373 - val_acc: 0.7333\nEpoch 18/50\n48/48 [==============================] - 26s 532ms/step - loss: 0.1963 - acc: 0.9323 - val_loss: 1.3287 - val_acc: 0.6444\nEpoch 19/50\n48/48 [==============================] - 21s 444ms/step - loss: 0.1924 - acc: 0.9166 - val_loss: 1.2395 - val_acc: 0.7111\nEpoch 20/50\n48/48 [==============================] - 19s 399ms/step - loss: 0.2159 - acc: 0.9323 - val_loss: 1.0421 - val_acc: 0.6889\nEpoch 21/50\n48/48 [==============================] - 19s 403ms/step - loss: 0.3113 - acc: 0.8440 - val_loss: 1.8189 - val_acc: 0.7111\nEpoch 22/50\n48/48 [==============================] - 19s 400ms/step - loss: 0.2216 - acc: 0.9167 - val_loss: 1.1871 - val_acc: 0.6667\nEpoch 23/50\n48/48 [==============================] - 18s 379ms/step - loss: 0.2921 - acc: 0.8752 - val_loss: 1.0717 - val_acc: 0.7111\nEpoch 24/50\n48/48 [==============================] - 19s 391ms/step - loss: 0.3003 - acc: 0.8909 - val_loss: 0.7622 - val_acc: 0.7111\nEpoch 25/50\n48/48 [==============================] - 19s 396ms/step - loss: 0.2722 - acc: 0.8854 - val_loss: 0.7565 - val_acc: 0.6889\nEpoch 26/50\n48/48 [==============================] - 18s 377ms/step - loss: 0.2215 - acc: 0.8802 - val_loss: 1.0120 - val_acc: 0.6222\nEpoch 27/50\n48/48 
[==============================] - 20s 410ms/step - loss: 0.2242 - acc: 0.9062 - val_loss: 0.9107 - val_acc: 0.6875\nEpoch 28/50\n48/48 [==============================] - 21s 442ms/step - loss: 0.2708 - acc: 0.8645 - val_loss: 0.7850 - val_acc: 0.6444\nEpoch 29/50\n48/48 [==============================] - 19s 402ms/step - loss: 0.2649 - acc: 0.8857 - val_loss: 0.6502 - val_acc: 0.7556\nEpoch 30/50\n48/48 [==============================] - 19s 399ms/step - loss: 0.1987 - acc: 0.9323 - val_loss: 0.8587 - val_acc: 0.5556\nEpoch 31/50\n48/48 [==============================] - 20s 410ms/step - loss: 0.1854 - acc: 0.9169 - val_loss: 0.8794 - val_acc: 0.6222\nEpoch 32/50\n48/48 [==============================] - 18s 378ms/step - loss: 0.2531 - acc: 0.8909 - val_loss: 0.9173 - val_acc: 0.7556\nEpoch 33/50\n48/48 [==============================] - 21s 429ms/step - loss: 0.2019 - acc: 0.9479 - val_loss: 1.1515 - val_acc: 0.6000\nEpoch 34/50\n48/48 [==============================] - 20s 415ms/step - loss: 0.1721 - acc: 0.9378 - val_loss: 0.9395 - val_acc: 0.6000\nEpoch 35/50\n48/48 [==============================] - 21s 434ms/step - loss: 0.2007 - acc: 0.9218 - val_loss: 1.1680 - val_acc: 0.7111\nEpoch 36/50\n48/48 [==============================] - 25s 523ms/step - loss: 0.2500 - acc: 0.9062 - val_loss: 0.6188 - val_acc: 0.7333\nEpoch 37/50\n48/48 [==============================] - 24s 500ms/step - loss: 0.2523 - acc: 0.9118 - val_loss: 0.7521 - val_acc: 0.7333\nEpoch 38/50\n48/48 [==============================] - 23s 482ms/step - loss: 0.1867 - acc: 0.9218 - val_loss: 0.8352 - val_acc: 0.7556\nEpoch 39/50\n48/48 [==============================] - 24s 498ms/step - loss: 0.2207 - acc: 0.8857 - val_loss: 0.7915 - val_acc: 0.6667\nEpoch 40/50\n48/48 [==============================] - 24s 503ms/step - loss: 0.2219 - acc: 0.8961 - val_loss: 0.9396 - val_acc: 0.7292\nEpoch 41/50\n48/48 [==============================] - 29s 607ms/step - loss: 0.1833 - acc: 0.9271 - val_loss: 0.7488 - val_acc: 0.6889\nEpoch 42/50\n48/48 [==============================] - 25s 512ms/step - loss: 0.1286 - acc: 0.9323 - val_loss: 1.0086 - val_acc: 0.5778\nEpoch 43/50\n48/48 [==============================] - 26s 534ms/step - loss: 0.2698 - acc: 0.8906 - val_loss: 0.7913 - val_acc: 0.7111\nEpoch 44/50\n48/48 [==============================] - 25s 527ms/step - loss: 0.1924 - acc: 0.9065 - val_loss: 1.1655 - val_acc: 0.6667\nEpoch 45/50\n48/48 [==============================] - 28s 591ms/step - loss: 0.2192 - acc: 0.9166 - val_loss: 0.7385 - val_acc: 0.6889\nEpoch 46/50\n48/48 [==============================] - 20s 424ms/step - loss: 0.2735 - acc: 0.9114 - val_loss: 1.0243 - val_acc: 0.5333\nEpoch 47/50\n48/48 [==============================] - 20s 410ms/step - loss: 0.2474 - acc: 0.9271 - val_loss: 0.6580 - val_acc: 0.6889\nEpoch 48/50\n48/48 [==============================] - 23s 471ms/step - loss: 0.2075 - acc: 0.9118 - val_loss: 0.6266 - val_acc: 0.6667\nEpoch 49/50\n48/48 [==============================] - 23s 477ms/step - loss: 0.2088 - acc: 0.9427 - val_loss: 0.7290 - val_acc: 0.7111\nEpoch 50/50\n48/48 [==============================] - 21s 437ms/step - loss: 0.1955 - acc: 0.9427 - val_loss: 0.9983 - val_acc: 0.6222\n"
],
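# A sketch that derives the step counts from the generators themselves instead of
# the hard-coded nb_train_samples // batch_size used above; flow_from_directory
# reported 193 training and 49 validation images, and math.ceil keeps the last
# partial batch. Variable names reuse the ones already defined in this notebook.
import math

steps_per_epoch = math.ceil(train_generator.samples / batch_size)
validation_steps = math.ceil(validation_generator.samples / batch_size)

history = model.fit_generator(
    train_generator,
    steps_per_epoch=steps_per_epoch,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=validation_steps)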
[
"history.history.keys()",
"_____no_output_____"
],
[
"# summarize history for accuracy\nplt.plot(history.history['acc'])\nplt.plot(history.history['val_acc'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()\n# summarize history for loss\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()",
"_____no_output_____"
],
[
"from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img\n\ndatagen = ImageDataGenerator(\n rotation_range=40,\n width_shift_range=0.2,\n height_shift_range=0.2,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True,\n fill_mode='nearest',\n samplewise_center=True,\n samplewise_std_normalization=True,\n brightness_range=(0.1, 0.9))\n\nimg = load_img('../datasets/train/N/1_30_1_231.jpg') # this is a PIL image\nx = img_to_array(img) # this is a Numpy array with shape (3, 150, 150)\nx = x.reshape((1,) + x.shape) # this is a Numpy array with shape (1, 3, 150, 150)\n\n# the .flow() command below generates batches of randomly transformed images\n# and saves the results to the `preview/` directory\ni = 0\nfor batch in datagen.flow(x, batch_size=1,\n save_to_dir='preview', save_prefix='cat', save_format='jpeg'):\n i += 1\n if i > 20:\n break # otherwise the generator would loop indefinitely",
"_____no_output_____"
],
[
"x_train = []\nfor i in range(25):\n x_batch, y_batch = next(train_generator)\n x_train.append(x_batch)\n \nx_test = []\nfor i in range(2):\n x_batch, y_batch = next(validation_generator)\n x_test.append(x_batch)\n ",
"_____no_output_____"
],
[
"img = load_img('../dataset/test/N/1_30_2_382.JPG') # this is a PIL image\nplt.imshow(img)",
"_____no_output_____"
],
[
"imgNames = os.listdir('../dataset/test/N')\nfor img in imgNames: \n print(\"HERE\")\n imgname = '../dataset/test/N/' + img; \n myimg = load_img(imgname) # this is a PIL image\n plt.imshow(myimg)\n plt.show()",
"_____no_output_____"
],
[
"x_train = np.concatenate(x_train)\nx_train.shape\n\nx_test = np.concatenate(x_test)\nx_test.shape",
"_____no_output_____"
],
[
"import shap\nimport numpy as np\n\n# select a set of background examples to take an expectation over\nbackground = x_train[np.random.choice(x_train.shape[0], 10, replace=True)]\n\n# explain predictions of the model on four images\ne = shap.DeepExplainer(model, background)\n# ...or pass tensors directly\n# e = shap.DeepExplainer((model.layers[0].input, model.layers[-1].output), background)\nshap_values = e.shap_values(x_test[1:2])\n\n# plot the feature attributions\nshap.image_plot(shap_values, -x_test[1:2])",
"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n"
],
[
"x_test[1]",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7edbb830f9b98965425a4b25cf17b9a89da9f94 | 12,388 | ipynb | Jupyter Notebook | .ipynb_checkpoints/talkmap-checkpoint.ipynb | jialeishen/academicpages.github.io | 78805d4c330bb7f6168dfe693ec6e184e2046572 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/talkmap-checkpoint.ipynb | jialeishen/academicpages.github.io | 78805d4c330bb7f6168dfe693ec6e184e2046572 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/talkmap-checkpoint.ipynb | jialeishen/academicpages.github.io | 78805d4c330bb7f6168dfe693ec6e184e2046572 | [
"MIT"
] | 1 | 2018-01-24T01:35:57.000Z | 2018-01-24T01:35:57.000Z | 65.893617 | 1,009 | 0.641427 | [
[
[
"# Leaflet cluster map of talk locations\n\nRun this from the _talks/ directory, which contains .md files of all your talks. This scrapes the location YAML field from each .md file, geolocates it with geopy/Nominatim, and uses the getorg library to output data, HTML, and Javascript for a standalone cluster map.",
"_____no_output_____"
]
],
[
[
"!pip install getorg --upgrade\nimport glob\nimport getorg\nfrom geopy import Nominatim",
"Requirement already satisfied: getorg in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (0.3.1)\nRequirement already satisfied: geopy in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (from getorg) (2.2.0)\nRequirement already satisfied: pygithub in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (from getorg) (1.55)\nRequirement already satisfied: retrying in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (from getorg) (1.3.3)\nRequirement already satisfied: geographiclib<2,>=1.49 in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (from geopy->getorg) (1.52)\nRequirement already satisfied: deprecated in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (from pygithub->getorg) (1.2.13)\nRequirement already satisfied: requests>=2.14.0 in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (from pygithub->getorg) (2.22.0)\nRequirement already satisfied: pynacl>=1.4.0 in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (from pygithub->getorg) (1.5.0)\nRequirement already satisfied: pyjwt>=2.0 in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (from pygithub->getorg) (2.3.0)\nRequirement already satisfied: six>=1.7.0 in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (from retrying->getorg) (1.12.0)\nRequirement already satisfied: cffi>=1.4.1 in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (from pynacl>=1.4.0->pygithub->getorg) (1.12.3)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (from requests>=2.14.0->pygithub->getorg) (1.25.3)\nRequirement already satisfied: certifi>=2017.4.17 in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (from requests>=2.14.0->pygithub->getorg) (2019.6.16)\nRequirement already satisfied: chardet<3.1.0,>=3.0.2 in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (from requests>=2.14.0->pygithub->getorg) (3.0.4)\nRequirement already satisfied: idna<2.9,>=2.5 in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (from requests>=2.14.0->pygithub->getorg) (2.8)\nRequirement already satisfied: wrapt<2,>=1.10 in c:\\users\\jialei shen\\appdata\\roaming\\python\\python37\\site-packages (from deprecated->pygithub->getorg) (1.12.1)\nRequirement already satisfied: pycparser in c:\\users\\jialei shen\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages (from cffi>=1.4.1->pynacl>=1.4.0->pygithub->getorg) (2.19)\nIywidgets and ipyleaflet support disabled. You must be in a Jupyter notebook to use this feature.\nError raised:\nNo module named 'ipyleaflet'\nCheck that you have enabled ipyleaflet in Jupyter with:\n jupyter nbextension enable --py ipyleaflet\n"
],
[
"g = glob.glob(\"*.md\")",
"_____no_output_____"
],
[
"geocoder = Nominatim()\nlocation_dict = {}\nlocation = \"\"\npermalink = \"\"\ntitle = \"\"",
"_____no_output_____"
],
[
"\nfor file in g:\n with open(file, 'r') as f:\n lines = f.read()\n if lines.find('location: \"') > 1:\n loc_start = lines.find('location: \"') + 11\n lines_trim = lines[loc_start:]\n loc_end = lines_trim.find('\"')\n location = lines_trim[:loc_end]\n \n \n location_dict[location] = geocoder.geocode(location)\n print(location, \"\\n\", location_dict[location])\n",
"_____no_output_____"
],
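# A more defensive variant of the geocoding loop above (a sketch; the agent
# string and the 1-second delay are illustrative assumptions, not taken from the
# notebook). Nominatim's usage policy expects a custom user_agent and roughly
# one request per second, and recent geopy releases warn when the library
# default agent is used.
from geopy import Nominatim
from geopy.extra.rate_limiter import RateLimiter

geocoder = Nominatim(user_agent="talkmap-geocoder")            # identify the script to Nominatim
geocode = RateLimiter(geocoder.geocode, min_delay_seconds=1)   # throttle requests to ~1/s

location_dict = {}
for loc in ["Syracuse, NY", "Boulder, CO"]:                    # example location strings
    location_dict[loc] = geocode(loc)                          # returns None when nothing is found
    print(loc, "\n", location_dict[loc])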
[
"m = getorg.orgmap.create_map_obj()\ngetorg.orgmap.output_html_cluster_map(location_dict, folder_name=\"../talkmap\", hashed_usernames=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7edc89308b9d8b7d2bc54d366e3044015a37e3c | 52,796 | ipynb | Jupyter Notebook | .ipynb_checkpoints/test_0.3-checkpoint.ipynb | Nihal2000/sklearn-dev-0.0 | 3577c6015c7e5a2e9c3c7f0f80a38a1f5ee9c0a6 | [
"BSD-3-Clause"
] | null | null | null | .ipynb_checkpoints/test_0.3-checkpoint.ipynb | Nihal2000/sklearn-dev-0.0 | 3577c6015c7e5a2e9c3c7f0f80a38a1f5ee9c0a6 | [
"BSD-3-Clause"
] | null | null | null | .ipynb_checkpoints/test_0.3-checkpoint.ipynb | Nihal2000/sklearn-dev-0.0 | 3577c6015c7e5a2e9c3c7f0f80a38a1f5ee9c0a6 | [
"BSD-3-Clause"
] | 1 | 2021-05-09T06:47:23.000Z | 2021-05-09T06:47:23.000Z | 55.39979 | 15,420 | 0.561785 | [
[
[
"import sklearn\nprint(sklearn)\nfrom sklearn.tree import LinearDecisionTreeRegressor as ldtr\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.linear_model import Lasso\nfrom sklearn.svm import LinearSVR\nfrom sklearn.svm import SVR\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.linear_model import SGDRegressor\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.feature_selection import SelectKBest, f_regression\nimport seaborn as sns\nimport numpy as np\nimport pandas as pd\nfrom matplotlib.ticker import FormatStrFormatter\nimport matplotlib.pyplot as plt\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.metrics import mean_absolute_error\nfrom sklearn import datasets\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.datasets import make_regression\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import GradientBoostingRegressor\nplt.style.use('seaborn-whitegrid')",
"<module 'sklearn' from 'B:\\\\Research\\\\Decision Tree Regressor Combined with Linear Regressor\\\\code\\\\sklearn\\\\__init__.py'>\n"
],
[
"from sklearn.preprocessing import StandardScaler",
"_____no_output_____"
],
[
"scaler = StandardScaler()",
"_____no_output_____"
],
[
"df= pd.read_excel('./Concrete/Concrete_Data.xls')\ndf.head()",
"_____no_output_____"
],
[
"df.corr()",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"X = np.array(df.drop(['Concrete compressive strength(MPa, megapascals) '], axis= 1))\n# X = X[:, [0, 2, 6]]\ny = np.array(df['Concrete compressive strength(MPa, megapascals) '])\n\n\nX= scaler.fit_transform(X)\n\n# indices = X[:, 2] > -1\n# y= y[indices]\n# X= X[indices]\n\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)",
"_____no_output_____"
],
[
"# for i in range(X_train.shape[1]):\n# plt.scatter(X[:, i], y, color='black')\n# plt.show()",
"_____no_output_____"
],
[
"X",
"_____no_output_____"
],
[
"print(type(X_train))\nprint(X_train.shape)\nprint(X_test.shape)\nprint(y_train.shape)\nprint(y_test.shape)",
"<class 'numpy.ndarray'>\n(824, 8)\n(206, 8)\n(824,)\n(206,)\n"
],
[
"reg1 = ldtr(max_depth= 2)\nreg2 = LinearRegression(n_jobs = -1)\nreg3 = LinearSVR()\nreg4 = SVR(kernel = 'rbf')\nreg5 = DecisionTreeRegressor(max_depth = 2)\nreg6 = Lasso()\nreg7 = RandomForestRegressor(max_depth = 2,n_jobs = -1)",
"_____no_output_____"
],
[
"reg1.fit(X_train, y_train)\nreg2.fit(X_train, y_train)\nreg3.fit(X_train, y_train)\nreg4.fit(X_train, y_train)\nreg5.fit(X_train, y_train)\nreg6.fit(X_train, y_train)\nreg7.fit(X_train, y_train)",
"_____no_output_____"
],
[
"reg2.coef_",
"_____no_output_____"
],
[
"y_pred1 = reg1.predict(X_test)\ny_pred2 = reg2.predict(X_test)\ny_pred3 = reg3.predict(X_test)\ny_pred4 = reg4.predict(X_test)\ny_pred5 = reg5.predict(X_test)\ny_pred6 = reg6.predict(X_test)\ny_pred7 = reg7.predict(X_test)",
"_____no_output_____"
],
[
"print('Mean squared error: %.2f'% np.sqrt(mean_squared_error(y_test, y_pred1)))\nprint('Mean squared error: %.2f'% mean_squared_error(y_test, y_pred2))\nprint('Mean squared error: %.2f'% mean_squared_error(y_test, y_pred3))\nprint('Mean squared error: %.2f'% mean_squared_error(y_test, y_pred4))\nprint('Mean squared error: %.2f'% mean_squared_error(y_test, y_pred5))\nprint('Mean squared error: %.2f'% mean_squared_error(y_test, y_pred6))\nprint('Mean squared error: %.2f'% mean_squared_error(y_test, y_pred7))",
"Mean squared error: 6.06\nMean squared error: 116.62\nMean squared error: 118.69\nMean squared error: 103.85\nMean squared error: 147.99\nMean squared error: 135.02\nMean squared error: 133.70\n"
],
[
"def regression_model(model):\n regressor = model\n regressor.fit(X_train,y_train)\n y_pred = regressor.predict(X_test)\n rmse = np.sqrt(mean_squared_error(y_test, y_pred))\n mae = mean_absolute_error(y_test, y_pred)\n return rmse,mae\n\n\nmodel_performance = pd.DataFrame(columns = [\"Model\",\"RMSE\",\"MAE\"])\nmodel_to_evaluate = [ldtr(max_depth= 1),\n LinearRegression(n_jobs = -1),\n SVR(kernel = 'rbf'),\n DecisionTreeRegressor(max_depth = 1, min_samples_leaf= 3, min_samples_split= 6),\n Lasso(),\n RandomForestRegressor(max_depth = 1,n_jobs = -1)]\n\nfor model in model_to_evaluate:\n rmse,mae = regression_model(model)\n model_performance = model_performance.append({\"Model\" : model,\"RMSE\" : rmse,\"MAE\": mae},ignore_index = True)",
"_____no_output_____"
],
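# With only ~1030 rows (824 train + 206 test above), a single train/test split
# can make the RMSE/MAE ranking fairly noisy. A k-fold cross-validation sketch;
# it assumes a scikit-learn build that provides the
# 'neg_root_mean_squared_error' scorer and reuses the scaled X and y defined
# earlier in this notebook.
from sklearn.model_selection import cross_val_score

def cv_rmse(model, X, y, k=5):
    # the scorer returns negative RMSE, so flip the sign
    scores = cross_val_score(model, X, y, cv=k,
                             scoring='neg_root_mean_squared_error', n_jobs=-1)
    return -scores.mean(), scores.std()

# example usage with one of the models compared above
mean_rmse, std_rmse = cv_rmse(DecisionTreeRegressor(max_depth=2), X, y)
print('5-fold RMSE: %.2f +/- %.2f' % (mean_rmse, std_rmse))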
[
"model_performance",
"_____no_output_____"
],
[
"labels = ['LDTR','LR','SVR','DT','Lasso','RFR']\n\nx = np.arange(len(labels))\nwidth = 0.3\n\nfig, ax = plt.subplots()\nrects1 = ax.bar(x - width/2, model_performance['RMSE'], width, label='RMSE')\nrects2 = ax.bar(x + width/2, model_performance['MAE'], width, label='MAE')\n\nax.set_ylabel('Error',fontweight = 'bold')\nax.set_title('RMSE and MAE for Concrete Dataset')\nax.set_xticks(x)\nax.set_xticklabels(labels,fontweight = 'bold')\n# ax.set_yticklabels([0.0,2.0,4.0,6.0,8.0,10.0,12.0,14.0,16.0])\nax.yaxis.set_major_formatter(FormatStrFormatter('%.1f'))\nax.legend()\n\n\nfig.tight_layout()\nplt.savefig('B:/concrete.png')\nplt.show()",
"_____no_output_____"
],
[
"for i in range(X_train.shape[1]):\n plt.scatter(X_train[:, i], y_train, color='black')\n plt.scatter(X_train[:, i], reg1.predict(X_train), color='blue')\n plt.scatter(X_train[:, i], reg6.predict(X_train), color='green')\n plt.show()",
"_____no_output_____"
],
[
"for i in range(X_train.shape[1]):\n plt.scatter(X_test[:, i], y_test, color='black')\n plt.scatter(X_test[:, i], reg1.predict(X_test), color='blue')\n plt.scatter(X_train[:, i], reg2.predict(X_train), color='green')\n plt.show()",
"_____no_output_____"
],
[
"np.sum(np.array([1, 2]) * np.array([1, 2]))",
"_____no_output_____"
],
[
"%%time\nreg1.fit(X_train, y_train)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7edc8abc81d6a60b90196f3aedd90d62b0f2a9f | 133,448 | ipynb | Jupyter Notebook | regression_analysis.ipynb | jonowens/a_yen_for_the_future | 838b5ba7a37bb83fe3804b294adc5d9e19e29a94 | [
"MIT"
] | null | null | null | regression_analysis.ipynb | jonowens/a_yen_for_the_future | 838b5ba7a37bb83fe3804b294adc5d9e19e29a94 | [
"MIT"
] | null | null | null | regression_analysis.ipynb | jonowens/a_yen_for_the_future | 838b5ba7a37bb83fe3804b294adc5d9e19e29a94 | [
"MIT"
] | null | null | null | 258.620155 | 58,043 | 0.739007 | [
[
[
"import numpy as np\nimport pandas as pd\nfrom pathlib import Path\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"# Regression Analysis: Seasonal Effects with Sklearn Linear Regression\nIn this notebook, you will build a SKLearn linear regression model to predict Yen futures (\"settle\") returns with *lagged* Yen futures returns. ",
"_____no_output_____"
]
],
[
[
"# Futures contract on the Yen-dollar exchange rate:\n# This is the continuous chain of the futures contracts that are 1 month to expiration\nyen_futures = pd.read_csv(\n Path(\"./data/yen.csv\"), index_col=\"Date\", infer_datetime_format=True, parse_dates=True\n)\nyen_futures.head()",
"_____no_output_____"
],
[
"# Trim the dataset to begin on January 1st, 1990\nyen_futures = yen_futures.loc[\"1990-01-01\":, :]\nyen_futures.head()",
"_____no_output_____"
]
],
[
[
"# Data Preparation",
"_____no_output_____"
],
[
"### Returns",
"_____no_output_____"
]
],
[
[
"# Create a series using \"Settle\" price percentage returns, drop any nan\"s, and check the results:\n# (Make sure to multiply the pct_change() results by 100)\n# In this case, you may have to replace inf, -inf values with np.nan\"s\nyen_futures['Return'] = (yen_futures['Settle'].pct_change() *100)\nyen_futures = yen_futures.replace(-np.inf, np.nan).dropna()\nyen_futures.tail()",
"_____no_output_____"
]
],
[
[
"### Lagged Returns ",
"_____no_output_____"
]
],
[
[
"# Create a lagged return using the shift function\nyen_futures['Lagged_Return'] = yen_futures['Return'].shift()\nyen_futures = yen_futures.dropna()\nyen_futures.tail()",
"_____no_output_____"
]
],
[
[
"### Train Test Split",
"_____no_output_____"
]
],
[
[
"# Create a train/test split for the data using 2018-2019 for testing and the rest for training\ntrain = yen_futures[:'2017']\ntest = yen_futures['2018':]",
"_____no_output_____"
],
[
"# Create four dataframes:\n# X_train (training set using just the independent variables), X_test (test set of just the independent variables)\n# Y_train (training set using just the \"y\" variable, i.e., \"Futures Return\"), Y_test (test set of just the \"y\" variable):\nX_train = train['Lagged_Return'].to_frame()\nX_test = test['Lagged_Return'].to_frame()\ny_train = train['Return']\ny_test = test['Return']\n",
"_____no_output_____"
],
[
"X_train\n",
"_____no_output_____"
]
],
[
[
"# Linear Regression Model",
"_____no_output_____"
]
],
[
[
"# Create a Linear Regression model and fit it to the training data\nfrom sklearn.linear_model import LinearRegression\n\n# Fit a SKLearn linear regression using just the training set (X_train, Y_train):\nmodel = LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)\nmodel.fit(X_train, y_train)\n",
"_____no_output_____"
]
],
[
[
"# Make predictions using the Testing Data\n\nNote: We want to evaluate the model using data that it has never seen before, in this case: X_test.",
"_____no_output_____"
]
],
[
[
"# Make a prediction of \"y\" values using just the test dataset\npredictions = model.predict(X_test)\n",
"_____no_output_____"
],
[
"# Assemble actual y data (Y_test) with predicted y data (from just above) into two columns in a dataframe:\nresults = y_test.to_frame()\nresults['Predicted Return'] = predictions\nresults.head()\n",
"_____no_output_____"
],
[
"# Plot the first 20 predictions vs the true values\nresults[:20].plot(title='Return vs Predicted Return', subplots=True, figsize=(12,8))\n",
"_____no_output_____"
]
],
[
[
"# Out-of-Sample Performance\n\nEvaluate the model using \"out-of-sample\" data (X_test and y_test)",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import mean_squared_error, r2_score\n# Calculate the mean_squared_error (MSE) on actual versus predicted test \"y\"\nmse = mean_squared_error(results['Return'], results['Predicted Return'])\n\n# Using that mean-squared-error, calculate the root-mean-squared error (RMSE):\nrmse = np.sqrt(mse)\nprint(f'Out-of-Sample Root Mean Squared Error (RMSE): {rmse}')\n",
"Out-of-Sample Root Mean Squared Error (RMSE): 0.4154832784856737\n"
]
],
[
[
"# In-Sample Performance\n\nEvaluate the model using in-sample data (X_train and y_train)",
"_____no_output_____"
]
],
[
[
"# Construct a dataframe using just the \"y\" training data:\ndf_in_sample_results = y_train.to_frame()\n\n# Add a column of \"in-sample\" predictions to that dataframe: \ndf_in_sample_results['In-Sample'] = model.predict(X_train)\n\n# Calculate in-sample mean_squared_error (for comparison to out-of-sample)\nmse = mean_squared_error(df_in_sample_results['Return'], df_in_sample_results['In-Sample'])\n\n# Calculate in-sample root mean_squared_error (for comparison to out-of-sample)\nrmse = np.sqrt(mse)\n\n# Output root mean squared error\nprint(f'In-Sample Root Mean Squared Error (RMSE): {rmse}')\n",
"In-Sample Root Mean Squared Error (RMSE): 0.5963660785073426\n"
]
],
[
[
"# Conclusions",
"_____no_output_____"
],
[
"Based on the analysis and outcomes, the following information is known.\n- The out-of-sample data shows a root mean squared error (RMSE) of 0.4154832784856737.\n- The in-sample data shows a root mean squared error (RMSE) of 0.5963660785073426.\n\nBased on a comparison of these two values, the results show the model performs better when trained on data it has not been trained on.\n\nIn summary, further analysis and testing is needed such as change the sample time frame to provide when training the model.\n",
"_____no_output_____"
]
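# The suggestion above about different sample time frames can be made concrete
# with a walk-forward (time-series) evaluation instead of the single 2018-2019
# holdout. A sketch reusing the Return / Lagged_Return columns built earlier in
# this notebook; the fold count of 5 is an illustrative choice.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

X_all = yen_futures[['Lagged_Return']]
y_all = yen_futures['Return']

fold_rmse = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X_all):
    lr = LinearRegression().fit(X_all.iloc[train_idx], y_all.iloc[train_idx])
    preds = lr.predict(X_all.iloc[test_idx])
    fold_rmse.append(np.sqrt(mean_squared_error(y_all.iloc[test_idx], preds)))
print('walk-forward RMSE per fold:', np.round(fold_rmse, 4))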
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e7ede90181c7de5faea220743155d763a7ea77e8 | 943,140 | ipynb | Jupyter Notebook | StrokeClassification.ipynb | abhishek-kumar-code/HealthCare-StokePrediction | efcbde1f0d6d8a20ee1669b5ac9fd607c67d12e4 | [
"MIT"
] | 1 | 2021-05-15T09:06:53.000Z | 2021-05-15T09:06:53.000Z | StrokeClassification.ipynb | abhishek-kumar-code/HealthCare-StokePrediction | efcbde1f0d6d8a20ee1669b5ac9fd607c67d12e4 | [
"MIT"
] | null | null | null | StrokeClassification.ipynb | abhishek-kumar-code/HealthCare-StokePrediction | efcbde1f0d6d8a20ee1669b5ac9fd607c67d12e4 | [
"MIT"
] | 1 | 2019-03-07T10:43:46.000Z | 2019-03-07T10:43:46.000Z | 1,087.820069 | 257,364 | 0.953042 | [
[
[
"import pandas as pd\ndataframe = pd.read_csv(r\"C:\\Users\\Abhishek\\Desktop\\Computer Science Texas Tech University\\Fall 2018\\CS 5341 Pattern Recognition\\Healthcare Project\\train_2v.csv\")\n",
"_____no_output_____"
],
[
"print(dataframe.head(5))",
" id gender age hypertension heart_disease ever_married \\\n0 30669 Male 3.0 0 0 No \n1 30468 Male 58.0 1 0 Yes \n2 16523 Female 8.0 0 0 No \n3 56543 Female 70.0 0 0 Yes \n4 46136 Male 14.0 0 0 No \n\n work_type Residence_type avg_glucose_level bmi smoking_status \\\n0 children Rural 95.12 18.0 NaN \n1 Private Urban 87.96 39.2 never smoked \n2 Private Urban 110.89 17.6 NaN \n3 Private Rural 69.04 35.9 formerly smoked \n4 Never_worked Rural 161.28 19.1 NaN \n\n stroke \n0 0 \n1 0 \n2 0 \n3 0 \n4 0 \n"
],
[
"# take a look at the outcome variable stroke\nprint(dataframe['stroke'].value_counts())",
"0 42617\n1 783\nName: stroke, dtype: int64\n"
],
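# The counts above (42,617 non-stroke vs 783 stroke rows) show a heavily
# imbalanced target, so plain accuracy will be misleading for any classifier
# trained on it. A small sketch for deriving balanced class weights that can be
# passed to most scikit-learn classifiers via their class_weight parameter:
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y_stroke = dataframe['stroke'].values
classes = np.unique(y_stroke)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=y_stroke)
print(dict(zip(classes, np.round(weights, 3))))  # roughly {0: 0.51, 1: 27.7}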
[
"print(dataframe.isnull().sum().sort_values(ascending=False).head())",
"smoking_status 13292\nbmi 1462\nstroke 0\navg_glucose_level 0\nResidence_type 0\ndtype: int64\n"
],
[
"# removing nans from bmi using mean or median\nprint(dataframe['bmi'].mean())\nprint(dataframe['bmi'].median())",
"28.605038390004545\n27.7\n"
],
[
"# decided to replace with mean\ndataframe['bmi'].fillna(dataframe['bmi'].mean(), inplace=True)",
"_____no_output_____"
],
[
"print(dataframe.isnull().sum().sort_values(ascending=False).head())",
"smoking_status 13292\nstroke 0\nbmi 0\navg_glucose_level 0\nResidence_type 0\ndtype: int64\n"
],
[
"dataframe = dataframe.dropna(subset=['smoking_status']) ",
"_____no_output_____"
],
[
"print(dataframe.isnull().sum().sort_values(ascending=False).head())",
"stroke 0\nsmoking_status 0\nbmi 0\navg_glucose_level 0\nResidence_type 0\ndtype: int64\n"
],
[
"dataframe_clean = dataframe\nprint (dataframe_clean.head(5))",
" id gender age hypertension heart_disease ever_married \\\n1 30468 Male 58.0 1 0 Yes \n3 56543 Female 70.0 0 0 Yes \n6 52800 Female 52.0 0 0 Yes \n7 41413 Female 75.0 0 1 Yes \n8 15266 Female 32.0 0 0 Yes \n\n work_type Residence_type avg_glucose_level bmi smoking_status \\\n1 Private Urban 87.96 39.2 never smoked \n3 Private Rural 69.04 35.9 formerly smoked \n6 Private Urban 77.59 17.7 formerly smoked \n7 Self-employed Rural 243.53 27.0 never smoked \n8 Private Rural 77.67 32.3 smokes \n\n stroke \n1 0 \n3 0 \n6 0 \n7 0 \n8 0 \n"
],
[
"def gendervalue(a):\n if a == 'Male':\n return 1\n if a == 'Female':\n return 0\n if a == 'Other':\n return 3\n \ndataframe_clean['gender'] = dataframe_clean['gender'].apply(gendervalue)\n \n\ndef work_type_class(a):\n if a == 'Private':\n return 0\n if a == 'Self-employed':\n return 1\n if a == 'children':\n return 2\n if a == 'Govt_job':\n return 3\n if a == 'Never_worked':\n return 4\n \ndataframe_clean['work_type'] = dataframe_clean['work_type'].apply(work_type_class)\n\n\ndef ever_married_value(a):\n if a == 'Yes':\n return 1\n if a == 'No':\n return 0\n \ndataframe_clean['ever_married'] = dataframe_clean['ever_married'].apply(ever_married_value)",
"_____no_output_____"
],
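# The if/else helper functions above are equivalent to dictionary lookups; the
# same encoding can be written more compactly with Series.map (an alternative to
# the cell above; run one or the other, since after the apply calls the columns
# are already numeric):
gender_map = {'Male': 1, 'Female': 0, 'Other': 3}
work_map = {'Private': 0, 'Self-employed': 1, 'children': 2,
            'Govt_job': 3, 'Never_worked': 4}
married_map = {'Yes': 1, 'No': 0}

dataframe_clean['gender'] = dataframe_clean['gender'].map(gender_map)
dataframe_clean['work_type'] = dataframe_clean['work_type'].map(work_map)
dataframe_clean['ever_married'] = dataframe_clean['ever_married'].map(married_map)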
[
"def Residence_type_value(a):\n if a == 'Urban':\n return 1\n if a == 'Rural':\n return 0\n \ndataframe_clean['Residence_type'] = dataframe_clean['Residence_type'].apply(Residence_type_value)\n\ndef smoking_status_class(a):\n if a == 'never smoked':\n return 0\n if a == 'formerly smoked':\n return 1\n if a == 'smokes':\n return 2\n \ndataframe_clean['smoking_status'] = dataframe_clean['smoking_status'].apply(smoking_status_class)\n\nprint(dataframe_clean.head(5))",
" id gender age hypertension heart_disease ever_married work_type \\\n1 30468 1 58.0 1 0 1 0 \n3 56543 0 70.0 0 0 1 0 \n6 52800 0 52.0 0 0 1 0 \n7 41413 0 75.0 0 1 1 1 \n8 15266 0 32.0 0 0 1 0 \n\n Residence_type avg_glucose_level bmi smoking_status stroke \n1 1 87.96 39.2 0 0 \n3 0 69.04 35.9 1 0 \n6 1 77.59 17.7 1 0 \n7 0 243.53 27.0 0 0 \n8 0 77.67 32.3 2 0 \n"
],
[
"import numpy as np\nfrom pandas import Series, DataFrame\n\nfrom pandas.tools.plotting import scatter_matrix\n\nimport matplotlib.pyplot as plt\nfrom pylab import rcParams\nimport seaborn as sb",
"_____no_output_____"
],
[
"%matplotlib inline\nrcParams['figure.figsize']=10,8\nsb.set_style('whitegrid')",
"_____no_output_____"
],
[
"dataframe_clean.columns = ['id','gender','age','hypertension','heart_disease','ever_married','work_type','Residence_type','avg_glucose_level', 'bmi','smoking_status','stroke']\ndataframe_clean.index = dataframe_clean.id\nbmi = dataframe_clean['bmi']\nplt.xlabel('bmi')\nplt.ylabel('frequency')\nbmi.plot(kind ='hist')",
"_____no_output_____"
],
[
"avg_glucose_level = dataframe_clean['avg_glucose_level']\nplt.xlabel('bmi')\nplt.ylabel('frequency')\navg_glucose_level.plot(kind ='hist')",
"_____no_output_____"
],
[
"dataframe_clean.plot(kind='scatter', x='id', y='bmi', c=['darkgray'], s=1)",
"_____no_output_____"
],
[
"dataframe.plot(kind='scatter', x='id', y='avg_glucose_level', c=['darkgray'], s=1)",
"_____no_output_____"
],
[
"sb.regplot(x='id', y='bmi', data=dataframe_clean, scatter=True, scatter_kws={'s':2}) ",
"C:\\Users\\Abhishek\\Anaconda3\\lib\\site-packages\\scipy\\stats\\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval\n"
],
[
"sb.regplot(x='id', y='avg_glucose_level', data=dataframe_clean, scatter=True, scatter_kws={'s':2}) ",
"C:\\Users\\Abhishek\\Anaconda3\\lib\\site-packages\\scipy\\stats\\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval\n"
],
[
"from pandas import Series, DataFrame\n\nfrom pandas.tools.plotting import scatter_matrix",
"_____no_output_____"
],
[
"%matplotlib inline\nrcParams['figure.figsize']=14,7\nsb.set_style('whitegrid')",
"_____no_output_____"
],
[
"dataframe_new = dataframe.drop('id', 1)\nprint(dataframe_new.head(5))",
" gender age hypertension heart_disease ever_married work_type \\\nid \n30468 1 58.0 1 0 1 0 \n56543 0 70.0 0 0 1 0 \n52800 0 52.0 0 0 1 0 \n41413 0 75.0 0 1 1 1 \n15266 0 32.0 0 0 1 0 \n\n Residence_type avg_glucose_level bmi smoking_status stroke \nid \n30468 1 87.96 39.2 0 0 \n56543 0 69.04 35.9 1 0 \n52800 1 77.59 17.7 1 0 \n41413 0 243.53 27.0 0 0 \n15266 0 77.67 32.3 2 0 \n"
],
[
"dataframe_clean_new = dataframe_clean.drop('id', 1)\nprint(dataframe_clean_new.head(5))",
" gender age hypertension heart_disease ever_married work_type \\\nid \n30468 1 58.0 1 0 1 0 \n56543 0 70.0 0 0 1 0 \n52800 0 52.0 0 0 1 0 \n41413 0 75.0 0 1 1 1 \n15266 0 32.0 0 0 1 0 \n\n Residence_type avg_glucose_level bmi smoking_status stroke \nid \n30468 1 87.96 39.2 0 0 \n56543 0 69.04 35.9 1 0 \n52800 1 77.59 17.7 1 0 \n41413 0 243.53 27.0 0 0 \n15266 0 77.67 32.3 2 0 \n"
],
[
"green_diamond = dict(markerfacecolor='g', marker='D')\ndataframe_clean_new.boxplot(flierprops=green_diamond)\nplt.plot()",
"_____no_output_____"
],
[
"dataframe_clean.columns = ['id','gender','age','hypertension','heart_disease','ever_married','work_type','Residence_type','avg_glucose_level', 'bmi','smoking_status','stroke']\nX = dataframe_clean.iloc[:,0:10].values\nY = dataframe_clean.iloc[:,10].values\n\ndataframe_clean[:5]\nprint (X)",
"[[3.0468e+04 1.0000e+00 5.8000e+01 ... 1.0000e+00 8.7960e+01 3.9200e+01]\n [5.6543e+04 0.0000e+00 7.0000e+01 ... 0.0000e+00 6.9040e+01 3.5900e+01]\n [5.2800e+04 0.0000e+00 5.2000e+01 ... 1.0000e+00 7.7590e+01 1.7700e+01]\n ...\n [2.8375e+04 0.0000e+00 8.2000e+01 ... 1.0000e+00 9.1940e+01 2.8900e+01]\n [2.7973e+04 1.0000e+00 4.0000e+01 ... 1.0000e+00 9.9160e+01 3.3200e+01]\n [3.6271e+04 0.0000e+00 8.2000e+01 ... 1.0000e+00 7.9480e+01 2.0600e+01]]\n"
],
[
"pd.options.display.float_format = '{:,.1f}'.format\n\nX_df = pd.DataFrame(X)\n\nprint (X_df.describe())",
" 0 1 2 3 4 5 6 7 \\\ncount 30,108.0 30,108.0 30,108.0 30,108.0 30,108.0 30,108.0 30,108.0 30,108.0 \nmean 36,596.9 0.4 47.9 0.1 0.1 0.7 0.7 0.5 \nstd 21,119.3 0.5 18.8 0.3 0.2 0.4 1.1 0.5 \nmin 1.0 0.0 10.0 0.0 0.0 0.0 0.0 0.0 \n25% 18,254.2 0.0 33.0 0.0 0.0 0.0 0.0 0.0 \n50% 36,775.5 0.0 48.0 0.0 0.0 1.0 0.0 1.0 \n75% 54,861.8 1.0 62.0 0.0 0.0 1.0 1.0 1.0 \nmax 72,943.0 3.0 82.0 1.0 1.0 1.0 4.0 1.0 \n\n 8 9 \ncount 30,108.0 30,108.0 \nmean 107.2 30.0 \nstd 46.0 7.1 \nmin 55.0 10.1 \n25% 77.8 25.1 \n50% 92.4 28.7 \n75% 114.5 33.6 \nmax 291.1 92.0 \n"
],
[
"x = dataframe.drop('stroke', 1)\nprint(x.head(5))",
" id gender age hypertension heart_disease ever_married \\\nid \n30468 30468 1 58.0 1 0 1 \n56543 56543 0 70.0 0 0 1 \n52800 52800 0 52.0 0 0 1 \n41413 41413 0 75.0 0 1 1 \n15266 15266 0 32.0 0 0 1 \n\n work_type Residence_type avg_glucose_level bmi smoking_status \nid \n30468 0 1 88.0 39.2 0 \n56543 0 0 69.0 35.9 1 \n52800 0 1 77.6 17.7 1 \n41413 1 0 243.5 27.0 0 \n15266 0 0 77.7 32.3 2 \n"
],
[
"y = dataframe['stroke']\nprint(y.head(5))",
"id\n30468 0\n56543 0\n52800 0\n41413 0\n15266 0\nName: stroke, dtype: int64\n"
],
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\ndef plot_histogram_dv(x, y):\n plt.hist(list(x[y == 0]), alpha=0.5, label='Stroke=0')\n plt.hist(list(x[y == 1]), alpha=0.5, label='Stroke=1')\n plt.title(\"Histogram of '{var_name}' by Stroke Category\".format(var_name=x.name))\n plt.xlabel(\"Value\")\n plt.ylabel(\"Frequency\")\n plt.legend(loc='upper right')\n plt.show()\n\n\nplot_histogram_dv(x['age'], y)\nplot_histogram_dv(x['bmi'], y)\nplot_histogram_dv(x['hypertension'], y)\nplot_histogram_dv(x['avg_glucose_level'], y)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7edfb906dd3329dfe292866bad935c5de586296 | 41,112 | ipynb | Jupyter Notebook | Module5/Module5 - Lab6.ipynb | jacburge/pythonfordatascience | 81f27f84dc9f4c6ba9f7c8039f8ebeb87afe767c | [
"MIT"
] | null | null | null | Module5/Module5 - Lab6.ipynb | jacburge/pythonfordatascience | 81f27f84dc9f4c6ba9f7c8039f8ebeb87afe767c | [
"MIT"
] | null | null | null | Module5/Module5 - Lab6.ipynb | jacburge/pythonfordatascience | 81f27f84dc9f4c6ba9f7c8039f8ebeb87afe767c | [
"MIT"
] | null | null | null | 36.350133 | 1,882 | 0.461909 | [
[
[
"# DAT210x - Programming with Python for DS",
"_____no_output_____"
],
[
"## Module5- Lab6",
"_____no_output_____"
]
],
[
[
"import random, math\nimport pandas as pd\nimport numpy as np\nimport scipy.io\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.decomposition import PCA\nfrom sklearn import manifold\nfrom sklearn.neighbors import KNeighborsClassifier\n\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nmatplotlib.style.use('ggplot') # Look Pretty\n\n\n# Leave this alone until indicated:\nTest_PCA = False",
"_____no_output_____"
]
],
[
[
"### A Convenience Function",
"_____no_output_____"
],
[
"This method is for your visualization convenience only. You aren't expected to know how to put this together yourself, although you should be able to follow the code by now:",
"_____no_output_____"
]
],
[
[
"def Plot2DBoundary(DTrain, LTrain, DTest, LTest):\n # The dots are training samples (img not drawn), and the pics are testing samples (images drawn)\n # Play around with the K values. This is very controlled dataset so it should be able to get perfect classification on testing entries\n # Play with the K for isomap, play with the K for neighbors. \n\n fig = plt.figure()\n ax = fig.add_subplot(111)\n ax.set_title('Transformed Boundary, Image Space -> 2D')\n\n padding = 0.1 # Zoom out\n resolution = 1 # Don't get too detailed; smaller values (finer rez) will take longer to compute\n colors = ['blue','green','orange','red']\n\n\n # ------\n\n # Calculate the boundaries of the mesh grid. The mesh grid is\n # a standard grid (think graph paper), where each point will be\n # sent to the classifier (KNeighbors) to predict what class it\n # belongs to. This is why KNeighbors has to be trained against\n # 2D data, so we can produce this countour. Once we have the \n # label for each point on the grid, we can color it appropriately\n # and plot it.\n x_min, x_max = DTrain[:, 0].min(), DTrain[:, 0].max()\n y_min, y_max = DTrain[:, 1].min(), DTrain[:, 1].max()\n x_range = x_max - x_min\n y_range = y_max - y_min\n x_min -= x_range * padding\n y_min -= y_range * padding\n x_max += x_range * padding\n y_max += y_range * padding\n\n # Using the boundaries, actually make the 2D Grid Matrix:\n xx, yy = np.meshgrid(np.arange(x_min, x_max, resolution),\n np.arange(y_min, y_max, resolution))\n\n # What class does the classifier say about each spot on the chart?\n # The values stored in the matrix are the predictions of the model\n # at said location:\n Z = model.predict(np.c_[xx.ravel(), yy.ravel()])\n Z = Z.reshape(xx.shape)\n\n # Plot the mesh grid as a filled contour plot:\n plt.contourf(xx, yy, Z, cmap=plt.cm.terrain, z=-100)\n\n\n # ------\n\n # When plotting the testing images, used to validate if the algorithm\n # is functioning correctly, size them as 5% of the overall chart size\n x_size = x_range * 0.05\n y_size = y_range * 0.05\n\n # First, plot the images in your TEST dataset\n img_num = 0\n for index in LTest.index:\n # DTest is a regular NDArray, so you'll iterate over that 1 at a time.\n x0, y0 = DTest[img_num,0]-x_size/2., DTest[img_num,1]-y_size/2.\n x1, y1 = DTest[img_num,0]+x_size/2., DTest[img_num,1]+y_size/2.\n\n # DTest = our images isomap-transformed into 2D. But we still want\n # to plot the original image, so we look to the original, untouched\n # dataset (at index) to get the pixels:\n img = df.iloc[index,:].reshape(num_pixels, num_pixels)\n ax.imshow(img,\n aspect='auto',\n cmap=plt.cm.gray,\n interpolation='nearest',\n zorder=100000,\n extent=(x0, x1, y0, y1),\n alpha=0.8)\n img_num += 1\n\n\n # Plot your TRAINING points as well... as points rather than as images\n for label in range(len(np.unique(LTrain))):\n indices = np.where(LTrain == label)\n ax.scatter(DTrain[indices, 0], DTrain[indices, 1], c=colors[label], alpha=0.8, marker='o')\n\n # Plot\n plt.show() ",
"_____no_output_____"
]
],
[
[
"### The Assignment",
"_____no_output_____"
],
[
"Use the same code from Module4/assignment4.ipynb to load up the `face_data.mat` file into a dataframe called `df`. Be sure to calculate the `num_pixels` value, and to rotate the images to being right-side-up instead of sideways. This was demonstrated in the [Lab Assignment 4](https://github.com/authman/DAT210x/blob/master/Module4/assignment4.ipynb) code.",
"_____no_output_____"
]
],
[
[
"# .. your code here ..\nmat = scipy.io.loadmat('Datasets/face_data.mat')\ndf = pd.DataFrame(mat['images']).T\nnum_images, num_pixels = df.shape\nnum_pixels = int(math.sqrt(num_pixels))\n\n# Rotate the pictures, so we don't have to crane our necks:\nfor i in range(num_images):\n df.loc[i,:] = df.loc[i,:].reshape(num_pixels, num_pixels).T.reshape(-1)",
"_____no_output_____"
]
],
[
[
"Load up your face_labels dataset. It only has a single column, and you're only interested in that single column. You will have to slice the column out so that you have access to it as a \"Series\" rather than as a \"Dataframe\". This was discussed in the the \"Slicin'\" lecture of the \"Manipulating Data\" reading on the course website. Use an appropriate indexer to take care of that. Be sure to print out the labels and compare what you see to the raw `face_labels.csv` so you know you loaded it correctly.",
"_____no_output_____"
]
],
[
[
"# .. your code here ..\nface_labels = pd.read_csv('Datasets/face_labels.csv',header=None)\nlabel = face_labels.iloc[:, 0]\nlen(label)",
"_____no_output_____"
],
[
"df.shape\ndf.head()",
"_____no_output_____"
],
[
"y.head",
"_____no_output_____"
]
],
[
[
"Do `train_test_split`. Use the same code as on the EdX platform in the reading material, but set the random_state=7 for reproducibility, and the test_size to 0.15 (150%). Your labels are actually passed in as a series (instead of as an NDArray) so that you can access their underlying indices later on. This is necessary so you can find your samples in the original dataframe. The convenience methods we've written for you that handle drawing expect this, so that they can plot your testing data as images rather than as points:",
"_____no_output_____"
]
],
[
[
"# .. your code here ..\nx_train, x_test, y_train, y_test = train_test_split(df, label, test_size=0.15, random_state=7)",
"_____no_output_____"
]
],
[
[
"### Dimensionality Reduction",
"_____no_output_____"
]
],
[
[
"if Test_PCA:\n # INFO: PCA is used *before* KNeighbors to simplify your high dimensionality\n # image samples down to just 2 principal components! A lot of information\n # (variance) is lost during the process, as I'm sure you can imagine. But\n # you have to drop the dimension down to two, otherwise you wouldn't be able\n # to visualize a 2D decision surface / boundary. In the wild, you'd probably\n # leave in a lot more dimensions, which is better for higher accuracy, but\n # worse for visualizing the decision boundary;\n #\n # Your model should only be trained (fit) against the training data (data_train)\n # Once you've done this, you need use the model to transform both data_train\n # and data_test from their original high-D image feature space, down to 2D\n\n\n # TODO: Implement PCA here. ONLY train against your training data, but\n # transform both your training + test data, storing the results back into\n # data_train, and data_test.\n \n # .. your code here ..\n \n pca = PCA(n_components=2, svd_solver='full')\n pca.fit(x_train)\n\n data_train = pca.transform(x_train)\n data_test = pca.transform(x_test)\n\nelse:\n # INFO: Isomap is used *before* KNeighbors to simplify your high dimensionality\n # image samples down to just 2 components! A lot of information has been is\n # lost during the process, as I'm sure you can imagine. But if you have\n # non-linear data that can be represented on a 2D manifold, you probably will\n # be left with a far superior dataset to use for classification. Plus by\n # having the images in 2D space, you can plot them as well as visualize a 2D\n # decision surface / boundary. In the wild, you'd probably leave in a lot more\n # dimensions, which is better for higher accuracy, but worse for visualizing the\n # decision boundary;\n \n # Your model should only be trained (fit) against the training data (data_train)\n # Once you've done this, you need use the model to transform both data_train\n # and data_test from their original high-D image feature space, down to 2D\n\n \n # TODO: Implement Isomap here. ONLY train against your training data, but\n # transform both your training + test data, storing the results back into\n # data_train, and data_test.\n \n # .. your code here ..\n iso = manifold.Isomap(n_neighbors=5, n_components=2)\n iso.fit(x_train)\n data_train = iso.transform(x_train)\n data_test = iso.transform(x_test)",
"_____no_output_____"
]
],
[
[
"Implement `KNeighborsClassifier` here. You can use any K value from 1 through 20, so play around with it and attempt to get good accuracy. Fit the classifier against your training data and labels.",
"_____no_output_____"
]
],
[
[
"# .. your code here ..\nknn = KNeighborsClassifier(n_neighbors=3)\nknn = knn.fit(x_train, y_train)\nknn.score(x_test, y_test)",
"_____no_output_____"
]
],
[
[
"Calculate and display the accuracy of the testing set (data_test and label_test):",
"_____no_output_____"
]
],
[
[
"# .. your code here ..\nknn.score(x_test, y_test)\n\nscores = pd.DataFrame(columns=['n_neighbors', 'model_score'])\ntype(scores); scores.dtypes; scores.shape; scores.head(3)\nfor i in range(1, 21): # try K value from 1 through 20 in an attempt to find good accuracy\n score = KNeighborsClassifier(n_neighbors=i).fit(x_train, y_train).score(x_test, y_test)\n scores.loc[i-1] = [int(i), score] # or scores.loc[len(scores)] = [int(i), score]\n\nscores.model_score.unique() # | scores['model_score'] | scores.loc[:, 'model_score'] | scores.iloc[:, 1]\nscores.groupby('model_score').model_score.unique(); print(' ') # prints unique values with all decimal points\nscores.groupby('model_score').n_neighbors.unique(); print(' ')\nscores.groupby('model_score').n_neighbors.nunique()",
" \n \n"
],
[
"scores\n ",
"_____no_output_____"
]
],
[
[
"Let's chart the combined decision boundary, the training data as 2D plots, and the testing data as small images so we can visually validate performance:",
"_____no_output_____"
]
],
[
[
"Plot2DBoundary(x_train, x_train, y_test, y_test)",
"_____no_output_____"
]
],
[
[
"After submitting your answers, experiment with using using PCA instead of ISOMap. Are the results what you expected? Also try tinkering around with the test/train split percentage from 10-20%. Notice anything?",
"_____no_output_____"
]
],
[
[
"# .. your code changes above ..",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7edff291bc1c070edd950efa235d636b0e1854c | 104,683 | ipynb | Jupyter Notebook | src/Chapitre3/figure/Figure_HeatFlux.ipynb | antoinetavant/PhD_thesis_manuscript | 1fdaf99356f75abc488edf1f30b5dd65f22bcdca | [
"Unlicense"
] | null | null | null | src/Chapitre3/figure/Figure_HeatFlux.ipynb | antoinetavant/PhD_thesis_manuscript | 1fdaf99356f75abc488edf1f30b5dd65f22bcdca | [
"Unlicense"
] | null | null | null | src/Chapitre3/figure/Figure_HeatFlux.ipynb | antoinetavant/PhD_thesis_manuscript | 1fdaf99356f75abc488edf1f30b5dd65f22bcdca | [
"Unlicense"
] | null | null | null | 199.39619 | 30,288 | 0.905658 | [
[
[
"# PIC data",
"_____no_output_____"
]
],
[
[
"from astropy.constants import m_e, e, k_B\nk = k_B.value\nme = m_e.value\nq = e.value\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport json\n%matplotlib notebook\n\nfrom scipy.interpolate import interp1d\nfrom math import ceil\nplt.style.use(\"presentation\")",
"_____no_output_____"
],
[
"with open(\"NewPic1D.dat\", \"r\") as f:\n dataPIC = json.load(f)\n# with open(\"PIC_data.dat\", \"r\") as f:\n# dataPIC = json.load(f)\n \nwith open(\"NewPIC_EVDFs.dat\", \"r\") as f:\n data = json.load(f)\n# with open(\"PIC_EVDFs.dat\", \"r\") as f:\n# data = json.load(f)\n \nprint(data.keys())\nprint(\"~~~~~~~~~~~~~~~ \\n\")\nprint(data[\"info\"])\nprint(\"~~~~~~~~~~~~~~~ \\n\")\nprint(\"Run disponibles\")\nfor k in [\"0\",\"1\",\"2\"]:\n run = data[k]\n print(k,\" p = \",run[\"p\"], \"mTorr\")\n \ndx = dataPIC[\"0\"][\"dx\"]",
"dict_keys(['info', '0', '1', '2', '3', '4', '5'])\n~~~~~~~~~~~~~~~ \n\nInformations\np: pressur un mTorr\nprobnames: list of the names of the probes (also keys of the dict)\nprob_center: center of the prob\nin the prob data:\n position: the coords of the bin :ymin, ymax\n absciss: the velocities of the EVDFs (Vx, Vy, Vz)\n EVDF: the EVDFs (Vx, Vy, Vz)\n\n~~~~~~~~~~~~~~~ \n\nRun disponibles\n0 p = 0.1 mTorr\n1 p = 2.0 mTorr\n2 p = 10.0 mTorr\n"
],
[
"k = '0'\nprobnames = np.array(data[k][\"probnames\"])\nprob_center = np.array(data[k][\"prob_center\"])\nprob_y0 = np.array(data[k][\"prob_y0\"])\nprob_y1 = np.array(data[k][\"prob_y1\"])\n\nprint(probnames)\nprint(prob_center)\ndx = data[k][\"dx\"]*1000\n",
"['001' '002' '003' '004' '005' '006' '007' '008' '009' '010' '011' '012']\n[ 2.5 7.5 15. 25. 35. 50. 70. 90. 125. 175. 250. 450. ]\n"
],
[
"def returnxy(pn, k=\"1\"):\n a = np.array(data[k][pn]['absciss'])\n V = np.array(data[k][pn]['EVDF'])\n idenx = 1\n x = a[:,idenx]\n x = x**2*np.sign(x)*me/q/2\n y = V[:,idenx]\n \n index = np.argwhere(pn == probnames)[0][0]\n xcenter = prob_center[index]\n x0 = int(prob_y0[index])\n x1 = int(prob_y1[index])\n \n phi = np.array(dataPIC[k][\"phi\"])\n pc = interp1d(np.arange(len(phi)),phi)(xcenter)\n p0 = phi[x0]\n p1 = phi[x1]\n \n # p = phi[int(xcenter)]\n return x, y, pc , p0, p1",
"_____no_output_____"
],
[
"# plot \nplt.figure(figsize=(4.5,4))\nplt.subplots_adjust(left=0.17, bottom=0.17, right=0.99, top=0.925, wspace=0.05, hspace=0.25)\n\nft = 14\ns = 2.5\n\nfor Nprob in range(len(probnames)):\n \n x, y, phic, phi0, phi1 = returnxy(probnames[Nprob])\n # x, y, phic = returnxy(probnames[Nprob])\n y0sum = (y).max()\n T = np.sum(np.abs(x) * y)/y.sum()*2\n plt.scatter(phic, T)\n \nphi = np.array(dataPIC[k][\"phi\"])\nTe = np.array(dataPIC[k][\"Te2\"])\nplt.plot(phi, Te,linewidth=s, alpha=0.7,ls=\"--\" )\n\n# plt.legend( fontsize=ft,loc=(1,0.1 ))\nplt.legend(loc = 'lower left', fontsize=11)\nplt.grid(alpha=0.5)\nplt.ylabel(\"Te\", fontsize=ft)\nplt.xlabel(\"$\\phi$ [V]\", fontsize=ft)\n\n",
"/home/tavant/these/code/venv/stand/lib64/python3.7/site-packages/matplotlib/figure.py:2144: UserWarning: This figure was using constrained_layout==True, but that is incompatible with subplots_adjust and or tight_layout: setting constrained_layout==False. \n warnings.warn(\"This figure was using constrained_layout==True, \"\nNo handles with labels found to put in legend.\n"
]
],
[
[
"# Heatflux from EVDF",
"_____no_output_____"
]
],
[
[
"k = \"0\"\nNprob = -1\nx, y, phic, phi0, phi1 = returnxy(probnames[Nprob], k=k)\nplt.figure(figsize=(4.5,4.5))\nplt.subplots_adjust(left=0.17, bottom=0.17, right=0.99, top=0.925, wspace=0.05, hspace=0.25)\n\nplt.plot(x,y)\n\nNprob = 1\nx, y, phic, phi0, phi1 = returnxy(probnames[Nprob], k=k)\nplt.plot(x,y)\n\nplt.yscale(\"log\")\nplt.vlines([phic,phic*1.3], 0.001,1e5)\nplt.ylim(bottom=10)",
"_____no_output_____"
],
[
"k = \"0\"\nNprob = 2\nx, y, phic, phi0, phi1 = returnxy(probnames[Nprob], k=k)\nplt.figure(figsize=(4.5,4.5))\nplt.subplots_adjust(left=0.17, bottom=0.17, right=0.99, top=0.925, wspace=0.05, hspace=0.25)\n\nplt.plot(x,y)\nplt.yscale(\"log\")\nplt.vlines([phic,phic*1.3], 0.001,1e5)\nplt.xlim(0, 20)\nplt.ylim(bottom=10)",
"_____no_output_____"
],
[
"from scipy.integrate import simps",
"_____no_output_____"
],
[
"def return_heat_flux(k=\"0\", Nprob=2, cut=True):\n \n x, y, phic, phi0, phi1 = returnxy(probnames[Nprob], k=k)\n y /= y.sum()\n if cut:\n mask = (x>phic) & (x<=1.1*phic)\n else:\n mask = (x>phic) \n \n \n heatflux = simps(0.5*x[mask]*y[mask], x[mask])\n \n flux = simps(y[mask], x[mask])\n \n x, y, phic, phi0, phi1 = returnxy(probnames[9], k=k)\n mask = (x>0) \n \n T = np.sum(np.abs(x[mask]) * y[mask])/y[mask].sum()*2 \n \n return heatflux/flux/T\n\nplt.figure()\n\nfor gamma, k in zip([1.6, 1.43, 1.41], [\"0\", \"1\", \"2\"]):\n \n plt.scatter(gamma, return_heat_flux(k, Nprob=3), c=\"k\", label=\"WITHOUT HIGH ENERGY TAIL\")\n plt.scatter(gamma, return_heat_flux(k, Nprob=3, cut=False), c=\"b\", label=\"WITH HIGH ENERGY TAIL\") \n \nplt.legend()",
"_____no_output_____"
]
],
[
[
"# conclusion\n\nWhen calculating the heat flux from the EVDF, we get that:\n$$\\frac{Q_e}{\\Gamma_e T_e} \\simeq 0.315$$\n\nWhich is very close from the theoritical value varing from 0.3 to 0.1.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7ee04ea1d9d08a8a7e3468d12c40a451508765f | 264,385 | ipynb | Jupyter Notebook | 0-Analysis/Capture_Spectrum.ipynb | villano-lab/nrSiCap | c9fa495d0ad65132bf392c3cd174e46a7def4e40 | [
"MIT"
] | null | null | null | 0-Analysis/Capture_Spectrum.ipynb | villano-lab/nrSiCap | c9fa495d0ad65132bf392c3cd174e46a7def4e40 | [
"MIT"
] | 7 | 2021-11-16T19:35:10.000Z | 2021-12-12T17:29:50.000Z | 0-Analysis/Capture_Spectrum.ipynb | villano-lab/nrSiCap | c9fa495d0ad65132bf392c3cd174e46a7def4e40 | [
"MIT"
] | null | null | null | 703.151596 | 139,968 | 0.94028 | [
[
[
"# Capture Spectrum\n\nBelow we will generate plots for both the real and simulated capture spectra. If you are re-generating the plots, please be patient, as both load large amounts of data.\n\n## Measured Spectra\n\n*(If you are not interested in the code itself, you can collapse it by selecting the cell and then clicking on the bar to its left. You will still be able to run it and view its output.)*\n\nMeasured spectra and reconstructed PuBe rate. \nFirst, the reconstructed spetra were calculated by dividing the measured count rates by the livetimes and estimated effficiences after applying quality cuts. \nThen, the background rate was subtracted from the overall rate, leaving the events due to the PuBe source.\n\nA simulated measured spectrum was built using Geant4. \nWe then compared this to our actual data using MCMC sampling (`fitting` directory) as well as a simpler integral method (`Integral_Method.ipynb`).",
"_____no_output_____"
]
],
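[
[
"*Editor's note (added for clarity, not part of the original analysis):* schematically, the background-subtracted reconstruction described above is\n\n$$R = \\frac{N_{\\mathrm{PuBe}}}{t_{\\mathrm{live}}\\,\\epsilon_{\\mathrm{PuBe}}} - \\frac{N_{\\mathrm{Bkg}}}{t_{\\mathrm{live,Bkg}}\\,\\epsilon_{\\mathrm{Bkg}}}$$\n\nwhere each $\\epsilon$ collects the write, cut, and trigger efficiencies used in the cell below, and the uncertainties are propagated in quadrature from the Poisson counting errors and the efficiency errors.",
"_____no_output_____"
]
],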
[
[
"#Import libraries and settings\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))\nexec(open(\"../python/nb_setup.py\").read())\nfrom constants import *\nimport R68_load as r68\nimport R68_efficiencies as eff\nmeas=r68.load_measured()\nimport R68_spec_tools as spec\nfrom matplotlib import *\nstyle.use('../mplstyles/stylelib/standard.mplstyle')\n\n#Turn off expected warnings\nlogging.getLogger('matplotlib.font_manager').disabled = True\nwarnings.filterwarnings(\"ignore\",category=RuntimeWarning)\n\nfig_w=7 #Used later for figure width\n\n# Binning setup\nEmax = 2000 #eVee\nEbins=np.linspace(0,Emax,201)\nEbins_ctr=(Ebins[:-1]+Ebins[1:])/2\nEfit_min=50#[eVee]\nEfit_max=1750#[eVee]\nspec_bounds=(np.digitize(Efit_min,Ebins)-1,np.digitize(Efit_max,Ebins)-1)\nEbins_ctr[slice(*spec_bounds)].shape\n\n#Measured\nN_meas_PuBe,_ = np.histogram(meas['PuBe']['E'],bins=Ebins)\nN_meas_Bkg,_ = np.histogram(meas['Bkg']['E'],bins=Ebins)\n#Count uncertainties are Poisson\ndN_meas_PuBe_Pois=np.sqrt(N_meas_PuBe)\ndN_meas_Bkg_Pois=np.sqrt(N_meas_Bkg)\n\n#Include uncertainty from efficiencies\ndN_meas_PuBe = N_meas_PuBe*np.sqrt( (dN_meas_PuBe_Pois/N_meas_PuBe)**2 + (eff.deff_write/eff.eff_write)**2 +\n (eff.dcutEffFit(Ebins_ctr)/eff.cutEffFit(Ebins_ctr))**2)\ndN_meas_Bkg = N_meas_Bkg*np.sqrt( (dN_meas_Bkg_Pois/N_meas_Bkg)**2 + (eff.deff_write_bkg/eff.eff_write_bkg)**2 +\n (eff.dcutEffFit_bkg(Ebins_ctr)/eff.cutEffFit_bkg(Ebins_ctr))**2)\n#Scaling factors\ntlive_ratio=meas['PuBe']['tlive']/meas['Bkg']['tlive']\nwriteEff_ratio=eff.eff_write/eff.eff_write_bkg\ndwriteEff_ratio=writeEff_ratio*np.sqrt( (eff.deff_write/eff.eff_write)**2 + (eff.deff_write_bkg/eff.eff_write_bkg)**2 )\n\ncutEff_ratio=eff.cutEffFit(Ebins_ctr)/eff.cutEffFit_bkg(Ebins_ctr)\ndcutEff_ratio = cutEff_ratio*np.sqrt( (eff.dcutEffFit(Ebins_ctr)/eff.cutEffFit(Ebins_ctr))**2 + \n (eff.dcutEffFit_bkg(Ebins_ctr)/eff.cutEffFit_bkg(Ebins_ctr))**2 )\n\nratio=tlive_ratio*writeEff_ratio*cutEff_ratio\ndratio=ratio*np.sqrt( (dwriteEff_ratio/writeEff_ratio)**2 +(dcutEff_ratio/cutEff_ratio)**2 )\n\n#Make sure any divide by 0s happened below threshold\nif (not np.all(np.isfinite(ratio[slice(*spec_bounds)]))) or (not np.all(np.isfinite(dratio[slice(*spec_bounds)]))):\n print('Error: Bad background scaling ratio in fit range.')\n\n#Bkg-subtracted measured PuBe signal\nN_bkg_scaled=N_meas_Bkg*ratio\ndN_bkg_scaled=N_bkg_scaled*np.sqrt( (dN_meas_Bkg/N_meas_Bkg)**2 + (dratio/ratio)**2 )\n\nN_meas = N_meas_PuBe - N_bkg_scaled\ndN_meas = np.sqrt( dN_meas_PuBe**2 + dN_bkg_scaled**2 ) #All errors are symmetric here when using the conservative cut eff fits\ndN_meas_stat = np.sqrt( dN_meas_PuBe_Pois**2 + (dN_meas_Bkg_Pois*ratio)**2 )\n\nDenom_PuBe = meas['PuBe']['tlive']*eff.eff_write*eff.cutEffFit(Ebins_ctr)*eff.trigEff(Ebins_ctr)\nR_meas_PuBe = N_meas_PuBe/Denom_PuBe\nDenom_Bkg = meas['Bkg']['tlive']*eff.eff_write_bkg*eff.cutEffFit_bkg(Ebins_ctr)*eff.trigEff(Ebins_ctr)\nR_meas_Bkg = N_meas_Bkg/Denom_Bkg\n\nR_meas = R_meas_PuBe - R_meas_Bkg\n\ndR_meas_stat_PuBe = R_meas_PuBe*dN_meas_PuBe_Pois/N_meas_PuBe\ndR_meas_PuBe = R_meas_PuBe*np.sqrt( (dN_meas_PuBe_Pois/N_meas_PuBe)**2 +\\\n (eff.deff_write/eff.eff_write)**2 +\\\n (eff.dcutEffFit(Ebins_ctr)/eff.cutEffFit(Ebins_ctr))**2 +\\\n (eff.dtrigEff(Ebins_ctr)/eff.trigEff(Ebins_ctr))**2 )\ndR_meas_stat_Bkg = R_meas_Bkg*dN_meas_Bkg_Pois/N_meas_Bkg\ndR_meas_Bkg = R_meas_Bkg*np.sqrt( (dN_meas_Bkg_Pois/N_meas_Bkg)**2 +\\\n (eff.deff_write_bkg/eff.eff_write_bkg)**2 
+\\\n (eff.dcutEffFit_bkg(Ebins_ctr)/eff.cutEffFit_bkg(Ebins_ctr))**2 +\\\n (eff.dtrigEff(Ebins_ctr)/eff.trigEff(Ebins_ctr))**2 )\n\ndR_meas_stat = np.sqrt(dR_meas_stat_PuBe**2 + dR_meas_stat_Bkg**2)\ndR_meas = np.sqrt(dR_meas_PuBe**2 + dR_meas_Bkg**2)\n\nc_stat,dc_stat=spec.doBkgSub(r68.load_measured(verbose=False), Ebins, Efit_min, Efit_max, doEffsyst=False, doBurstLeaksyst=False, output='counts')\nc_syst,dc_syst=spec.doBkgSub(r68.load_measured(verbose=False), Ebins, Efit_min, Efit_max, doEffsyst=True, doBurstLeaksyst=False, output='counts')\nc_syst2,dc_syst2=spec.doBkgSub(r68.load_measured(verbose=False), Ebins, Efit_min, Efit_max, doEffsyst=True, doBurstLeaksyst=True, output='counts')\n\nr_stat,dr_stat=spec.doBkgSub(r68.load_measured(verbose=False), Ebins, Efit_min, Efit_max, doEffsyst=False, doBurstLeaksyst=False, output='reco-rate')\nr_syst,dr_syst=spec.doBkgSub(r68.load_measured(verbose=False), Ebins, Efit_min, Efit_max, doEffsyst=True, doBurstLeaksyst=False, output='reco-rate')\nr_syst2,dr_syst2=spec.doBkgSub(r68.load_measured(verbose=False), Ebins, Efit_min, Efit_max, doEffsyst=True, doBurstLeaksyst=True, output='reco-rate')\n\nfig,ax = plt.subplots(2,1,sharex=True,figsize=(16,16))\nfill_noise=ax[1].axvspan(0,50,color='r', alpha=0.25, label='No efficiency data',zorder=0)\n\ncthresh=Ebins_ctr>=50 #Only plot hists above threshold\n\n#Raw Count histograms\nax[0].set_title('Raw Counts, Statistical Uncertainties',size='40',pad='25')\n\nline_c_PuBe=ax[0].errorbar(Ebins_ctr,N_meas_PuBe,yerr=dN_meas_PuBe_Pois, drawstyle = 'steps-mid', linewidth=2)\nline_c_Bkg=ax[0].errorbar(Ebins_ctr,N_meas_Bkg,yerr=dN_meas_Bkg_Pois, drawstyle = 'steps-mid', linewidth=2)\nline_thresh=ax[0].axvline(50,linestyle='--',color='r', label='50 eV$_{ee}$ threshold',zorder=5)\n\nax[0].set_xlim(0,5e2)\n#ax[0].set_yscale('log')\nax[0].set_ylim(0,3e3)\nax[0].set_ylabel('Counts/bin')\n\n\nax[0].legend([line_c_PuBe, line_c_Bkg, line_thresh, fill_noise],\n [f\"PuBe, run time = {meas['PuBe']['tlive']/3600:.1f} hrs\",\n f\"Bkg, run time = {meas['Bkg']['tlive']/3600:.1f} hrs\",\n '50 eV$_{ee}$ threshold'],fontsize=30)\n \n#Reconstructed rate\n \n#Reverse errorbar order to be [lower,upper]\nax[1].set_title('Reconstructed Background-Subtracted Rate',size='40',pad='25')\nline_r_syst=ax[1].errorbar(Ebins_ctr[cthresh],60*r_syst2[cthresh],yerr=(60*dr_syst2[::-1])[:,cthresh],drawstyle = 'steps-mid', linewidth=2, label='Stat + Syst')\nline_r_stat=ax[1].errorbar(Ebins_ctr[cthresh],60*r_stat[cthresh],yerr=60*dr_stat[::-1][:,cthresh],drawstyle = 'steps-mid', linewidth=2, label='Stat')\n \nline_thresh=ax[1].axvline(50,linestyle='--',color='r', label='50 eV$_{ee}$ threshold',zorder=5)\nfill_noise=ax[1].axvspan(0,50,color='r', alpha=0.25, label='No efficiency data',zorder=0)\n \nax[1].legend([line_r_stat, line_r_syst, line_thresh, fill_noise],\n ['Stat', 'Stat + Syst', '50 eV$_{ee}$ threshold', 'No efficiency info'])\n \nax[1].set_ylim(0,3)\nax[1].set_xlim(0,500)\nax[1].set_ylabel('Counts/min/bin')\nax[1].set_xlabel('Energy [eV$_{ee}$]')\n\n\nplt.tight_layout()\nplt.savefig('../figures/meas_spec_reco_rate_pretty.pdf')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Overlaid histograms comparingthe yielded energy PDFs for Sorensen and Lindhard models,\nincluding the resolution of the current detector (see `Calibration.ipynb`).\nThe histograms are comprised of approximately simulated cascades.\nThe orange (front) filled histogram represents the Lindhard model while the blue (back) filled histogram represents the Sorensen model.\nIn the Sorensen model, many points are pushed to zero due to the presenceof a cutoff energy,\nleading to a peak in the first bin that is not present in the Lindhard model.\nFor both models, we use k = 0.178, and for the Sorensen model, we use q= 0.00075.\nThe solid-line unfilled histogram represents the Lindhard model with one fifth of the detector’s resolution,\nand the dashed unfilled histogram represents the Sorensen model with one fifth of the detector’s resolution.\n\nThis is not the data generated by Geant4 mentioned above, \nand is intended only to give an idea of what simulated data looks like. \nThis data is instead generated using [a software package we developed for simulating nuclear recoils](https://github.com/villano-lab/nrCascadeSim).\nThis package does not assume any particular source and does not rely on the same assumptions as Geant4.",
"_____no_output_____"
]
],
[
[
"#Import Libraries\nimport uproot\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpatch\nplt.style.use('../mplstyles/stylelib/standard.mplstyle')\nfrom matplotlib.lines import Line2D\nfrom tabulate import tabulate\nimport sys\nsys.path.append('../python')\nimport nc_kinematics as nck\nimport lindhard as lin\nimport R68_yield as R68y\nfrom histogram_utils import histogramable as h\n\n#Select a file.\nfile = '../data/longsim.root'\n\n#It's running, there's just a lot of data. Please be patient!\n\nreal_Lind = np.ndarray.flatten(np.asarray(h(file)[0]))\nreal_Sor = np.ndarray.flatten(np.asarray(h(file,model='Sorenson')[0]))\nsmall_Lind = np.ndarray.flatten(np.asarray(h(file,scalefactor=0.2)[0]))\nsmall_Sor = np.ndarray.flatten(np.asarray(h(file,model='Sorenson',scalefactor=0.2)[0]))\n\nreal_Lind = real_Lind[real_Lind >= 0]\nreal_Sor = real_Sor[real_Sor >= 0]\nsmall_Lind = small_Lind[small_Lind >= 0]\nsmall_Sor = small_Sor[small_Sor >= 0]\n\ndef format_exponent(ax, axis='y'):\n\n # Change the ticklabel format to scientific format\n ax.ticklabel_format(axis=axis, style='sci', scilimits=(-2, 2))\n\n # Get the appropriate axis\n if axis == 'y':\n ax_axis = ax.yaxis\n x_pos = 0.0\n y_pos = 1.0\n horizontalalignment='left'\n verticalalignment='bottom'\n else:\n ax_axis = ax.xaxis\n x_pos = 1.0\n y_pos = -0.05\n horizontalalignment='right'\n verticalalignment='top'\n\n # Run plt.tight_layout() because otherwise the offset text doesn't update\n plt.tight_layout()\n\n # Get the offset value\n offset = ax_axis.get_offset_text().get_text()\n\n if len(offset) > 0:\n # Get that exponent value and change it into latex format\n minus_sign = u'\\u2212'\n expo = np.float(offset.replace(minus_sign, '-').split('e')[-1])\n offset_text = r'x$\\mathregular{10^{%d}}$' %expo\n\n # Turn off the offset text that's calculated automatically\n ax_axis.offsetText.set_visible(False)\n\n # Add in a text box at the top of the y axis\n ax.text(x_pos, y_pos, offset_text, transform=ax.transAxes,\n horizontalalignment=horizontalalignment,\n verticalalignment=verticalalignment,fontsize=30)\n return ax\n\nfig, ax = plt.subplots(figsize=(16,12))\n\nbinsize = 8 #bin width in eVee\nbins = np.arange(0,620,binsize)\n\nplt.hist(small_Lind,alpha=0.7,label='Small Res (1/5, Lindhard)',histtype='step',edgecolor='black',density='True',linewidth=2,bins=bins)\nplt.hist(small_Sor,alpha=0.7,label='Small Res (1/5, Sorenson)',histtype='step',edgecolor='black',linestyle='--',density='True',linewidth=2,bins=bins)\nplt.hist(real_Sor,alpha=0.6,label='Sorenson',histtype='step',fill=True,density='True',bins=bins,linewidth=3,edgecolor='navy',color='C0')\nplt.hist(real_Lind,alpha=0.6,label='Lindhard',histtype='step',fill=True,density='True',bins=bins,linewidth=3,edgecolor='#a30',color='C1')\n\nplt.xlabel(r\"Energy Yielded ($\\mathrm{eV}_{\\mathrm{ee}}$)\")\nplt.ylabel(\"Probability Distribution (eVee$^{-1}$)\")#Counts/(total counts * bin width)\")\n\nax = format_exponent(ax, axis='y')\nax.tick_params(axis='both',which='major')\n\nplt.xlim([0,None])\nplt.ylim([6e-13,6e-3]) #Make corner less awkward. 
Smallest starting value that will make the extra 0 go away\n\n#Legend\nLindPatch = mpatch.Patch(facecolor='C1',edgecolor='#a30',linewidth=3,label='Lindhard',alpha=0.6)\nSorPatch = mpatch.Patch(facecolor='C0',edgecolor='navy',linewidth=3,label='Sorenson',alpha=0.6)\nLindLine = Line2D([0],[0],alpha=0.7,color='black',label='Small Res (1/5, Lindhard)')\nSorLine = Line2D([0],[0],linestyle='--',alpha=0.7,color='black',label='Small Res (1/5, Sorenson)')\n\nplt.legend(handles=[LindPatch,SorPatch,LindLine,SorLine])\nplt.savefig('../figures/SorVsLin.pdf')\nplt.show()",
"dict_keys(['xx', 'yy', 'ex', 'ey'])\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7ee0d2507e599c4360a6fd322d2e5cfc01afc11 | 394,332 | ipynb | Jupyter Notebook | notebooks/phip_modeling/bayesian-modeling-stats.ipynb | lasersonlab/phip-stat | a98f0e338e710f9df68e1d4ab3ddae2899e0001b | [
"Apache-2.0"
] | 7 | 2017-05-10T19:14:43.000Z | 2021-10-18T20:26:51.000Z | notebooks/phip_modeling/bayesian-modeling-stats.ipynb | laserson/phip-stat | cbaa1d8b117ce9d7c358bc3c6131e9af7af0d582 | [
"Apache-2.0"
] | 21 | 2016-12-30T18:03:09.000Z | 2022-01-21T16:33:49.000Z | notebooks/phip_modeling/bayesian-modeling-stats.ipynb | lasersonlab/phip-stat | a98f0e338e710f9df68e1d4ab3ddae2899e0001b | [
"Apache-2.0"
] | 7 | 2016-12-30T18:10:27.000Z | 2020-10-08T13:47:34.000Z | 265.008065 | 35,214 | 0.9192 | [
[
[
"# Statistical exploration for Bayesian analysis of PhIP-seq",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport scipy as sp\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"_____no_output_____"
],
[
"%matplotlib inline",
"_____no_output_____"
],
[
"cpm = pd.read_csv('/Users/laserson/tmp/phip_analysis/phip-9/cpm.tsv', sep='\\t', header=0, index_col=0)",
"_____no_output_____"
],
[
"upper_bound = sp.stats.scoreatpercentile(cpm.values.ravel(), 99.9)\nupper_bound",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\n_ = ax.hist(cpm.values.ravel(), bins=100, log=True)\n_ = ax.set(title='cpm')",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\n_ = ax.hist(np.log10(cpm.values.ravel() + 0.5), bins=100, log=False)\n_ = ax.set(title='log10(cpm + 0.5)')",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\n_ = ax.hist(np.log10(cpm.values.ravel() + 0.5), bins=100, log=True)\n_ = ax.set(title='log10(cpm + 0.5)')",
"_____no_output_____"
]
],
[
[
"Plot only the lowest 99.9% of the data",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots()\n_ = ax.hist(cpm.values.ravel()[cpm.values.ravel() <= upper_bound], bins=range(100), log=False)\n_ = ax.set(xlim=(0, 60))\n_ = ax.set(title='trimmed cpm')",
"_____no_output_____"
],
[
"trimmed_cpm = cpm.values.ravel()[cpm.values.ravel() <= upper_bound]\ntrimmed_cpm.mean(), trimmed_cpm.std()",
"_____no_output_____"
],
[
"means = cpm.apply(lambda x: x[x <= upper_bound].mean(), axis=1, raw=True)\n_, edges = np.histogram(means, bins=[sp.stats.scoreatpercentile(means, p) for p in np.linspace(0, 100, 10)])",
"_____no_output_____"
],
[
"def plot_hist(ax, a):\n h, e = np.histogram(a, bins=100, range=(0, upper_bound), density=True)\n ax.hlines(h, e[:-1], e[1:])",
"_____no_output_____"
],
[
"for i in range(len(edges[:-1])):\n left = edges[i]\n right = edges[i + 1]\n rows = (means >= left) & (means <= right)\n values = cpm[rows].values.ravel()\n fig, ax = plt.subplots()\n plot_hist(ax, values)\n ax.set(xlim=(0, 50), title='mean in ({}, {})'.format(left, right))",
"_____no_output_____"
]
],
[
[
"Do the slices look Poisson?",
"_____no_output_____"
]
],
[
[
"a = np.random.poisson(8, 10000)\nfig, ax = plt.subplots()\nplot_hist(ax, a)\nax.set(xlim=(0, 50))",
"_____no_output_____"
]
],
[
[
"For the most part. Maybe try NegBin just in case",
"_____no_output_____"
],
[
"What does the distribution of the trimmed means look like?",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots()\nplot_hist(ax, means)\nax.set(xlim=(0, 50))",
"_____no_output_____"
],
[
"a = np.random.gamma(1, 10, 10000)\nfig, ax = plt.subplots()\nplot_hist(ax, a)\nax.set(xlim=(0, 50))",
"_____no_output_____"
],
[
"means.mean()",
"_____no_output_____"
]
],
[
[
"Following Anders and Huber, _Genome Biology_ 2010, compute some of their stats",
"_____no_output_____"
],
[
"Compute size factors",
"_____no_output_____"
]
],
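[
[
"*Editor's note (added for clarity):* the Anders & Huber size factor for sample $j$ is the median, over clones $i$, of the ratio of each count to the per-clone geometric mean across the $m$ samples,\n\n$$\\hat{s}_j = \\mathrm{median}_i \\frac{k_{ij}}{\\left(\\prod_{v=1}^{m} k_{iv}\\right)^{1/m}},$$\n\nwhich the next cell computes in log space with a +0.5 pseudocount added to the counts.",
"_____no_output_____"
]
],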
[
[
"s = np.exp(np.median(np.log(cpm.values + 0.5) - np.log(cpm.values + 0.5).mean(axis=1).reshape((cpm.shape[0], 1)), axis=0))",
"_____no_output_____"
],
[
"_ = sns.distplot(s)",
"_____no_output_____"
],
[
"q = (cpm.values / s).mean(axis=1)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\n_ = ax.hist(q, bins=100, log=False)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\n_ = ax.hist(q, bins=100, log=True)",
"_____no_output_____"
],
[
"w = (cpm.values / s).std(axis=1, ddof=1)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\n_ = ax.hist(w, bins=100, log=True)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\n_ = ax.scatter(q, w)",
"_____no_output_____"
],
[
"_ = sns.lmplot('q', 'w', pd.DataFrame({'q': q, 'w': w}))",
"_____no_output_____"
],
[
"list(zip(cpm.values.sum(axis=0), s))",
"_____no_output_____"
],
[
"s",
"_____no_output_____"
],
[
"a = np.random.gamma(30, 1/30, 1000)",
"_____no_output_____"
],
[
"sns.distplot(a)",
"_____no_output_____"
]
],
[
[
"Proceeding with the following strategy/model\n\nTrim data to remove top 0.1% of count values. Compute mean of each row and use the means to fit a gamma distribution. Using these values, define a posterior on a rate for each clone, assuming Poisson stats for each cell. This means the posterior is also gamma distributed. Then compute the probability of seeing a more extreme value, weighted with the posterior on r_i.",
"_____no_output_____"
]
],
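[
[
"*Editor's note (added for clarity):* with a Gamma($\\alpha$, $\\beta$) prior on the clone rate and Poisson counts, the posterior for clone $i$ is Gamma($\\alpha + \\sum_j x_{ij}$, $\\beta + n_i$), with the sum and $n_i$ taken over the trimmed counts. The cells below use the posterior mean $(\\alpha + \\sum_j x_{ij})/(\\beta + n_i)$ as the background rate and then evaluate the Poisson survival function at that rate to get each $-\\log_{10} p$ value (mlxp).",
"_____no_output_____"
]
],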
[
[
"import pystan",
"_____no_output_____"
],
[
"cpm = pd.read_csv('/Users/laserson/tmp/phip_analysis/phip-9/cpm.tsv', sep='\\t', header=0, index_col=0)",
"_____no_output_____"
],
[
"upper_bound = sp.stats.scoreatpercentile(cpm.values, 99.9)",
"_____no_output_____"
],
[
"trimmed_means = cpm.apply(lambda x: x[x <= upper_bound].mean(), axis=1, raw=True).values",
"_____no_output_____"
],
[
"brm = pystan.StanModel(model_name='background_rates', file='/Users/laserson/repos/bamophip/background_rates.stan')",
"INFO:pystan:COMPILING THE C++ CODE FOR MODEL background_rates_0a7b2d07f26077b0d278330f6aedefc9 NOW.\n"
],
[
"data = {\n 'num_clones': trimmed_means.shape[0],\n 'trimmed_means': trimmed_means\n}\nbr_fit = brm.sampling(data=data, iter=2000, chains=4)",
"_____no_output_____"
],
[
"br_fit",
"_____no_output_____"
],
[
"br_fit.plot()",
"_____no_output_____"
],
[
"alpha, beta, _ = br_fit.get_posterior_mean().mean(axis=1)",
"_____no_output_____"
],
[
"alpha, beta",
"_____no_output_____"
],
[
"h, e = np.histogram(np.random.gamma(alpha, 1 / beta, 50000), bins='auto', density=True)\nfig, ax = plt.subplots()\n_ = ax.hist(trimmed_means, bins=100, normed=True)\n_ = ax.hlines(h, e[:-1], e[1:])\n_ = ax.set(xlim=(0, 50))",
"_____no_output_____"
],
[
"# assumes the counts for each clone are Poisson distributed with the learned Gamma prior\n# Therefore, the posterior is Gamma distributed, and we use the expression for its expected value\ntrimmed_sums = cpm.apply(lambda x: x[x <= upper_bound].sum(), axis=1, raw=True).values\ntrimmed_sizes = cpm.apply(lambda x: (x <= upper_bound).sum(), axis=1, raw=True).values\nbackground_rates = (alpha + trimmed_sums) / (beta + trimmed_sizes)",
"_____no_output_____"
],
[
"# mlxp is \"minus log 10 pval\"\nmlxp = []\nfor i in range(cpm.shape[0]):\n mlxp.append(-sp.stats.poisson.logsf(cpm.values[i], background_rates[i]) / np.log(10))\nmlxp = np.asarray(mlxp)",
"/Users/laserson/miniconda3/lib/python3.6/site-packages/scipy/stats/_distn_infrastructure.py:899: RuntimeWarning: divide by zero encountered in log\n return log(self._sf(x, *args))\n"
],
[
"fig, ax = plt.subplots()\nh, e = np.histogram(10**(-mlxp.ravel()), bins='auto')\nax.hlines(h, e[:-1], e[1:])\nax.set(xlim=(0, 1))",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\nfinite = np.isfinite(mlxp.ravel())\n_ = ax.hist(mlxp.ravel()[finite], bins=100, log=True)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\nfinite = np.isfinite(mlxp.ravel())\n_ = ax.hist(np.log10(mlxp.ravel()[finite] + 0.5), bins=100, log=True)",
"_____no_output_____"
],
[
"old_pvals = pd.read_csv('/Users/laserson/tmp/phip_analysis/phip-9/pvals.tsv', sep='\\t', header=0, index_col=0)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\nh, e = np.histogram(10**(-old_pvals.values.ravel()), bins='auto')\nax.hlines(h, e[:-1], e[1:])\nax.set(xlim=(0, 1))",
"_____no_output_____"
],
[
"(old_pvals.values.ravel() > 10).sum()",
"_____no_output_____"
],
[
"(mlxp > 10).sum()",
"_____no_output_____"
],
[
"len(mlxp.ravel())",
"_____no_output_____"
]
],
[
[
"Can we use scipy's MLE for the gamma parameters instead?",
"_____no_output_____"
]
],
[
[
"sp.stats.gamma.fit(trimmed_means)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\n_ = ax.hist(sp.stats.gamma.rvs(a=0.3387, loc=0, scale=3.102, size=10000), bins=100)\n_ = ax.set(xlim=(0, 50))",
"_____no_output_____"
]
],
[
[
"Hmmm...doesn't appear to get the correct solution.",
"_____no_output_____"
],
[
"Alternatively, let's try optimizing the log likelihood ourselves",
"_____no_output_____"
]
],
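[
[
"*Editor's note (added for clarity):* the objective below is the negative Gamma log-likelihood of the trimmed means,\n\n$$\\ell(\\alpha, \\beta) = n\\alpha\\log\\beta - n\\log\\Gamma(\\alpha) + (\\alpha - 1)\\sum_i \\log x_i - \\beta\\sum_i x_i,$$\n\nwith the sums taken over the positive entries, minimized with `scipy.optimize.minimize` subject to $\\alpha, \\beta > 0$.",
"_____no_output_____"
]
],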
[
[
"pos = trimmed_means > 0\nn = len(trimmed_means)\ns = trimmed_means[pos].sum()\nsl = np.log(trimmed_means[pos]).sum()\ndef ll(x):\n return -1 * (n * x[0] * np.log(x[1]) - n * sp.special.gammaln(x[0]) + (x[0] - 1) * sl - x[1] * s)",
"_____no_output_____"
],
[
"param = sp.optimize.minimize(ll, np.asarray([2, 1]), bounds=[(np.nextafter(0, 1), None), (np.nextafter(0, 1), None)])\nparam",
"_____no_output_____"
],
[
"param.x",
"_____no_output_____"
]
],
[
[
"SUCCESS!",
"_____no_output_____"
],
[
"Do the p-values have a correlation with the peptide abundance?",
"_____no_output_____"
]
],
[
[
"mlxp = pd.read_csv('/Users/laserson/tmp/phip_analysis/sjogrens/mlxp.tsv', sep='\\t', index_col=0, header=0)\ninputs = pd.read_csv('/Users/laserson/repos/phage_libraries_private/human90/inputs/human90-larman1-input.tsv', sep='\\t', index_col=0, header=0)",
"_____no_output_____"
],
[
"m = pd.merge(mlxp, inputs, left_index=True, right_index=True)",
"_____no_output_____"
],
[
"sample = 'Sjogrens.serum.Sjogrens.FS12-03967.20A20G.1'",
"_____no_output_____"
],
[
"sp.stats.pearsonr(10**(-m[sample]), m['input'])",
"_____no_output_____"
],
[
"sp.stats.spearmanr(10**(-m[sample]), m['input'])",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\n_ = ax.scatter(10**(-m[sample]), m['input'])",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\n_ = ax.scatter(m[sample], m['input'])",
"_____no_output_____"
],
[
"h, xe, ye = np.histogram2d(m[sample], m['input'], bins=100)\nfig, ax = plt.subplots()\n_ = ax.imshow(h)",
"_____no_output_____"
],
[
"np.histogram2d",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7ee19194c4cf6a7fd9a8bd2ca1cbdc721f8d92e | 179,931 | ipynb | Jupyter Notebook | 19_intent_classification.ipynb | evergreenllc2020/Deep-Learning-For-Hackers | 74dd46c2d0e4bb70dcae627f97b99f5b67daa0f9 | [
"MIT"
] | 2 | 2020-03-09T20:58:41.000Z | 2020-05-17T14:30:16.000Z | 19_intent_classification.ipynb | evergreenllc2020/Deep-Learning-For-Hackers | 74dd46c2d0e4bb70dcae627f97b99f5b67daa0f9 | [
"MIT"
] | null | null | null | 19_intent_classification.ipynb | evergreenllc2020/Deep-Learning-For-Hackers | 74dd46c2d0e4bb70dcae627f97b99f5b67daa0f9 | [
"MIT"
] | 1 | 2020-04-10T11:43:14.000Z | 2020-04-10T11:43:14.000Z | 143.9448 | 141,090 | 0.843468 | [
[
[
"<a href=\"https://colab.research.google.com/github/evergreenllc2020/Deep-Learning-For-Hackers/blob/master/19_intent_classification.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Intent Recognition with BERT using Keras and TensorFlow 2",
"_____no_output_____"
]
],
[
[
"!nvidia-smi",
"Sun Mar 1 05:57:39 2020 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 440.48.02 Driver Version: 418.67 CUDA Version: 10.1 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n|===============================+======================+======================|\n| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |\n| N/A 71C P0 31W / 70W | 0MiB / 15079MiB | 0% Default |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: GPU Memory |\n| GPU PID Type Process name Usage |\n|=============================================================================|\n| No running processes found |\n+-----------------------------------------------------------------------------+\n"
],
[
"!pip install tensorflow-gpu >> /dev/null",
"\u001b[31mERROR: tensorflow 1.15.0 has requirement tensorboard<1.16.0,>=1.15.0, but you'll have tensorboard 2.1.0 which is incompatible.\u001b[0m\n\u001b[31mERROR: tensorflow 1.15.0 has requirement tensorflow-estimator==1.15.1, but you'll have tensorflow-estimator 2.1.0 which is incompatible.\u001b[0m\n"
],
[
"!pip install --upgrade grpcio >> /dev/null",
"\u001b[31mERROR: tensorflow 1.15.0 has requirement tensorboard<1.16.0,>=1.15.0, but you'll have tensorboard 2.1.0 which is incompatible.\u001b[0m\n\u001b[31mERROR: tensorflow 1.15.0 has requirement tensorflow-estimator==1.15.1, but you'll have tensorflow-estimator 2.1.0 which is incompatible.\u001b[0m\n"
],
[
"!pip install tqdm >> /dev/null",
"_____no_output_____"
],
[
"!pip install bert-for-tf2 >> /dev/null",
"_____no_output_____"
],
[
"!pip install sentencepiece >> /dev/null",
"_____no_output_____"
],
[
"import os\nimport math\nimport datetime\n\nfrom tqdm import tqdm\n\nimport pandas as pd\nimport numpy as np\n\nimport tensorflow as tf\nfrom tensorflow import keras\n\nimport bert\nfrom bert import BertModelLayer\nfrom bert.loader import StockBertConfig, map_stock_config_to_params, load_stock_weights\nfrom bert.tokenization.bert_tokenization import FullTokenizer\n\nimport seaborn as sns\nfrom pylab import rcParams\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import MaxNLocator\nfrom matplotlib import rc\n\nfrom sklearn.metrics import confusion_matrix, classification_report\n\n%matplotlib inline\n%config InlineBackend.figure_format='retina'\n\nsns.set(style='whitegrid', palette='muted', font_scale=1.2)\n\nHAPPY_COLORS_PALETTE = [\"#01BEFE\", \"#FFDD00\", \"#FF7D00\", \"#FF006D\", \"#ADFF02\", \"#8F00FF\"]\n\nsns.set_palette(sns.color_palette(HAPPY_COLORS_PALETTE))\n\nrcParams['figure.figsize'] = 12, 8\n\nRANDOM_SEED = 42\n\nnp.random.seed(RANDOM_SEED)\ntf.random.set_seed(RANDOM_SEED)",
"_____no_output_____"
]
],
[
[
"# Data\n\nThe data contains various user queries categorized into seven intents. It is hosted on [GitHub](https://github.com/snipsco/nlu-benchmark/tree/master/2017-06-custom-intent-engines) and is first presented in [this paper](https://arxiv.org/abs/1805.10190).",
"_____no_output_____"
]
],
[
[
"!gdown --id 1OlcvGWReJMuyYQuOZm149vHWwPtlboR6 --output train.csv\n!gdown --id 1Oi5cRlTybuIF2Fl5Bfsr-KkqrXrdt77w --output valid.csv\n!gdown --id 1ep9H6-HvhB4utJRLVcLzieWNUSG3P_uF --output test.csv",
"Downloading...\nFrom: https://drive.google.com/uc?id=1OlcvGWReJMuyYQuOZm149vHWwPtlboR6\nTo: /content/train.csv\n100% 799k/799k [00:00<00:00, 115MB/s]\nDownloading...\nFrom: https://drive.google.com/uc?id=1Oi5cRlTybuIF2Fl5Bfsr-KkqrXrdt77w\nTo: /content/valid.csv\n100% 43.3k/43.3k [00:00<00:00, 59.0MB/s]\nDownloading...\nFrom: https://drive.google.com/uc?id=1ep9H6-HvhB4utJRLVcLzieWNUSG3P_uF\nTo: /content/test.csv\n100% 43.1k/43.1k [00:00<00:00, 62.5MB/s]\n"
],
[
"train = pd.read_csv(\"train.csv\")\nvalid = pd.read_csv(\"valid.csv\")\ntest = pd.read_csv(\"test.csv\")",
"_____no_output_____"
],
[
"train = train.append(valid).reset_index(drop=True)",
"_____no_output_____"
],
[
"train.shape",
"_____no_output_____"
],
[
"train.head()",
"_____no_output_____"
],
[
"chart = sns.countplot(train.intent, palette=HAPPY_COLORS_PALETTE)\nplt.title(\"Number of texts per intent\")\nchart.set_xticklabels(chart.get_xticklabels(), rotation=30, horizontalalignment='right');",
"_____no_output_____"
]
],
[
[
"# Intent Recognition with BERT",
"_____no_output_____"
]
],
[
[
"!wget https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip",
"--2020-03-01 05:59:09-- https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip\nResolving storage.googleapis.com (storage.googleapis.com)... 172.217.214.128, 2607:f8b0:4001:c05::80\nConnecting to storage.googleapis.com (storage.googleapis.com)|172.217.214.128|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 407727028 (389M) [application/zip]\nSaving to: ‘uncased_L-12_H-768_A-12.zip’\n\nuncased_L-12_H-768_ 100%[===================>] 388.84M 229MB/s in 1.7s \n\n2020-03-01 05:59:11 (229 MB/s) - ‘uncased_L-12_H-768_A-12.zip’ saved [407727028/407727028]\n\n"
],
[
"!unzip uncased_L-12_H-768_A-12.zip",
"Archive: uncased_L-12_H-768_A-12.zip\n creating: uncased_L-12_H-768_A-12/\n inflating: uncased_L-12_H-768_A-12/bert_model.ckpt.meta \n inflating: uncased_L-12_H-768_A-12/bert_model.ckpt.data-00000-of-00001 \n inflating: uncased_L-12_H-768_A-12/vocab.txt \n inflating: uncased_L-12_H-768_A-12/bert_model.ckpt.index \n inflating: uncased_L-12_H-768_A-12/bert_config.json \n"
],
[
"os.makedirs(\"model\", exist_ok=True)",
"_____no_output_____"
],
[
"!mv uncased_L-12_H-768_A-12/ model",
"_____no_output_____"
],
[
"bert_model_name=\"uncased_L-12_H-768_A-12\"\n\nbert_ckpt_dir = os.path.join(\"model/\", bert_model_name)\nbert_ckpt_file = os.path.join(bert_ckpt_dir, \"bert_model.ckpt\")\nbert_config_file = os.path.join(bert_ckpt_dir, \"bert_config.json\")",
"_____no_output_____"
]
],
[
[
"## Preprocessing",
"_____no_output_____"
]
],
[
[
"class IntentDetectionData:\n DATA_COLUMN = \"text\"\n LABEL_COLUMN = \"intent\"\n\n def __init__(self, train, test, tokenizer: FullTokenizer, classes, max_seq_len=192):\n self.tokenizer = tokenizer\n self.max_seq_len = 0\n self.classes = classes\n \n ((self.train_x, self.train_y), (self.test_x, self.test_y)) = map(self._prepare, [train, test])\n\n print(\"max seq_len\", self.max_seq_len)\n self.max_seq_len = min(self.max_seq_len, max_seq_len)\n self.train_x, self.test_x = map(self._pad, [self.train_x, self.test_x])\n\n def _prepare(self, df):\n x, y = [], []\n \n for _, row in tqdm(df.iterrows()):\n text, label = row[IntentDetectionData.DATA_COLUMN], row[IntentDetectionData.LABEL_COLUMN]\n tokens = self.tokenizer.tokenize(text)\n tokens = [\"[CLS]\"] + tokens + [\"[SEP]\"]\n token_ids = self.tokenizer.convert_tokens_to_ids(tokens)\n self.max_seq_len = max(self.max_seq_len, len(token_ids))\n x.append(token_ids)\n y.append(self.classes.index(label))\n\n return np.array(x), np.array(y)\n\n def _pad(self, ids):\n x = []\n for input_ids in ids:\n input_ids = input_ids[:min(len(input_ids), self.max_seq_len - 2)]\n input_ids = input_ids + [0] * (self.max_seq_len - len(input_ids))\n x.append(np.array(input_ids))\n return np.array(x)",
"_____no_output_____"
],
[
"tokenizer = FullTokenizer(vocab_file=os.path.join(bert_ckpt_dir, \"vocab.txt\"))",
"_____no_output_____"
],
[
"tokenizer.tokenize(\"I can't wait to visit Bulgaria again!\")",
"_____no_output_____"
],
[
"tokens = tokenizer.tokenize(\"I can't wait to visit Bulgaria again!\")\ntokenizer.convert_tokens_to_ids(tokens)",
"_____no_output_____"
],
[
"def create_model(max_seq_len, bert_ckpt_file):\n\n with tf.io.gfile.GFile(bert_config_file, \"r\") as reader:\n bc = StockBertConfig.from_json_string(reader.read())\n bert_params = map_stock_config_to_params(bc)\n bert_params.adapter_size = None\n bert = BertModelLayer.from_params(bert_params, name=\"bert\")\n \n input_ids = keras.layers.Input(shape=(max_seq_len, ), dtype='int32', name=\"input_ids\")\n bert_output = bert(input_ids)\n\n print(\"bert shape\", bert_output.shape)\n\n cls_out = keras.layers.Lambda(lambda seq: seq[:, 0, :])(bert_output)\n cls_out = keras.layers.Dropout(0.5)(cls_out)\n logits = keras.layers.Dense(units=768, activation=\"tanh\")(cls_out)\n logits = keras.layers.Dropout(0.5)(logits)\n logits = keras.layers.Dense(units=len(classes), activation=\"softmax\")(logits)\n\n model = keras.Model(inputs=input_ids, outputs=logits)\n model.build(input_shape=(None, max_seq_len))\n\n load_stock_weights(bert, bert_ckpt_file)\n \n return model",
"_____no_output_____"
]
],
[
[
"## Training",
"_____no_output_____"
]
],
[
[
"classes = train.intent.unique().tolist()\n\ndata = IntentDetectionData(train, test, tokenizer, classes, max_seq_len=128)",
"13784it [00:04, 3325.56it/s]\n700it [00:00, 3586.61it/s]\n"
],
[
"data.train_x.shape",
"_____no_output_____"
],
[
"data.train_x[0]",
"_____no_output_____"
],
[
"data.train_y[0]",
"_____no_output_____"
],
[
"data.max_seq_len",
"_____no_output_____"
],
[
"model = create_model(data.max_seq_len, bert_ckpt_file)",
"bert shape (None, 38, 768)\nDone loading 196 BERT weights from: model/uncased_L-12_H-768_A-12/bert_model.ckpt into <bert.model.BertModelLayer object at 0x7fa2d6676208> (prefix:bert). Count of weights not found in the checkpoint was: [0]. Count of weights with mismatched shape: [0]\nUnused weights from checkpoint: \n\tbert/embeddings/token_type_embeddings\n\tbert/pooler/dense/bias\n\tbert/pooler/dense/kernel\n\tcls/predictions/output_bias\n\tcls/predictions/transform/LayerNorm/beta\n\tcls/predictions/transform/LayerNorm/gamma\n\tcls/predictions/transform/dense/bias\n\tcls/predictions/transform/dense/kernel\n\tcls/seq_relationship/output_bias\n\tcls/seq_relationship/output_weights\n"
],
[
"model.summary()",
"Model: \"model\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_ids (InputLayer) [(None, 38)] 0 \n_________________________________________________________________\nbert (BertModelLayer) (None, 38, 768) 108890112 \n_________________________________________________________________\nlambda (Lambda) (None, 768) 0 \n_________________________________________________________________\ndropout (Dropout) (None, 768) 0 \n_________________________________________________________________\ndense (Dense) (None, 768) 590592 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 768) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 7) 5383 \n=================================================================\nTotal params: 109,486,087\nTrainable params: 109,486,087\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"model.compile(\n optimizer=keras.optimizers.Adam(1e-5),\n loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=[keras.metrics.SparseCategoricalAccuracy(name=\"acc\")]\n)",
"_____no_output_____"
],
[
"log_dir = \"log/intent_detection/\" + datetime.datetime.now().strftime(\"%Y%m%d-%H%M%s\")\ntensorboard_callback = keras.callbacks.TensorBoard(log_dir=log_dir)\n\nhistory = model.fit(\n x=data.train_x, \n y=data.train_y,\n validation_split=0.1,\n batch_size=16,\n shuffle=True,\n epochs=5,\n callbacks=[tensorboard_callback]\n)",
"Train on 12405 samples, validate on 1379 samples\nEpoch 1/5\n 5392/12405 [============>.................] - ETA: 2:51 - loss: 1.4535 - acc: 0.7361"
]
],
[
[
"## Evaluation",
"_____no_output_____"
]
],
[
[
"%load_ext tensorboard",
"_____no_output_____"
],
[
"%tensorboard --logdir log",
"_____no_output_____"
],
[
"ax = plt.figure().gca()\nax.xaxis.set_major_locator(MaxNLocator(integer=True))\n\nax.plot(history.history['loss'])\nax.plot(history.history['val_loss'])\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.legend(['train', 'test'])\nplt.title('Loss over training epochs')\nplt.show();",
"_____no_output_____"
],
[
"ax = plt.figure().gca()\nax.xaxis.set_major_locator(MaxNLocator(integer=True))\n\nax.plot(history.history['acc'])\nax.plot(history.history['val_acc'])\nplt.ylabel('Accuracy')\nplt.xlabel('Epoch')\nplt.legend(['train', 'test'])\nplt.title('Accuracy over training epochs')\nplt.show();",
"_____no_output_____"
],
[
"_, train_acc = model.evaluate(data.train_x, data.train_y)\n_, test_acc = model.evaluate(data.test_x, data.test_y)\n\nprint(\"train acc\", train_acc)\nprint(\"test acc\", test_acc)",
"_____no_output_____"
],
[
"y_pred = model.predict(data.test_x).argmax(axis=-1)",
"_____no_output_____"
],
[
"print(classification_report(data.test_y, y_pred, target_names=classes))",
"_____no_output_____"
],
[
"cm = confusion_matrix(data.test_y, y_pred)\ndf_cm = pd.DataFrame(cm, index=classes, columns=classes)",
"_____no_output_____"
],
[
"hmap = sns.heatmap(df_cm, annot=True, fmt=\"d\")\nhmap.yaxis.set_ticklabels(hmap.yaxis.get_ticklabels(), rotation=0, ha='right')\nhmap.xaxis.set_ticklabels(hmap.xaxis.get_ticklabels(), rotation=30, ha='right')\nplt.ylabel('True label')\nplt.xlabel('Predicted label');",
"_____no_output_____"
],
[
"sentences = [\n \"Play our song now\",\n \"Rate this book as awful\"\n]\n\npred_tokens = map(tokenizer.tokenize, sentences)\npred_tokens = map(lambda tok: [\"[CLS]\"] + tok + [\"[SEP]\"], pred_tokens)\npred_token_ids = list(map(tokenizer.convert_tokens_to_ids, pred_tokens))\n\npred_token_ids = map(lambda tids: tids +[0]*(data.max_seq_len-len(tids)),pred_token_ids)\npred_token_ids = np.array(list(pred_token_ids))\n\npredictions = model.predict(pred_token_ids).argmax(axis=-1)\n\nfor text, label in zip(sentences, predictions):\n print(\"text:\", text, \"\\nintent:\", classes[label])\n print()",
"_____no_output_____"
]
],
[
[
"# References\n\n- https://mccormickml.com/2019/07/22/BERT-fine-tuning/\n- https://github.com/snipsco/nlu-benchmark/tree/master/2017-06-custom-intent-engines\n- https://jalammar.github.io/illustrated-bert/\n- https://towardsdatascience.com/bert-for-dummies-step-by-step-tutorial-fb90890ffe03\n- https://www.reddit.com/r/MachineLearning/comments/ao23cp/p_how_to_use_bert_in_kaggle_competitions_a/",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7ee210b4b5bdf5d299fedc4428ba46f8fca1d3d | 1,377 | ipynb | Jupyter Notebook | bio-bigdata-notebook/tests/notebooks/bio_informatics_course.ipynb | gbrammer/nbi-jupyter-docker-stacks | 0c9e62e8f496d58feec7c88f2bb66c68443621bc | [
"MIT"
] | 2 | 2020-10-04T19:38:23.000Z | 2021-04-23T17:26:25.000Z | bio-bigdata-notebook/tests/notebooks/bio_informatics_course.ipynb | gbrammer/nbi-jupyter-docker-stacks | 0c9e62e8f496d58feec7c88f2bb66c68443621bc | [
"MIT"
] | 8 | 2021-05-17T12:26:51.000Z | 2022-03-10T09:25:05.000Z | bio-bigdata-notebook/tests/notebooks/bio_informatics_course.ipynb | gbrammer/nbi-jupyter-docker-stacks | 0c9e62e8f496d58feec7c88f2bb66c68443621bc | [
"MIT"
] | 2 | 2020-04-27T06:40:11.000Z | 2021-03-04T10:09:17.000Z | 15.131868 | 34 | 0.47785 | [
[
[
"# Packages\nlibrary(apeglm)",
"_____no_output_____"
],
[
"library(vsn)",
"_____no_output_____"
],
[
"library(tidyverse)",
"_____no_output_____"
],
[
"library(DESeq2)",
"_____no_output_____"
],
[
"library(ggplot2)",
"_____no_output_____"
],
[
"library(hexbin)",
"_____no_output_____"
],
[
"library(BiocParallel)",
"_____no_output_____"
],
[
"library(magick)\nstr(magick::magick_config())",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7ee24df304855b47bd1fd83568c7aad36b92be3 | 2,738 | ipynb | Jupyter Notebook | scripts/scripts_ipynb/test.ipynb | Hoseung/pyRamAn | f9386fa5a9f045f98590039988d3cd50bc488dc2 | [
"MIT"
] | 1 | 2021-11-25T16:11:56.000Z | 2021-11-25T16:11:56.000Z | scripts/scripts_ipynb/test.ipynb | Hoseung/pyRamAn | f9386fa5a9f045f98590039988d3cd50bc488dc2 | [
"MIT"
] | 6 | 2020-02-17T13:44:43.000Z | 2020-06-25T15:35:05.000Z | scripts/scripts_ipynb/test.ipynb | Hoseung/pyRamAn | f9386fa5a9f045f98590039988d3cd50bc488dc2 | [
"MIT"
] | 1 | 2021-11-25T16:11:56.000Z | 2021-11-25T16:11:56.000Z | 21.904 | 88 | 0.409423 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7ee279ff577fb3971ca691dcff20184cd8676ae | 26,064 | ipynb | Jupyter Notebook | 6. Dateitypen.ipynb | softvis-research/BeLL | 1ea5aa2ca9303c5af462037555f3b2e9d530aa6a | [
"Apache-2.0"
] | 8 | 2019-11-01T12:03:55.000Z | 2021-07-01T10:28:36.000Z | 6. Dateitypen.ipynb | softvis-research/BeLL | 1ea5aa2ca9303c5af462037555f3b2e9d530aa6a | [
"Apache-2.0"
] | 1 | 2020-11-18T22:27:46.000Z | 2020-11-18T22:27:46.000Z | 6. Dateitypen.ipynb | softvis-research/BeLL | 1ea5aa2ca9303c5af462037555f3b2e9d530aa6a | [
"Apache-2.0"
] | 2 | 2019-10-21T10:49:36.000Z | 2021-07-01T10:28:39.000Z | 94.093863 | 12,216 | 0.65953 | [
[
[
"# Welche Dateitypen gibt es?",
"_____no_output_____"
],
[
"## 1. Verbindung zur Datenbank\nEs wird eine Verbindung zur Neo4j-Datenbank aufgebaut.",
"_____no_output_____"
]
],
[
[
"import py2neo\n\ngraph = py2neo.Graph(bolt=True, host='localhost', user='neo4j', password='neo4j')",
"_____no_output_____"
]
],
[
[
"## 2. Cypher-Abfrage\nEs wird eine Abfrage an die Datenbank gestellt. Das Ergebnis wird in einem Dataframe (pandas) gespeichert.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nquery =\"MATCH (f:Git:File) RETURN f.relativePath as relativePath\"\ndf = pd.DataFrame(graph.run(query).data())\n",
"_____no_output_____"
]
],
[
[
"## 3. Datenaufbereitung\nZur Kontrolle werden die ersten fünf Zeilen des Ergebnisses der Abfrage als Tabelle ausgegeben.",
"_____no_output_____"
]
],
[
[
"df.head()",
"_____no_output_____"
]
],
[
[
"Der folgende Codeabschnitt extrahiert die verschiedenen Dateitypen entsprechend der Dateiendung und zählt deren Häufigkeit. Die Dateitypen werden in der Variable <font face=\"Courier\">datatype</font> und deren Häufigkeit in der Variable <font face=\"Courier\">frequency</font> gespeichert.",
"_____no_output_____"
]
],
[
[
"# Extrahiere Dateitypen aus Spalte des Dataframes.\ndatatypes = df['relativePath'].str.rsplit('.', 1).str[1]\n\n# Zähle die Anzahl der Dateitypen und bilde diese in einem Series-Objekt ab.\nseries = datatypes.value_counts()\n\n# Erzeuge zwei Listen aus dem Series-Objekt.\ndatatype = list(series.index)\nfrequency = list(series)\n\n# Erzeuge die Kategorie \"andere\", in der alle Dateitypen gesammelt werden, die weniger oder genau 20 mal auftauchen.\nandere = 0\nfor wert in frequency[:]:\n index = frequency.index(wert)\n if wert <= 20:\n andere += wert\n datatype.remove(datatype[index])\n frequency.remove(wert)\nfrequency.append(andere)\ndatatype.append(\"andere\")\n\nprint(frequency)\nprint(datatype)\n",
"[1383, 80, 41, 36, 21, 126]\n['java', 'html', 'class', 'gif', 'txt', 'andere']\n"
]
],
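[
[
"# Optional check: show the aggregated counts as a small table before plotting.\npd.DataFrame({'datatype': datatype, 'frequency': frequency})",
"_____no_output_____"
]
],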
[
[
"## 4. Visualisierung\nDie Daten werden mittels eines Pie Charts visualisiert.\n\n",
"_____no_output_____"
]
],
[
[
"from IPython.display import display, HTML\n\nbase_html = \"\"\"\n<!DOCTYPE html>\n<html>\n <head>\n <script type=\"text/javascript\" src=\"http://kozea.github.com/pygal.js/javascripts/svg.jquery.js\"></script>\n <script type=\"text/javascript\" src=\"https://kozea.github.io/pygal.js/2.0.x/pygal-tooltips.min.js\"\"></script>\n </head>\n <body>\n <figure>\n {rendered_chart}\n </figure>\n </body>\n</html>\n\"\"\"",
"_____no_output_____"
],
[
"# Erstelle Pie Chart.\nimport pygal \npie_chart = pygal.Pie()\npie_chart.title = 'Dateitypen'\nfor einzelneDateitypen in datatype:\n index= datatype.index(einzelneDateitypen)\n anzahl=frequency[index]\n pie_chart.add(einzelneDateitypen, anzahl)\ndisplay(HTML(base_html.format(rendered_chart=pie_chart.render(is_unicode=True))))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7ee31db980ecec51c9383f5abfbcf5416a968a8 | 16,568 | ipynb | Jupyter Notebook | Basics.ipynb | Joe-mabus/Thanksgiving_Dinner | 2a61911059ce95b609736eb5e07443b4a4bf08b3 | [
"Unlicense"
] | null | null | null | Basics.ipynb | Joe-mabus/Thanksgiving_Dinner | 2a61911059ce95b609736eb5e07443b4a4bf08b3 | [
"Unlicense"
] | null | null | null | Basics.ipynb | Joe-mabus/Thanksgiving_Dinner | 2a61911059ce95b609736eb5e07443b4a4bf08b3 | [
"Unlicense"
] | null | null | null | 80.038647 | 1,702 | 0.664474 | [
[
[
"import pandas as pd \n\ndata = pd.read_csv(\"thanksgiving.csv\", encoding=\"Latin-1\")\n\n# print(data.head())\n\n# print(data.columns)\n\nPositive_survey_counts = data['Do you celebrate Thanksgiving?'].value_counts()\n\nPositive_surveys = data[data['Do you celebrate Thanksgiving?'] == 'Yes']\n\nMain_dish_counts = data['What is typically the main dish at your Thanksgiving dinner?'].value_counts()\n\nPositive_Tofurkey = data[data['What is typically the main dish at your Thanksgiving dinner?'] == 'Tofurkey']\n\nGravy_on_Tofurkey = Positive_Tofurkey['Do you typically have gravy?'].value_counts()\n\napple_isnull = data['Which type of pie is typically served at your Thanksgiving dinner? Please select all that apply. - Apple'].isnull()\npumpkin_isnull = data['Which type of pie is typically served at your Thanksgiving dinner? Please select all that apply. - Pumpkin'].isnull()\npecan_isnull = data['Which type of pie is typically served at your Thanksgiving dinner? Please select all that apply. - Pecan'].isnull()\n\nno_pies = apple_isnull & pumpkin_isnull & pecan_isnull\n\ndef string_to_income(string):\n if pd.isnull(string):\n return None\n string = string.split(' ')[0]\n if string == 'Prefer':\n return None\n else:\n string = string.split(' ')[0]\n string = string.replace('$','')\n string = string.replace(',','')\n return int(string)\n\ndata['int_income'] = data['How much total combined money did all members of your HOUSEHOLD earn last year?'].apply(string_to_income)\n \n# print(data['int_income'].describe())\n\ndef string_to_age(string):\n if pd.isnull(string):\n return None\n else: \n string = string.split(' ')[0]\n string = string.replace('+',\"\")\n return int(string)\n \ndata['int_age'] = data['Age'].apply(string_to_age)\n\n# print(data['int_age'].value_counts())\n# print(data['int_age'].describe())",
"_____no_output_____"
],
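[
"# Quick illustrative check of the two cleaning helpers above, using made-up answers\n# written in the survey's wording.\nprint(string_to_income(\"$25,000 to $49,999\"))    # expected: 25000\nprint(string_to_income(\"Prefer not to answer\"))  # expected: None\nprint(string_to_age(\"60+\"))                      # expected: 60\nprint(string_to_age(\"18 - 29\"))                  # expected: 18",
"_____no_output_____"
],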
[
"low_income = data['int_income'] < 150000\nhigh_income = data['int_income'] > 150000\n\nlow_travel_results = data[low_income]['How far will you travel for Thanksgiving?'].value_counts(normalize = True)\nhigh_travel_results = data[high_income]['How far will you travel for Thanksgiving?'].value_counts(normalize = True) \n \nprint(low_travel_results)\nprint(high_travel_results)",
"_____no_output_____"
],
[
"So according to the results, there is a higher percentage of people who will \nhave Thanksgiving at home who make more that 150000.\n\nThere is a higher percentage of people who will travel for Thanksgiving in the \nless that 150000 bracket.",
"_____no_output_____"
],
[
"Friendsgiving1 = data.pivot_table(\n index = \"Have you ever tried to meet up with hometown friends on Thanksgiving night?\", \n columns = 'Have you ever attended a \"Friendsgiving?\"', \n values = \"int_age\")\n\nFriendsgiving2 = data.pivot_table(\n index = \"Have you ever tried to meet up with hometown friends on Thanksgiving night?\", \n columns = 'Have you ever attended a \"Friendsgiving?\"', \n values = \"int_income\")\nprint(Friendsgiving1)\nprint(Friendsgiving2)",
"Have you ever attended a \"Friendsgiving?\" No Yes\nHave you ever tried to meet up with hometown fr... \nNo 42.283702 37.010526\nYes 41.475410 33.976744\nHave you ever attended a \"Friendsgiving?\" No Yes\nHave you ever tried to meet up with hometown fr... \nNo 78914.549654 72894.736842\nYes 78750.000000 66019.736842\n"
],
[
"Looks like the younger and poorer you are, the more likely you are to attend a \nfriendsgiving. ",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7ee3885c95ea5cb8ba88b4937458ee64157d1a5 | 107,979 | ipynb | Jupyter Notebook | ionized/galactic_plane_continuum_21cm.ipynb | CambridgeUniversityPress/IntroductionInterstellarMedium | fbfe64c7d50d15da93ebf2fbc7d86d83cbf8941a | [
"CC0-1.0"
] | 3 | 2021-04-26T15:37:13.000Z | 2021-05-13T04:42:15.000Z | ionized/galactic_plane_continuum_21cm.ipynb | interstellarmedium/interstellarmedium.github.io | 0440a5bd80052ab87575e70fc39acd4bf8e225b3 | [
"CC0-1.0"
] | null | null | null | ionized/galactic_plane_continuum_21cm.ipynb | interstellarmedium/interstellarmedium.github.io | 0440a5bd80052ab87575e70fc39acd4bf8e225b3 | [
"CC0-1.0"
] | null | null | null | 749.854167 | 104,026 | 0.937404 | [
[
[
"## Introduction to the Interstellar Medium\n### Jonathan Williams",
"_____no_output_____"
],
[
"### Figure 6.3: portion of the Galactic plane in 21cm continuum showing bremsstrahlung and synchrotron sources",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nfrom astropy.io import fits\nfrom astropy.wcs import WCS\nfrom astropy.visualization import (ImageNormalize, SqrtStretch, LogStretch, AsinhStretch)\n%matplotlib inline",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(14,7.5))\n\nhdu = fits.open('g330to340.i.fits')\nwcs1 = WCS(hdu[0])\nax1 = fig.add_subplot(111, projection=wcs1)\nim1 = hdu[0].data\nhd1 = hdu[0].header\nhdu.close()\n#print(hd1)\n\nimin, imax = 380, 730\nimcrop = im1[:, imin:imax]\n#print(imcrop.min(),imcrop.max())\n\nnorm = ImageNormalize(imcrop, vmin=-0.15, vmax=2.0, stretch=AsinhStretch(a=0.1))\nax1.imshow(imcrop, cmap='gray', origin='lower', norm=norm) \n\nax1.set_xlim(0,350)\nax1.set_ylim(0,180)\nplt.plot([0,350], [90,90], ls='dashed', color='white', lw=2)\nax1.text(82, 45, 'HII', color='white', fontsize=18, fontweight='normal')\nax1.text(316, 97, 'SNR', color='white', fontsize=18, fontweight='normal')\n\n# scale bar\ndx = hd1['CDELT1']\n#print(dx)\n# 40'' per pixel, make bar 1 deg = 90 pix\nxbar = 90\nx0 = 250\nx1 = x0 + xbar\ny0 = 12\ndy = 2\nax1.plot([x0,x1],[y0,y0], 'w-', lw=2)\nax1.plot([x0,x0],[y0-dy,y0+dy], 'w-', lw=2)\nax1.plot([x1,x1],[y0-dy,y0+dy], 'w-', lw=2)\n\n# this crashes binder\n#mpl.rc('text', usetex=True)\n#mpl.rcParams['text.latex.preamble']=[r\"\\usepackage{amsmath}\"]\n#ax1.text(0.5*(x0+x1), y0+1.5*dy, r'$\\boldsymbol{1^\\circ}$', color='white', fontsize=24, fontweight='heavy', ha='center')\n\n# but this works ok\nax1.text(0.5*(x0+x1), y0+1.5*dy, r'$1^\\circ$', color='white', fontsize=24, fontweight='heavy', ha='center')\nax1.text(0.03,0.91,'21cm continuum', {'color': 'w', 'fontsize': 28}, transform=ax1.transAxes)\n\nfor i in (0,1):\n ax1.coords[i].set_ticks_visible(False)\n ax1.coords[i].set_ticklabel_visible(False)\n ax1.coords[i].set_ticks_visible(False)\n ax1.coords[i].set_ticklabel_visible(False)\n ax1.coords[i].set_axislabel('')\n ax1.coords[i].set_axislabel('')\n\nplt.tight_layout()\nplt.savefig('galactic_plane_continuum_21cm.pdf')",
"-0.15332128 7.7325406\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
e7ee40f4459a491f4eca8b134f95189cab0c016b | 39,369 | ipynb | Jupyter Notebook | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning | 31dc8ee2dc23b8a5f15337430b8a2c2f84b1749d | [
"MIT"
] | 1 | 2020-05-08T20:15:46.000Z | 2020-05-08T20:15:46.000Z | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning | 31dc8ee2dc23b8a5f15337430b8a2c2f84b1749d | [
"MIT"
] | null | null | null | tv-script-generation/dlnd_tv_script_generation.ipynb | duozhanggithub/Deep-Learning | 31dc8ee2dc23b8a5f15337430b8a2c2f84b1749d | [
"MIT"
] | null | null | null | 33.138889 | 556 | 0.570119 | [
[
[
"# TV Script Generation\nIn this project, you'll generate your own [Simpsons](https://en.wikipedia.org/wiki/The_Simpsons) TV scripts using RNNs. You'll be using part of the [Simpsons dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data) of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at [Moe's Tavern](https://simpsonswiki.com/wiki/Moe's_Tavern).\n## Get the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]",
"_____no_output_____"
]
],
[
[
"## Explore the Data\nPlay around with `view_sentence_range` to view different parts of the data.",
"_____no_output_____"
]
],
[
[
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Dataset Stats\nRoughly the number of unique words: 11492\nNumber of scenes: 262\nAverage number of sentences in each scene: 15.251908396946565\nNumber of lines: 4258\nAverage number of words in each line: 11.50164396430249\n\nThe sentences 0 to 10:\n\nMoe_Szyslak: (INTO PHONE) Moe's Tavern. Where the elite meet to drink.\nBart_Simpson: Eh, yeah, hello, is Mike there? Last name, Rotch.\nMoe_Szyslak: (INTO PHONE) Hold on, I'll check. (TO BARFLIES) Mike Rotch. Mike Rotch. Hey, has anybody seen Mike Rotch, lately?\nMoe_Szyslak: (INTO PHONE) Listen you little puke. One of these days I'm gonna catch you, and I'm gonna carve my name on your back with an ice pick.\nMoe_Szyslak: What's the matter Homer? You're not your normal effervescent self.\nHomer_Simpson: I got my problems, Moe. Give me another one.\nMoe_Szyslak: Homer, hey, you should not drink to forget your problems.\nBarney_Gumble: Yeah, you should only drink to enhance your social skills.\n\n"
]
],
[
[
"## Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\n\n### Lookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call `vocab_to_int`\n- Dictionary to go from the id to word, we'll call `int_to_vocab`\n\nReturn these dictionaries in the following tuple `(vocab_to_int, int_to_vocab)`",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport problem_unittests as tests\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # TODO: Implement Function\n text = list(set(text))\n #text_id = range(len(text))\n #int_to_vocab = dict(zip(text_id, text)) \n #vocab_to_int = dict(zip(text, text_id))\n int_to_vocab = {word_i: word for word_i, word in enumerate(text)}\n vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}\n return vocab_to_int, int_to_vocab\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)",
"Tests Passed\n"
]
],
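[
[
"# Quick illustrative check of create_lookup_tables on a tiny made-up word list.\nexample_vocab_to_int, example_int_to_vocab = create_lookup_tables(['moe', 'bart', 'moe', 'homer'])\nprint(example_vocab_to_int)\nprint(example_int_to_vocab)",
"_____no_output_____"
]
],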
[
[
"### Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\n\nImplement the function `token_lookup` to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\n\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".",
"_____no_output_____"
]
],
[
[
"def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n keys = ['.', ',', '\"', ';', '!', '?', '(', ')', '--','\\n']\n values = ['||Period||','||Comma||','||Quotation_Mark||','||Semicolon||','||Exclamation_mark||','||Question_mark||','||Left_Parentheses||','||Right_Parentheses||','||Dash||','||Return||']\n token_lookup = dict(zip(keys,values))\n return token_lookup\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)",
"Tests Passed\n"
]
],
[
[
"## Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)",
"_____no_output_____"
]
],
[
[
"# Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()",
"_____no_output_____"
]
],
[
[
"## Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\n\n### Check the Version of TensorFlow and Access to GPU",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"TensorFlow Version: 1.0.0\n"
]
],
[
[
"### Input\nImplement the `get_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n- Input text placeholder named \"input\" using the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) `name` parameter.\n- Targets placeholder\n- Learning Rate placeholder\n\nReturn the placeholders in the following tuple `(Input, Targets, LearningRate)`",
"_____no_output_____"
]
],
[
[
"def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n # TODO: Implement Function\n Input = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input') \n Targets = tf.placeholder(dtype=tf.int32, shape=[None, None], name='targets') \n LearningRate = tf.placeholder(dtype=tf.float32, name='learning_rate') \n return Input, Targets, LearningRate\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)",
"Tests Passed\n"
]
],
[
[
"### Build RNN Cell and Initialize\nStack one or more [`BasicLSTMCells`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/BasicLSTMCell) in a [`MultiRNNCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell).\n- The Rnn size should be set using `rnn_size`\n- Initalize Cell State using the MultiRNNCell's [`zero_state()`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell#zero_state) function\n - Apply the name \"initial_state\" to the initial state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)\n\nReturn the cell and initial state in the following tuple `(Cell, InitialState)`",
"_____no_output_____"
]
],
[
[
"def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n # TODO: Implement Function\n #rnn_layers = 2\n \n lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n Cell = tf.contrib.rnn.MultiRNNCell([lstm])\n #initial_state = Cell.zero_state(batch_size=tf.placeholder(dtype=tf.int32, shape=[]), dtype=tf.float32)\n InitialState = tf.identity(Cell.zero_state(batch_size, tf.float32), name = 'initial_state')\n #InitialState = tf.identity(initial_state, name='initial_state')\n return Cell, InitialState\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)",
"Tests Passed\n"
]
],
[
[
"### Word Embedding\nApply embedding to `input_data` using TensorFlow. Return the embedded sequence.",
"_____no_output_____"
]
],
[
[
"def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n # TODO: Implement Function\n embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, input_data)\n return embed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)",
"Tests Passed\n"
]
],
[
[
"### Build RNN\nYou created a RNN Cell in the `get_init_cell()` function. Time to use the cell to create a RNN.\n- Build the RNN using the [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)\n - Apply the name \"final_state\" to the final state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)\n\nReturn the outputs and final_state state in the following tuple `(Outputs, FinalState)` ",
"_____no_output_____"
]
],
[
[
"def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n # TODO: Implement Function\n Outputs, Final_State = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)\n FinalState = tf.identity(Final_State, name='final_state')\n return Outputs, FinalState\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)",
"Tests Passed\n"
]
],
[
[
"### Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to `input_data` using your `get_embed(input_data, vocab_size, embed_dim)` function.\n- Build RNN using `cell` and your `build_rnn(cell, inputs)` function.\n- Apply a fully connected layer with a linear activation and `vocab_size` as the number of outputs.\n\nReturn the logits and final state in the following tuple (Logits, FinalState) ",
"_____no_output_____"
]
],
[
[
"def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :param embed_dim: Number of embedding dimensions\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n embedding = get_embed(input_data, vocab_size, embed_dim)\n Outputs, FinalState = build_rnn(cell, embedding)\n Logits = tf.contrib.layers.fully_connected(Outputs, vocab_size, activation_fn=None)\n return Logits, FinalState\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)",
"Tests Passed\n"
]
],
[
[
"### Batches\nImplement `get_batches` to create batches of input and targets using `int_text`. The batches should be a Numpy array with the shape `(number of batches, 2, batch size, sequence length)`. Each batch contains two elements:\n- The first element is a single batch of **input** with the shape `[batch size, sequence length]`\n- The second element is a single batch of **targets** with the shape `[batch size, sequence length]`\n\nIf you can't fill the last batch with enough data, drop the last batch.\n\nFor exmple, `get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2)` would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2], [ 7 8], [13 14]]\n # Batch of targets\n [[ 2 3], [ 8 9], [14 15]]\n ]\n\n # Second Batch\n [\n # Batch of Input\n [[ 3 4], [ 9 10], [15 16]]\n # Batch of targets\n [[ 4 5], [10 11], [16 17]]\n ]\n\n # Third Batch\n [\n # Batch of Input\n [[ 5 6], [11 12], [17 18]]\n # Batch of targets\n [[ 6 7], [12 13], [18 1]]\n ]\n]\n```\n\nNotice that the last target value in the last batch is the first input value of the first batch. In this case, `1`. This is a common technique used when creating sequence batches, although it is rather unintuitive.",
"_____no_output_____"
]
],
[
[
"def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n # TODO: Implement Function\n n_batches = len(int_text)//(batch_size*seq_length)\n input_batch = np.array(int_text[0: n_batches * batch_size * seq_length])\n target_batch = np.array(int_text[1: n_batches * batch_size * seq_length])\n target_batch = np.append(target_batch, int_text[0])\n \n input_batchs = np.split(input_batch.reshape(batch_size, -1), n_batches, 1)\n target_batchs = np.split(target_batch.reshape(batch_size, -1), n_batches, 1)\n \n get_batches = list(zip(input_batchs, target_batchs))\n return np.array(get_batches)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)",
"Tests Passed\n"
]
],
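[
[
"# Quick illustrative check: reproduce the worked example above and confirm the shape\n# (number of batches, 2, batch size, sequence length).\nexample_batches = get_batches(list(range(1, 21)), 3, 2)\nprint(example_batches.shape)\nprint(example_batches[0])",
"_____no_output_____"
]
],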
[
[
"## Neural Network Training\n### Hyperparameters\nTune the following parameters:\n\n- Set `num_epochs` to the number of epochs.\n- Set `batch_size` to the batch size.\n- Set `rnn_size` to the size of the RNNs.\n- Set `embed_dim` to the size of the embedding.\n- Set `seq_length` to the length of sequence.\n- Set `learning_rate` to the learning rate.\n- Set `show_every_n_batches` to the number of batches the neural network should print progress.",
"_____no_output_____"
]
],
[
[
"# Number of Epochs\nnum_epochs = 100\n# Batch Size\nbatch_size = 156\n# RNN Size\nrnn_size = 600\n# Embedding Dimension Size\nembed_dim = 500\n# Sequence Length\nseq_length = 14\n# Learning Rate\nlearning_rate = 0.001\n# Show stats for every n number of batches\nshow_every_n_batches = 100\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'",
"_____no_output_____"
]
],
[
[
"### Build the Graph\nBuild the graph using the neural network you implemented.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)",
"_____no_output_____"
]
],
[
[
"## Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the [forms](https://discussions.udacity.com/) to see if anyone is having the same problem.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')",
"Epoch 0 Batch 0/31 train_loss = 8.825\nEpoch 3 Batch 7/31 train_loss = 5.159\nEpoch 6 Batch 14/31 train_loss = 4.528\nEpoch 9 Batch 21/31 train_loss = 4.046\nEpoch 12 Batch 28/31 train_loss = 3.626\nEpoch 16 Batch 4/31 train_loss = 3.317\nEpoch 19 Batch 11/31 train_loss = 3.031\nEpoch 22 Batch 18/31 train_loss = 2.765\nEpoch 25 Batch 25/31 train_loss = 2.474\nEpoch 29 Batch 1/31 train_loss = 2.178\nEpoch 32 Batch 8/31 train_loss = 2.101\nEpoch 35 Batch 15/31 train_loss = 1.774\nEpoch 38 Batch 22/31 train_loss = 1.655\nEpoch 41 Batch 29/31 train_loss = 1.581\nEpoch 45 Batch 5/31 train_loss = 1.388\nEpoch 48 Batch 12/31 train_loss = 1.260\nEpoch 51 Batch 19/31 train_loss = 1.038\nEpoch 54 Batch 26/31 train_loss = 1.010\nEpoch 58 Batch 2/31 train_loss = 0.891\nEpoch 61 Batch 9/31 train_loss = 0.773\nEpoch 64 Batch 16/31 train_loss = 0.718\nEpoch 67 Batch 23/31 train_loss = 0.642\nEpoch 70 Batch 30/31 train_loss = 0.591\nEpoch 74 Batch 6/31 train_loss = 0.534\nEpoch 77 Batch 13/31 train_loss = 0.482\nEpoch 80 Batch 20/31 train_loss = 0.438\nEpoch 83 Batch 27/31 train_loss = 0.359\nEpoch 87 Batch 3/31 train_loss = 0.369\nEpoch 90 Batch 10/31 train_loss = 0.338\nEpoch 93 Batch 17/31 train_loss = 0.300\nEpoch 96 Batch 24/31 train_loss = 0.291\nModel Trained and Saved\n"
]
],
[
[
"## Save Parameters\nSave `seq_length` and `save_dir` for generating a new TV script.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))",
"_____no_output_____"
]
],
[
[
"# Checkpoint",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()",
"_____no_output_____"
]
],
[
[
"## Implement Generate Functions\n### Get Tensors\nGet tensors from `loaded_graph` using the function [`get_tensor_by_name()`](https://www.tensorflow.org/api_docs/python/tf/Graph#get_tensor_by_name). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\n\nReturn the tensors in the following tuple `(InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)` ",
"_____no_output_____"
]
],
[
[
"def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n with loaded_graph.as_default() as g:\n InputTensor = loaded_graph.get_tensor_by_name(\"input:0\")\n InitialStateTensor = loaded_graph.get_tensor_by_name(\"initial_state:0\")\n FinalStateTensor = loaded_graph.get_tensor_by_name(\"final_state:0\")\n ProbsTensor = loaded_graph.get_tensor_by_name(\"probs:0\")\n return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)",
"Tests Passed\n"
]
],
[
[
"### Choose Word\nImplement the `pick_word()` function to select the next word using `probabilities`.",
"_____no_output_____"
]
],
[
[
"def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n pick_word = np.random.choice(len(int_to_vocab), 1, p=probabilities)[0]\n pick_word = int_to_vocab.get(pick_word)\n return pick_word\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)",
"Tests Passed\n"
]
],
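[
[
"# Quick illustrative check of pick_word with a toy probability vector over a made-up three-word vocabulary.\ntoy_probs = np.array([0.1, 0.8, 0.1])\ntoy_int_to_vocab = {0: 'moe', 1: 'homer', 2: 'bart'}\nprint(pick_word(toy_probs, toy_int_to_vocab))",
"_____no_output_____"
]
],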
[
[
"## Generate TV Script\nThis will generate the TV script for you. Set `gen_length` to the length of TV script you want to generate.",
"_____no_output_____"
]
],
[
[
"gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)",
"moe_szyslak: ah-ha, big mistake pal! hey moe, can you be the best book on you could never!\nhomer_simpson:(getting idea) but you're dea-d-d-dead.(three stooges scared sound)\ngrampa_simpson:(upbeat) i guess despite all sweet music, but then we pour it a beer at half something.\n\n\nlenny_leonard: hey, homer. r.\nhomer_simpson: moe, it's called!\nmoe_szyslak: guys, i'm gonna let him want to go to my dad\n\n\nhomer_simpson:(to moe) thirty cases of cough syrup. sign in the way.\nbarney_gumble: yeah, that's probably what i look at you, i'm too?\nmoe_szyslak: oh, here. the audience is still love over.\nmoe's_thoughts: this is kent brockman. and it begins,\" dear is to that!\nmoe_szyslak:(laughs) if you want to be back.\nvoice: excuse me, so you can either sit here in the back of my cruiser.\nhomer_simpson: well if i only got their secrets.\nlenny_leonard:(amiable) amanda\n"
]
],
[
[
"# The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! As we mentioned in the begging of this project, this is a subset of [another dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data). We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\n# Submitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7ee48fb8979200b2b5204eff5c72942a392b09e | 51,829 | ipynb | Jupyter Notebook | house_prices/analysis12.ipynb | randat9/House_Prices | 3a7e51e1ac36aea0faabc61786652cf706b53c7e | [
"MIT"
] | null | null | null | house_prices/analysis12.ipynb | randat9/House_Prices | 3a7e51e1ac36aea0faabc61786652cf706b53c7e | [
"MIT"
] | null | null | null | house_prices/analysis12.ipynb | randat9/House_Prices | 3a7e51e1ac36aea0faabc61786652cf706b53c7e | [
"MIT"
] | null | null | null | 80.73053 | 6,686 | 0.447452 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nimport pandas_profiling as pp\nimport seaborn as sns\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import OneHotEncoder\n\nfrom functions.preprocessing import Imputer, CategoricalEncoder, remove_outliers\n\nfrom lazypredict.Supervised import LazyRegressor\n\nplt.style.use('ggplot')",
"_____no_output_____"
],
[
"def remove_empty_features(data, threshold):\n \"\"\"...\"\"\"\n cols_to_drop = [column for column in data.columns \n if data[column].isna().mean() > threshold]\n data = data.drop(columns = cols_to_drop)\n return data, cols_to_drop\n\ndef mapping_from_list(order):\n return {label: idx for idx, label in enumerate(order)}\n\ndef ordinal_feature(data: pd.DataFrame, dictionary: dict):\n \"\"\" Transform ordinal features\n\n Args:\n data (dataframe)\n dictionary (dict)\n\n Returns:\n data (dataframe): encoded dataframe\n \"\"\"\n data_copy = data.copy()\n for key,value in dictionary.items():\n data_copy[key] = data_copy[key].map(mapping_from_list(value))\n\n return data_copy",
"_____no_output_____"
],
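[
"# Illustrative sketch of the ordinal helpers above on a tiny made-up frame.\ntoy = pd.DataFrame({'ExterQual': ['TA', 'Gd', 'Po']})\nprint(mapping_from_list(['None', 'Po', 'Fa', 'TA', 'Gd']))\nprint(ordinal_feature(toy, {'ExterQual': ['None', 'Po', 'Fa', 'TA', 'Gd']}))",
"_____no_output_____"
],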
[
"# Road raw training data\nraw_data = pd.read_csv('train.csv', index_col=0)\nraw_data.head(5)",
"_____no_output_____"
],
[
"options = {\n \"MSSubClass\": {\"strategy\": \"most_frequent\"},\n \"MSZoning\": {\"strategy\": \"most_frequent\"},\n \"LotFrontage\": {\"strategy\": \"mean\"},\n \"LotArea\": {\"strategy\": \"mean\"},\n \"Street\": {\"strategy\": \"most_frequent\"},\n \"Alley\": {\"strategy\": \"constant\", \"fill_value\": \"NoAccess\"},\n \"LotShape\": {\"strategy\": \"most_frequent\"},\n \"LandContour\": {\"strategy\": \"most_frequent\"},\n \"Utilities\": {\"strategy\": \"most_frequent\"},\n \"LotConfig\": {\"strategy\": \"most_frequent\"},\n \"LandSlope\": {\"strategy\": \"most_frequent\"},\n \"Neighborhood\": {\"strategy\": \"most_frequent\"},\n \"Condition1\": {\"strategy\": \"most_frequent\"},\n \"Condition2\": {\"strategy\": \"most_frequent\"},\n \"Electrical\": {\"strategy\": \"most_frequent\"},\n \"1stFlrSF\": {\"strategy\": \"mean\"},\n \"2ndFlrSF\": {\"strategy\": \"mean\"},\n \"LowQualFinSF\": {\"strategy\": \"mean\"},\n \"GrLivArea\": {\"strategy\": \"mean\"},\n \"BsmtFullBath\": {\"strategy\": \"median\"},\n \"BsmtHalfBath\": {\"strategy\": \"median\"},\n \"FullBath\": {\"strategy\": \"median\"},\n \"HalfBath\": {\"strategy\": \"median\"},\n \"BedroomAbvGr\": {\"strategy\": \"median\"},\n \"KitchenAbvGr\": {\"strategy\": \"median\"},\n \"KitchenQual\": {\"strategy\": \"most_frequent\"},\n \"TotRmsAbvGrd\": {\"strategy\": \"median\"},\n \"BldgType\": {\"strategy\": \"most_frequent\"},\n \"HouseStyle\": {\"strategy\": \"most_frequent\"},\n \"OverallQual\": {\"strategy\": \"median\"},\n \"OverallCond\": {\"strategy\": \"median\"},\n \"YearBuilt\": {\"strategy\": \"median\"},\n \"YearRemodAdd\": {\"strategy\": \"median\"},\n \"RoofStyle\": {\"strategy\": \"most_frequent\"},\n \"RoofMatl\": {\"strategy\": \"most_frequent\"},\n \"Exterior1st\": {\"strategy\": \"most_frequent\"},\n \"Exterior2nd\": {\"strategy\": \"most_frequent\"},\n \"MasVnrType\": {\"strategy\": \"constant\", \"fill_value\": \"None\"},\n \"MasVnrArea\": {\"strategy\": \"mean\"},\n \"ExterQual\": {\"strategy\": \"most_frequent\"},\n \"ExterCond\": {\"strategy\": \"most_frequent\"},\n \"Foundation\": {\"strategy\": \"most_frequent\"},\n \"BsmtQual\": {\"strategy\": \"constant\", \"fill_value\": \"NoBasement\"},\n \"BsmtCond\": {\"strategy\": \"constant\", \"fill_value\": \"NoBasement\"},\n \"BsmtExposure\": {\"strategy\": \"constant\", \"fill_value\": \"NoBasement\"},\n \"BsmtFinType1\": {\"strategy\": \"constant\", \"fill_value\": \"NoBasement\"},\n \"BsmtFinSF1\": {\"strategy\": \"mean\"},\n \"BsmtFinType2\": {\"strategy\": \"constant\", \"fill_value\": \"NoBasement\"},\n \"BsmtFinSF2\": {\"strategy\": \"mean\"},\n \"BsmtUnfSF\": {\"strategy\": \"mean\"},\n \"TotalBsmtSF\": {\"strategy\": \"mean\"},\n \"Heating\": {\"strategy\": \"most_frequent\"},\n \"HeatingQC\": {\"strategy\": \"most_frequent\"},\n \"CentralAir\": {\"strategy\": \"most_frequent\"},\n \"ScreenPorch\": {\"strategy\": \"mean\"},\n \"PoolArea\": {\"strategy\": \"mean\"},\n \"PoolQC\": {\"strategy\": \"constant\", \"fill_value\": \"NoPool\"},\n \"Fence\": {\"strategy\": \"constant\", \"fill_value\": \"NoFence\"},\n \"MiscFeature\": {\"strategy\": \"constant\", \"fill_value\": \"None\"},\n \"MiscVal\": {\"strategy\": \"mean\"},\n \"MoSold\": {\"strategy\": \"median\"},\n \"YrSold\": {\"strategy\": \"median\"},\n \"SaleType\": {\"strategy\": \"most_frequent\"},\n \"SaleCondition\": {\"strategy\": \"most_frequent\"},\n \"Functional\": {\"strategy\": \"most_frequent\"},\n \"Fireplaces\": {\"strategy\": \"most_frequent\"},\n \"FireplaceQu\": {\"strategy\": 
\"constant\", \"fill_value\": \"NoAccess\"},\n \"GarageType\": {\"strategy\": \"constant\", \"fill_value\": \"NoAccess\"},\n \"GarageYrBlt\": {\"strategy\": \"most_frequent\"},\n \"GarageFinish\": {\"strategy\": \"constant\", \"fill_value\": \"NoAccess\"},\n \"GarageCars\": {\"strategy\": \"most_frequent\"},\n \"GarageArea\": {\"strategy\": \"median\"},\n \"GarageQual\": {\"strategy\": \"constant\", \"fill_value\": \"NoAccess\"},\n \"GarageCond\": {\"strategy\": \"constant\", \"fill_value\": \"NoAccess\"},\n \"PavedDrive\": {\"strategy\": \"most_frequent\"},\n \"WoodDeckSF\": {\"strategy\": \"most_frequent\"},\n \"OpenPorchSF\": {\"strategy\": \"most_frequent\"},\n \"EnclosedPorch\": {\"strategy\": \"mean\"},\n \"3SsnPorch\": {\"strategy\": \"most_frequent\"},\n}",
"_____no_output_____"
],
[
"params = {\n \"threshold_empty_features\": 0.3,\n}\n\ncols_to_drop = {\n \"remove_empty_features\": []\n}\n\ncategorical_colums = ['Exterior1st', 'Foundation', 'MasVnrType', 'Neighborhood', \n 'PavedDrive', 'Electrical', 'MSSubClass', 'SaleCondition',\n 'GarageType', 'Exterior2nd', 'MSZoning', 'CentralAir', \n 'Street','Alley','LandContour','Utilities','LotConfig', 'LandSlope', 'Condition1', 'Condition2', 'BldgType', 'HouseStyle', 'RoofStyle', 'RoofMatl', 'BsmtFinType2', 'Heating', 'Functional', 'GarageCond', 'Fence', 'MiscFeature', 'SaleType']\n\n# Ordinal features options\nordinal_columns = ['HeatingQC', 'GarageQual', 'BsmtFinType1', 'ExterQual', \n 'GarageFinish', 'BsmtExposure', 'LotShape', 'OverallQual',\n 'BsmtQual', 'KitchenQual']\n\nordinal_mapping = {\n 'BsmtExposure': ['None', 'No', 'Mn', 'Av', 'Gd'],\n 'BsmtFinType1': ['None', 'Unf', 'LwQ', 'Rec', 'BLQ', 'ALQ', 'GLQ'],\n 'GarageFinish': ['None', 'Unf', 'RFn', 'Fin'],\n 'LotShape': ['IR3', 'IR2', 'IR1', 'Reg']\n}\n\nordinal_common = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond', 'HeatingQC',\n 'KitchenQual', 'FireplaceQu', 'GarageQual', 'PoolQC']\nfor column in ordinal_common:\n ordinal_mapping[column] = ['None', 'Po', 'Fa', 'TA', 'Gd']",
"_____no_output_____"
],
[
"# Removing features with a lot of missing values\ndata, cols_to_drop[\"remove_empty_features\"] = remove_empty_features(\n raw_data, \n params[\"threshold_empty_features\"]\n)\n\n# Impute missing values\nimp = Imputer(options=options)\ndata = imp.fit_transform(raw_data)\n\n# HOTFIX\nfor key in imp.options:\n if isinstance(imp.options[key]['_fill'], np.integer):\n imp.options[key]['_fill'] = int(imp.options[key]['_fill'])\nimp.save_options('imputer_options.json')\n\n# Encoding categorical features\nce = CategoricalEncoder(categorical_colums)\ndata = ce.fit_transform(data)\n\n# Encoding ordinal features\ndata = ordinal_feature(data, ordinal_mapping)\n\n# data\ndata",
"_____no_output_____"
]
],
[
[
"## Model metrics before removing outliers",
"_____no_output_____"
]
],
[
[
"reg = LazyRegressor()\nX = data.drop(columns = [\"SalePrice\"])\ny = data[\"SalePrice\"]\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=42)\nmodels, _ = reg.fit(X_train, X_test, y_train, y_test)\nmodels",
"100%|██████████| 43/43 [00:36<00:00, 1.19it/s]\n"
]
],
[
[
"## Removing outliers",
"_____no_output_____"
]
],
[
[
"nan_columns = {column: data[column].isna().sum() for column in data.columns if data[column].isna().sum() > 0}\nnan_columns",
"_____no_output_____"
],
[
"data[\"PoolQC\"].sample(10)",
"_____no_output_____"
],
[
"ordinal_common = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond', 'HeatingQC',\n 'KitchenQual', 'FireplaceQu', 'GarageQual', 'PoolQC']",
"_____no_output_____"
],
[
"outlier_removed_data = remove_outliers(data_no_empty_features, method=\"IsolationForest\", threshold=0.1, model_kwargs = {})",
"Model to detect outliers is IsolationForest with parameters {}\n"
],
[
"reg = LazyRegressor()\nX = outlier_removed_data.drop(columns = [\"SalePrice\"])\ny = outlier_removed_data[\"SalePrice\"]\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)\nmodels, _ = reg.fit(X_train, X_test, y_train, y_test)\nmodels",
"_____no_output_____"
]
],
[
[
"## TODO:\n\n- Krzysiek:\n - funkcje zwracają indeksy i kolumny\n \n- kbdev\n - Encoding ordinal features as a class\n - fix np.int64 bug in json serialization\n - \n \n- miri\n - nie będzie jej (na 50%)\n \n- Patryk\n - zapis do pliku Encoder, konstruktor z pliku\n - PR \n \n```python\nour_encoder = OurOneHotEncoder(columns=...)\ndata = our_encoder.fit(data)\nour_encoder.save(file.json)\n \nour_encoder.from_file(file.json)\nour_encoder.transform(other_data)\n```\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7ee4c0b6ac20952088caee5c81d8ba2416d4de1 | 32,382 | ipynb | Jupyter Notebook | Big_Dreams.ipynb | Lore8614/Lore8614.github.io | 492cbdf0e443d5ffc1fbddc079ca3dc301c14485 | [
"MIT"
] | null | null | null | Big_Dreams.ipynb | Lore8614/Lore8614.github.io | 492cbdf0e443d5ffc1fbddc079ca3dc301c14485 | [
"MIT"
] | null | null | null | Big_Dreams.ipynb | Lore8614/Lore8614.github.io | 492cbdf0e443d5ffc1fbddc079ca3dc301c14485 | [
"MIT"
] | 1 | 2020-12-04T19:31:26.000Z | 2020-12-04T19:31:26.000Z | 36.548533 | 236 | 0.265518 | [
[
[
"<a href=\"https://colab.research.google.com/github/Lore8614/Lore8614.github.io/blob/master/Big_Dreams.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Mounted at /content/drive\n"
],
[
"%pwd\n%ls '/content/drive/My Drive/Machine Learning Final'",
"ls: cannot access '/content/drive/My Drive/Machine Learning Final': No such file or directory\n"
],
[
"pos_muts = pd.read_csv('/content/drive/My Drive/Machine Learning Final/H77_metadata.csv')\nfreqs = pd.read_csv('/content/drive/My Drive/Machine Learning Final/HCV1a_TsMutFreq_195.csv')\nmut_rate = pd.read_csv('/content/drive/My Drive/Machine Learning Final/Geller.mutation.rates_update.csv')\nfreqs.head()\n",
"_____no_output_____"
],
[
"mut_rate.head()",
"_____no_output_____"
],
[
"pos_muts.head()",
"_____no_output_____"
],
[
"# Start Calculating costs",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7ee57ccca319457adc283acd24079f764992616 | 8,596 | ipynb | Jupyter Notebook | notebooks/cross_validation_nested.ipynb | nish2612/scikit-learn-mooc | daa9945beddf3318ef20770bf44b77f1e747d7fa | [
"CC-BY-4.0"
] | 1 | 2021-06-05T01:22:12.000Z | 2021-06-05T01:22:12.000Z | notebooks/cross_validation_nested.ipynb | Mamane403/scikit-learn-mooc | cdfe0e9ac16b5d7fa4c8fb343141c10eb98828f4 | [
"CC-BY-4.0"
] | null | null | null | notebooks/cross_validation_nested.ipynb | Mamane403/scikit-learn-mooc | cdfe0e9ac16b5d7fa4c8fb343141c10eb98828f4 | [
"CC-BY-4.0"
] | null | null | null | 34.384 | 88 | 0.623779 | [
[
[
"# Nested cross-validation\n\nIn this notebook, we show a pattern called **nested cross-validation** which\nshould be used when you want to both evaluate a model and tune the\nmodel's hyperparameters.\n\nCross-validation is a powerful tool to evaluate the statistical performance\nof a model. It is also used to select the best model from a pool of models.\nThis pool of models can be the same family of predictor but with different\nparameters. In this case, we call this procedure **hyperparameter tuning**.\n\nWe could also imagine that we would like to choose among heterogeneous models\nthat will similarly use the cross-validation.\n\nBefore we go into details regarding the nested cross-validation, we will\nfirst recall the pattern used to fine tune a model's hyperparameters.\n\nLet's load the breast cancer dataset.",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_breast_cancer\n\ndata, target = load_breast_cancer(return_X_y=True)",
"_____no_output_____"
]
],
[
[
"Now, we'll make a minimal example using the utility `GridSearchCV` to find\nthe best parameters via cross-validation.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import GridSearchCV\nfrom sklearn.svm import SVC\n\nparam_grid = {\"C\": [0.1, 1, 10], \"gamma\": [.01, .1]}\nmodel_to_tune = SVC()\n\nsearch = GridSearchCV(estimator=model_to_tune, param_grid=param_grid,\n n_jobs=2)\nsearch.fit(data, target)",
"_____no_output_____"
]
],
[
[
"We recall that `GridSearchCV` will train a model with some specific parameter\non a training set and evaluate it on testing. However, this evaluation is\ndone via cross-validation using the `cv` parameter. This procedure is\nrepeated for all possible combinations of parameters given in `param_grid`.\n\nThe attribute `best_params_` will give us the best set of parameters that\nmaximize the mean score on the internal test sets.",
"_____no_output_____"
]
],
[
[
"print(f\"The best parameter found are: {search.best_params_}\")",
"_____no_output_____"
]
],
[
[
"We can now show the mean score obtained using the parameter `best_score_`.",
"_____no_output_____"
]
],
[
[
"print(f\"The mean score in CV is: {search.best_score_:.3f}\")",
"_____no_output_____"
]
],
[
[
"At this stage, one should be extremely careful using this score. The\nmisinterpretation would be the following: since the score was computed on a\ntest set, it could be considered our model's testing score.\n\nHowever, we should not forget that we used this score to pick-up the best\nmodel. It means that we used knowledge from the test set (i.e. test score) to\ndecide our model's training parameter.\n\nThus, this score is not a reasonable estimate of our testing error.\nIndeed, we can show that it will be too optimistic in practice. The good way\nis to use a \"nested\" cross-validation. We will use an inner cross-validation\ncorresponding to the previous procedure shown to optimize the\nhyperparameters. We will also include this procedure within an outer\ncross-validation, which will be used to estimate the testing error of\nour tuned model.\n\nIn this case, our inner cross-validation will always get the training set of\nthe outer cross-validation, making it possible to compute the testing\nscore on a completely independent set.\n\nWe will show below how we can create such nested cross-validation and obtain\nthe testing score.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import cross_val_score, KFold\n\n# Declare the inner and outer cross-validation\ninner_cv = KFold(n_splits=4, shuffle=True, random_state=0)\nouter_cv = KFold(n_splits=4, shuffle=True, random_state=0)\n\n# Inner cross-validation for parameter search\nmodel = GridSearchCV(\n estimator=model_to_tune, param_grid=param_grid, cv=inner_cv, n_jobs=2)\n\n# Outer cross-validation to compute the testing score\ntest_score = cross_val_score(model, data, target, cv=outer_cv, n_jobs=2)\nprint(f\"The mean score using nested cross-validation is: \"\n f\"{test_score.mean():.3f} +/- {test_score.std():.3f}\")",
"_____no_output_____"
]
],
[
[
"In the example above, the reported score is more trustful and should be close\nto production's expected statistical performance.\n\nWe will illustrate the difference between the nested and non-nested\ncross-validation scores to show that the latter one will be too optimistic in\npractice. In this regard, we will repeat several time the experiment and\nshuffle the data differently. Besides, we will store the score obtain with\nand without the nested cross-validation.",
"_____no_output_____"
]
],
[
[
"test_score_not_nested = []\ntest_score_nested = []\n\nN_TRIALS = 20\nfor i in range(N_TRIALS):\n inner_cv = KFold(n_splits=4, shuffle=True, random_state=i)\n outer_cv = KFold(n_splits=4, shuffle=True, random_state=i)\n\n # Non_nested parameter search and scoring\n model = GridSearchCV(estimator=model_to_tune, param_grid=param_grid,\n cv=inner_cv, n_jobs=2)\n model.fit(data, target)\n test_score_not_nested.append(model.best_score_)\n\n # Nested CV with parameter optimization\n test_score = cross_val_score(model, data, target, cv=outer_cv, n_jobs=2)\n test_score_nested.append(test_score.mean())",
"_____no_output_____"
]
],
[
[
"We can merge the data together and make a box plot of the two strategies.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nall_scores = {\n \"Not nested CV\": test_score_not_nested,\n \"Nested CV\": test_score_nested,\n}\nall_scores = pd.DataFrame(all_scores)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\ncolor = {\"whiskers\": \"black\", \"medians\": \"black\", \"caps\": \"black\"}\nall_scores.plot.box(color=color, vert=False)\nplt.xlabel(\"Accuracy\")\n_ = plt.title(\"Comparison of mean accuracy obtained on the test sets with\\n\"\n \"and without nested cross-validation\")",
"_____no_output_____"
]
],
[
[
"We observe that the model's statistical performance with the nested\ncross-validation is not as good as the non-nested cross-validation.\n\nAs a conclusion, when optimizing parts of the machine learning pipeline (e.g.\nhyperparameter, transform, etc.), one needs to use nested cross-validation to\nevaluate the statistical performance of the predictive model. Otherwise, the\nresults obtained without nested cross-validation are over-optimistic.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
e7ee597f5c9e452acaa48c9cab84b3fd67d1d1d8 | 55,621 | ipynb | Jupyter Notebook | Ref/find_eps_entropy/fig1_nvar20_g05_nseq5k.ipynb | danhtaihoang/e-machine | 9ff075ce1e476b8136da291b05abb34c71a4df9d | [
"MIT"
] | null | null | null | Ref/find_eps_entropy/fig1_nvar20_g05_nseq5k.ipynb | danhtaihoang/e-machine | 9ff075ce1e476b8136da291b05abb34c71a4df9d | [
"MIT"
] | null | null | null | Ref/find_eps_entropy/fig1_nvar20_g05_nseq5k.ipynb | danhtaihoang/e-machine | 9ff075ce1e476b8136da291b05abb34c71a4df9d | [
"MIT"
] | null | null | null | 232.723849 | 50,232 | 0.921792 | [
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport emachine as EM\nimport itertools\nfrom joblib import Parallel, delayed\n#from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"np.random.seed(0)",
"_____no_output_____"
],
[
"n_var = 20 ; g = 0.5 ; n_seq = 5000",
"_____no_output_____"
],
[
"# Synthetic data are generated by using `generate_seq`.\nw_true,seqs = EM.generate_seq(n_var,n_seq,g=g)\nprint(seqs.shape)\n\nops = EM.operators(seqs)\nprint(ops.shape)",
"(5000, 20)\n(5000, 210)\n"
],
[
"# predict interactions w\neps_list = np.linspace(0.1,0.9,9)\nn_eps = len(eps_list)\nres = Parallel(n_jobs = n_eps)(delayed(EM.fit)(ops,eps=eps,max_iter=100) for eps in eps_list)\nw_eps = np.array([res[i][0] for i in range(len(res))])\nw_eps_iter = np.array([res[i][1] for i in range(len(res))])\n\n#e_eps = np.zeros(len(eps_list))\n#w_eps = np.zeros((len(eps_list),ops.shape[1]))\n#for i,eps in enumerate(eps_list):\n# w_eps[i,:],e_eps[i] = EM.fit(ops,w_true,eps=eps,max_iter=100)\n #print('eps and e_eps:',eps,e_eps[i])",
"_____no_output_____"
],
[
"w_eps_iter.shape",
"_____no_output_____"
],
[
"MSE = ((w_true[np.newaxis,np.newaxis,:] - w_eps_iter)**2).mean(axis=2)\nMSE.shape",
"_____no_output_____"
],
[
"# Entropy\n#w_iter_eps[n_eps,n_iter,n_ops]\n#ops[n_seq,n_ops] \nenergy_eps_iter = -np.sum((ops[:,np.newaxis,np.newaxis,:]*w_eps_iter[np.newaxis,:,:,:]),axis=3)\nprob_eps_iter = np.exp(energy_eps_iter) # [n_seq,n_eps,n_iter]\nprob_eps_iter /= prob_eps_iter.sum(axis=0)[np.newaxis,:,:] \nentropy_eps_iter = -(prob_eps_iter*np.log(prob_eps_iter)).sum(axis=0) #[n_eps,n_iter] ",
"_____no_output_____"
],
[
"entropy_eps_iter.shape",
"_____no_output_____"
],
[
"ieps_show = [2,4,8]\n\nnx,ny = 2,2\nfig, ax = plt.subplots(ny,nx,figsize=(nx*3.5,ny*3))\n\nfor i in ieps_show:\n ax[0,0].plot(MSE[i],label='eps=%1.1f'%eps_list[i])\n ax[1,0].plot(entropy_eps_iter[i,:],label='eps=%1.1f'%eps_list[i])\n\nax[0,1].plot(eps_list,MSE[:,-1],'ko-')\n\nax[1,1].plot(eps_list,entropy_eps_iter[:,-1],'ko-',label='final')\nax[1,1].plot(eps_list,entropy_eps_iter[:,:].max(axis=1),'r^--',label='max')\n\nax[0,0].legend()\nax[1,0].legend()\nax[1,1].legend()\n\nax[0,0].set_ylabel('MSE')\nax[0,1].set_ylabel('MSE')\nax[1,0].set_ylabel('Entropy')\nax[1,1].set_ylabel('Entropy')\n\nax[0,0].set_xlabel('Iterations')\nax[0,1].set_xlabel('epsilon')\nax[1,0].set_xlabel('Iterations')\nax[1,1].set_xlabel('epsilon')\n\nplt.tight_layout(h_pad=1, w_pad=1.5)\n#plt.savefig('fig.pdf', format='pdf', dpi=100)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7ee655ba4395ce9ca26c6f099abeb207c920eea | 68,951 | ipynb | Jupyter Notebook | LCO/Target_E/LCO_create_lightcurveB_and_movie_E.ipynb | jielaizhang/pasea | 08b663e27ffc8d2b119bfa6c3a0bcbe901c11b2f | [
"MIT"
] | null | null | null | LCO/Target_E/LCO_create_lightcurveB_and_movie_E.ipynb | jielaizhang/pasea | 08b663e27ffc8d2b119bfa6c3a0bcbe901c11b2f | [
"MIT"
] | null | null | null | LCO/Target_E/LCO_create_lightcurveB_and_movie_E.ipynb | jielaizhang/pasea | 08b663e27ffc8d2b119bfa6c3a0bcbe901c11b2f | [
"MIT"
] | null | null | null | 218.892063 | 59,788 | 0.912779 | [
[
[
"import warnings\nwarnings.filterwarnings('ignore')\n\nimport glob\nimport numpy as np\n\nfrom photutils import Background2D, SExtractorBackground\nfrom photutils import DAOStarFinder\nfrom photutils import CircularAperture,aperture_photometry\nfrom photutils.utils import calc_total_error\n\n\nimport astropy.wcs as wcs\nfrom astropy.io import fits\nfrom astropy.stats import sigma_clipped_stats, SigmaClip\nfrom astropy.nddata.utils import Cutout2D\nfrom astropy import units as u\n\nimport matplotlib.pyplot as plt\nfrom astropy.visualization import ZScaleInterval",
"_____no_output_____"
],
[
"mypath = '/Users/jielaizhang/Desktop/waissya/Testdata_Target_E/ASASJ030015-0459.7_20190913_B/'\noutmoviename='mymovie.gif'",
"_____no_output_____"
],
[
"# Load in all of the fits images in the directory and sort it\n\nimage_list = glob.glob(mypath+'*e91.fits.fz')\nimage_list.sort()",
"_____no_output_____"
],
[
"#Make some useful lists of values to track/record\n\nobstime = []\nBmag = []\nVmag = []\nBmag_e = []\nVmag_e = []\navg_offset = []",
"_____no_output_____"
],
[
"# Input the information for the calibration stars identified\n# in the previous notebook for batch processing of all of the images\n\nzpt_instrumental = 25.\n\ntar_ra = 45.064\ntar_dec = -4.995\ntar_color = 'yellow'\nref_ra = [44.93200, 45.00766, 45.11216, 45.12369]\nref_dec = [-5.03533, -4.79669, -4.91007, -4.93852]\nref_colors = ['red','cyan', 'green', 'blue']\nref_mag = [11.275, 12.093, 13.005, 14.65]",
"_____no_output_____"
],
[
"def do_phot_get_mag(data,hdr,err,ra,dec):\n positions = []\n zpt_instrumental = 25.\n w = wcs.WCS(hdr)\n xcoords, ycoords = w.all_world2pix(ra,dec,1)\n positions = np.transpose((xcoords, ycoords))\n apertures = CircularAperture(positions, r=24.)\n phot = aperture_photometry(data, apertures, error=err)\n\n mag = list(-2.5*np.log10(phot['aperture_sum']) + zpt_instrumental)\n dmag = list((2.5/np.log(10))*(phot['aperture_sum_err']/phot['aperture_sum']))\n \n return mag,dmag",
"_____no_output_____"
],
[
"def make_cutout(data,hdr,ra,dec):\n\n w = wcs.WCS(hdr)\n xcoord, ycoord = w.all_world2pix(ra,dec,1)\n position = np.transpose((xcoord, ycoord))\n size = u.Quantity([120, 120], u.pixel)\n cutout = Cutout2D(data, position, size, wcs=w, mode='strict')\n\n cutout_wcs = cutout.wcs\n header = cutout_wcs.to_header()\n hdu = fits.PrimaryHDU(data=cutout.data, header=header)\n\n return hdu",
"_____no_output_____"
],
[
"# Let's calculate the star's mag for *each* frame in the dataset\n\nfor frame in image_list:\n hdu = fits.open(frame)\n\n # Grab the actual science data based on above.\n sci_data = hdu[1]\n sci_hdr = sci_data.header\n time = sci_hdr['MJD-OBS']\n obstime.append(time)\n\n # Background estimation:\n sigma_clip = SigmaClip(sigma=3.) # Sigma clip bright obvious things to avoid biasing the background estimate\n bkg_estimator = SExtractorBackground() # Apply the SExtractor algorithm to our estimation\n bkg = Background2D(\n sci_data.data, (50, 50),\n filter_size=(3, 3),\n sigma_clip=sigma_clip,\n bkg_estimator=bkg_estimator)\n\n # Now let's subtract the background from the data\n sci_bkg = sci_data.data - bkg.background\n\n # Define an error image that will be used when calculating photometry\n effective_gain = 1.\n error = calc_total_error(sci_bkg, bkg.background_rms, effective_gain)\n\n # Calculate instrumental mags for each of the reference stars\n cal_mag,cal_dmag = do_phot_get_mag(sci_bkg,sci_hdr,error,ref_ra,ref_dec)\n\n # Calculate offsets and the standard deviation of the offset from each star.\n offsets = []\n for i in range(len(cal_mag)):\n offsets.append(ref_mag[i] - cal_mag[i])\n offset = np.mean(offsets)\n avg_offset.append(offset)\n doffset = np.std(offsets)\n \n # Do photometry on the variable target!!\n tar_mag,tar_dmag = do_phot_get_mag(sci_bkg,sci_hdr,error,tar_ra,tar_dec)\n \n cal_tar_mag = tar_mag[0]+offset\n cal_tar_dmag = np.sqrt(tar_dmag[0]**2.+doffset**2.)\n \n Bmag.append(cal_tar_mag)\n Bmag_e.append(cal_tar_dmag)\n \n # Make tiny cutouts of the variable star in each frame\n cutout_hdu = make_cutout(sci_bkg,sci_hdr,tar_ra,tar_dec)\n #cutout_hdu.writeto(frame+'_cutout.fits', overwrite=True)\n \n # Plot figures using these cutouts and output images\n interval = ZScaleInterval()\n vmin = interval.get_limits(cutout_hdu.data)[0]\n vmax = interval.get_limits(cutout_hdu.data)[1]\n\n plt.subplot(projection=wcs.WCS(cutout_hdu.header))\n plt.imshow(cutout_hdu.data, vmin=vmin, vmax=vmax, origin='lower')\n plt.xlabel('R.A.')\n plt.ylabel('Declination')\n \n pngname = str(time).replace('.','')\n plt.savefig(mypath+pngname+'.png', overwrite=True)\n \nBmag = np.array(Bmag)",
"_____no_output_____"
],
[
"# # Make a rudimentary lightcurve\n\nplt.figure(figsize=(10.5, 7))\nplt.errorbar(obstime,Bmag,xerr=None,yerr=Bmag_e, fmt='mo', capsize=9.0)\nplt.xlabel('MJD', fontsize=18)\nplt.ylabel('B Magnitude', fontsize=18)\nplt.show()",
"_____no_output_____"
],
[
"# Here we are going to use the cutouts we made above to make\n# an little movie of the variable star target changing brightness\n# over time and loop it!\n\nimport imageio\n\ncutout_list = glob.glob(mypath+'*.png')\ncutout_list.sort()\n\ncutout_frames = []\nfor file in cutout_list:\n cutout_frames.append(imageio.imread(file))\nimageio.mimsave(mypath+outmoviename, cutout_frames)",
"_____no_output_____"
],
[
"print(obstime)\nprint(list(Bmag))\nprint(Bmag_e)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7ee6a0421ef209065fff0006f30a659cd4c070d | 15,060 | ipynb | Jupyter Notebook | aata/section-linear-codes.ipynb | johnperry-math/cocalc-examples | 394479e972dc2b74211113bbb43bc1ec4ec9978c | [
"Apache-2.0",
"CC-BY-4.0"
] | 13 | 2017-09-06T23:04:59.000Z | 2021-04-05T11:08:51.000Z | aata/section-linear-codes.ipynb | johnperry-math/cocalc-examples | 394479e972dc2b74211113bbb43bc1ec4ec9978c | [
"Apache-2.0",
"CC-BY-4.0"
] | 9 | 2018-02-01T15:58:28.000Z | 2021-07-14T15:18:35.000Z | aata/section-linear-codes.ipynb | johnperry-math/cocalc-examples | 394479e972dc2b74211113bbb43bc1ec4ec9978c | [
"Apache-2.0",
"CC-BY-4.0"
] | 10 | 2017-10-26T17:30:03.000Z | 2021-12-11T07:25:28.000Z | 627.5 | 1,402 | 0.643692 | [
[
[
"%%html\n<link href=\"http://mathbook.pugetsound.edu/beta/mathbook-content.css\" rel=\"stylesheet\" type=\"text/css\" />\n<link href=\"https://aimath.org/mathbook/mathbook-add-on.css\" rel=\"stylesheet\" type=\"text/css\" />\n<style>.subtitle {font-size:medium; display:block}</style>\n<link href=\"https://fonts.googleapis.com/css?family=Open+Sans:400,400italic,600,600italic\" rel=\"stylesheet\" type=\"text/css\" />\n<link href=\"https://fonts.googleapis.com/css?family=Inconsolata:400,700&subset=latin,latin-ext\" rel=\"stylesheet\" type=\"text/css\" /><!-- Hide this cell. -->\n<script>\nvar cell = $(\".container .cell\").eq(0), ia = cell.find(\".input_area\")\nif (cell.find(\".toggle-button\").length == 0) {\nia.after(\n $('<button class=\"toggle-button\">Toggle hidden code</button>').click(\n function (){ ia.toggle() }\n )\n )\nia.hide()\n}\n</script>\n",
"_____no_output_____"
]
],
[
[
"**Important:** to view this notebook properly you will need to execute the cell above, which assumes you have an Internet connection. It should already be selected, or place your cursor anywhere above to select. Then press the \"Run\" button in the menu bar above (the right-pointing arrowhead), or press Shift-Enter on your keyboard.",
"_____no_output_____"
],
[
"$\\newcommand{\\identity}{\\mathrm{id}}\n\\newcommand{\\notdivide}{\\nmid}\n\\newcommand{\\notsubset}{\\not\\subset}\n\\newcommand{\\lcm}{\\operatorname{lcm}}\n\\newcommand{\\gf}{\\operatorname{GF}}\n\\newcommand{\\inn}{\\operatorname{Inn}}\n\\newcommand{\\aut}{\\operatorname{Aut}}\n\\newcommand{\\Hom}{\\operatorname{Hom}}\n\\newcommand{\\cis}{\\operatorname{cis}}\n\\newcommand{\\chr}{\\operatorname{char}}\n\\newcommand{\\Null}{\\operatorname{Null}}\n\\newcommand{\\lt}{<}\n\\newcommand{\\gt}{>}\n\\newcommand{\\amp}{&}\n$",
"_____no_output_____"
],
[
"<div class=\"mathbook-content\"><h2 class=\"heading hide-type\" alt=\"Section 8.2 Linear Codes\"><span class=\"type\">Section</span><span class=\"codenumber\">8.2</span><span class=\"title\">Linear Codes</span></h2><a href=\"section-linear-codes.ipynb\" class=\"permalink\">¶</a></div>",
"_____no_output_____"
],
[
"<div class=\"mathbook-content\"></div>",
"_____no_output_____"
],
[
"<div class=\"mathbook-content\"><p id=\"p-1232\">To gain more knowledge of a particular code and develop more efficient techniques of encoding, decoding, and error detection, we need to add additional structure to our codes. One way to accomplish this is to require that the code also be a group. A <dfn class=\"terminology\">group code</dfn> is a code that is also a subgroup of ${\\mathbb Z}_2^n\\text{.}$</p></div>",
"_____no_output_____"
],
[
"<div class=\"mathbook-content\"><p id=\"p-1233\">To check that a code is a group code, we need only verify one thing. If we add any two elements in the code, the result must be an $n$-tuple that is again in the code. It is not necessary to check that the inverse of the $n$-tuple is in the code, since every codeword is its own inverse, nor is it necessary to check that ${\\mathbf 0}$ is a codeword. For instance,</p><div class=\"displaymath\">\n\\begin{equation*}\n(11000101) + (11000101) = (00000000).\n\\end{equation*}\n</div></div>",
"_____no_output_____"
],
[
"<div class=\"mathbook-content\"><article class=\"example-like\" id=\"example-algcodes-weights\"><h6 class=\"heading\"><span class=\"type\">Example</span><span class=\"codenumber\">8.16</span></h6><p id=\"p-1234\">Suppose that we have a code that consists of the following 7-tuples:</p><div class=\"displaymath\">\n\\begin{align*}\n&(0000000) & & (0001111) & & (0010101) & & (0011010)\\\\\n&(0100110) & & (0101001) & & (0110011) & & (0111100)\\\\\n&(1000011) & & (1001100) & & (1010110) & & (1011001)\\\\\n&(1100101) & & (1101010) & & (1110000) & & (1111111).\n\\end{align*}\n</div><p>It is a straightforward though tedious task to verify that this code is also a subgroup of ${\\mathbb Z}_2^7$ and, therefore, a group code. This code is a single error-detecting and single error-correcting code, but it is a long and tedious process to compute all of the distances between pairs of codewords to determine that $d_{\\min} = 3\\text{.}$ It is much easier to see that the minimum weight of all the nonzero codewords is 3. As we will soon see, this is no coincidence. However, the relationship between weights and distances in a particular code is heavily dependent on the fact that the code is a group.</p></article></div>",
"_____no_output_____"
],
[
"<div class=\"mathbook-content\"><article class=\"theorem-like\" id=\"lemma-5\"><h6 class=\"heading\"><span class=\"type\">Lemma</span><span class=\"codenumber\">8.17</span></h6><p id=\"p-1235\">Let ${\\mathbf x}$ and ${\\mathbf y}$ be binary $n$-tuples. Then $w({\\mathbf x} + {\\mathbf y}) = d({\\mathbf x}, {\\mathbf y})\\text{.}$</p></article><article class=\"proof\" id=\"proof-47\"><h6 class=\"heading\"><span class=\"type\">Proof</span></h6><p id=\"p-1236\">Suppose that ${\\mathbf x}$ and ${\\mathbf y}$ are binary $n$-tuples. Then the distance between ${\\mathbf x}$ and ${\\mathbf y}$ is exactly the number of places in which ${\\mathbf x}$ and ${\\mathbf y}$ differ. But ${\\mathbf x}$ and ${\\mathbf y}$ differ in a particular coordinate exactly when the sum in the coordinate is 1, since</p><div class=\"displaymath\">\n\\begin{align*}\n1 + 1 & = 0\\\\\n0 + 0 & = 0\\\\\n1 + 0 & = 1\\\\\n0 + 1 & = 1.\n\\end{align*}\n</div><p>Consequently, the weight of the sum must be the distance between the two codewords.</p></article></div>",
"_____no_output_____"
],
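[
"# A minimal illustrative sketch (plain Python, added for illustration; not part of the original section):\n# spot-check Lemma 8.17, i.e. that w(x + y) = d(x, y) for binary n-tuples,\n# where addition is bitwise mod 2 and d is the Hamming distance.\nimport random\n\nn = 8\nfor _ in range(1000):\n    x = [random.randint(0, 1) for _ in range(n)]\n    y = [random.randint(0, 1) for _ in range(n)]\n    weight_of_sum = sum((a + b) % 2 for a, b in zip(x, y))\n    hamming_distance = sum(a != b for a, b in zip(x, y))\n    assert weight_of_sum == hamming_distance\nprint('w(x + y) = d(x, y) held for all sampled pairs')",
"_____no_output_____"
],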
[
"<div class=\"mathbook-content\"><article class=\"theorem-like\" id=\"theorem-33\"><h6 class=\"heading\"><span class=\"type\">Theorem</span><span class=\"codenumber\">8.18</span></h6><p id=\"p-1237\">Let $d_{\\min}$ be the minimum distance for a group code $C\\text{.}$ Then $d_{\\min}$ is the minimum of all the nonzero weights of the nonzero codewords in $C\\text{.}$ That is,</p><div class=\"displaymath\">\n\\begin{equation*}\nd_{\\min} = \\min\\{ w({\\mathbf x}) : { {\\mathbf x} \\neq {\\mathbf 0} } \\}.\n\\end{equation*}\n</div></article><article class=\"proof\" id=\"proof-48\"><h6 class=\"heading\"><span class=\"type\">Proof</span></h6><p id=\"p-1238\">Observe that</p><div class=\"displaymath\">\n\\begin{align*}\nd_{\\min} & = \\min \\{ d({\\mathbf x},{\\mathbf y}) : {\\mathbf x}\\neq{\\mathbf y} \\}\\\\\n&= \\min \\{ d({\\mathbf x},{\\mathbf y}) : {\\mathbf x}+{\\mathbf y} \\neq {\\mathbf 0} \\}\\\\\n&= \\min\\{ w({\\mathbf x} + {\\mathbf y}) : {\\mathbf x}+{\\mathbf y}\\neq {\\mathbf 0} \\}\\\\\n& = \\min\\{ w({\\mathbf z}) : {\\mathbf z} \\neq {\\mathbf 0} \\}.\n\\end{align*}\n</div></article></div>",
"_____no_output_____"
],
[
"<div class=\"mathbook-content\"><h3 class=\"heading hide-type\" alt=\"Subsection Linear Codes\"><span class=\"type\">Subsection</span><span class=\"codenumber\" /><span class=\"title\">Linear Codes</span></h3><a href=\"section-linear-codes.ipynb#algcodes-subsection-linear-codes\" class=\"permalink\">¶</a></div>",
"_____no_output_____"
],
[
"<div class=\"mathbook-content\"><p id=\"p-1239\">From Example <a href=\"section-linear-codes.ipynb#example-algcodes-weights\" class=\"xref\" alt=\"Example 8.16 \" title=\"Example 8.16 \">8.16</a>, it is now easy to check that the minimum nonzero weight is 3; hence, the code does indeed detect and correct all single errors. We have now reduced the problem of finding “good” codes to that of generating group codes. One easy way to generate group codes is to employ a bit of matrix theory.</p></div>",
"_____no_output_____"
],
[
"<div class=\"mathbook-content\"><p id=\"p-1240\">Define the <dfn class=\"terminology\">inner product</dfn> of two binary $n$-tuples to be</p><div class=\"displaymath\">\n\\begin{equation*}\n{\\mathbf x} \\cdot {\\mathbf y} = x_1 y_1 + \\cdots + x_n y_n,\n\\end{equation*}\n</div><p>where ${\\mathbf x} = (x_1, x_2, \\ldots, x_n)^{\\rm t}$ and ${\\mathbf y} = (y_1, y_2, \\ldots, y_n)^{\\rm t}$ are column vectors.<span class=\"footnote\"><a knowl=\"\" class=\"id-ref\" refid=\"hk-fn-4\" id=\"fn-4\"><sup> 4 </sup></a></span><span id=\"hk-fn-4\" class=\"hidden-content tex2jax_ignore\"><span class=\"footnote\">Since we will be working with matrices, we will write binary $n$-tuples as column vectors for the remainder of this chapter.</span></span> For example, if ${\\mathbf x} = (011001)^{\\rm t}$ and ${\\mathbf y} = (110101)^{\\rm t}\\text{,}$ then ${\\mathbf x} \\cdot {\\mathbf y} = 0\\text{.}$ We can also look at an inner product as the product of a row matrix with a column matrix; that is,</p><div class=\"displaymath\">\n\\begin{align*}\n{\\mathbf x} \\cdot {\\mathbf y} & = {\\mathbf x}^{\\rm t} {\\mathbf y}\\\\\n& =\n\\begin{pmatrix}\nx_1 & x_2 & \\cdots & x_n\n\\end{pmatrix}\n\\begin{pmatrix}\ny_1 \\\\ y_2 \\\\ \\vdots \\\\ y_n\n\\end{pmatrix}\\\\\n& = x_{1}y_{1} + x_{2}y_{2} + \\cdots + x_{n}y_{n}.\n\\end{align*}\n</div></div>",
"_____no_output_____"
],
[
"<div class=\"mathbook-content\"><article class=\"example-like\" id=\"example-algcodes-matrixcodes\"><h6 class=\"heading\"><span class=\"type\">Example</span><span class=\"codenumber\">8.19</span></h6><p id=\"p-1241\">Suppose that the words to be encoded consist of all binary 3-tuples and that our encoding scheme is even-parity. To encode an arbitrary 3-tuple, we add a fourth bit to obtain an even number of 1s. Notice that an arbitrary $n$-tuple ${\\mathbf x} = (x_1, x_2, \\ldots, x_n)^{\\rm t}$ has an even number of 1s exactly when $x_1 + x_2 + \\cdots + x_n = 0\\text{;}$ hence, a 4-tuple ${\\mathbf x} = (x_1, x_2, x_3, x_4)^{\\rm t}$ has an even number of 1s if $x_1+ x_2+ x_3+ x_4 = 0\\text{,}$ or</p><div class=\"displaymath\">\n\\begin{equation*}\n{\\mathbf x} \\cdot {\\mathbf 1} = {\\mathbf x}^{\\rm t} {\\mathbf 1} =\n\\begin{pmatrix}\nx_1 & x_2 & x_3 & x_4\n\\end{pmatrix}\n\\begin{pmatrix}\n1 \\\\ 1 \\\\ 1 \\\\ 1\n\\end{pmatrix} = 0.\n\\end{equation*}\n</div><p>This example leads us to hope that there is a connection between matrices and coding theory.</p></article></div>",
"_____no_output_____"
],
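[
"# A minimal illustrative sketch (plain Python, added for illustration; not part of the original section):\n# the even-parity check of Example 8.19, written as an inner product with the all-ones vector over Z_2.\ndef is_codeword(x):\n    return sum(x) % 2 == 0  # x . (1, 1, ..., 1) = 0 in Z_2\n\nprint(is_codeword([0, 1, 1, 0]))  # True: even number of 1s\nprint(is_codeword([1, 1, 1, 0]))  # False: odd number of 1s",
"_____no_output_____"
],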
[
"<div class=\"mathbook-content\"><p id=\"p-1242\">Let ${\\mathbb M}_{m \\times n}({\\mathbb Z}_2)$ denote the set of all $m \\times n$ matrices with entries in ${\\mathbb Z}_2\\text{.}$ We do matrix operations as usual except that all our addition and multiplication operations occur in ${\\mathbb Z}_2\\text{.}$ Define the <dfn class=\"terminology\">null space</dfn> of a matrix $H \\in {\\mathbb M}_{m \\times n}({\\mathbb Z}_2)$ to be the set of all binary $n$-tuples ${\\mathbf x}$ such that $H{\\mathbf x} = {\\mathbf 0}\\text{.}$ We denote the null space of a matrix $H$ by $\\Null(H)\\text{.}$ </p></div>",
"_____no_output_____"
],
[
"<div class=\"mathbook-content\"><article class=\"example-like\" id=\"example-algcodes-group-code\"><h6 class=\"heading\"><span class=\"type\">Example</span><span class=\"codenumber\">8.20</span></h6><p id=\"p-1243\">Suppose that</p><div class=\"displaymath\">\n\\begin{equation*}\nH =\n\\begin{pmatrix}\n0 & 1 & 0 & 1 & 0 \\\\\n1 & 1 & 1 & 1 & 0 \\\\\n0 & 0 & 1 & 1 & 1\n\\end{pmatrix}.\n\\end{equation*}\n</div><p>For a 5-tuple ${\\mathbf x} = (x_1, x_2, x_3, x_4, x_5)^{\\rm t}$ to be in the null space of $H\\text{,}$ $H{\\mathbf x} = {\\mathbf 0}\\text{.}$ Equivalently, the following system of equations must be satisfied:</p><div class=\"displaymath\">\n\\begin{align*}\nx_2 + x_4 & = 0\\\\\nx_1 + x_2 + x_3 + x_4 & = 0\\\\\nx_3 + x_4 + x_5 & = 0.\n\\end{align*}\n</div><p>The set of binary 5-tuples satisfying these equations is</p><div class=\"displaymath\">\n\\begin{equation*}\n(00000) \\qquad (11110) \\qquad (10101) \\qquad (01011).\n\\end{equation*}\n</div><p>This code is easily determined to be a group code.</p></article></div>",
"_____no_output_____"
],
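[
"# A minimal illustrative sketch (plain Python, added for illustration; not part of the original section):\n# brute-force the null space of the matrix H from Example 8.20 over Z_2;\n# it recovers exactly the four codewords listed in the text.\nfrom itertools import product\n\nH = [[0, 1, 0, 1, 0],\n     [1, 1, 1, 1, 0],\n     [0, 0, 1, 1, 1]]\nnull_space = [x for x in product([0, 1], repeat=5)\n              if all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)]\nfor codeword in null_space:\n    print(''.join(str(bit) for bit in codeword))  # 00000, 01011, 10101, 11110",
"_____no_output_____"
],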
[
"<div class=\"mathbook-content\"><article class=\"theorem-like\" id=\"theorem-34\"><h6 class=\"heading\"><span class=\"type\">Theorem</span><span class=\"codenumber\">8.21</span></h6><p id=\"p-1244\">Let $H$ be in ${\\mathbb M}_{m \\times n}({\\mathbb Z}_2)\\text{.}$ Then the null space of $H$ is a group code.</p></article><article class=\"proof\" id=\"proof-49\"><h6 class=\"heading\"><span class=\"type\">Proof</span></h6><p id=\"p-1245\">Since each element of ${\\mathbb Z}_2^n$ is its own inverse, the only thing that really needs to be checked here is closure. Let ${\\mathbf x}, {\\mathbf y} \\in {\\rm Null}(H)$ for some matrix $H$ in ${\\mathbb M}_{m \\times n}({\\mathbb Z}_2)\\text{.}$ Then $H{\\mathbf x} = {\\mathbf 0}$ and $H{\\mathbf y} = {\\mathbf 0}\\text{.}$ So</p><div class=\"displaymath\">\n\\begin{equation*}\nH({\\mathbf x}+{\\mathbf y}) = H{\\mathbf x} + H{\\mathbf y} = {\\mathbf 0} + {\\mathbf 0} = {\\mathbf 0}.\n\\end{equation*}\n</div><p>Hence, ${\\mathbf x} + {\\mathbf y}$ is in the null space of $H$ and therefore must be a codeword.</p></article></div>",
"_____no_output_____"
],
[
"<div class=\"mathbook-content\"><p id=\"p-1246\">A code is a <dfn class=\"terminology\">linear code</dfn> if it is determined by the null space of some matrix $H \\in {\\mathbb M}_{m \\times n}({\\mathbb Z}_2)\\text{.}$</p></div>",
"_____no_output_____"
],
[
"<div class=\"mathbook-content\"><article class=\"example-like\" id=\"example-algcodes-linear-code\"><h6 class=\"heading\"><span class=\"type\">Example</span><span class=\"codenumber\">8.22</span></h6><p id=\"p-1247\">Let $C$ be the code given by the matrix</p><div class=\"displaymath\">\n\\begin{equation*}\nH =\n\\begin{pmatrix}\n0 & 0 & 0 & 1 & 1 & 1 \\\\\n0 & 1 & 1 & 0 & 1 & 1 \\\\\n1 & 0 & 1 & 0 & 0 & 1\n\\end{pmatrix}.\n\\end{equation*}\n</div><p>Suppose that the 6-tuple ${\\mathbf x} = (010011)^{\\rm t}$ is received. It is a simple matter of matrix multiplication to determine whether or not ${\\mathbf x}$ is a codeword. Since</p><div class=\"displaymath\">\n\\begin{equation*}\nH{\\mathbf x} =\n\\begin{pmatrix} \n0 \\\\ 1 \\\\ 1\n\\end{pmatrix},\n\\end{equation*}\n</div><p>the received word is not a codeword. We must either attempt to correct the word or request that it be transmitted again.</p></article></div>",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e7ee7b1ee06aa022711bafe80a3249a1ffd6237c | 271,316 | ipynb | Jupyter Notebook | code/8_CNN_cifar10_mnist.ipynb | Akshatha-Jagadish/DL_topics | 98aa979dde2021a20e7b561b83230ac0a475cf5e | [
"MIT"
] | null | null | null | code/8_CNN_cifar10_mnist.ipynb | Akshatha-Jagadish/DL_topics | 98aa979dde2021a20e7b561b83230ac0a475cf5e | [
"MIT"
] | null | null | null | code/8_CNN_cifar10_mnist.ipynb | Akshatha-Jagadish/DL_topics | 98aa979dde2021a20e7b561b83230ac0a475cf5e | [
"MIT"
] | null | null | null | 143.705508 | 165,112 | 0.874184 | [
[
[
"import tensorflow as tf\nfrom tensorflow.keras import datasets,layers,models\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"(X_train, y_train), (X_test, y_test) = datasets.cifar10.load_data()\nX_train.shape",
"_____no_output_____"
],
[
"X_test.shape",
"_____no_output_____"
],
[
"plt.imshow(X_train[0])",
"_____no_output_____"
],
[
"classes = ['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck']",
"_____no_output_____"
],
[
"y_train = y_train.reshape(-1,)\nclasses[y_train[0]]",
"_____no_output_____"
],
[
"def plot_sample(X,y,index):\n plt.figure(figsize=(15,2))\n plt.imshow(X[index])\n plt.xlabel(classes[y[index]])",
"_____no_output_____"
],
[
"plot_sample(X_train,y_train,0)",
"_____no_output_____"
],
[
"plot_sample(X_train,y_train,4)",
"_____no_output_____"
],
[
"X_train = X_train/255\nX_test = X_test/255",
"_____no_output_____"
],
[
"#model building\nmodel = models.Sequential([\n layers.Flatten(input_shape=(32,32,3)),\n layers.Dense(3000,activation='relu'),\n layers.Dense(1000,activation='relu'),\n layers.Dense(10,activation='sigmoid'),\n])\n\nmodel.compile(\n optimizer='SGD',\n loss='sparse_categorical_crossentropy', \n metrics=['accuracy']\n)\n\nmodel.fit(X_train,y_train,epochs=5)",
"Epoch 1/5\n1563/1563 [==============================] - 97s 62ms/step - loss: 1.8602 - accuracy: 0.3331\nEpoch 2/5\n1563/1563 [==============================] - 96s 61ms/step - loss: 1.6575 - accuracy: 0.41310s -\nEpoch 3/5\n1563/1563 [==============================] - 96s 62ms/step - loss: 1.5689 - accuracy: 0.4462\nEpoch 4/5\n1563/1563 [==============================] - 86s 55ms/step - loss: 1.5080 - accuracy: 0.4688\nEpoch 5/5\n1563/1563 [==============================] - 86s 55ms/step - loss: 1.4557 - accuracy: 0.4862\n"
],
[
"tf.__version__",
"_____no_output_____"
],
[
"from sklearn.metrics import confusion_matrix, classification_report\nimport numpy as np\ny_pred = model.predict(X_test)\ny_pred_classes = [np.argmax(element) for element in y_pred]\n\nprint('Classification_report: \\n',classification_report(y_test,y_pred_classes))",
"Classification_report: \n precision recall f1-score support\n\n 0 0.64 0.31 0.41 1000\n 1 0.70 0.49 0.58 1000\n 2 0.23 0.70 0.34 1000\n 3 0.32 0.41 0.36 1000\n 4 0.47 0.19 0.27 1000\n 5 0.44 0.25 0.32 1000\n 6 0.47 0.54 0.51 1000\n 7 0.69 0.32 0.44 1000\n 8 0.52 0.67 0.58 1000\n 9 0.67 0.39 0.49 1000\n\n accuracy 0.43 10000\n macro avg 0.51 0.43 0.43 10000\nweighted avg 0.51 0.43 0.43 10000\n\n"
],
[
"cnn = models.Sequential([\n #cnn\n layers.Conv2D(filters=32,kernel_size=(3,3),activation='relu',input_shape=(32,32,3)),\n layers.MaxPool2D((2,2)),\n layers.Conv2D(filters=64,kernel_size=(3,3),activation='relu'),\n layers.MaxPool2D((2,2)),\n \n #dense\n layers.Flatten(),\n layers.Dense(64,activation='relu'),\n layers.Dense(10,activation='softmax')\n])\n\ncnn.compile(\n optimizer='SGD',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy']\n)\n\ncnn.fit(X_train,y_train,epochs=10)",
"Epoch 1/10\n1563/1563 [==============================] - 46s 30ms/step - loss: 2.0075 - accuracy: 0.27360s - loss: 2.009\nEpoch 2/10\n1563/1563 [==============================] - 41s 26ms/step - loss: 1.6264 - accuracy: 0.4201\nEpoch 3/10\n1563/1563 [==============================] - 48s 31ms/step - loss: 1.4375 - accuracy: 0.4883\nEpoch 4/10\n1563/1563 [==============================] - 47s 30ms/step - loss: 1.3361 - accuracy: 0.5303\nEpoch 5/10\n1563/1563 [==============================] - 40s 26ms/step - loss: 1.2569 - accuracy: 0.5582\nEpoch 6/10\n1563/1563 [==============================] - 43s 27ms/step - loss: 1.1882 - accuracy: 0.5832\nEpoch 7/10\n1563/1563 [==============================] - 40s 26ms/step - loss: 1.1306 - accuracy: 0.6034\nEpoch 8/10\n1563/1563 [==============================] - 42s 27ms/step - loss: 1.0750 - accuracy: 0.6270\nEpoch 9/10\n1563/1563 [==============================] - 41s 26ms/step - loss: 1.0297 - accuracy: 0.6418\nEpoch 10/10\n1563/1563 [==============================] - 41s 26ms/step - loss: 0.9862 - accuracy: 0.6572\n"
],
[
"cnn.evaluate(X_test,y_test)",
"313/313 [==============================] - 2s 7ms/step - loss: 1.0495 - accuracy: 0.6361\n"
],
[
"y_pred = cnn.predict(X_test)\ny_test = y_test.reshape(-1,)\ny_test",
"_____no_output_____"
],
[
"y_pred_classes = [np.argmax(element) for element in y_pred]\ny_pred_classes",
"_____no_output_____"
],
[
"plot_sample(X_test,y_test,2)",
"_____no_output_____"
],
[
"classes[y_pred_classes[2]]",
"_____no_output_____"
],
[
"print(\"Classification report: \\n\", classification_report(y_test,y_pred_classes))",
"Classification report: \n precision recall f1-score support\n\n 0 0.70 0.70 0.70 1000\n 1 0.78 0.66 0.72 1000\n 2 0.58 0.45 0.50 1000\n 3 0.59 0.28 0.38 1000\n 4 0.55 0.62 0.58 1000\n 5 0.55 0.59 0.57 1000\n 6 0.62 0.83 0.71 1000\n 7 0.63 0.77 0.70 1000\n 8 0.82 0.68 0.75 1000\n 9 0.59 0.79 0.68 1000\n\n accuracy 0.64 10000\n macro avg 0.64 0.64 0.63 10000\nweighted avg 0.64 0.64 0.63 10000\n\n"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"(X_train,y_train),(X_test,y_test) = datasets.mnist.load_data()\nX_train.shape",
"_____no_output_____"
],
[
"X_train = X_train.reshape(60000,28,28,1)\nX_train.shape",
"_____no_output_____"
],
[
"X_train.shape",
"_____no_output_____"
],
[
"plt.imshow(X_train[0])",
"_____no_output_____"
],
[
"X_train = X_train/255\nX_test = X_test/255",
"_____no_output_____"
],
[
"mnist_cnn = models.Sequential([\n layers.Conv2D(filters=10, kernel_size=(5,5), activation='relu',input_shape=(28,28,1)),\n layers.MaxPooling2D(2,2),\n# layers.Conv2D(filters=5, kernel_size=(3,3), activation='relu',input_shape=(28,28,1)),\n# layers.MaxPooling2D(2,2),\n layers.Flatten(),\n layers.Dense(50,activation='relu'),\n layers.Dense(10,activation='softmax')\n])\n\nmnist_cnn.compile(\n optimizer='adam',\n loss='sparse_categorical_crossentropy', #categories 1,2,3... sparse because output is integer\n metrics=['accuracy']\n)\n\nmnist_cnn.fit(X_train,y_train,epochs=10)",
"Epoch 1/10\n1875/1875 [==============================] - 12s 7ms/step - loss: 0.1976 - accuracy: 0.9426\nEpoch 2/10\n1875/1875 [==============================] - 13s 7ms/step - loss: 0.0706 - accuracy: 0.9786 0s - loss: 0.0707 - \nEpoch 3/10\n1875/1875 [==============================] - 12s 7ms/step - loss: 0.0491 - accuracy: 0.9850\nEpoch 4/10\n1875/1875 [==============================] - 13s 7ms/step - loss: 0.0388 - accuracy: 0.9878\nEpoch 5/10\n1875/1875 [==============================] - 13s 7ms/step - loss: 0.0303 - accuracy: 0.9903\nEpoch 6/10\n1875/1875 [==============================] - 13s 7ms/step - loss: 0.0245 - accuracy: 0.9920\nEpoch 7/10\n1875/1875 [==============================] - 13s 7ms/step - loss: 0.0192 - accuracy: 0.9935\nEpoch 8/10\n1875/1875 [==============================] - 13s 7ms/step - loss: 0.0157 - accuracy: 0.9948\nEpoch 9/10\n1875/1875 [==============================] - 13s 7ms/step - loss: 0.0126 - accuracy: 0.9960\nEpoch 10/10\n1875/1875 [==============================] - 13s 7ms/step - loss: 0.0113 - accuracy: 0.9961\n"
],
[
"X_test.shape",
"_____no_output_____"
],
[
"X_test = X_test.reshape(10000,28,28,1)",
"313/313 [==============================] - 1s 3ms/step - loss: 0.0560 - accuracy: 0.9843\n"
],
[
"mnist_cnn.evaluate(X_test,y_test)",
"313/313 [==============================] - 1s 3ms/step - loss: 0.0557 - accuracy: 0.9843\n"
],
[
"y_pred = mnist_cnn.predict(X_test)\ny_pred_classes = [np.argmax(element) for element in y_pred]\ncm = confusion_matrix(y_test,y_pred_classes)\ncm",
"_____no_output_____"
],
[
"import seaborn as sn\nplt.figure(figsize=(10,7))\nsn.heatmap(cm,annot=True,fmt='d')\nplt.xlabel('Predicted')\nplt.ylabel('Truth')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7ee7b4b8461c11a296300edc425d8a016820012 | 5,059 | ipynb | Jupyter Notebook | ipynb/novel-taxa/Index.ipynb | sjanssen2/tax-credit-data | ed1a46f27b343e9519e6dc0cb1dece06e8017996 | [
"BSD-3-Clause"
] | null | null | null | ipynb/novel-taxa/Index.ipynb | sjanssen2/tax-credit-data | ed1a46f27b343e9519e6dc0cb1dece06e8017996 | [
"BSD-3-Clause"
] | null | null | null | ipynb/novel-taxa/Index.ipynb | sjanssen2/tax-credit-data | ed1a46f27b343e9519e6dc0cb1dece06e8017996 | [
"BSD-3-Clause"
] | null | null | null | 63.2375 | 819 | 0.6695 | [
[
[
"# Novel-taxa classification evaluation\n\nThe following notebooks describe the evaluation of taxonomy classifiers using \"novel taxa\" data sets. Novel-taxa analysis is a form of cross-validated taxonomic classification, wherein random unique sequences are sampled from the reference database as a test set; all sequences sharing taxonomic affiliation at a given taxonomic level are removed from the reference database (training set); and taxonomy is assigned to the query sequences at the given taxonomic level. Thus, this test interrogates the behavior of a taxonomy classifier when challenged with \"novel\" sequences that are not represented by close matches within the reference sequence database. Such an analysis is performed to assess the degree to which \"overassignment\" occurs for sequences that are not represented in a reference database.\n\nAt each level ``L``, the unique taxonomic clades are randomly sampled and used as ``QUERY`` sequences. All sequences that match that taxonomic annotation at ``L`` are excluded from ``REF``. Hence, species-level ``QUERY`` assignment asks how accurate assignment is to an \"unknown\" species that is not represented in the ``REF``, though other species in the same genus are. Genus-level ``QUERY`` assignment asks how accurate assignment is to an \"unknown\" genus that is not represented in the ``REF``, though other genera in the same family are, *et cetera*.\n\nThe steps involved in preparing and executing novel-taxa analysis are described in a series of notebooks:\n\n1) **[Novel taxa dataset generation](./dataset-generation.ipynb)** only needs to be performed once for a given reference database. Only run this notebook if you wish to make novel taxa datasets from a different reference database, or alter the parameters used to make the novel taxa datasets. The default included in Tax-Credit is Greengenes 13\\_8 release, amplified *in silico* with primers 515f and 806r, and trimmed to 250 nt from the 5' end.\n\n2) **[Taxonomic classification](./taxonomy-assignment.ipynb)** of novel taxa sequences is performed using the datasets generated in *step 1*. This template currently describes classification using QIIME 1 classifiers and can be used as a template for classifiers that are called via command line interface. Python-based classifiers can be used following the example of q2-feature-classifier.\n\n3) **[Classifier evaluation](./evaluate-classification.ipynb)** is performed based on taxonomic classifications generated by each classifier used in *step 2*. \n\n\n## Definitions\nThe **[dataset generation](./dataset-generation.ipynb)** notebook uses a few novel definitions. The following provides some explanation of the definitions used in that notebook.\n\n* ``source`` = original reference database sequences and taxonomy.\n* ``QUERY`` = 'novel' query sequences and taxonomies randomly drawn from ``source``. \n* ``REF`` = ``source`` - ``novel`` taxa, used for taxonomy assignment.\n* ``L`` = taxonomic level being tested\n * 0 = kingdom, 1 = phylum, 2 = class, 3 = order, 4 = family, 5 = genus, 6 = species\n* ``branching`` = describes a taxon at level ``L`` that \"branches\" into two or more lineages at ``L + 1``. \n * A \"branched\" taxon, then, describes these lineages. E.g., in the example below Lactobacillaceae, Lactobacillus, and Pediococcus branch, while Paralactobacillus is unbranching. The Lactobacillus and Pediococcus species are \"branched\". 
Paralactobacillus selangorensis is \"unbranched\"\n * The novel taxa analysis only uses \"branching\" taxa, such that for each ``QUERY`` at level ``L``, ``REF`` must contain one or more taxa that share the same clade at level ``L - 1``.\n\n```\nLactobacillaceae\n └── Lactobacillus\n │ ├── Lactobacillus brevis\n │ └── Lactobacillus sanfranciscensis\n ├── Pediococcus\n │ ├── Pediococcus damnosus\n │ └── Pediococcus claussenii\n └── Paralactobacillus\n └── Paralactobacillus selangorensis\n```\n",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
e7ee819ae0585c20ac91f8f2d276fd0b1f71a598 | 24,441 | ipynb | Jupyter Notebook | runAnalyses.ipynb | draran/decision-weight-recovery | 41c5b888439af621afbb898dbe8bfef66b1e72df | [
"CC0-1.0"
] | null | null | null | runAnalyses.ipynb | draran/decision-weight-recovery | 41c5b888439af621afbb898dbe8bfef66b1e72df | [
"CC0-1.0"
] | null | null | null | runAnalyses.ipynb | draran/decision-weight-recovery | 41c5b888439af621afbb898dbe8bfef66b1e72df | [
"CC0-1.0"
] | null | null | null | 35.217579 | 119 | 0.422405 | [
[
[
"# Importing necessary libraries\n#===============================================================================\nimport matplotlib as mpl\nmpl.use('qt5agg')\nmpl.interactive(True)\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sbn\nsbn.set()\nfrom scipy import stats\nimport h5py\nfrom os.path import dirname\nfrom pathlib import Path\nimport sys\nimport mmodel_reversals as mm",
"_____no_output_____"
],
[
"# Setting paths\n#===============================================================================\nROOTPATH = Path().cwd()\n(ROOTPATH / 'Export').mkdir(parents=True, exist_ok=True)",
"_____no_output_____"
],
[
"# Function to compute complex-valued OLS\n#===============================================================================\ndef complexGLM(pred, crit):\n '''\n Compute regression weights for predicting the criterion variable using predictor arrays\n In -> pred = predictor array, crit = criterion vector\n Out -> coefs = regression coefficients/weights\n '''\n pred = np.array(pred)\n crit = np.array(crit)\n if len(crit.shape) < 2:\n crit = crit.reshape(-1, 1)\n if pred.dtype is not np.dtype('complex'):\n pred = np.exp(pred * 1j)\n if crit.dtype is not np.dtype('complex'):\n crit = np.exp(crit * 1j)\n a, b = [crit.shape[0], pred.shape[0]]\n if crit.shape[0] != pred.shape[0]:\n raise ValueError('The two arrays are of incompatible shape, {} and {}'.format(a, b))\n coefs = np.asmatrix(np.asmatrix(pred).H * np.asmatrix(pred)).I * (np.asmatrix(pred).H * np.asmatrix(crit))\n return coefs",
"_____no_output_____"
],
[
"# Setting simulation parameters\n#===============================================================================\nnp.random.seed(0)\ntrlN = 1000\nrunN = 10000\nsimK = np.sort([.1, 2.5, 1., 5., 10.])",
"_____no_output_____"
],
[
"# Simulate independently sampled motion directions\n#===============================================================================\npresDirs_ind = np.angle(\n np.exp(\n np.random.uniform(\n 0, 2 * np.pi, \n size = [runN, trlN, 6]\n ) * 1j\n )\n)\n\npercDirs_ind = np.concatenate([\n np.angle(\n np.exp(\n np.array(\n [\n np.random.vonmises(\n presDirs_ind, K\n )\n for K in simK\n ]\n ) * 1j\n )\n ),\n # no noise condition, K = inf\n presDirs_ind[None]\n])\n# saving data for independently sampled directions\nwith h5py.File(ROOTPATH / 'Export' / 'simData.hdf', 'a') as f:\n f.create_dataset(\n name = 'presDirs_ind', \n data = presDirs_ind, \n compression = 9\n )\nwith h5py.File(ROOTPATH / 'Export' / 'simData.hdf', 'a') as f:\n f.create_dataset(\n name = 'percDirs_ind', \n data = percDirs_ind, \n compression = 9\n )\npresDirs_ind = None\npercDirs_ind = None",
"_____no_output_____"
],
[
"# Simulate dependently sampled motion direction\n#===============================================================================\nfrstTar, frstFoil = np.random.choice(\n np.arange(0, 360), \n size = [2, runN, trlN]\n)\nfrstDis, scndTar = (\n frstTar[None] \n # random direction (CW/CCW)\n + np.random.choice(\n [-1, 1],\n size = [2, runN, trlN]\n ) \n # random angular offset\n * np.random.choice(\n np.arange(30, 151),\n size = [2, runN, trlN]\n )\n)\nscndDis, scndFoil = (\n np.stack(\n [scndTar, frstFoil]\n )\n # random direction (CW/CCW)\n + np.random.choice(\n [-1, 1],\n size = [2, runN, trlN]\n ) \n # random angular offset\n * np.random.choice(\n np.arange(30, 151),\n size = [2, runN, trlN]\n )\n)\npresDirs_dep = np.angle(\n np.exp(\n np.deg2rad(np.stack(\n [frstTar, scndTar, frstDis, scndDis, frstFoil, scndFoil],\n axis = -1\n )) * 1j\n )\n)\n\npercDirs_dep = np.concatenate([\n np.angle(\n np.exp(\n np.array(\n [\n np.random.vonmises(\n presDirs_dep, K\n )\n for K in simK\n ]\n ) * 1j\n )\n ),\n # no noise condition, K = inf\n presDirs_dep[None]\n])\n\n# saving data for dependently sampled directions\nwith h5py.File(ROOTPATH / 'Export' / 'simData.hdf', 'a') as f:\n f.create_dataset(\n name = 'presDirs_dep', \n data = presDirs_dep, \n compression = 9\n )\nwith h5py.File(ROOTPATH / 'Export' / 'simData.hdf', 'a') as f:\n f.create_dataset(\n name = 'percDirs_dep', \n data = percDirs_dep, \n compression = 9\n )\npresDirs_dep = None\npercDirs_dep = None",
"_____no_output_____"
],
[
"# Simulate complex-valued regression weights\n#===============================================================================\nsimCoefAbs = np.random.uniform(size = [runN, 6])\n# the angles of weigthing coeficients\nsimCoefAng = np.random.uniform(\n 0, 2 * np.pi,\n size = [runN, 6]\n)\nwith h5py.File(ROOTPATH / 'Export' / 'simData.hdf', 'a') as f:\n f.create_dataset(\n name = 'coefsAbs', \n data = simCoefAbs, \n compression = 9\n )\n f.create_dataset(\n name = 'coefsAng', \n data = simCoefAng, \n compression = 9\n )\nsimCoefAbs = None\nsimCoefAng = None",
"_____no_output_____"
],
[
"# Run complex-valued OLS for different simulation conditions\n#===============================================================================\nfor cond in ['ind', 'dep', 'dep_ss']:\n # there are three conditions:\n # ind: independently sampled motion\n # dep: dependently sampled motion\n # dep_ss: dependently sampled motion, 100 trials per run\n print('Analysing {} simulation condition'.format(cond.upper()))\n ssize = None\n cond_raw = cond\n if 'ss' in cond.split('_'):\n cond, ssize = cond.split('_')\n with h5py.File(ROOTPATH / 'Export' / 'simData.hdf', 'r') as f:\n presDirs = f['presDirs_{}'.format(cond)][:]\n percDirs = f['percDirs_{}'.format(cond)][:]\n coefsAbs = f['coefsAbs'][:]\n coefsAng = f['coefsAng'][:]\n if ssize:\n presDirs = presDirs[:, :100]\n percDirs = percDirs[:, :, :100]\n\n # running complex-values OLS for different simulated weight angles\n for idx_simAngle, simAngle in enumerate(['null', 'real']):\n # two analyses are run\n # null: the angles of the simulated complex-valued regression weights are zero\n # real: the angles are are randomly sampled \n simCoefs = (\n np.exp(\n [0, 1][idx_simAngle] * coefsAng * 1j\n ) * coefsAbs\n ) \n # %% simulating response on the basis of perceived directions and simulated\n respDirs = np.array([\n np.angle(\n np.sum(\n simCoefs[:, None] \n * np.exp(simKappa * 1j), \n -1))\n for simKappa in percDirs\n ])\n # weighting coefficients\n coefs = np.array(\n [\n [\n complexGLM(presDirs[idxRun], run)\n for idxRun, run in enumerate(simKappa)\n ]\n for simKappa in respDirs\n ] \n ).squeeze()\n print('Finished complex OLS')\n # %% goodness of fit\n predDirs = np.array([\n np.angle(\n np.sum(\n simKappa[:, None, :] \n * np.exp(presDirs * 1j), -1\n )\n )\n for simKappa in coefs\n ])\n GoF = np.array([\n np.angle(\n np.exp(respDirs[simKappa] * 1j)\n / np.exp(predDirs[simKappa] * 1j)\n )\n for simKappa in range(coefs.shape[0])\n ])\n # saving data\n with h5py.File(ROOTPATH / 'Export' / 'simCoefs.hdf', 'a') as f:\n f.create_dataset(\n name = 'coefsAbsHat_{}_{}'.format(cond_raw,simAngle), \n data = np.abs(coefs), \n compression = 9\n )\n f.create_dataset(\n name = 'coefsAngHat_{}_{}'.format(cond_raw,simAngle), \n data = np.angle(coefs), \n compression = 9\n )\n f.create_dataset(\n name = 'GoF_{}_{}'.format(cond_raw,simAngle), \n data = GoF, \n compression = 9\n )",
"_____no_output_____"
],
[
"# Setting parameters for plotting supplementary figure 1\n#===============================================================================\n# two different plottings can be performed\n# first, the results for simulated complex-valued weights using real angles\n# second, the results for simulated weights using zero angles\n# here, only the real values are plotted.\n# N.B., the results for zero angles yields similart goodness-of-fit\n# N.B., the ability of the complex-valued OLS to recover the angles (not plotted)\n# is similar to its ability to recover the lengths, i.e., the decision weights .\nconds = [\n 'GoF_ind_real',\n 'GoF_dep_real',\n 'GoF_dep_ss_real'\n]\nwith h5py.File(ROOTPATH / 'Export' / 'simCoefs.hdf', 'r') as f:\n GoF = dict([(cond, f[cond][:]) for cond in conds])",
"_____no_output_____"
],
[
"# Plotting supplementary figure 1\n#===============================================================================\nsbn.set_style('ticks')\nSSIZE = 8\nMSIZE = 10\nLSIZE = 12\nparams = {'lines.linewidth' : 1.5,\n 'grid.linewidth' : 1,\n 'xtick.labelsize' : MSIZE,\n 'ytick.labelsize' : MSIZE,\n 'xtick.major.width' : 1,\n 'ytick.major.width' : 1,\n 'xtick.major.size' : 5,\n 'ytick.major.size' : 5,\n 'xtick.direction' : 'inout',\n 'ytick.direction' :'inout',\n 'axes.linewidth': 1,\n 'axes.labelsize' : MSIZE,\n 'axes.titlesize' : MSIZE,\n 'figure.titlesize' : LSIZE,\n 'font.size' : MSIZE,\n 'savefig.dpi': 300,\n 'font.sans-serif' : ['Calibri'],\n 'legend.fontsize' : MSIZE,\n 'hatch.linewidth' : .2}\nsbn.mpl.rcParams.update(params)\ncols = sbn.husl_palette(6, h = .15, s = .75, l = .5)\nsimK = np.sort([.1, 2.5, 1., 5., 10.])\nsimNoise = np.random.vonmises(0, simK[:, None], [5, 100000])\nfig = plt.figure(figsize = (8,2.8))\nax = fig.add_subplot(1, 4, 1)\nfor idx_noise, noise in enumerate(simNoise):\n sbn.kdeplot(\n noise,\n color = cols[idx_noise],\n alpha = .8,\n lw = 2,\n label = simK[idx_noise],\n ax = ax\n )\nax.axvline(0, color = cols[-1], alpha = .8, lw = 2, label = 'No noise')\nfor idx_cond, cond in enumerate(conds):\n ax = fig.add_subplot(1,4,2 + idx_cond)\n for idxK, err in enumerate(GoF[cond]):\n sbn.kdeplot(\n err.flatten(),\n color = cols[idxK],\n alpha = .8,\n lw = 2,\n label = '{}$\\degree$'.format(\n np.rad2deg(mm.cstd(err.flatten())).astype('int')\n ),\n ax = ax\n )\nfor idx_ax, ax in enumerate(fig.axes):\n title = '$\\kappa$'\n xlab = 'Perceptual noise'\n if idx_ax:\n title = '$\\sigma$'\n xlab = 'Prediction error'\n ax.legend(\n title = title, \n frameon = False,\n handlelength = 1,\n handletextpad = .5,\n markerfirst = False\n )\n ax.set_ylim(-0.05, 7)\n ax.set_xlim(-np.pi*1.1, np.pi*1.1)\n ax.set_xticks([-np.pi, 0, np.pi])\n ax.set_xticklabels(['-$\\pi$', '0', '$\\pi$'])\n ax.set_yticks([])\n ax.set_xlabel(xlab)\n ax.set_ylabel('Probability density')\n sbn.despine(ax = ax)\n ax.spines['bottom'].set_bounds(-np.pi, np.pi)\n ax.spines['left'].set_visible(False)\n if idx_ax:\n ax.yaxis.set_visible(False)\nplt.tight_layout(rect = (0, 0, 1, 1))\nfig.savefig(\n str(ROOTPATH / 'Export'/ 'GoodnessOfFit_All.png'), \n dpi = 600\n)\nplt.close(fig)",
"_____no_output_____"
],
[
"# Setting parameters for plotting supplementary figure 2\n#===============================================================================\nconds = [\n 'ind_real',\n 'dep_real',\n 'dep_ss_real'\n]\nwith h5py.File(ROOTPATH / 'Export' / 'simData.hdf', 'r') as f:\n coefsAbs = f['coefsAbs'][:]\ncols = sbn.husl_palette(6, h = .15, s = .75, l = .5)",
"_____no_output_____"
],
[
"# Plotting panels A-C of supplementary figure 2\n#===============================================================================\nfor idx_cond, cond in enumerate(conds):\n fig = plt.figure(figsize = (4,2.8))\n with h5py.File(ROOTPATH / 'Export' / 'simCoefs.hdf', 'r') as f:\n coefsAbsHat = f['_'.join(['coefsAbsHat', cond])][:]\n for idxK, weights in enumerate(coefsAbsHat):\n ax = fig.add_subplot(2, 3, idxK + 1)\n scatter = ax.plot(\n coefsAbs.flatten(), \n weights.flatten(), \n '.',\n mec = (.9,.9,.9),\n mfc = 'none',\n zorder = -10\n )\n line = ax.plot(\n np.array([0, 1]), np.array([0, 1]), \n 'k--',\n lw = 1,\n zorder = 0\n )\n bins = pd.qcut(coefsAbs.flatten(), 4).codes\n dataset = [weights.flatten()[bins == bin] for bin in np.unique(bins)]\n vlnplt = ax.violinplot(\n dataset, \n positions = [.125, .375, .625, .875],\n showextrema = False,\n showmedians = True,\n widths = .15,\n ) \n for i in vlnplt['bodies']:\n i.set_alpha(.8)\n i.set_facecolor(cols[idxK])\n i.set_lw(0)\n vlnplt['cmedians'].set_edgecolor('white')\n vlnplt['cmedians'].set_lw(.5)\n ax.text(\n .05, .95,\n (\n ['$\\kappa$ = {}'.format(k) for k in simK] \n + ['No noise']\n )[idxK],\n transform = ax.transAxes,\n va = 'top'\n )\n ax.set_xlabel('Simulated weights')\n ax.set_ylabel('Estimated weights')\n for idx_ax, ax in enumerate(fig.axes):\n ax.tick_params('both', direction = 'out')\n ax.set_xlim(-.1, 1.1)\n ax.set_ylim(-.1, 1.1)\n ax.spines['bottom'].set_bounds(0,1)\n ax.spines['left'].set_bounds(0,1)\n ax.spines['top'].set_visible(False)\n ax.spines['right'].set_visible(False)\n ax.set_xticks(np.linspace(0, 1, 3))\n ax.set_yticks(np.linspace(0, 1, 3))\n if idx_ax not in [0, 3]:\n ax.yaxis.set_visible(False)\n ax.spines['left'].set_visible(False)\n if idx_ax not in [3, 4, 5]:\n ax.xaxis.set_visible(False)\n ax.spines['bottom'].set_visible(False)\n plt.tight_layout(rect = (0, 0, 1, .975))\n label = [\n 'Independently sampled motion, 10$^3$ trials, 10$^4$ runs',\n 'Dependently sampled motion, 10$^3$ trials, 10$^4$ runs',\n 'Dependently sampled motion, 10$^2$ trials, 10$^4$ runs'\n ][idx_cond]\n fig.text(\n .5, 1, \n label,\n ha = 'center',\n va = 'top'\n )\n fig.savefig(\n str(\n ROOTPATH / \n 'Export' / \n 'WeightRecovery_{}.png'\n ).format([\n 'A', 'B', 'C'\n ][idx_cond]),\n dpi = 600\n )\n plt.close(fig)",
"_____no_output_____"
],
[
"# Plotting panel D of supplementary figure 2\n#===============================================================================\nfrom mpl_toolkits.axes_grid1 import ImageGrid\ncols = sbn.husl_palette(6, h = .15, s = .75, l = .5)\nfig = plt.figure(figsize = (4,2.8))\ngrid = ImageGrid(\n fig, 111, nrows_ncols = (2, 3), \n share_all = True, cbar_mode= 'single', aspect= True\n)\nfor idxK, weights in enumerate(coefsAbsHat):\n ax = grid[idxK]\n heatmap, xedges, yedges = np.histogram2d(\n np.array(list(map(\n stats.rankdata,\n coefsAbs\n ))).flatten(), \n np.array(list(map(\n stats.rankdata,\n weights\n ))).flatten(),\n bins = np.linspace(.5, 6.5, 7)\n )\n heatmap /= heatmap.sum()\n extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]]\n im = ax.imshow(\n heatmap, \n extent = extent, origin = 'lower', \n vmin = 0, vmax = .15,\n cmap = 'viridis'\n )\n ax.text(\n .05, .95,\n (\n ['$\\kappa$ = {}'.format(k) for k in simK] \n + ['No noise']\n )[idxK],\n transform = ax.transAxes,\n va = 'top',\n color = 'white'\n )\ngrid.cbar_axes[0].colorbar(im)\ngrid.cbar_axes[0].set_ylim(0, .14)\ngrid.cbar_axes[0].set_yticks([.0, .05, .10, .15])\ngrid.cbar_axes[0].set_yticklabels(['0','5','10', '15'])\ngrid.cbar_axes[0].tick_params(direction = 'inout', length = 5)\ngrid[0].tick_params('both', direction = 'out', length = 5)\nfor idx_ax, ax in enumerate(grid):\n ax.tick_params('both', direction = 'inout', length = 5)\n ax.set_yticks(np.linspace(1,6,6))\n ax.set_xticks(np.linspace(1,6,6))\n if idx_ax not in [0, 3]:\n ax.yaxis.set_visible(False)\n if idx_ax < 3:\n ax.xaxis.set_visible(False)\nplt.tight_layout(rect = (.01, .01, .94, .99))\nfig.text(\n .5, .99, \n 'Dependently sampled motion, 10$^2$ trials, 10$^4$ runs',\n ha = 'center',\n va = 'top'\n)\nfig.text(\n .01, .5,\n 'Estimated weight rank',\n ha = 'left',\n va = 'center',\n rotation = 90\n)\nfig.text(\n .5, .01,\n 'Simulated weight rank',\n ha = 'center',\n va = 'bottom',\n)\nfig.text(\n .99, .5,\n 'Frequency [%]',\n ha = 'right',\n va = 'center',\n rotation = -90\n)\nfig.savefig(\n str(\n ROOTPATH /\n 'Export' /\n 'WeightRecovery_D.png'\n ), \n dpi = 600\n)\nplt.close(fig)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7ee8dc6e96546ab0448ba8d1f03c210c495240d | 10,344 | ipynb | Jupyter Notebook | jupyter-notebook/eda_import_exports.ipynb | NanceCA/binational-trade-volumes | 61b367ae5843d542b018ef0ac8e66ebbefb9804f | [
"MIT"
] | 2 | 2020-04-05T05:11:35.000Z | 2020-07-04T07:05:03.000Z | jupyter-notebook/eda_import_exports.ipynb | NanceCA/binational-trade-volumes | 61b367ae5843d542b018ef0ac8e66ebbefb9804f | [
"MIT"
] | 1 | 2021-05-11T07:28:52.000Z | 2021-05-11T07:28:52.000Z | jupyter-notebook/eda_import_exports.ipynb | NanceCA/binational-trade-volumes | 61b367ae5843d542b018ef0ac8e66ebbefb9804f | [
"MIT"
] | 1 | 2020-07-04T07:05:04.000Z | 2020-07-04T07:05:04.000Z | 21.822785 | 95 | 0.465197 | [
[
[
"## EDA for Import and Export Trade Volumes",
"_____no_output_____"
],
[
"### Binational trade relationship between Mexico and the United States",
"_____no_output_____"
]
],
[
[
"#import key libraries\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Dataset 1: General Imports from Mexico to the United States",
"_____no_output_____"
]
],
[
[
"imports = pd.read_csv(\"./data/usitc/total-imports-mx2us.csv\")",
"_____no_output_____"
],
[
"## data to be read includes the customs value of the import and the year\nimports.shape",
"_____no_output_____"
],
[
"imports.head()\n#note that the customs_value and the dollar_amount are the same just different data types",
"_____no_output_____"
],
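[
"# Added check of the note above (assuming both columns are present): if customs_value\n# and dollar_amount really hold the same values, this should return True.\n(imports['customs_value'] == imports['dollar_amount']).all()",
"_____no_output_____"
],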
[
"list(imports.columns)",
"_____no_output_____"
],
[
"imports['imports'].describe()",
"_____no_output_____"
],
[
"imports['dollar_amount'].describe()",
"_____no_output_____"
],
[
"imports['customs_value'].plot(kind=\"bar\")\n## confirming that the data is linear",
"_____no_output_____"
],
[
"plt.scatter(imports[\"year\"],imports['customs_value'],color=\"blue\")\nplt.title('Imports from Mexico to the US, Annual')\nplt.xlabel('year')\nplt.ylabel('customs value e11')\nplt.show()\n##amazing! Looks pretty linear to me",
"_____no_output_____"
]
],
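[
[
"# Added sketch to back up the \"looks pretty linear\" observation above: the Pearson\n# correlation between year and customs value should be close to 1 for a linear trend.\nimport numpy as np\nnp.corrcoef(imports['year'], imports['customs_value'])[0, 1]",
"_____no_output_____"
]
],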
[
[
"## Dataset #2 Exports from US to Mexico",
"_____no_output_____"
]
],
[
[
"exports = pd.read_csv(\"./data/usitc/total-exports-us2mx.csv\")",
"_____no_output_____"
],
[
"exports.shape",
"_____no_output_____"
],
[
"exports.head()",
"_____no_output_____"
],
[
"list(exports.columns)",
"_____no_output_____"
],
[
"exports['exports'].describe()",
"_____no_output_____"
],
[
"plt.scatter(exports[\"year\"],exports['exports'],color=\"green\")\nplt.title('Exports from US to Mexico, Annual')\nplt.xlabel('year')\nplt.ylabel('FAS Value e11')\nplt.show()\n##generally pretty linear",
"_____no_output_____"
],
[
"## Combining both exports and imports",
"_____no_output_____"
],
[
"## combine both vectors on one graph\nplt.plot(exports[\"year\"],exports['exports'],color=\"green\",label=\"exports\")\nplt.scatter(imports[\"year\"],imports['imports'],color=\"blue\",label=\"imports\")\nplt.title(\"Plotting imports and exports\")\nplt.xlabel(\"Year\")\nplt.ylabel(\"Value\")\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Data preprocessing",
"_____no_output_____"
]
],
[
[
"# imports\nyear_var = list(imports['year'])\nprint(year_var)",
"_____no_output_____"
],
[
"dollar = list(imports[\"dollar_amount\"])\nprint(dollar)",
"_____no_output_____"
],
[
"def pre_process(year, dollar):\n print(\"[\",year,\",\",dollar,\"]\",\",\")",
"_____no_output_____"
],
[
"pre_process(1996, 2)",
"_____no_output_____"
]
],
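[
[
"# Added example of the preprocessing step above: apply pre_process to every\n# (year, dollar_amount) pair instead of a single hand-picked value.\nfor y, d in zip(year_var, dollar):\n    pre_process(y, d)",
"_____no_output_____"
]
],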
[
[
"## Running descriptive statistics",
"_____no_output_____"
]
],
[
[
"# Pulling in descriptive statistics on IMPORTS\nfrom scipy import stats\nstats.describe(imports['imports'])",
"_____no_output_____"
],
[
"imports['imports'].describe()",
"_____no_output_____"
],
[
"exports[\"exports\"].describe()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7ee9115cffb2943d409592f71570a25853a3bdf | 34,006 | ipynb | Jupyter Notebook | Cauchy Distribution.ipynb | vikasgorur/all-of-stats | 6e3f47238537b691a7edca14f124abcabed4ac7d | [
"MIT"
] | null | null | null | Cauchy Distribution.ipynb | vikasgorur/all-of-stats | 6e3f47238537b691a7edca14f124abcabed4ac7d | [
"MIT"
] | null | null | null | Cauchy Distribution.ipynb | vikasgorur/all-of-stats | 6e3f47238537b691a7edca14f124abcabed4ac7d | [
"MIT"
] | null | null | null | 213.874214 | 30,314 | 0.920808 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7ee9205aad19aeeb559290637f332ae13d3cd84 | 9,189 | ipynb | Jupyter Notebook | Explore U.S. Births/Basics.ipynb | vipmunot/Data-Science-Projects | 9dcc3b7909074080dad16666f2e1a06ca2f23f86 | [
"MIT"
] | 8 | 2017-02-28T01:05:52.000Z | 2021-05-27T13:25:13.000Z | Explore U.S. Births/Basics.ipynb | vipmunot/Data-Science-Projects | 9dcc3b7909074080dad16666f2e1a06ca2f23f86 | [
"MIT"
] | null | null | null | Explore U.S. Births/Basics.ipynb | vipmunot/Data-Science-Projects | 9dcc3b7909074080dad16666f2e1a06ca2f23f86 | [
"MIT"
] | 2 | 2020-10-07T19:39:49.000Z | 2021-06-13T08:12:07.000Z | 22.249395 | 80 | 0.454348 | [
[
[
"## 1: Introduction To The Dataset",
"_____no_output_____"
]
],
[
[
"data = open('US_births_1994-2003_CDC_NCHS.csv','r').read().split('\\n')\ndata[:10]",
"_____no_output_____"
]
],
[
[
"## 2: Converting Data Into A List Of Lists",
"_____no_output_____"
]
],
[
[
"def read_csv(filename,header = False):\n    final_list = []\n    # skip the header row only when header = True\n    if header == True:\n        read_data = open(filename,'r').read().split('\\n')[1:]\n    else:\n        read_data = open(filename,'r').read().split('\\n')\n    for item in read_data:\n        int_fields = []\n        string_fields = item.split(',')\n        for val in string_fields:\n            int_fields.append(int(val))\n        final_list.append(int_fields) \n    return(final_list) \ncdc_list = read_csv('US_births_1994-2003_CDC_NCHS.csv',header = True)\ncdc_list[:10]",
"_____no_output_____"
]
],
[
[
"## 3: Calculating Number Of Births Each Month",
"_____no_output_____"
]
],
[
[
"def month_births(data):\n births_per_month = {}\n for item in data:\n if item[1] in births_per_month.keys():\n births_per_month[item[1]] += item[4]\n else:\n births_per_month[item[1]] = item[4]\n return(births_per_month)\ncdc_month_births = month_births(cdc_list) \ncdc_month_births",
"_____no_output_____"
],
[
"def dow_births(data):\n births_per_dow = {}\n for item in data:\n if item[3] in births_per_dow.keys():\n births_per_dow[item[3]] += item[4]\n else:\n births_per_dow[item[3]] = item[4]\n return(births_per_dow)\ncdc_day_births = dow_births(cdc_list) \ncdc_day_births",
"_____no_output_____"
]
],
[
[
"## 5: Creating A More General Function",
"_____no_output_____"
]
],
[
[
"def calc_counts(data,column):\n birth = {}\n for item in data:\n if item[column] in birth.keys():\n birth[item[column]] += item[4]\n else:\n birth[item[column]] = item[4]\n return(birth)\ncdc_year_births = calc_counts(cdc_list, 0)\ncdc_month_births = calc_counts(cdc_list, 1)\ncdc_dom_births = calc_counts(cdc_list, 2)\ncdc_dow_births = calc_counts(cdc_list, 3)",
"_____no_output_____"
],
[
"cdc_year_births",
"_____no_output_____"
],
[
"cdc_month_births",
"_____no_output_____"
],
[
"cdc_dom_births",
"_____no_output_____"
],
[
"cdc_dow_births",
"_____no_output_____"
],
[
"def min_max(dictionary):\n min_val = min(dictionary.items(), key=lambda k: k[1])\n max_val = max(dictionary.items(), key=lambda k: k[1])\n return(\"Minimum Value:%s Maximum Value:%s\"%(min_val,max_val))\nmin_max(cdc_dow_births)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7eea3a455977f80d6e7cd99813b739c915783b8 | 26,449 | ipynb | Jupyter Notebook | examples/condor_dirhash/plot_iris_dataset.ipynb | SmartDataInnovationLab/git_batch | bccaac72d52bd8dcf3a6da947cc0c43ca73dcefb | [
"BSD-3-Clause"
] | null | null | null | examples/condor_dirhash/plot_iris_dataset.ipynb | SmartDataInnovationLab/git_batch | bccaac72d52bd8dcf3a6da947cc0c43ca73dcefb | [
"BSD-3-Clause"
] | 16 | 2018-03-19T12:33:14.000Z | 2018-08-30T13:02:23.000Z | examples/condor_dirhash/plot_iris_dataset.ipynb | SmartDataInnovationLab/git_batch | bccaac72d52bd8dcf3a6da947cc0c43ca73dcefb | [
"BSD-3-Clause"
] | null | null | null | 383.318841 | 25,068 | 0.941019 | [
[
[
"import matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom csv import reader\nimport numpy\n\niris = numpy.genfromtxt('data/iris.csv', delimiter=',', dtype=\"|U5\")\nX = iris[1:,:2].astype(numpy.float)\ny = numpy.unique(iris[1:,4], return_inverse=True)[1]\n\nplt.scatter(X[:, 0], X[:, 1], c=y)\nplt.xlim(3.5, 8.5)\nplt.ylim(1.5, 5)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
e7eeabea31f91bec025e1a82487a0792eaf9f181 | 20,115 | ipynb | Jupyter Notebook | veri_final.ipynb | JULYEN/kmeans-clustering | 69e097a3ea0d052c060a72d3575a38cae81c2cac | [
"Apache-2.0"
] | null | null | null | veri_final.ipynb | JULYEN/kmeans-clustering | 69e097a3ea0d052c060a72d3575a38cae81c2cac | [
"Apache-2.0"
] | null | null | null | veri_final.ipynb | JULYEN/kmeans-clustering | 69e097a3ea0d052c060a72d3575a38cae81c2cac | [
"Apache-2.0"
] | null | null | null | 191.571429 | 17,892 | 0.917027 | [
[
[
"from pandas import DataFrame\nimport matplotlib.pyplot as plt\nfrom sklearn.cluster import KMeans",
"_____no_output_____"
],
[
"\nData = {'x': [25,34,22,27,23,24,31,22,35,26,28,54,57,43,36,27,29,52,32,47,39,48,35,33,44,45,38,43,41,46],\n 'y': [79,51,53,78,99,92,73,57,69,75,51,32,40,77,53,36,35,58,59,50,25,20,14,12,20,5,29,27,8,7]\n }",
"_____no_output_____"
],
[
"df = DataFrame(Data,columns=['x','y'])",
"_____no_output_____"
],
[
"kmeans = KMeans(n_clusters=2).fit(df)\ncentroids = kmeans.cluster_centers_\nprint(centroids)",
"[[31.6875 67.125 ]\n [41.35714286 22.14285714]]\n"
],
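[
"# Added sketch (not part of the original analysis): print the inertia for several k,\n# a common \"elbow\" heuristic for checking whether n_clusters=2 is a reasonable choice.\nfor k in range(1, 7):\n    print(k, KMeans(n_clusters=k).fit(df).inertia_)",
"_____no_output_____"
],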
[
"plt.scatter(df['x'], df['y'], c= kmeans.labels_.astype(float), s=50, alpha=0.6)\nplt.scatter(centroids[:, 0], centroids[:, 1], c='blue', s=50)\nplt.title('Kişi Sonuç Değerlendirmesi')\nplt.ylabel('Başarı Puanı')\nplt.xlabel('Yaş ')\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7eeb5ba98456184532e96f78f40589931d63f0c | 5,152 | ipynb | Jupyter Notebook | ipynb/make_permissive_clusters.ipynb | pdhsulab/GeneGraphDB | f2754a596d08680f7f0b092391653d9dbf477ddd | [
"MIT"
] | 6 | 2021-08-28T07:03:07.000Z | 2022-02-22T03:34:11.000Z | ipynb/make_permissive_clusters.ipynb | pdhsulab/GeneGraphDB | f2754a596d08680f7f0b092391653d9dbf477ddd | [
"MIT"
] | null | null | null | ipynb/make_permissive_clusters.ipynb | pdhsulab/GeneGraphDB | f2754a596d08680f7f0b092391653d9dbf477ddd | [
"MIT"
] | null | null | null | 43.294118 | 1,155 | 0.60559 | [
[
[
"import os\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom Bio import SeqIO\nimport csv\nimport sqlite3\nimport time",
"_____no_output_____"
],
[
"con=sqlite3.connect(\"80kprotein_stats.db\")\ncur = con.cursor()\ndef get_protein_seq(pid):\n #cmd = \"SELECT * FROM proteins WHERE hashid='%s'\" % pid\n cmd = \"SELECT * FROM proteins WHERE pid = '%s'\" % pid \n #print(cmd)\n cur.execute(cmd)\n return cur.fetchone()[-1]",
"_____no_output_____"
],
[
"path_clu_rep = \"../clusters/clu_rep_stringent_final.csv\"",
"_____no_output_____"
],
[
"outfile = open(\"../clusters/INPUT/permissive/clu_perm_mmseqs_input.faa\", \"w\")\nwith open(path_clu_rep, 'r') as file:\n reader = csv.reader(file)\n next(reader)\n prev_rep = \"\"\n for row in reader:\n stringent_rep = row[0]\n if prev_rep != stringent_rep:\n print(\">\" + stringent_rep, file = outfile)\n print(get_protein_seq(stringent_rep), file = outfile)\n prev_rep = stringent_rep",
"_____no_output_____"
],
[
"con.commit()\ncon.close()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7eed1c6f168719b4fd256dc918aeb6441f0d099 | 21,387 | ipynb | Jupyter Notebook | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience | 4135a3acc768bda78ca8f999c61de23954a5330e | [
"MIT"
] | null | null | null | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience | 4135a3acc768bda78ca8f999c61de23954a5330e | [
"MIT"
] | null | null | null | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience | 4135a3acc768bda78ca8f999c61de23954a5330e | [
"MIT"
] | null | null | null | 27.883963 | 716 | 0.576986 | [
[
[
"<a href=\"https://cognitiveclass.ai/\">\n <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png\" width=\"200\" align=\"center\">\n</a>",
"_____no_output_____"
],
[
"<h1>Classes and Objects in Python</h1>",
"_____no_output_____"
],
[
"<p>\n <strong>Welcome!</strong> \n Objects in programming are like objects in real life. Like life, there are different classes of objects. In this notebook, we will create two classes called Circle and Rectangle. By the end of this notebook, you will have a better idea about :\n <ul>\n <li>what a class is</li>\n <li>what an attribute is</li>\n <li>what a method is</li>\n </ul>\n\n Don’t worry if you don’t get it the first time, as much of the terminology is confusing. Don’t forget to do the practice tests in the notebook.\n</p>",
"_____no_output_____"
],
[
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n <a href=\"https://cocl.us/topNotebooksPython101Coursera\">\n <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png\" width=\"750\" align=\"center\">\n </a>\n</div>",
"_____no_output_____"
],
[
"<h2>Table of Contents</h2>\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n <ul>\n <li>\n <a href=\"#intro\">Introduction to Classes and Objects</a>\n <ul>\n <li><a href=\"create\">Creating a class</a></li>\n <li><a href=\"instance\">Instances of a Class: Objects and Attributes</a></li>\n <li><a href=\"method\">Methods</a></li>\n </ul>\n </li>\n <li><a href=\"creating\">Creating a class</a></li>\n <li><a href=\"circle\">Creating an instance of a class Circle</a></li>\n <li><a href=\"rect\">The Rectangle Class</a></li>\n </ul>\n <p>\n Estimated time needed: <strong>40 min</strong>\n </p>\n</div>\n\n<hr>",
"_____no_output_____"
],
[
"<h2 id=\"intro\">Introduction to Classes and Objects</h2>",
"_____no_output_____"
],
[
"<h3>Creating a Class</h3>",
"_____no_output_____"
],
[
"The first part of creating a class is giving it a name. In this notebook, we will create two classes, Circle and Rectangle. We need to determine all the data that make up that class, and we call that an attribute. Think about this step as creating a blueprint that we will use to create objects. In Figure 1 we see two classes, circle and rectangle. Each has its own attributes, which are variables. The class circle has the attributes radius and color, while the rectangle has the attributes height and width. Let’s use the visual examples of these shapes before we get to the code, as this will help you get accustomed to the vocabulary.",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/ClassesClass.png\" width=\"500\" />",
"_____no_output_____"
],
[
"<i>Figure 1: Classes circle and rectangle, and each has their own attributes. The class circle has the attribute radius and colour, the rectangle has the attribute height and width.</i>\n",
"_____no_output_____"
],
[
"<h3 id=\"instance\">Instances of a Class: Objects and Attributes</h3>",
"_____no_output_____"
],
[
"An object is an instance (a realisation) of a class, and in Figure 2 we see three instances of the class circle. We give each object a name: red circle, yellow circle and green circle. Each object has different attributes, so let's focus on the attribute of colour for each object.",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/ClassesObj.png\" width=\"500\" />",
"_____no_output_____"
],
[
"<i>Figure 2: Three instances of the class circle or three objects of type circle.</i>",
"_____no_output_____"
],
[
" The colour attribute for the red circle is the colour red, for the green circle object the colour attribute is green, and for the yellow circle the colour attribute is yellow. \n",
"_____no_output_____"
],
[
"<h3 id=\"method\">Methods</h3>",
"_____no_output_____"
],
[
"Methods give you a way to change or interact with the object; they are functions that interact with objects. For example, let’s say we would like to increase the radius by a specified amount of a circle. We can create a method called **add_radius(r)** that increases the radius by **r**. This is shown in figure 3, where after applying the method to the \"orange circle object\", the radius of the object increases accordingly. The “dot” notation means to apply the method to the object, which is essentially applying a function to the information in the object.",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/ClassesMethod.png\" width=\"500\" /> ",
"_____no_output_____"
],
[
"<i>Figure 3: Applying the method “add_radius” to the object orange circle object.</i>",
"_____no_output_____"
],
[
"<hr>",
"_____no_output_____"
],
[
"<h2 id=\"creating\">Creating a Class</h2>",
"_____no_output_____"
],
[
"Now we are going to create a class circle, but first, we are going to import a library to draw the objects: ",
"_____no_output_____"
]
],
[
[
"# Import the library\n\nimport matplotlib.pyplot as plt\n%matplotlib inline ",
"_____no_output_____"
]
],
[
[
" The first step in creating your own class is to use the <code>class</code> keyword, then the name of the class as shown in Figure 4. In this course the class parent will always be object: ",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/ClassesDefine.png\" width=\"400\" />",
"_____no_output_____"
],
[
"<i>Figure 4: Three instances of the class circle or three objects of type circle.</i>",
"_____no_output_____"
],
[
"The next step is a special method called a constructor <code>__init__</code>, which is used to initialize the object. The input are data attributes. The term <code>self</code> contains all the attributes in the set. For example the <code>self.color</code> gives the value of the attribute color and <code>self.radius</code> will give you the radius of the object. We also have the method <code>add_radius()</code> with the parameter <code>r</code>, the method adds the value of <code>r</code> to the attribute radius. To access the radius we use the syntax <code>self.radius</code>. The labeled syntax is summarized in Figure 5:",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/ClassesCircle.png\" width=\"600\" />",
"_____no_output_____"
],
[
"<i>Figure 5: Labeled syntax of the object circle.</i>",
"_____no_output_____"
],
[
"The actual object is shown below. We include the method <code>drawCircle</code> to display the image of a circle. We set the default radius to 3 and the default colour to blue:",
"_____no_output_____"
]
],
[
[
"# Create a class Circle\n\nclass Circle(object):\n \n # Constructor\n def __init__(self, radius=3, color='blue'):\n self.radius = radius\n self.color = color \n \n # Method\n def add_radius(self, r):\n self.radius = self.radius + r\n return(self.radius)\n \n # Method\n def drawCircle(self):\n plt.gca().add_patch(plt.Circle((0, 0), radius=self.radius, fc=self.color))\n plt.axis('scaled')\n plt.show() ",
"_____no_output_____"
]
],
[
[
"<hr>",
"_____no_output_____"
],
[
"<h2 id=\"circle\">Creating an instance of a class Circle</h2>",
"_____no_output_____"
],
[
"Let’s create the object <code>RedCircle</code> of type Circle to do the following:",
"_____no_output_____"
]
],
[
[
"# Create an object RedCircle\n\nRedCircle = Circle(10, 'red')",
"_____no_output_____"
]
],
[
[
"We can use the <code>dir</code> command to get a list of the object's methods. Many of them are default Python methods.",
"_____no_output_____"
]
],
[
[
"# Find out the methods can be used on the object RedCircle\n\ndir(RedCircle)",
"_____no_output_____"
]
],
[
[
"We can look at the data attributes of the object: ",
"_____no_output_____"
]
],
[
[
"# Print the object attribute radius\n\nRedCircle.radius",
"_____no_output_____"
],
[
"# Print the object attribute color\n\nRedCircle.color",
"_____no_output_____"
]
],
[
[
" We can change the object's data attributes: ",
"_____no_output_____"
]
],
[
[
"# Set the object attribute radius\n\nRedCircle.radius = 1\nRedCircle.radius",
"_____no_output_____"
]
],
[
[
" We can draw the object by using the method <code>drawCircle()</code>:",
"_____no_output_____"
]
],
[
[
"# Call the method drawCircle\n\nRedCircle.drawCircle()",
"_____no_output_____"
]
],
[
[
"We can increase the radius of the circle by applying the method <code>add_radius()</code>. Let's increase the radius by 2 and then by 5:",
"_____no_output_____"
]
],
[
[
"# Use method to change the object attribute radius\n\nprint('Radius of object:',RedCircle.radius)\nRedCircle.add_radius(2)\nprint('Radius of object of after applying the method add_radius(2):',RedCircle.radius)\nRedCircle.add_radius(5)\nprint('Radius of object of after applying the method add_radius(5):',RedCircle.radius)",
"_____no_output_____"
]
],
[
[
" Let’s create a blue circle. As the default colour is blue, all we have to do is specify what the radius is:",
"_____no_output_____"
]
],
[
[
"# Create a blue circle with a given radius\n\nBlueCircle = Circle(radius=100)",
"_____no_output_____"
]
],
[
[
" As before we can access the attributes of the instance of the class by using the dot notation:",
"_____no_output_____"
]
],
[
[
"# Print the object attribute radius\n\nBlueCircle.radius",
"_____no_output_____"
],
[
"# Print the object attribute color\n\nBlueCircle.color",
"_____no_output_____"
]
],
[
[
" We can draw the object by using the method <code>drawCircle()</code>:",
"_____no_output_____"
]
],
[
[
"# Call the method drawCircle\n\nBlueCircle.drawCircle()",
"_____no_output_____"
]
],
[
[
"Compare the x and y axis of the figure to the figure for <code>RedCircle</code>; they are different.",
"_____no_output_____"
],
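[
"For instance, a quick added check such as <code>print(RedCircle.radius, BlueCircle.radius)</code> makes the difference between the two objects concrete.",
"_____no_output_____"
],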
[
"<hr>",
"_____no_output_____"
],
[
"<h2 id=\"rect\">The Rectangle Class</h2>",
"_____no_output_____"
],
[
"Let's create a class rectangle with the attributes of height, width and color. We will only add the method to draw the rectangle object:",
"_____no_output_____"
]
],
[
[
"# Create a new Rectangle class for creating a rectangle object\n\nclass Rectangle(object):\n \n # Constructor\n def __init__(self, width=2, height=3, color='r'):\n self.height = height \n self.width = width\n self.color = color\n \n # Method\n def drawRectangle(self):\n plt.gca().add_patch(plt.Rectangle((0, 0), self.width, self.height ,fc=self.color))\n plt.axis('scaled')\n plt.show()",
"_____no_output_____"
]
],
[
[
"Let’s create the object <code>SkinnyBlueRectangle</code> of type Rectangle. Its width will be 2, its height will be 10, and the color will be blue:",
"_____no_output_____"
]
],
[
[
"# Create a new object rectangle\n\nSkinnyBlueRectangle = Rectangle(2, 10, 'blue')",
"_____no_output_____"
]
],
[
[
" As before we can access the attributes of the instance of the class by using the dot notation:",
"_____no_output_____"
]
],
[
[
"# Print the object attribute height\n\nSkinnyBlueRectangle.height ",
"_____no_output_____"
],
[
"# Print the object attribute width\n\nSkinnyBlueRectangle.width",
"_____no_output_____"
],
[
"# Print the object attribute color\n\nSkinnyBlueRectangle.color",
"_____no_output_____"
]
],
[
[
" We can draw the object:",
"_____no_output_____"
]
],
[
[
"# Use the drawRectangle method to draw the shape\n\nSkinnyBlueRectangle.drawRectangle()",
"_____no_output_____"
]
],
[
[
"Let’s create the object <code>FatYellowRectangle</code> of type Rectangle. Its width will be 20, its height will be 5, and the color will be yellow:",
"_____no_output_____"
]
],
[
[
"# Create a new object rectangle\n\nFatYellowRectangle = Rectangle(20, 5, 'yellow')",
"_____no_output_____"
]
],
[
[
" We can access the attributes of the instance of the class by using the dot notation:",
"_____no_output_____"
]
],
[
[
"# Print the object attribute height\n\nFatYellowRectangle.height ",
"_____no_output_____"
],
[
"# Print the object attribute width\n\nFatYellowRectangle.width",
"_____no_output_____"
],
[
"# Print the object attribute color\n\nFatYellowRectangle.color",
"_____no_output_____"
]
],
[
[
" We can draw the object:",
"_____no_output_____"
]
],
[
[
"# Use the drawRectangle method to draw the shape\n\nFatYellowRectangle.drawRectangle()",
"_____no_output_____"
]
],
[
[
"<hr>\n<h2>The last exercise!</h2>\n<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href=\"https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/\" target=\"_blank\">this article</a> to learn how to share your work.\n<hr>",
"_____no_output_____"
],
[
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n<h2>Get IBM Watson Studio free of charge!</h2>\n <p><a href=\"https://cocl.us/bottemNotebooksPython101Coursera\"><img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png\" width=\"750\" align=\"center\"></a></p>\n</div>",
"_____no_output_____"
],
[
"<h3>About the Authors:</h3> \n<p><a href=\"https://www.linkedin.com/in/joseph-s-50398b136/\" target=\"_blank\">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>",
"_____no_output_____"
],
[
"Other contributors: <a href=\"www.linkedin.com/in/jiahui-mavis-zhou-a4537814a\">Mavis Zhou</a>",
"_____no_output_____"
],
[
"<hr>",
"_____no_output_____"
],
[
"<p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href=\"https://cognitiveclass.ai/mit-license/\">MIT License</a>.</p>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e7eee6e708e445378b01380be9d80e607ad25e9d | 122,545 | ipynb | Jupyter Notebook | notebooks/01-exploracao-dados.ipynb | andersonnrc/projeto-bootcamp-carrefour-analise-dados | 51d215b829e1ffc683d9baa686ca98e497bc1b92 | [
"MIT"
] | null | null | null | notebooks/01-exploracao-dados.ipynb | andersonnrc/projeto-bootcamp-carrefour-analise-dados | 51d215b829e1ffc683d9baa686ca98e497bc1b92 | [
"MIT"
] | null | null | null | notebooks/01-exploracao-dados.ipynb | andersonnrc/projeto-bootcamp-carrefour-analise-dados | 51d215b829e1ffc683d9baa686ca98e497bc1b92 | [
"MIT"
] | null | null | null | 102.291319 | 31,048 | 0.785263 | [
[
[
"# Expenses - Payment Authorizations of the Government of the State of Paraíba\n## January/2021 to June/2021",
"_____no_output_____"
]
],
[
[
"# Instalação pacotes\n\n!pip install pandas\n!pip install PyMySQL\n!pip install SQLAlchemy",
"_____no_output_____"
],
[
"import pandas as pd",
"_____no_output_____"
],
[
"# Carregar CSVs em data frame do pandas\n\ndf1 = pd.read_csv('../data/pagamento_exercicio_2021_mes_1.csv', encoding='ISO-8859-1',sep=';')\ndf2 = pd.read_csv('../data/pagamento_exercicio_2021_mes_2.csv', encoding='ISO-8859-1',sep=';')\ndf3 = pd.read_csv('../data/pagamento_exercicio_2021_mes_3.csv', encoding='ISO-8859-1',sep=';')\ndf4 = pd.read_csv('../data/pagamento_exercicio_2021_mes_4.csv', encoding='ISO-8859-1',sep=';')\ndf5 = pd.read_csv('../data/pagamento_exercicio_2021_mes_5.csv', encoding='ISO-8859-1',sep=';')\ndf6 = pd.read_csv('../data/pagamento_exercicio_2021_mes_6.csv', encoding='ISO-8859-1',sep=';')",
"_____no_output_____"
],
[
"# Concatenar todos os dataframes\n\ndf = pd.concat([df1, df2, df3, df4, df5, df6])",
"_____no_output_____"
]
],
[
[
"## Performing analyses and transformations",
"_____no_output_____"
]
],
[
[
"# Exibir as colunas\n\ndf.columns",
"_____no_output_____"
],
[
"# Exibir quantidade de linhas e colunas\n\ndf.shape",
"_____no_output_____"
],
[
"# Exibir tipos das colunas\n\ndf.dtypes",
"_____no_output_____"
],
[
"# Converter coluna (DATA_PAGAMENTO) em datetime\n# Converter colunas (EXERCICIO, CODIGO_UNIDADE_GESTORA, NUMERO_EMPENHO, NUMERO_AUTORIZACAO_PAGAMENTO) em object\n\ndf[\"DATA_PAGAMENTO\"] = pd.to_datetime(df[\"DATA_PAGAMENTO\"])\ndf[\"EXERCICIO\"] = df[\"EXERCICIO\"].astype(\"object\")\ndf[\"CODIGO_UNIDADE_GESTORA\"] = df[\"CODIGO_UNIDADE_GESTORA\"].astype(\"object\")\ndf[\"NUMERO_EMPENHO\"] = df[\"CODIGO_UNIDADE_GESTORA\"].astype(\"object\")\ndf[\"NUMERO_AUTORIZACAO_PAGAMENTO\"] = df[\"NUMERO_AUTORIZACAO_PAGAMENTO\"].astype(\"object\")",
"_____no_output_____"
],
[
"# Exibir tipos das colunas\n\ndf.dtypes",
"_____no_output_____"
],
[
"# Consultar linhas com valores faltantes\n\ndf.isnull().sum()",
"_____no_output_____"
],
[
"# Exibir amostra\n\ndf.sample(10)",
"_____no_output_____"
],
[
"# Criar nova coluna que vai receber o mês de pagamento\n\ndf[\"MES_PAGAMENTO\"] = df[\"DATA_PAGAMENTO\"].dt.month",
"_____no_output_____"
],
[
"# Exibir amostra\n\ndf.sample(10)",
"_____no_output_____"
],
[
"# Conveter saída para coluna (VALOR_PAGAMENTO) com o tipo float\n\npd.options.display.float_format = 'R${:,.2f}'.format",
"_____no_output_____"
],
[
"# Retornar total pago agrupado por mês e por tipo de despesa\n\n# df.groupby([df[\"MES_PAGAMENTO\"], \"TIPO_DESPESA\"])[\"VALOR_PAGAMENTO\"].sum().reset_index()\n\n# Outra forma\ndf.groupby(['MES_PAGAMENTO', \"TIPO_DESPESA\"]).agg({\"VALOR_PAGAMENTO\":\"sum\"}).reset_index()",
"_____no_output_____"
],
[
"# Retornar maior valor pago a um credor agrupado por mês\n\n# df.groupby(df[\"MES_PAGAMENTO\"])[\"VALOR_PAGAMENTO\"].max()\n\ndf.groupby([\"MES_PAGAMENTO\"]).agg({\"VALOR_PAGAMENTO\":\"max\"}).reset_index()",
"_____no_output_____"
],
[
"# Salvar dataframe em um arquivo CSV\n\ndf.to_csv('../data/pagamento_exercicio_2021_jan_a_jun_governo_pb.csv', index=False)",
"_____no_output_____"
],
[
"# Salvar dataframe no banco de dados\n\nfrom sqlalchemy import create_engine\n\ncon = create_engine(\"mysql+pymysql://root:mysql@localhost:3307/db_governo_pb\",\n encoding=\"utf-8\")\ndf.to_sql('tb_pagamento_exercicio_2021', con, index = False, if_exists = 'replace', method = 'multi', chunksize=10000)",
"_____no_output_____"
]
],
[
[
"## Charts for exploratory analysis and/or decision-making",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nplt.style.use(\"seaborn\")",
"_____no_output_____"
],
[
"# Gráfico com o total pago aos credores por mês (Janeiro a Junho)\n\ndf.groupby(df['MES_PAGAMENTO'])['VALOR_PAGAMENTO'].sum().plot.bar(title = 'Total Pago por Mês', color = 'blue')\nplt.xlabel('MÊS')\nplt.ylabel('RECEITA');",
"_____no_output_____"
],
[
"# Gráfico com o valor máximo pago a um credor por mês (Janeiro a Junho)\n\ndf.groupby([\"MES_PAGAMENTO\"]).agg({\"VALOR_PAGAMENTO\":\"max\"}).plot.bar(title = 'Maior valor pago a um credor po mês', color = 'green')\nplt.xlabel('MÊS')\nplt.ylabel('VALOR');",
"_____no_output_____"
],
[
"# Gráfico de linha exibindo a soma dos pagamentos a credores no decorrer dos meses\n\ndf.groupby([\"MES_PAGAMENTO\"]).agg({\"VALOR_PAGAMENTO\":\"sum\"}).plot(title = 'Total de pagamentos por mês aos credores')\nplt.xlabel('MÊS')\nplt.ylabel('TOTAL PAGO')\nplt.legend();",
"_____no_output_____"
],
[
"# Gráfico com o valor pago a credores agrupados por tipo de despesa\n\ndf.groupby([\"TIPO_DESPESA\"]).agg({\"VALOR_PAGAMENTO\":\"sum\"}).plot.bar(title = 'Soma dos valores pagos por tipo de despesa', color = 'gray')\nplt.xlabel('TIPO DE DESPESA')\nplt.ylabel('VALOR');",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7eee7355e1737763300cf1df828956273307e27 | 13,659 | ipynb | Jupyter Notebook | Chapter 2/8_Experimenting with different optimizers.ipynb | Anacoder1/Python_DeepLearning_Cookbook | 0a26b3948930333a357193c979b18269e1772651 | [
"MIT"
] | null | null | null | Chapter 2/8_Experimenting with different optimizers.ipynb | Anacoder1/Python_DeepLearning_Cookbook | 0a26b3948930333a357193c979b18269e1772651 | [
"MIT"
] | null | null | null | Chapter 2/8_Experimenting with different optimizers.ipynb | Anacoder1/Python_DeepLearning_Cookbook | 0a26b3948930333a357193c979b18269e1772651 | [
"MIT"
] | 2 | 2019-11-29T02:23:59.000Z | 2020-11-30T06:49:29.000Z | 31.256293 | 238 | 0.390219 | [
[
[
"**Load the libraries:**",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\n\nfrom sklearn.model_selection import train_test_split\n\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout\nfrom keras.callbacks import EarlyStopping, ModelCheckpoint\nfrom keras.optimizers import SGD, Adadelta, Adam, RMSprop, Adagrad, Nadam, Adamax\n\nSEED = 2017",
"Using TensorFlow backend.\n"
]
],
[
[
"**Import the dataset and extract the target variable:**",
"_____no_output_____"
]
],
[
[
"data = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv',\n sep = ';')\n\ny = data['quality']\nX = data.drop(['quality'], axis = 1)",
"_____no_output_____"
]
],
[
[
"**Split the dataset for training, validation and testing:**",
"_____no_output_____"
]
],
[
[
"X_train, X_test, y_train, y_test = train_test_split(X, y, \n test_size = 0.2,\n random_state = SEED)\n\nX_train, X_val, y_train, y_val = train_test_split(X_train, y_train,\n test_size = 0.2,\n random_state = SEED)",
"_____no_output_____"
]
],
[
[
"**Define a function that creates the model:**",
"_____no_output_____"
]
],
[
[
"def create_model(opt):\n model = Sequential()\n model.add(Dense(100, input_dim = X_train.shape[1],\n activation = 'relu'))\n model.add(Dense(50, activation = 'relu'))\n model.add(Dense(25, activation = 'relu'))\n model.add(Dense(10, activation = 'relu'))\n model.add(Dense(1, activation = 'linear'))\n return model",
"_____no_output_____"
]
],
[
[
"**Create a function that defines callbacks we will be using during training:**",
"_____no_output_____"
]
],
[
[
"def create_callbacks(opt):\n callbacks = [\n EarlyStopping(monitor = 'val_acc', patience = 200,\n verbose = 2),\n ModelCheckpoint('optimizers_best_' + opt + '.h5',\n monitor = 'val_acc',\n save_best_only = True,\n verbose = 0)\n ]\n return callbacks",
"_____no_output_____"
]
],
[
[
"**Create a dict of the optimizers we want to try:**",
"_____no_output_____"
]
],
[
[
"opts = dict({\n 'sgd': SGD(),\n 'sgd-0001': SGD(lr = 0.0001, decay = 0.00001),\n 'adam': Adam(),\n 'adadelta': Adadelta(),\n 'rmsprop': RMSprop(),\n 'rmsprop-0001': RMSprop(lr = 0.0001),\n 'nadam': Nadam(),\n 'adamax': Adamax()\n})",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.\n"
]
],
[
[
"**Train our networks and store results:**",
"_____no_output_____"
]
],
[
[
"batch_size = 128\nn_epochs = 1000\n\nresults = []\n\n# Loop through the optimizers\nfor opt in opts:\n model = create_model(opt)\n callbacks = create_callbacks(opt)\n model.compile(loss = 'mse', \n optimizer = opts[opt],\n metrics = ['accuracy'])\n hist = model.fit(X_train.values, y_train, \n batch_size = batch_size,\n epochs = n_epochs,\n validation_data = (X_val.values, y_val),\n verbose = 0,\n callbacks = callbacks)\n \n best_epoch = np.argmax(hist.history['val_acc'])\n best_acc = hist.history['val_acc'][best_epoch]\n best_model = create_model(opt)\n \n # Load the model weights with the highest validation accuracy\n best_model.load_weights('optimizers_best_' + opt + '.h5')\n best_model.compile(loss = 'mse',\n optimizer = opts[opt],\n metrics = ['accuracy'])\n score = best_model.evaluate(X_test.values, y_test, verbose = 0)\n results.append([opt, best_epoch, best_acc, score[1]])",
"Epoch 00201: early stopping\nEpoch 00414: early stopping\nEpoch 00625: early stopping\nEpoch 00373: early stopping\nEpoch 00413: early stopping\nEpoch 00230: early stopping\nEpoch 00269: early stopping\nEpoch 00424: early stopping\n"
]
],
[
[
"**Compare the results:**",
"_____no_output_____"
]
],
[
[
"res = pd.DataFrame(results)\n\nres.columns = ['optimizer', 'epochs', 'val_accuracy', 'test_accuracy']\nres",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7eef35004db6c11c7ab9cce5b2df42781d92bc5 | 182,867 | ipynb | Jupyter Notebook | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 | 63505d9b8133f80330fe92a74b7641066dba420c | [
"MIT"
] | 2 | 2020-11-18T19:29:20.000Z | 2021-09-09T13:52:29.000Z | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 | 63505d9b8133f80330fe92a74b7641066dba420c | [
"MIT"
] | null | null | null | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 | 63505d9b8133f80330fe92a74b7641066dba420c | [
"MIT"
] | 2 | 2020-11-18T19:39:31.000Z | 2021-11-17T07:49:09.000Z | 43.821471 | 4,668 | 0.534181 | [
[
[
"# Jupyter (IPython) Advanced Features\n---",
"_____no_output_____"
],
[
"Outline\n- Keyboard shortcuts\n- Magic\n- Accessing the underlying operating system\n- Using different languages inside single notebook\n- File magic\n- Using Jupyter more efficiently\n- Profiling\n- Output\n- Automation\n- Extensions\n- 'Big Data' Analysis\n \n\nSources: [IPython Tutorial](https://github.com/ipython/ipython-in-depth/blob/pycon-2019/1%20-%20Beyond%20Plain%20Python.ipynb), [Dataquest](https://www.dataquest.io/blog/advanced-jupyter-notebooks-tutorial/), and [Dataquest](https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/), [Alex Rogozhnikov Blog](http://arogozhnikov.github.io/2016/09/10/jupyter-features.html) [Toward Data Science](https://towardsdatascience.com/how-to-effortlessly-optimize-jupyter-notebooks-e864162a06ee)",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
],
[
"## Keyboard Shortcuts\n\nAs in the classic Notebook, you can navigate the user interface through keyboard shortcuts. You can find and customize the current list of keyboard shortcuts by selecting the Advanced Settings Editor item in the Settings menu, then selecting Keyboard Shortcuts in the Settings tab.\n\n### Shortcut Keys for JupyterLab\n\nWhile working with any tool, it helps to know the shortcut keys for the most frequent tasks. It increases your productivity and makes working more comfortable. I have listed some of the shortcuts which I use frequently while working in JupyterLab. Hopefully it will be useful for others too. You can also check the full list of shortcuts by accessing the __commands tab__, which you will find below Files on the left-hand side.\n\n1. **ESC** takes you into command mode, while **ENTER** takes you into edit mode.\n2. **A** inserts a cell above the currently selected cell. Before using this, make sure that you are in command mode (by pressing ESC).\n3. **B** inserts a cell below the currently selected cell. Before using this, make sure that you are in command mode (by pressing ESC).\n4. **D + D** = Pressing D twice in quick succession in command mode deletes the currently selected cell.\n5. JupyterLab gives you an option to change your cell into a Code cell, Markdown cell or Raw cell. You can use **M** to change the current cell to a markdown cell, **Y** to change it to a code cell and **R** to change it to a raw cell.\n6. **CTRL + B** = JupyterLab has a two-column design: one column is for the launcher or code, and another column is for the file browser etc. To increase workspace while writing code, we can close it. **CTRL + B** is the shortcut for toggling the file browser column in JupyterLab.\n7. **SHIFT + M** = Merges multiple selected cells into one cell.\n8. **CTRL + SHIFT + -** = Splits the current cell into two cells at the cursor position.\n9. **SHIFT + J** or **SHIFT + DOWN** = Selects the next cell in a downward direction. It helps in making multiple selections of cells.\n10. **SHIFT + K** or **SHIFT + UP** = Selects the next cell in an upward direction. It helps in making multiple selections of cells.\n11. **CTRL + /** = Comments or uncomments any line in JupyterLab. For this to work, you don't even need to select the whole line; it will comment or uncomment the line where your cursor is. If you want to do it for more than one line, you will need to select all the lines first and then use this shortcut.\n\nA PDF cheat sheet:\n- https://blog.ja-ke.tech/2019/01/20/jupyterlab-shortcuts.html\n- https://github.com/Jakeler/jupyter-shortcuts",
"_____no_output_____"
],
[
"## Magics\n\n---",
"_____no_output_____"
],
[
"Magics turn simple Python into *magical Python*; they are the key to the power of IPython.\n\nMagic functions are prefixed by % or %%, and typically take their arguments without parentheses, quotes or even commas for convenience. Line magics take a single % and cell magics are prefixed with two %%.",
"_____no_output_____"
],
[
"#### What is Magic??? Information about IPython's 'magic' % functions.",
"_____no_output_____"
]
],
[
[
"%magic",
"_____no_output_____"
]
],
[
[
"#### List available python magics",
"_____no_output_____"
]
],
[
[
"%lsmagic",
"_____no_output_____"
]
],
[
[
"#### %env\nYou can manage the environment variables of your notebook without restarting the Jupyter server process. Some libraries (like theano) use environment variables to control behavior, and %env is the most convenient way to set them.",
"_____no_output_____"
]
],
[
[
"# %env - without arguments lists environmental variables\n%env OMP_NUM_THREADS=4",
"_____no_output_____"
]
],
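[
[
"# Added example: %env with just a variable name reads the value back, confirming the\n# setting above (while %env with no arguments lists all environment variables).\n%env OMP_NUM_THREADS",
"_____no_output_____"
]
],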
[
[
"# Accessing the underlying operating system\n\n---",
"_____no_output_____"
],
[
"## Executing shell commands\n\nYou can call any shell command. This is particularly useful for managing your virtual environment.",
"_____no_output_____"
]
],
[
[
"!pip install numpy",
"Requirement already satisfied: numpy in /Users/squiresrb/opt/anaconda3/lib/python3.8/site-packages (1.19.2)\n"
],
[
"!pip list | grep Theano",
"_____no_output_____"
]
],
[
[
"## Adding packages can also be done using...",
"_____no_output_____"
],
[
"%conda install numpy",
"_____no_output_____"
],
[
"%pip install numpy",
"_____no_output_____"
],
[
"will attempt to install packages in the current environment.",
"_____no_output_____"
]
],
[
[
"!pwd",
"/Users/squiresrb/Documents/GitHub/jupyter_training_2020/notebooks/2-Jupyter\n"
],
[
"%pwd",
"_____no_output_____"
],
[
"pwd",
"_____no_output_____"
],
[
"files = !ls .\nprint(\"files in notebooks directory:\")\nprint(files)",
"files in notebooks directory:\n['2-1-Jupyter-ecosystem.ipynb', '2-1-jupyter-ecosystem.slides.html', '2-10-jupyter-code-script-of-scripts.ipynb', '2-11-Advanced-jupyter.ipynb', '2-2-jupyter-get-in-and-out.ipynb', '2-3-jupyter-notebook-basics.ipynb', '2-4-jupyter-markdown.ipynb', '2-5-jupyter-code-python.ipynb', '2-6-jupyter-code-r.ipynb', '2-7-jupyter-command-line.ipynb', '2-8-jupyter-magics.ipynb', '2-9-jupyter-sharing', '2-Jupyter-help.ipynb', 'Advanced_jupyter.ipynb', 'big-data-analysis-jupyter.ipynb', 'foo.py', 'images', 'jupyter-advanced.ipynb', 'matplotlib-anatomy.ipynb', 'pythoncode.py']\n"
],
[
"!echo $files",
"[2-1-Jupyter-ecosystem.ipynb, 2-1-jupyter-ecosystem.slides.html, 2-10-jupyter-code-script-of-scripts.ipynb, 2-11-Advanced-jupyter.ipynb, 2-2-jupyter-get-in-and-out.ipynb, 2-3-jupyter-notebook-basics.ipynb, 2-4-jupyter-markdown.ipynb, 2-5-jupyter-code-python.ipynb, 2-6-jupyter-code-r.ipynb, 2-7-jupyter-command-line.ipynb, 2-8-jupyter-magics.ipynb, 2-9-jupyter-sharing, 2-Jupyter-help.ipynb, Advanced_jupyter.ipynb, big-data-analysis-jupyter.ipynb, foo.py, images, jupyter-advanced.ipynb, matplotlib-anatomy.ipynb, pythoncode.py]\n"
],
[
"!echo {files[0].upper()}",
"2-1-JUPYTER-ECOSYSTEM.IPYNB\n"
]
],
[
[
"Note that all this is available even in multiline blocks:",
"_____no_output_____"
]
],
[
[
"import os\nfor i,f in enumerate(files):\n if f.endswith('ipynb'):\n !echo {\"%02d\" % i} - \"{os.path.splitext(f)[0]}\"\n else:\n print('--')",
"00 - 2-1-Jupyter-ecosystem\n--\n02 - 2-10-jupyter-code-script-of-scripts\n03 - 2-11-Advanced-jupyter\n04 - 2-2-jupyter-get-in-and-out\n05 - 2-3-jupyter-notebook-basics\n06 - 2-4-jupyter-markdown\n07 - 2-5-jupyter-code-python\n08 - 2-6-jupyter-code-r\n09 - 2-7-jupyter-command-line\n10 - 2-8-jupyter-magics\n--\n12 - 2-Jupyter-help\n13 - Advanced_jupyter\n14 - big-data-analysis-jupyter\n--\n--\n17 - jupyter-advanced\n18 - matplotlib-anatomy\n--\n"
]
],
[
[
"## I could take the same list with a bash command\n\nbecause magics and bash calls return python variables:",
"_____no_output_____"
]
],
[
[
"names = !ls ../images/ml_demonstrations/*.png\nnames[:5]",
"_____no_output_____"
]
],
[
[
"## Suppress output of last line\n\nSometimes the output isn't needed, so we can either put a `pass` instruction on a new line or a semicolon at the end.",
"_____no_output_____"
],
[
"%conda install matplotlib",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy",
"_____no_output_____"
],
[
"# if you don't put semicolon at the end, you'll have output of function printed\n\nplt.hist(numpy.linspace(0, 1, 1000)**1.5);",
"_____no_output_____"
]
],
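[
[
"# Added example of the other option mentioned above: `pass` on the last line\n# also suppresses the output, just like the trailing semicolon.\nplt.hist(numpy.linspace(0, 1, 1000)**1.5)\npass",
"_____no_output_____"
]
],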
[
[
"# Using different languages inside single notebook\n\n---\n\nIf you miss those other languages, you can use other computational kernels:\n\n- %%python2\n- %%python3\n- %%ruby\n- %%perl\n- %%bash\n- %%R\n\nThis is possible, but you'll need to set up the corresponding kernel first.",
"_____no_output_____"
]
],
[
[
"# %%ruby\n# puts 'Hi, this is ruby.'",
"_____no_output_____"
],
[
"%%bash\necho 'Hi, this is bash.'",
"Hi, this is bash.\n"
]
],
[
[
"## Running R code in Jupyter notebook",
"_____no_output_____"
],
[
"#### Installing R kernel\n\nEasy Option: Installing the R Kernel Using Anaconda\nIf you used Anaconda to set up your environment, getting R working is extremely easy. Just run the below in your terminal:",
"_____no_output_____"
]
],
[
[
"# %conda install -c r r-essentials",
"_____no_output_____"
]
],
[
[
"#### Running R and Python in the same notebook.\n\nThe best solution to this is to install rpy2 (requires a working version of R as well), which can be easily done with pip:",
"_____no_output_____"
]
],
[
[
"%pip install rpy2",
"Collecting rpy2\n Downloading rpy2-3.3.6.tar.gz (179 kB)\n\u001b[K |████████████████████████████████| 179 kB 465 kB/s eta 0:00:01\n\u001b[31m ERROR: Command errored out with exit status 1:\n command: /Users/squiresrb/opt/anaconda3/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'/private/var/folders/d7/8vn6rd1d6f37gtgy_13h3b95mx3fmv/T/pip-install-79ynf4cx/rpy2/setup.py'\"'\"'; __file__='\"'\"'/private/var/folders/d7/8vn6rd1d6f37gtgy_13h3b95mx3fmv/T/pip-install-79ynf4cx/rpy2/setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' egg_info --egg-base /private/var/folders/d7/8vn6rd1d6f37gtgy_13h3b95mx3fmv/T/pip-pip-egg-info-8nhbgt8z\n cwd: /private/var/folders/d7/8vn6rd1d6f37gtgy_13h3b95mx3fmv/T/pip-install-79ynf4cx/rpy2/\n Complete output (2 lines):\n cffi mode: CFFI_MODE.ANY\n Error: rpy2 in API mode cannot be built without R in the PATH or R_HOME defined. Correct this or force ABI mode-only by defining the environment variable RPY2_CFFI_MODE=ABI\n ----------------------------------------\u001b[0m\n\u001b[31mERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.\u001b[0m\n\u001b[?25hNote: you may need to restart the kernel to use updated packages.\n"
]
],
[
[
"You can then use the two languages together, and even pass variables inbetween:",
"_____no_output_____"
]
],
[
[
"%load_ext rpy2.ipython",
"_____no_output_____"
],
[
"%R require(ggplot2)",
"_____no_output_____"
],
[
"import pandas as pd\ndf = pd.DataFrame({\n 'Letter': ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c'],\n 'X': [4, 3, 5, 2, 1, 7, 7, 5, 9],\n 'Y': [0, 4, 3, 6, 7, 10, 11, 9, 13],\n 'Z': [1, 2, 3, 1, 2, 3, 1, 2, 3]\n })",
"_____no_output_____"
],
[
"%%R -i df\nggplot(data = df) + geom_point(aes(x = X, y= Y, color = Letter, size = Z))",
"_____no_output_____"
]
],
[
[
"## Writing functions in cython (or fortran)\n\nSometimes the speed of numpy is not enough and I need to write some fast code.\nIn principle, you can compile a function into a dynamic library and write python wrappers...\n\nBut it is much better when this boring part is done for you, right?\n\nYou can write functions in cython or fortran and use those directly from python code.\n\nFirst you'll need to install:\n```\n%pip install cython \n```",
"_____no_output_____"
]
],
[
[
"%pip install cython",
"_____no_output_____"
],
[
"%load_ext Cython",
"_____no_output_____"
],
[
"%%cython\ndef myltiply_by_2(float x):\n return 2.0 * x",
"_____no_output_____"
],
[
"myltiply_by_2(23.)",
"_____no_output_____"
]
],
[
[
"I also should mention that there are different jitter systems which can speed up your python code.\nMore examples in [my notebook](http://arogozhnikov.github.io/2015/09/08/SpeedBenchmarks.html). \n",
"_____no_output_____"
],
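[
"A hedged aside (not in the original post): one popular JIT option is numba, which compiles a plain-Python loop with a single decorator. A minimal sketch, assuming numba is installed:\n\n```python\nimport numba\nimport numpy as np\n\n@numba.njit\ndef sum_of_squares(values):\n    # compiled on first call; later calls run as machine code\n    total = 0.0\n    for v in values:\n        total += v * v\n    return total\n\nsum_of_squares(np.arange(1_000_000, dtype=np.float64))\n```",
"_____no_output_____"
],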
[
"For more information see the IPython help at: [Cython](https://github.com/ipython/ipython-in-depth/blob/pycon-2019/6%20-%20Cross-Language-Integration.ipynb)",
"_____no_output_____"
],
[
"# File magic",
"_____no_output_____"
],
[
"%%writefile Export the contents of a cell",
"_____no_output_____"
]
],
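[
[
"For instance, a minimal illustration (the file name is just a placeholder): the cell magic writes the body of the cell to the named file.\n\n```python\n%%writefile example_script.py\nprint('this text was written from a notebook cell')\n```",
"_____no_output_____"
]
],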
[
[
"%%writefile?",
"_____no_output_____"
]
],
[
[
"`%pycat` ill output in the pop-up window:\n```\nShow a syntax-highlighted file through a pager.\n\nThis magic is similar to the cat utility, but it will assume the file\nto be Python source and will show it with syntax highlighting.\n\nThis magic command can either take a local filename, an url,\nan history range (see %history) or a macro as argument ::\n\n%pycat myscript.py\n%pycat 7-27\n%pycat myMacro\n%pycat http://www.example.com/myscript.py\n```",
"_____no_output_____"
],
[
"## %load \nloading code directly into cell. You can pick local file or file on the web.\n\nAfter uncommenting the code below and executing, it will replace the content of cell with contents of file.\n",
"_____no_output_____"
]
],
[
[
"# %load https://matplotlib.org/_downloads/f7171577b84787f4b4d987b663486a94/anatomy.py",
"_____no_output_____"
]
],
[
[
"## %run to execute python code\n\n%run can execute python code from .py files — this is a well-documented behavior. \n\nBut it also can execute other jupyter notebooks! Sometimes it is quite useful.\n\nNB. %run is not the same as importing python module.",
"_____no_output_____"
]
],
[
[
"# this will execute all the code cells from different notebooks\n%run ./matplotlib-anatomy.ipynb",
"_____no_output_____"
]
],
[
[
"# Using Jupyter more efficiently\n\n---",
"_____no_output_____"
],
[
"## Store Magic - %store: lazy passing data between notebooks",
"_____no_output_____"
],
[
"%store lets you store your macro and use it across all of your Jupyter Notebooks.",
"_____no_output_____"
]
],
[
[
"data = 'this is the string I want to pass to different notebook'\n%store data\ndel data # deleted variable",
"_____no_output_____"
],
[
"# in second notebook I will use:\n%store -r data\nprint(data)",
"_____no_output_____"
]
],
[
[
"## %who: analyze variables of global scope",
"_____no_output_____"
]
],
[
[
"%whos",
"_____no_output_____"
],
[
"# pring names of string variables\n%who str",
"_____no_output_____"
]
],
[
[
"## Multiple cursors\n\nSince recently jupyter supports multiple cursors (in a single cell), just like sublime ot intelliJ! __Alt + mouse selection__ for multiline selection and __Ctrl + mouse clicks__ for multicursors.\n\n<img src='./images/jupyter/multi-cursor.gif' />\n\nGif taken from http://swanintelligence.com/multi-cursor-in-jupyter.html",
"_____no_output_____"
],
[
"## Timing ",
"_____no_output_____"
],
[
"When you need to measure time spent or find the bottleneck in the code, ipython comes to the rescue.",
"_____no_output_____"
],
[
"%%time\nimport time\ntime.sleep(2) # sleep for two seconds",
"_____no_output_____"
]
],
[
[
"# measure small code snippets with timeit !\nimport numpy\n%timeit numpy.random.normal(size=100)",
"_____no_output_____"
],
[
"%%writefile pythoncode.py\n\nimport numpy\ndef append_if_not_exists(arr, x):\n if x not in arr:\n arr.append(x)\n \ndef some_useless_slow_function():\n arr = list()\n for i in range(10000):\n x = numpy.random.randint(0, 10000)\n append_if_not_exists(arr, x)",
"_____no_output_____"
],
[
"# shows highlighted source of the newly-created file\n%pycat pythoncode.py",
"_____no_output_____"
],
[
"from pythoncode import some_useless_slow_function, append_if_not_exists",
"_____no_output_____"
]
],
[
[
"## Hiding code or output",
"_____no_output_____"
],
[
"- Click on the blue vertical bar or line to the left to collapse code or output",
"_____no_output_____"
],
[
"## Commenting and uncommenting a block of code\n\nYou might want to add new lines of code and comment out the old lines while you’re working. This is great if you’re improving the performance of your code or trying to debug it.\n- First, select all the lines you want to comment out.\n- Next hit cmd + / to comment out the highlighted code!",
"_____no_output_____"
],
[
"## Pretty Print all cell outputs\n\nNormally only the last output in the cell will be printed. For everything else, you have to manually add print(), which is fine but not super convenient. You can change that by adding this at the top of the notebook:",
"_____no_output_____"
]
],
[
[
"from IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"",
"_____no_output_____"
]
],
[
[
"# Profiling: %prun, %lprun, %mprun\n---",
"_____no_output_____"
],
[
"See a much longer explination of profiling and timeing in Jake Vanderplas' Python Data Science Handbook: \nhttps://jakevdp.github.io/PythonDataScienceHandbook/01.07-timing-and-profiling.html",
"_____no_output_____"
]
],
[
[
"# shows how much time program spent in each function\n%prun some_useless_slow_function()",
"_____no_output_____"
]
],
[
[
"Example of output:\n```\n26338 function calls in 0.713 seconds\n\n Ordered by: internal time\n\n ncalls tottime percall cumtime percall filename:lineno(function)\n 10000 0.684 0.000 0.685 0.000 pythoncode.py:3(append_if_not_exists)\n 10000 0.014 0.000 0.014 0.000 {method 'randint' of 'mtrand.RandomState' objects}\n 1 0.011 0.011 0.713 0.713 pythoncode.py:7(some_useless_slow_function)\n 1 0.003 0.003 0.003 0.003 {range}\n 6334 0.001 0.000 0.001 0.000 {method 'append' of 'list' objects}\n 1 0.000 0.000 0.713 0.713 <string>:1(<module>)\n 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}\n```",
"_____no_output_____"
]
],
[
[
"# %load_ext memory_profiler ???",
"_____no_output_____"
],
[
"To profile memory, you can install and run pmrun\n\n# %pip install memory_profiler\n# %pip install line_profiler",
"_____no_output_____"
],
[
"# tracking memory consumption (show in the pop-up)\n# %mprun -f append_if_not_exists some_useless_slow_function()",
"_____no_output_____"
]
],
[
[
"Example of output:\n```\nLine # Mem usage Increment Line Contents\n================================================\n 3 20.6 MiB 0.0 MiB def append_if_not_exists(arr, x):\n 4 20.6 MiB 0.0 MiB if x not in arr:\n 5 20.6 MiB 0.0 MiB arr.append(x)\n```",
"_____no_output_____"
],
[
"**%lprun** is line profiling, but it seems to be broken for latest IPython release, so we'll manage without magic this time:",
"_____no_output_____"
],
[
"```python\nimport line_profiler\nlp = line_profiler.LineProfiler()\nlp.add_function(some_useless_slow_function)\nlp.runctx('some_useless_slow_function()', locals=locals(), globals=globals())\nlp.print_stats()\n```",
"_____no_output_____"
],
[
"## Debugging with %debug\n\nJupyter has own interface for [ipdb](https://docs.python.org/2/library/pdb.html). Makes it possible to go inside the function and investigate what happens there.\n\nThis is not pycharm and requires much time to adapt, but when debugging on the server this can be the only option (or use pdb from terminal).",
"_____no_output_____"
]
],
[
[
"#%%debug filename:line_number_for_breakpoint\n# Here some code that fails. This will activate interactive context for debugging",
"_____no_output_____"
]
],
[
[
"A bit easier option is `%pdb`, which activates debugger when exception is raised:",
"_____no_output_____"
]
],
[
[
"# %pdb\n\n# def pick_and_take():\n# picked = numpy.random.randint(0, 1000)\n# raise NotImplementedError()\n \n# pick_and_take()",
"_____no_output_____"
]
],
[
[
"# Output\n---",
"_____no_output_____"
],
[
"## [RISE](https://github.com/damianavila/RISE): presentations with notebook\n\nExtension by Damian Avila makes it possible to show notebooks as demonstrations. Example of such presentation: http://bollwyvl.github.io/live_reveal/#/7\n\nIt is very useful when you teach others e.g. to use some library.\n",
"_____no_output_____"
],
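[
"Installation is typically a one-liner (stated here as an assumption about your environment; check the RISE docs for the variant matching your Jupyter version):\n\n```\n%pip install rise\n```",
"_____no_output_____"
],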
[
"## Jupyter output system\n\nNotebooks are displayed as HTML and the cell output can be HTML, so you can return virtually anything: video/audio/images. \n\nIn this example I scan the folder with images in my repository and show first five of them:",
"_____no_output_____"
]
],
[
[
"import os\nfrom IPython.display import display, Image\nnames = [f for f in os.listdir('../images/') if f.endswith('.png')]\nfor name in names[:5]:\n display(Image('../images/' + name, width=300))",
"_____no_output_____"
]
],
[
[
"## Write your posts in notebooks\n\nLike this one. Use `nbconvert` to export them to html.",
"_____no_output_____"
],
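[
"A typical export command looks like this (the notebook name is only a placeholder):\n\n```\n!jupyter nbconvert --to html my_post.ipynb\n```",
"_____no_output_____"
],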
[
"# [Jupyter-contrib extensions](https://github.com/ipython-contrib/jupyter_contrib_nbextensions)\n\nare installed with \n```\n!pip install https://github.com/ipython-contrib/jupyter_contrib_nbextensions/tarball/master\n!pip install jupyter_nbextensions_configurator\n!jupyter contrib nbextension install --user\n!jupyter nbextensions_configurator enable --user\n```\n\n<img src='./images/jupyter/nbextensions.png' />\n\nthis is a family of different extensions, including e.g. **jupyter spell-checker and code-formatter**, \nthat are missing in jupyter by default. ",
"_____no_output_____"
],
[
"## Reconnect to kernel\n\nLong before, when you started some long-taking process and at some point your connection to ipython server dropped, \nyou completely lost the ability to track the computations process (unless you wrote this information to file). So either you interrupt the kernel and potentially lose some progress, or you wait till it completes without any idea of what is happening.\n\n`Reconnect to kernel` option now makes it possible to connect again to running kernel without interrupting computations and get the newcoming output shown (but some part of output is already lost).",
"_____no_output_____"
],
[
"# Big data analysis\n\nA number of solutions are available for querying/processing large data samples: \n- [ipyparallel (formerly ipython cluster)](https://github.com/ipython/ipyparallel) is a good option for simple map-reduce operations in python. We use it in [rep](github.com/yandex/rep) to train many machine learning models in parallel\n- [pyspark](http://www.cloudera.com/documentation/enterprise/5-5-x/topics/spark_ipython.html)\n- spark-sql magic [%%sql](https://github.com/jupyter-incubator/sparkmagic)",
"_____no_output_____"
],
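[
"As a rough sketch of the first option (assuming a local cluster has already been started, e.g. with `ipcluster start -n 4`):\n\n```python\nimport ipyparallel as ipp\n\nrc = ipp.Client()  # connect to the running cluster\nview = rc.load_balanced_view()\nsquares = view.map_sync(lambda x: x * x, range(10))\nprint(squares)\n```",
"_____no_output_____"
],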
[
"Additional Resources:\n\n* IPython [built-in magics](https://ipython.org/ipython-doc/3/interactive/magics.html)\n* Nice [interactive presentation about jupyter](http://quasiben.github.io/dfwmeetup_2014/#/) by Ben Zaitlen\n* Advanced notebooks [part 1: magics](https://blog.dominodatalab.com/lesser-known-ways-of-using-notebooks/) and [part 2: widgets](https://blog.dominodatalab.com/interactive-dashboards-in-jupyter/)\n* [Profiling in python with jupyter](http://pynash.org/2013/03/06/timing-and-profiling/)\n* [4 ways to extend notebooks](http://mindtrove.info/4-ways-to-extend-jupyter-notebook/)\n* [IPython notebook tricks](https://www.quora.com/What-are-your-favorite-tricks-for-IPython-Notebook)\n* [Jupyter vs Zeppelin for big data](https://www.linkedin.com/pulse/comprehensive-comparison-jupyter-vs-zeppelin-hoc-q-phan-mba-)\n* [Making publication ready Python notebooks](http://blog.juliusschulz.de/blog/ultimate-ipython-notebook).\n* https://yoursdata.net/installing-and-configuring-jupyter-lab-on-windows/",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e7eefbe0e8b565b9cfa0de569f21745e1042f248 | 8,740 | ipynb | Jupyter Notebook | Operations_and_Expressions_in_Python.ipynb | michaelll22/CPEN-21A-ECE-2-2 | be35c2f8e0ee6446c377bafcc79b03caaab56fe7 | [
"Apache-2.0"
] | null | null | null | Operations_and_Expressions_in_Python.ipynb | michaelll22/CPEN-21A-ECE-2-2 | be35c2f8e0ee6446c377bafcc79b03caaab56fe7 | [
"Apache-2.0"
] | null | null | null | Operations_and_Expressions_in_Python.ipynb | michaelll22/CPEN-21A-ECE-2-2 | be35c2f8e0ee6446c377bafcc79b03caaab56fe7 | [
"Apache-2.0"
] | null | null | null | 21.421569 | 261 | 0.371396 | [
[
[
"<a href=\"https://colab.research.google.com/github/michaelll22/CPEN-21A-ECE-2-2/blob/main/Operations_and_Expressions_in_Python.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"##Boolean Operators",
"_____no_output_____"
]
],
[
[
"a = 10\nb = 9\nc = 8\n\nprint (10 > 9)\nprint (10 == 9)\nprint (10 < 9)\n\nprint (a)\n\nprint (a > b)\nc = print (a > b)\n\nc",
"True\nFalse\nFalse\n10\nTrue\nTrue\n"
],
[
"##true\nprint(bool(\"Hello\"))\nprint(bool(15))\nprint(bool(True))\nprint(bool(1))\n\n##false\nprint(bool(False))\nprint(bool(0))\nprint(bool(None))\nprint(bool([]))",
"True\nTrue\nTrue\nFalse\nTrue\nFalse\nFalse\nFalse\n"
],
[
"def myFunction(): \n return True\n\nprint(myFunction())",
"True\n"
],
[
"def myFunction():\n return True\n\nif myFunction():\n print(\"Yes/True\")\nelse:\n print(\"No/Flase\")\n",
"Yes\n"
],
[
"print(10>9)\n\na = 6 #0000 0110\nb = 7 #0000 0111\n\nprint(a == b)\nprint(a != b)",
"True\nFalse\nTrue\n"
]
],
[
[
"##Python Operators",
"_____no_output_____"
]
],
[
[
"print(10 + 5)\nprint(10 - 5)\nprint(10 * 5)\nprint(10 / 5)\nprint(10 % 5)\nprint(10 // 3)\nprint(10 ** 2)",
"15\n5\n50\n2.0\n0\n3\n100\n"
]
],
[
[
"##Bitwise Operators",
"_____no_output_____"
]
],
[
[
"a = 60 #0011 1100\nb = 13 \n\nprint (a^b)\nprint (~a)\nprint (a<<2)\nprint (a>>2) #0000 1111",
"49\n-61\n240\n15\n"
]
],
[
[
"##Assignment Operator",
"_____no_output_____"
]
],
[
[
"x = 2\nx += 3 #Same As x = x+3\nprint(x)\nx",
"5\n"
]
],
[
[
"##Logical Operators",
"_____no_output_____"
]
],
[
[
"a = 5\nb = 6\n\nprint(a>b and a==a)\nprint(a<b or b==a)",
"False\nTrue\n"
]
],
[
[
"##Identity Operator",
"_____no_output_____"
]
],
[
[
"print(a is b)\nprint(a is not b)",
"False\nTrue\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7ef112f2faeedbfefa5be57a962761191496e03 | 46,184 | ipynb | Jupyter Notebook | Projects/3_Adversarial Search/scratchpad/n03_book_creation.ipynb | mtasende/artificial-intelligence | 776741c61383dbc700c65cd8b1ad3d2dff8b4a8e | [
"MIT"
] | 2 | 2019-06-06T06:39:43.000Z | 2019-06-26T08:02:29.000Z | Projects/3_Adversarial Search/scratchpad/n03_book_creation.ipynb | mtasende/artificial-intelligence | 776741c61383dbc700c65cd8b1ad3d2dff8b4a8e | [
"MIT"
] | null | null | null | Projects/3_Adversarial Search/scratchpad/n03_book_creation.ipynb | mtasende/artificial-intelligence | 776741c61383dbc700c65cd8b1ad3d2dff8b4a8e | [
"MIT"
] | null | null | null | 33.933872 | 1,465 | 0.499675 | [
[
[
"import matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport sys\nfrom time import time\n\n%pylab inline\npylab.rcParams['figure.figsize'] = (20.0, 10.0)\n\n%load_ext autoreload\n%autoreload 2\n\nsys.path.append('..')\n\nimport isolation\nimport sample_players\nimport run_match\nimport my_baseline_player as custom\nimport book as b",
"Populating the interactive namespace from numpy and matplotlib\nThe autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
]
],
[
[
"## I estimate 10s per game. 100 starting positions, 100 secondary starting positions, then 10000 openings. 4 threads, and symmetries that produce x4 data. If I want 12 points per opening, then that would be:",
"_____no_output_____"
]
],
[
[
"estimated_seconds = 10000 * 12 * 10/ (4 * 4)\nestimated_hours = estimated_seconds / 3600\nprint(estimated_hours)",
"20.833333333333332\n"
]
],
[
[
"### The plan is as follows:\n - Create a book (or load previously saved)\n - for each starting action for player 1 (100) and each starting action for player 2 (100) run 3 experiments (DETERMINISTIC BOOK FILLING).\n - Run epsilon-greedy algorithm to make a STOCHASTIC BOOK FILLING (using the opening book up to its depth [1-epsilon of the time]). Reduce epsilon exponentially to zero.",
"_____no_output_____"
]
],
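[
[
"A minimal sketch of the epsilon-greedy selection mentioned in the plan (a hypothetical helper, not part of the project code): with probability epsilon a random opening action is explored, otherwise the action with the best book count is exploited.\n\n```python\nimport random\n\ndef epsilon_greedy_action(state, actions, book, epsilon):\n    # explore with probability epsilon, otherwise exploit the book counts\n    if random.random() < epsilon:\n        return random.choice(actions)\n    return max(actions, key=lambda a: book.get((state, a), 0))\n```",
"_____no_output_____"
]
],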
[
[
"book = b.load_latest_book(depth=4)",
"_____no_output_____"
],
[
"type(book)",
"_____no_output_____"
],
[
"sum(abs(value) for value in book.values())",
"_____no_output_____"
],
[
"#book # book -> {(state, action): counts}",
"_____no_output_____"
],
[
"agent_names = ('CustomPlayer1', 'CustomPlayer2')\nagent1 = isolation.Agent(custom.CustomPlayer, agent_names[0])\nagent2 = isolation.Agent(custom.CustomPlayer, agent_names[1])\nagents = (agent1, agent2)\n\nstate = isolation.isolation.Isolation()\ntime_limit = 150\nmatch_id = 0\n\ntic = time.time()\nwinner, game_history, match_id = isolation.play((agents,\n state,\n time_limit,\n match_id))\ntoc = time.time()\nprint('Elapsed time: {}'.format((toc-tic)))",
"Elapsed time: 11.986933946609497\n"
],
[
"root = isolation.isolation.Isolation()\nopening_states = list(b.get_full_states(root, depth=2))\nprint(type(opening_states))\nprint(len(opening_states))",
"<class 'list'>\n9801\n"
],
[
"len([s for s in opening_states if s.ply_count==1])",
"_____no_output_____"
],
[
"[s for s in opening_states if s.ply_count==0]",
"_____no_output_____"
],
[
"99*99",
"_____no_output_____"
],
[
"opening_states[0]",
"_____no_output_____"
]
],
[
[
"### Let's generate the corresponding matches",
"_____no_output_____"
]
],
[
[
"# Constant parameteres\ntime_limit = 150\ndepth = 4\nfull_search_depth = 2\nmatches_per_opening = 3\n\n# Create the agents that will play\nagent_names = ('CustomPlayer1', 'CustomPlayer2')\nagent1 = isolation.Agent(custom.CustomPlayer, agent_names[0])\nagent2 = isolation.Agent(custom.CustomPlayer, agent_names[1])\nagents = (agent1, agent2)\n\n# Get the initial states\nroot = isolation.isolation.Isolation()\nopening_states = list(b.get_full_states(root, depth=full_search_depth))\n\n# Generate the matches\nmatches = [(agents, state, time_limit, match_id) \n for match_id, state in enumerate(opening_states)]\nmatches = matches * 3\nprint('Generated {} matches.'.format(len(matches)))\n\n# Create or load the book\nbook = b.load_latest_book(depth=depth)",
"Generated 29403 matches.\n"
],
[
"matches[0]",
"_____no_output_____"
],
[
"def active_player(state):\n return state.ply_count % 2",
"_____no_output_____"
],
[
"active_player(matches[0][1])",
"_____no_output_____"
],
[
"batch_size = 10\nx = list(range(10,45))\nbatches = [x[i*batch_size:(i+1)*batch_size] \n for i in range(len(x) // batch_size + (len(x) % batch_size != 0))]\nbatches",
"_____no_output_____"
],
[
"l = [1,2,3,445]",
"_____no_output_____"
],
[
"isinstance(l[3], int)",
"_____no_output_____"
],
[
"l.insert(0,45)\nl",
"_____no_output_____"
],
[
"from multiprocessing.pool import ThreadPool as Pool\nnum_processes = 1\nbatch_size = 10\n\n# Small test for debugging\nmatches = matches[:10]\n\nresults = []\npool = Pool(num_processes)\ntic = time.time()\nfor result in pool.imap_unordered(isolation.play, matches):\n results.append(result)\n winner, game_history, match_id = result\n print('Results for match {}: {} wins.'.format(match_id, winner.name))\n _, state, _, _ = matches[match_id]\n if state.locs[1] is not None:\n game_history.insert(0,state.locs[1])\n if state.locs[0] is not None:\n game_history.insert(0,state.locs[0])\n root = isolation.isolation.Isolation()\n print(game_history)\n b.process_game_history(root,\n game_history, \n book,\n agent_names.index(winner.name),\n active_player=state.ply_count % 2,\n depth=depth)\ntoc = time.time()\nprint('Elapsed time {} seconds.'.format((toc-tic)))",
"Results for match 0: CustomPlayer1 wins.\n[84, 56, <Action.ENE: 11>, <Action.NNE: 25>, <Action.SSW: -25>, <Action.ENE: 11>, <Action.ESE: -15>, <Action.SSE: -27>, <Action.WSW: -11>, <Action.WSW: -11>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.SSE: -27>, <Action.SSE: -27>, <Action.WSW: -11>, <Action.WSW: -11>, <Action.NNW: 27>, <Action.WSW: -11>, <Action.WSW: -11>, <Action.SSW: -25>, <Action.NNE: 25>, <Action.ENE: 11>, <Action.NNE: 25>, <Action.NNW: 27>, <Action.ESE: -15>, <Action.SSW: -25>, <Action.SSW: -25>, <Action.NNW: 27>, <Action.SSW: -25>, <Action.SSW: -25>, <Action.WSW: -11>, <Action.WNW: 15>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.SSE: -27>, <Action.NNE: 25>, <Action.SSE: -27>, <Action.ENE: 11>, <Action.ENE: 11>, <Action.SSE: -27>, <Action.ESE: -15>, <Action.NNE: 25>, <Action.NNW: 27>, <Action.WSW: -11>, <Action.ENE: 11>, <Action.WNW: 15>, <Action.SSW: -25>, <Action.WSW: -11>, <Action.WSW: -11>, <Action.SSW: -25>, <Action.WNW: 15>, <Action.SSW: -25>, <Action.WSW: -11>, <Action.ENE: 11>, <Action.WNW: 15>, <Action.SSW: -25>]\nState: Isolation(board=41523161203939122082683632224299007, ply_count=0, locs=(None, None)) \n Action: 84\n\n\nGot an int action: loc_sym = 82\nGot an int action: loc_sym = 32\nGot an int action: loc_sym = 30\nState: Isolation(board=41523161184596308968849565429000191, ply_count=1, locs=(84, None)) \n Action: 56\n\n\nGot an int action: loc_sym = 58\nGot an int action: loc_sym = 56\nGot an int action: loc_sym = 58\nState: Isolation(board=41523161184596308896791971391072255, ply_count=2, locs=(84, 56)) \n Action: 11\n\n\nState: Isolation(board=41523121570515051764623174619097087, ply_count=3, locs=(95, 56)) \n Action: 25\n\n\nResults for match 1: CustomPlayer2 wins.\n[18, 79, <Action.NNE: 25>, <Action.SSE: -27>, <Action.NNE: 25>, <Action.WSW: -11>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.ESE: -15>, <Action.WSW: -11>, <Action.SSW: -25>, <Action.WSW: -11>, <Action.WSW: -11>, <Action.SSW: -25>, <Action.WNW: 15>, <Action.NNW: 27>, <Action.NNE: 25>, <Action.ESE: -15>, <Action.WSW: -11>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.WSW: -11>, <Action.SSE: -27>, <Action.NNE: 25>, <Action.WSW: -11>, <Action.WSW: -11>, <Action.NNE: 25>, <Action.WSW: -11>, <Action.WSW: -11>, <Action.NNE: 25>, <Action.WNW: 15>, <Action.ENE: 11>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.ESE: -15>, <Action.ESE: -15>, <Action.ENE: 11>, <Action.ENE: 11>, <Action.SSE: -27>, <Action.SSW: -25>, <Action.SSE: -27>, <Action.WNW: 15>, <Action.ENE: 11>, <Action.WNW: 15>, <Action.SSW: -25>, <Action.SSW: -25>, <Action.SSE: -27>, <Action.SSW: -25>, <Action.WSW: -11>, <Action.SSW: -25>, <Action.NNE: 25>, <Action.ESE: -15>, <Action.WSW: -11>, <Action.NNW: 27>, <Action.ESE: -15>, <Action.SSW: -25>, <Action.NNE: 25>, <Action.ENE: 11>, <Action.WSW: -11>, <Action.SSE: -27>, <Action.WSW: -11>, <Action.ENE: 11>, <Action.NNE: 25>, <Action.ENE: 11>, <Action.ESE: -15>, <Action.SSW: -25>, <Action.NNE: 25>]\nState: Isolation(board=41523161203939122082683632224299007, ply_count=0, locs=(None, None)) \n Action: 18\n\n\nGot an int action: loc_sym = 18\nGot an int action: loc_sym = 96\nGot an int action: loc_sym = 96\nState: Isolation(board=41523161203939122082683632224036863, ply_count=1, locs=(18, None)) \n Action: 79\n\n\nGot an int action: loc_sym = 87\nGot an int action: loc_sym = 27\nGot an int action: loc_sym = 35\nState: Isolation(board=41523161203334659172876317636683775, ply_count=2, locs=(18, 79)) \n Action: 25\n\n\nState: Isolation(board=41523161203334659172867521543661567, 
ply_count=3, locs=(43, 79)) \n Action: -27\n\n\n\nResults for match 2: CustomPlayer1 wins.\n[81, 29, <Action.NNE: 25>, <Action.NNE: 25>, <Action.ESE: -15>, <Action.NNE: 25>, <Action.SSW: -25>, <Action.SSE: -27>, <Action.SSW: -25>, <Action.SSW: -25>, <Action.SSW: -25>, <Action.WNW: 15>, <Action.WNW: 15>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.ENE: 11>, <Action.WSW: -11>, <Action.SSW: -25>, <Action.NNE: 25>, <Action.SSW: -25>, <Action.WSW: -11>, <Action.WNW: 15>, <Action.NNE: 25>, <Action.WNW: 15>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.SSE: -27>, <Action.WSW: -11>, <Action.WSW: -11>, <Action.ESE: -15>, <Action.NNE: 25>, <Action.SSE: -27>, <Action.WSW: -11>, <Action.NNE: 25>, <Action.SSW: -25>, <Action.ESE: -15>, <Action.SSE: -27>, <Action.NNE: 25>, <Action.ENE: 11>, <Action.WNW: 15>, <Action.SSW: -25>, <Action.WNW: 15>, <Action.NNW: 27>, <Action.WNW: 15>, <Action.SSW: -25>, <Action.WSW: -11>, <Action.ENE: 11>, <Action.SSW: -25>, <Action.WNW: 15>, <Action.NNW: 27>, <Action.SSE: -27>, <Action.ENE: 11>, <Action.WSW: -11>, <Action.ESE: -15>, <Action.NNE: 25>, <Action.ENE: 11>, <Action.SSE: -27>, <Action.ESE: -15>, <Action.ENE: 11>, <Action.ENE: 11>]\nState: Isolation(board=41523161203939122082683632224299007, ply_count=0, locs=(None, None)) \n Action: 81\n\n\nGot an int action: loc_sym = 85\nGot an int action: loc_sym = 29\nGot an int action: loc_sym = 33\nState: Isolation(board=41523161201521270443454373874886655, ply_count=1, locs=(81, None)) \n Action: 29\n\n\nGot an int action: loc_sym = 33\nGot an int action: loc_sym = 81\nGot an int action: loc_sym = 85\nState: Isolation(board=41523161201521270443454373338015743, ply_count=2, locs=(81, 29)) \n Action: 25\n\n\nState: Isolation(board=41442031563106663761758584332871679, ply_count=3, locs=(106, 29)) \n Action: 25\n\n\nResults for match 3: CustomPlayer2 wins.\n[108, 2, <Action.ESE: -15>, <Action.NNE: 25>, <Action.ESE: -15>, <Action.NNE: 25>, <Action.SSW: -25>, <Action.WSW: -11>, <Action.SSW: -25>, <Action.SSW: -25>, <Action.SSW: -25>, <Action.WSW: -11>, <Action.WNW: 15>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.WSW: -11>, <Action.WSW: -11>, <Action.NNE: 25>, <Action.SSW: -25>, <Action.NNE: 25>, <Action.ESE: -15>, <Action.NNE: 25>, <Action.NNW: 27>, <Action.WSW: -11>, <Action.WNW: 15>, <Action.SSE: -27>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.ESE: -15>, <Action.WSW: -11>, <Action.WSW: -11>, <Action.WSW: -11>, <Action.NNE: 25>, <Action.SSW: -25>, <Action.WSW: -11>, <Action.ENE: 11>, <Action.NNE: 25>, <Action.SSW: -25>, <Action.NNW: 27>, <Action.SSW: -25>, <Action.SSW: -25>, <Action.WNW: 15>, <Action.ENE: 11>, <Action.NNE: 25>, <Action.WNW: 15>, <Action.NNE: 25>, <Action.SSE: -27>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.WSW: -11>, <Action.ESE: -15>, <Action.SSW: -25>, <Action.ENE: 11>, <Action.SSE: -27>, <Action.ESE: -15>, <Action.ESE: -15>, <Action.SSW: -25>, <Action.WSW: -11>, <Action.SSE: -27>, <Action.NNE: 25>, <Action.SSW: -25>, <Action.WNW: 15>, <Action.WNW: 15>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.ENE: 11>, <Action.WSW: -11>, <Action.SSW: -25>, <Action.NNE: 25>]\nState: Isolation(board=41523161203939122082683632224299007, ply_count=0, locs=(None, None)) \n Action: 108\n\n\nGot an int action: loc_sym = 110\nGot an int action: loc_sym = 4\nGot an int action: loc_sym = 6\nState: Isolation(board=41198642650280695355900476203722751, ply_count=1, locs=(108, None)) \n Action: 2\n\n\nGot an int action: loc_sym = 8\nGot an int action: loc_sym = 106\nGot an int action: loc_sym = 
112\nState: Isolation(board=41198642650280695355900476203722747, ply_count=2, locs=(108, 2)) \n Action: -15\n\n\nState: Isolation(board=41198632746760381072858277010728955, ply_count=3, locs=(93, 2)) \n Action: 25\n\n\nResults for match 4: CustomPlayer2 wins.\n[58, 71, <Action.NNE: 25>, <Action.ENE: 11>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.ESE: -15>, <Action.SSE: -27>, <Action.SSE: -27>, <Action.SSE: -27>, <Action.SSE: -27>, <Action.SSW: -25>, <Action.WNW: 15>, <Action.SSW: -25>, <Action.SSW: -25>, <Action.WNW: 15>, <Action.WNW: 15>, <Action.NNE: 25>, <Action.WNW: 15>, <Action.NNE: 25>, <Action.ENE: 11>, <Action.SSE: -27>, <Action.NNE: 25>, <Action.WSW: -11>, <Action.WSW: -11>, <Action.ESE: -15>, <Action.SSE: -27>, <Action.WSW: -11>, <Action.SSW: -25>, <Action.NNW: 27>, <Action.ESE: -15>, <Action.SSW: -25>, <Action.ESE: -15>, <Action.WNW: 15>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.SSE: -27>, <Action.WNW: 15>, <Action.WNW: 15>, <Action.NNW: 27>, <Action.ENE: 11>, <Action.SSW: -25>, <Action.SSW: -25>, <Action.NNW: 27>, <Action.WNW: 15>, <Action.WSW: -11>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.SSE: -27>, <Action.WSW: -11>, <Action.WNW: 15>, <Action.SSE: -27>, <Action.NNW: 27>, <Action.NNE: 25>, <Action.NNE: 25>, <Action.WSW: -11>, <Action.SSE: -27>, <Action.NNE: 25>]\nState: Isolation(board=41523161203939122082683632224299007, ply_count=0, locs=(None, None)) \n Action: 58\n\n\nGot an int action: loc_sym = 56\nGot an int action: loc_sym = 58\nGot an int action: loc_sym = 56\nState: Isolation(board=41523161203939121794453256072587263, ply_count=1, locs=(58, None)) \n Action: 71\n\n\nGot an int action: loc_sym = 69\nGot an int action: loc_sym = 45\nGot an int action: loc_sym = 43\nState: Isolation(board=41523161203936760611211821249980415, ply_count=2, locs=(58, 71)) \n Action: 25\n\n\nState: Isolation(board=41523161194265354054294787852331007, ply_count=3, locs=(83, 71)) \n Action: 11\n\n\n"
],
[
"sum(abs(value) for value in book.values())",
"_____no_output_____"
],
[
"seconds = 29403 * 37 / 10\nprint('{} seconds'.format(seconds))\nprint('{} hours'.format(seconds/3600))",
"108791.1 seconds\n30.21975 hours\n"
],
[
"game_history",
"_____no_output_____"
]
],
[
[
"## Let's add the symmetry conditions to the game processing",
"_____no_output_____"
]
],
[
[
"s_a = list(book.keys())[0]\ns_a",
"_____no_output_____"
],
[
"W, H = 11, 9\n\ndef h_symmetry(loc):\n if loc is None:\n return None\n row = loc // (W + 2)\n center = W + (row - 1) * (W + 2) + (W + 2) // 2 + 1 if row != 0 else W // 2\n return 2 * center - loc",
"_____no_output_____"
],
[
"h_symmetry(28)",
"2\n31\n"
],
[
"h_symmetry(1)",
"0\n5\n"
],
[
"center = (H // 2) * (W + 2) + W // 2\ncenter",
"_____no_output_____"
],
[
"def c_symmetry(loc):\n if loc is None:\n return None\n center = (H // 2) * (W + 2) + W // 2\n return 2 * center - loc",
"_____no_output_____"
],
[
"c_symmetry(81)",
"_____no_output_____"
],
[
"c_symmetry(67)",
"_____no_output_____"
],
[
"def v_symmetry(loc):\n if loc is None:\n return None\n col = loc % (W + 2)\n center = (H // 2) * (W + 2) + col\n return 2 * center - loc",
"_____no_output_____"
],
[
"v_symmetry(2)",
"_____no_output_____"
],
[
"v_symmetry(28)",
"_____no_output_____"
],
[
"v_symmetry(48)",
"_____no_output_____"
],
[
"v_symmetry(86)",
"_____no_output_____"
],
[
"symmetric = b.sym_sa(s_a, loc_sym=h_symmetry, cardinal_sym=b.cardinal_sym_h)\nsymmetric",
"8\n109\n6\n83\n"
],
[
"print(isolation.DebugState.from_state(s_a[0]))",
"\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | | 1 | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | | | | X | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | 2 | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n\n"
],
[
"print(isolation.DebugState.from_state(symmetric[0]))",
"\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | 1 | | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | 2 | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n| | | | | | | | | | | |\n+ - + - + - + - + - + - + - + - + - + - + - +\n\n"
],
[
"def process_game_history(state,\n game_history,\n book,\n winner_id,\n active_player=0,\n depth=4):\n \"\"\" Given an initial state, and a list of actions, this function iterates\n through the resulting states of the actions and updates count of wins in\n the state/action book\"\"\"\n OPENING_MOVES = 2\n game_value = 2 * (active_player == winner_id) - 1\n curr_state = state # It is a named tuple, so I think it is immutable. No need to copy.\n for num_action, action in enumerate(game_history):\n if (curr_state, action) in book.keys():\n book[(curr_state, action)] += game_value\n if curr_state.ply_count <= OPENING_MOVES:\n book[b.sym_sa((curr_state, action), \n loc_sym=h_symmetry,\n cardinal_sym=b.cardinal_sym_h)] += game_value\n book[b.sym_sa((curr_state, action), \n loc_sym=v_symmetry,\n cardinal_sym=b.cardinal_sym_v)] += game_value\n book[b.sym_sa((curr_state, action), \n loc_sym=c_symmetry,\n cardinal_sym=b.cardinal_sym_c)] += game_value\n curr_state = curr_state.result(action)\n active_player = 1 - active_player\n game_value = 2 * (active_player == winner_id) - 1\n # Break on depth equal to book\n if num_action >= depth - 1:\n break",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7ef21cc64dab21f816726583743ea5083d54147 | 29,315 | ipynb | Jupyter Notebook | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar | 0737c66a36f8969e7a17276990bc7e76f7b410c4 | [
"Apache-2.0"
] | 1 | 2018-08-20T16:36:40.000Z | 2018-08-20T16:36:40.000Z | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar | 0737c66a36f8969e7a17276990bc7e76f7b410c4 | [
"Apache-2.0"
] | 3 | 2018-08-23T13:25:47.000Z | 2018-08-23T15:59:45.000Z | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar | 0737c66a36f8969e7a17276990bc7e76f7b410c4 | [
"Apache-2.0"
] | null | null | null | 40.602493 | 701 | 0.6044 | [
[
[
"# Statistical Downscaling and Bias-Adjustment\n\n`xclim` provides tools and utilities to ease the bias-adjustement process through its `xclim.sdba` module. Almost all adjustment algorithms conform to the `train` - `adjust` scheme, formalized within `TrainAdjust` classes. Given a reference time series (ref), historical simulations (hist) and simulations to be adjusted (sim), any bias-adjustment method would be applied by first estimating the adjustment factors between the historical simulation and the observations series, and then applying these factors to `sim`, which could be a future simulation.\n\nThis presents examples, while a bit more info and the API are given on [this page](../sdba.rst).\n\nA very simple \"Quantile Mapping\" approach is available through the \"Empirical Quantile Mapping\" object. The object is created through the `.train` method of the class, and the simulation is adjusted with `.adjust`.",
"_____no_output_____"
]
],
[
[
"from __future__ import annotations\n\nimport cftime\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport xarray as xr\n\n%matplotlib inline\nplt.style.use(\"seaborn\")\nplt.rcParams[\"figure.figsize\"] = (11, 5)\n\n# Create toy data to explore bias adjustment, here fake temperature timeseries\nt = xr.cftime_range(\"2000-01-01\", \"2030-12-31\", freq=\"D\", calendar=\"noleap\")\nref = xr.DataArray(\n (\n -20 * np.cos(2 * np.pi * t.dayofyear / 365)\n + 2 * np.random.random_sample((t.size,))\n + 273.15\n + 0.1 * (t - t[0]).days / 365\n ), # \"warming\" of 1K per decade,\n dims=(\"time\",),\n coords={\"time\": t},\n attrs={\"units\": \"K\"},\n)\nsim = xr.DataArray(\n (\n -18 * np.cos(2 * np.pi * t.dayofyear / 365)\n + 2 * np.random.random_sample((t.size,))\n + 273.15\n + 0.11 * (t - t[0]).days / 365\n ), # \"warming\" of 1.1K per decade\n dims=(\"time\",),\n coords={\"time\": t},\n attrs={\"units\": \"K\"},\n)\n\nref = ref.sel(time=slice(None, \"2015-01-01\"))\nhist = sim.sel(time=slice(None, \"2015-01-01\"))\n\nref.plot(label=\"Reference\")\nsim.plot(label=\"Model\")\nplt.legend()",
"_____no_output_____"
],
[
"from xclim import sdba\n\nQM = sdba.EmpiricalQuantileMapping.train(\n ref, hist, nquantiles=15, group=\"time\", kind=\"+\"\n)\nscen = QM.adjust(sim, extrapolation=\"constant\", interp=\"nearest\")\n\nref.groupby(\"time.dayofyear\").mean().plot(label=\"Reference\")\nhist.groupby(\"time.dayofyear\").mean().plot(label=\"Model - biased\")\nscen.sel(time=slice(\"2000\", \"2015\")).groupby(\"time.dayofyear\").mean().plot(\n label=\"Model - adjusted - 2000-15\", linestyle=\"--\"\n)\nscen.sel(time=slice(\"2015\", \"2030\")).groupby(\"time.dayofyear\").mean().plot(\n label=\"Model - adjusted - 2015-30\", linestyle=\"--\"\n)\nplt.legend()",
"_____no_output_____"
]
],
[
[
"In the previous example, a simple Quantile Mapping algorithm was used with 15 quantiles and one group of values. The model performs well, but our toy data is also quite smooth and well-behaved so this is not surprising. A more complex example could have biais distribution varying strongly across months. To perform the adjustment with different factors for each months, one can pass `group='time.month'`. Moreover, to reduce the risk of sharp change in the adjustment at the interface of the months, `interp='linear'` can be passed to `adjust` and the adjustment factors will be interpolated linearly. Ex: the factors for the 1st of May will be the average of those for april and those for may.",
"_____no_output_____"
]
],
[
[
"QM_mo = sdba.EmpiricalQuantileMapping.train(\n ref, hist, nquantiles=15, group=\"time.month\", kind=\"+\"\n)\nscen = QM_mo.adjust(sim, extrapolation=\"constant\", interp=\"linear\")\n\nref.groupby(\"time.dayofyear\").mean().plot(label=\"Reference\")\nhist.groupby(\"time.dayofyear\").mean().plot(label=\"Model - biased\")\nscen.sel(time=slice(\"2000\", \"2015\")).groupby(\"time.dayofyear\").mean().plot(\n label=\"Model - adjusted - 2000-15\", linestyle=\"--\"\n)\nscen.sel(time=slice(\"2015\", \"2030\")).groupby(\"time.dayofyear\").mean().plot(\n label=\"Model - adjusted - 2015-30\", linestyle=\"--\"\n)\nplt.legend()",
"_____no_output_____"
]
],
[
[
"The training data (here the adjustment factors) is available for inspection in the `ds` attribute of the adjustment object.",
"_____no_output_____"
]
],
[
[
"QM_mo.ds",
"_____no_output_____"
],
[
"QM_mo.ds.af.plot()",
"_____no_output_____"
]
],
[
[
"## Grouping\n\nFor basic time period grouping (months, day of year, season), passing a string to the methods needing it is sufficient. Most methods acting on grouped data also accept a `window` int argument to pad the groups with data from adjacent ones. Units of `window` are the sampling frequency of the main grouping dimension (usually `time`). For more complex grouping, or simply for clarity, one can pass a `xclim.sdba.base.Grouper` directly.\n\nExample here with another, simpler, adjustment method. Here we want `sim` to be scaled so that its mean fits the one of `ref`. Scaling factors are to be computed separately for each day of the year, but including 15 days on either side of the day. This means that the factor for the 1st of May is computed including all values from the 16th of April to the 15th of May (of all years).",
"_____no_output_____"
]
],
[
[
"group = sdba.Grouper(\"time.dayofyear\", window=31)\nQM_doy = sdba.Scaling.train(ref, hist, group=group, kind=\"+\")\nscen = QM_doy.adjust(sim)\n\nref.groupby(\"time.dayofyear\").mean().plot(label=\"Reference\")\nhist.groupby(\"time.dayofyear\").mean().plot(label=\"Model - biased\")\nscen.sel(time=slice(\"2000\", \"2015\")).groupby(\"time.dayofyear\").mean().plot(\n label=\"Model - adjusted - 2000-15\", linestyle=\"--\"\n)\nscen.sel(time=slice(\"2015\", \"2030\")).groupby(\"time.dayofyear\").mean().plot(\n label=\"Model - adjusted - 2015-30\", linestyle=\"--\"\n)\nplt.legend()",
"_____no_output_____"
],
[
"sim",
"_____no_output_____"
],
[
"QM_doy.ds.af.plot()",
"_____no_output_____"
]
],
[
[
"## Modular approach\n\nThe `sdba` module adopts a modular approach instead of implementing published and named methods directly.\nA generic bias adjustment process is laid out as follows:\n\n- preprocessing on `ref`, `hist` and `sim` (using methods in `xclim.sdba.processing` or `xclim.sdba.detrending`)\n- creating and training the adjustment object `Adj = Adjustment.train(obs, hist, **kwargs)` (from `xclim.sdba.adjustment`)\n- adjustment `scen = Adj.adjust(sim, **kwargs)`\n- post-processing on `scen` (for example: re-trending)\n\nThe train-adjust approach allows to inspect the trained adjustment object. The training information is stored in the underlying `Adj.ds` dataset and often has a `af` variable with the adjustment factors. Its layout and the other available variables vary between the different algorithm, refer to their part of the API docs.\n\nFor heavy processing, this separation allows the computation and writing to disk of the training dataset before performing the adjustment(s). See the [advanced notebook](sdba-advanced.ipynb).\n\nParameters needed by the training and the adjustment are saved to the `Adj.ds` dataset as a `adj_params` attribute. Other parameters, those only needed by the adjustment are passed in the `adjust` call and written to the history attribute in the output scenario dataarray.\n\n### First example : pr and frequency adaptation\n\nThe next example generates fake precipitation data and adjusts the `sim` timeseries but also adds a step where the dry-day frequency of `hist` is adapted so that is fits the one of `ref`. This ensures well-behaved adjustment factors for the smaller quantiles. Note also that we are passing `kind='*'` to use the multiplicative mode. Adjustment factors will be multiplied/divided instead of being added/substracted.",
"_____no_output_____"
]
],
[
[
"vals = np.random.randint(0, 1000, size=(t.size,)) / 100\nvals_ref = (4 ** np.where(vals < 9, vals / 100, vals)) / 3e6\nvals_sim = (\n (1 + 0.1 * np.random.random_sample((t.size,)))\n * (4 ** np.where(vals < 9.5, vals / 100, vals))\n / 3e6\n)\n\npr_ref = xr.DataArray(\n vals_ref, coords={\"time\": t}, dims=(\"time\",), attrs={\"units\": \"mm/day\"}\n)\npr_ref = pr_ref.sel(time=slice(\"2000\", \"2015\"))\npr_sim = xr.DataArray(\n vals_sim, coords={\"time\": t}, dims=(\"time\",), attrs={\"units\": \"mm/day\"}\n)\npr_hist = pr_sim.sel(time=slice(\"2000\", \"2015\"))\n\npr_ref.plot(alpha=0.9, label=\"Reference\")\npr_sim.plot(alpha=0.7, label=\"Model\")\nplt.legend()",
"_____no_output_____"
],
[
"# 1st try without adapt_freq\nQM = sdba.EmpiricalQuantileMapping.train(\n pr_ref, pr_hist, nquantiles=15, kind=\"*\", group=\"time\"\n)\nscen = QM.adjust(pr_sim)\n\npr_ref.sel(time=\"2010\").plot(alpha=0.9, label=\"Reference\")\npr_hist.sel(time=\"2010\").plot(alpha=0.7, label=\"Model - biased\")\nscen.sel(time=\"2010\").plot(alpha=0.6, label=\"Model - adjusted\")\nplt.legend()",
"_____no_output_____"
]
],
[
[
"In the figure above, `scen` has small peaks where `sim` is 0. This problem originates from the fact that there are more \"dry days\" (days with almost no precipitation) in `hist` than in `ref`. The next example works around the problem using frequency-adaptation, as described in [Themeßl et al. (2010)](https://doi.org/10.1007/s10584-011-0224-4).",
"_____no_output_____"
]
],
[
[
"# 2nd try with adapt_freq\nsim_ad, pth, dP0 = sdba.processing.adapt_freq(\n pr_ref, pr_sim, thresh=\"0.05 mm d-1\", group=\"time\"\n)\nQM_ad = sdba.EmpiricalQuantileMapping.train(\n pr_ref, sim_ad, nquantiles=15, kind=\"*\", group=\"time\"\n)\nscen_ad = QM_ad.adjust(pr_sim)\n\npr_ref.sel(time=\"2010\").plot(alpha=0.9, label=\"Reference\")\npr_sim.sel(time=\"2010\").plot(alpha=0.7, label=\"Model - biased\")\nscen_ad.sel(time=\"2010\").plot(alpha=0.6, label=\"Model - adjusted\")\nplt.legend()",
"_____no_output_____"
]
],
[
[
"### Second example: tas and detrending\n\nThe next example reuses the fake temperature timeseries generated at the beginning and applies the same QM adjustment method. However, for a better adjustment, we will scale sim to ref and then detrend the series, assuming the trend is linear. When `sim` (or `sim_scl`) is detrended, its values are now anomalies, so we need to normalize `ref` and `hist` so we can compare similar values.\n\nThis process is detailed here to show how the sdba module should be used in custom adjustment processes, but this specific method also exists as `sdba.DetrendedQuantileMapping` and is based on [Cannon et al. 2015](https://doi.org/10.1175/JCLI-D-14-00754.1). However, `DetrendedQuantileMapping` normalizes over a `time.dayofyear` group, regardless of what is passed in the `group` argument. As done here, it is anyway recommended to use `dayofyear` groups when normalizing, especially for variables with strong seasonal variations.",
"_____no_output_____"
]
],
[
[
"doy_win31 = sdba.Grouper(\"time.dayofyear\", window=15)\nSca = sdba.Scaling.train(ref, hist, group=doy_win31, kind=\"+\")\nsim_scl = Sca.adjust(sim)\n\ndetrender = sdba.detrending.PolyDetrend(degree=1, group=\"time.dayofyear\", kind=\"+\")\nsim_fit = detrender.fit(sim_scl)\nsim_detrended = sim_fit.detrend(sim_scl)\n\nref_n, _ = sdba.processing.normalize(ref, group=doy_win31, kind=\"+\")\nhist_n, _ = sdba.processing.normalize(hist, group=doy_win31, kind=\"+\")\n\nQM = sdba.EmpiricalQuantileMapping.train(\n ref_n, hist_n, nquantiles=15, group=\"time.month\", kind=\"+\"\n)\nscen_detrended = QM.adjust(sim_detrended, extrapolation=\"constant\", interp=\"nearest\")\nscen = sim_fit.retrend(scen_detrended)\n\n\nref.groupby(\"time.dayofyear\").mean().plot(label=\"Reference\")\nsim.groupby(\"time.dayofyear\").mean().plot(label=\"Model - biased\")\nscen.sel(time=slice(\"2000\", \"2015\")).groupby(\"time.dayofyear\").mean().plot(\n label=\"Model - adjusted - 2000-15\", linestyle=\"--\"\n)\nscen.sel(time=slice(\"2015\", \"2030\")).groupby(\"time.dayofyear\").mean().plot(\n label=\"Model - adjusted - 2015-30\", linestyle=\"--\"\n)\nplt.legend()",
"_____no_output_____"
]
],
[
[
"### Third example : Multi-method protocol - Hnilica et al. 2017\nIn [their paper of 2017](https://doi.org/10.1002/joc.4890), Hnilica, Hanel and Puš present a bias-adjustment method based on the principles of Principal Components Analysis. The idea is simple : use principal components to define coordinates on the reference and on the simulation and then transform the simulation data from the latter to the former. Spatial correlation can thus be conserved by taking different points as the dimensions of the transform space. The method was demonstrated in the article by bias-adjusting precipitation over different drainage basins.\n\nThe same method could be used for multivariate adjustment. The principle would be the same, concatening the different variables into a single dataset along a new dimension. An example is given in the [advanced notebook](sdba-advanced.ipynb).\n\nHere we show how the modularity of `xclim.sdba` can be used to construct a quite complex adjustment protocol involving two adjustment methods : quantile mapping and principal components. Evidently, as this example uses only 2 years of data, it is not complete. It is meant to show how the adjustment functions and how the API can be used.",
"_____no_output_____"
]
],
[
[
"# We are using xarray's \"air_temperature\" dataset\nds = xr.tutorial.open_dataset(\"air_temperature\")",
"_____no_output_____"
],
[
"# To get an exagerated example we select different points\n# here \"lon\" will be our dimension of two \"spatially correlated\" points\nreft = ds.air.isel(lat=21, lon=[40, 52]).drop_vars([\"lon\", \"lat\"])\nsimt = ds.air.isel(lat=18, lon=[17, 35]).drop_vars([\"lon\", \"lat\"])\n\n# Principal Components Adj, no grouping and use \"lon\" as the space dimensions\nPCA = sdba.PrincipalComponents.train(reft, simt, group=\"time\", crd_dim=\"lon\")\nscen1 = PCA.adjust(simt)\n\n# QM, no grouping, 20 quantiles and additive adjustment\nEQM = sdba.EmpiricalQuantileMapping.train(\n reft, scen1, group=\"time\", nquantiles=50, kind=\"+\"\n)\nscen2 = EQM.adjust(scen1)",
"_____no_output_____"
],
[
"# some Analysis figures\nfig = plt.figure(figsize=(12, 16))\ngs = plt.matplotlib.gridspec.GridSpec(3, 2, fig)\n\naxPCA = plt.subplot(gs[0, :])\naxPCA.scatter(reft.isel(lon=0), reft.isel(lon=1), s=20, label=\"Reference\")\naxPCA.scatter(simt.isel(lon=0), simt.isel(lon=1), s=10, label=\"Simulation\")\naxPCA.scatter(scen2.isel(lon=0), scen2.isel(lon=1), s=3, label=\"Adjusted - PCA+EQM\")\naxPCA.set_xlabel(\"Point 1\")\naxPCA.set_ylabel(\"Point 2\")\naxPCA.set_title(\"PC-space\")\naxPCA.legend()\n\nrefQ = reft.quantile(EQM.ds.quantiles, dim=\"time\")\nsimQ = simt.quantile(EQM.ds.quantiles, dim=\"time\")\nscen1Q = scen1.quantile(EQM.ds.quantiles, dim=\"time\")\nscen2Q = scen2.quantile(EQM.ds.quantiles, dim=\"time\")\nfor i in range(2):\n if i == 0:\n axQM = plt.subplot(gs[1, 0])\n else:\n axQM = plt.subplot(gs[1, 1], sharey=axQM)\n axQM.plot(refQ.isel(lon=i), simQ.isel(lon=i), label=\"No adj\")\n axQM.plot(refQ.isel(lon=i), scen1Q.isel(lon=i), label=\"PCA\")\n axQM.plot(refQ.isel(lon=i), scen2Q.isel(lon=i), label=\"PCA+EQM\")\n axQM.plot(\n refQ.isel(lon=i), refQ.isel(lon=i), color=\"k\", linestyle=\":\", label=\"Ideal\"\n )\n axQM.set_title(f\"QQ plot - Point {i + 1}\")\n axQM.set_xlabel(\"Reference\")\n axQM.set_xlabel(\"Model\")\n axQM.legend()\n\naxT = plt.subplot(gs[2, :])\nreft.isel(lon=0).plot(ax=axT, label=\"Reference\")\nsimt.isel(lon=0).plot(ax=axT, label=\"Unadjusted sim\")\n# scen1.isel(lon=0).plot(ax=axT, label='PCA only')\nscen2.isel(lon=0).plot(ax=axT, label=\"PCA+EQM\")\naxT.legend()\naxT.set_title(\"Timeseries - Point 1\")",
"_____no_output_____"
]
],
[
[
"### Fourth example : Multivariate bias-adjustment with multiple steps - Cannon 2018\n\nThis section replicates the \"MBCn\" algorithm described by [Cannon (2018)](https://doi.org/10.1007/s00382-017-3580-6). The method relies on some univariate algorithm, an adaption of the N-pdf transform of [Pitié et al. (2005)](https://ieeexplore.ieee.org/document/1544887/) and a final reordering step.\n\nIn the following, we use the AHCCD and CanESM2 data are reference and simulation and we correct both `pr` and `tasmax` together.",
"_____no_output_____"
]
],
[
[
"from xclim.core.units import convert_units_to\nfrom xclim.testing import open_dataset\n\ndref = open_dataset(\n \"sdba/ahccd_1950-2013.nc\", chunks={\"location\": 1}, drop_variables=[\"lat\", \"lon\"]\n).sel(time=slice(\"1981\", \"2010\"))\ndref = dref.assign(\n tasmax=convert_units_to(dref.tasmax, \"K\"),\n pr=convert_units_to(dref.pr, \"kg m-2 s-1\"),\n)\ndsim = open_dataset(\n \"sdba/CanESM2_1950-2100.nc\", chunks={\"location\": 1}, drop_variables=[\"lat\", \"lon\"]\n)\n\ndhist = dsim.sel(time=slice(\"1981\", \"2010\"))\ndsim = dsim.sel(time=slice(\"2041\", \"2070\"))\ndref",
"_____no_output_____"
]
],
[
[
"##### Perform an initial univariate adjustment.",
"_____no_output_____"
]
],
[
[
"# additive for tasmax\nQDMtx = sdba.QuantileDeltaMapping.train(\n dref.tasmax, dhist.tasmax, nquantiles=20, kind=\"+\", group=\"time\"\n)\n# Adjust both hist and sim, we'll feed both to the Npdf transform.\nscenh_tx = QDMtx.adjust(dhist.tasmax)\nscens_tx = QDMtx.adjust(dsim.tasmax)\n\n# remove == 0 values in pr:\ndref[\"pr\"] = sdba.processing.jitter_under_thresh(dref.pr, \"0.01 mm d-1\")\ndhist[\"pr\"] = sdba.processing.jitter_under_thresh(dhist.pr, \"0.01 mm d-1\")\ndsim[\"pr\"] = sdba.processing.jitter_under_thresh(dsim.pr, \"0.01 mm d-1\")\n\n# multiplicative for pr\nQDMpr = sdba.QuantileDeltaMapping.train(\n dref.pr, dhist.pr, nquantiles=20, kind=\"*\", group=\"time\"\n)\n# Adjust both hist and sim, we'll feed both to the Npdf transform.\nscenh_pr = QDMpr.adjust(dhist.pr)\nscens_pr = QDMpr.adjust(dsim.pr)\n\nscenh = xr.Dataset(dict(tasmax=scenh_tx, pr=scenh_pr))\nscens = xr.Dataset(dict(tasmax=scens_tx, pr=scens_pr))",
"_____no_output_____"
]
],
[
[
"##### Stack the variables to multivariate arrays and standardize them\nThe standardization process ensure the mean and standard deviation of each column (variable) is 0 and 1 respectively.\n\n`hist` and `sim` are standardized together so the two series are coherent. We keep the mean and standard deviation to be reused when we build the result.",
"_____no_output_____"
]
],
[
[
"# Stack the variables (tasmax and pr)\nref = sdba.processing.stack_variables(dref)\nscenh = sdba.processing.stack_variables(scenh)\nscens = sdba.processing.stack_variables(scens)\n\n# Standardize\nref, _, _ = sdba.processing.standardize(ref)\n\nallsim, savg, sstd = sdba.processing.standardize(xr.concat((scenh, scens), \"time\"))\nhist = allsim.sel(time=scenh.time)\nsim = allsim.sel(time=scens.time)",
"_____no_output_____"
]
],
[
[
"##### Perform the N-dimensional probability density function transform\n\nThe NpdfTransform will iteratively randomly rotate our arrays in the \"variables\" space and apply the univariate adjustment before rotating it back. In Cannon (2018) and Pitié et al. (2005), it can be seen that the source array's joint distribution converges toward the target's joint distribution when a large number of iterations is done.",
"_____no_output_____"
]
],
[
[
"from xclim import set_options\n\n# See the advanced notebook for details on how this option work\nwith set_options(sdba_extra_output=True):\n out = sdba.adjustment.NpdfTransform.adjust(\n ref,\n hist,\n sim,\n base=sdba.QuantileDeltaMapping, # Use QDM as the univariate adjustment.\n base_kws={\"nquantiles\": 20, \"group\": \"time\"},\n n_iter=20, # perform 20 iteration\n n_escore=1000, # only send 1000 points to the escore metric (it is realy slow)\n )\n\nscenh = out.scenh.rename(time_hist=\"time\") # Bias-adjusted historical period\nscens = out.scen # Bias-adjusted future period\nextra = out.drop_vars([\"scenh\", \"scen\"])\n\n# Un-standardize (add the mean and the std back)\nscenh = sdba.processing.unstandardize(scenh, savg, sstd)\nscens = sdba.processing.unstandardize(scens, savg, sstd)",
"_____no_output_____"
]
],
[
[
"##### Restoring the trend\n\nThe NpdfT has given us new \"hist\" and \"sim\" arrays with a correct rank structure. However, the trend is lost in this process. We reorder the result of the initial adjustment according to the rank structure of the NpdfT outputs to get our final bias-adjusted series.\n\n`sdba.processing.reordering` : 'ref' the argument that provides the order, 'sim' is the argument to reorder.",
"_____no_output_____"
]
],
[
[
"scenh = sdba.processing.reordering(hist, scenh, group=\"time\")\nscens = sdba.processing.reordering(sim, scens, group=\"time\")",
"_____no_output_____"
],
[
"scenh = sdba.processing.unstack_variables(scenh)\nscens = sdba.processing.unstack_variables(scens)",
"_____no_output_____"
]
],
[
[
"##### There we are!\n\nLet's trigger all the computations. Here we write the data to disk and use `compute=False` in order to trigger the whole computation tree only once. There seems to be no way in xarray to do the same with a `load` call.",
"_____no_output_____"
]
],
[
[
"from dask import compute\nfrom dask.diagnostics import ProgressBar\n\ntasks = [\n scenh.isel(location=2).to_netcdf(\"mbcn_scen_hist_loc2.nc\", compute=False),\n scens.isel(location=2).to_netcdf(\"mbcn_scen_sim_loc2.nc\", compute=False),\n extra.escores.isel(location=2)\n .to_dataset()\n .to_netcdf(\"mbcn_escores_loc2.nc\", compute=False),\n]\n\nwith ProgressBar():\n compute(tasks)",
"_____no_output_____"
]
],
[
[
"Let's compare the series and look at the distance scores to see how well the Npdf transform has converged.",
"_____no_output_____"
]
],
[
[
"scenh = xr.open_dataset(\"mbcn_scen_hist_loc2.nc\")\n\nfig, ax = plt.subplots()\n\ndref.isel(location=2).tasmax.plot(ax=ax, label=\"Reference\")\nscenh.tasmax.plot(ax=ax, label=\"Adjusted\", alpha=0.65)\ndhist.isel(location=2).tasmax.plot(ax=ax, label=\"Simulated\")\n\nax.legend()",
"_____no_output_____"
],
[
"escores = xr.open_dataarray(\"mbcn_escores_loc2.nc\")\ndiff_escore = escores.differentiate(\"iterations\")\ndiff_escore.plot()\nplt.title(\"Difference of the subsequent e-scores.\")\nplt.ylabel(\"E-scores difference\")",
"_____no_output_____"
],
[
"diff_escore",
"_____no_output_____"
]
],
[
[
"The tutorial continues in the [advanced notebook](sdba-advanced.ipynb) with more on optimization with dask, other fancier detrending algorithms and an example pipeline for heavy processing.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7ef261f645ff1e1f53ef5d166ea00aac1cf3a91 | 69,132 | ipynb | Jupyter Notebook | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder | accddd4d17d053694241c1e91d34e9e2aac80b03 | [
"MIT"
] | null | null | null | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder | accddd4d17d053694241c1e91d34e9e2aac80b03 | [
"MIT"
] | null | null | null | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder | accddd4d17d053694241c1e91d34e9e2aac80b03 | [
"MIT"
] | 1 | 2021-06-23T23:46:41.000Z | 2021-06-23T23:46:41.000Z | 36.77234 | 934 | 0.410924 | [
[
[
"# Working with PDBsum in Jupyter & Demonstration of PDBsum protein interface data to dataframe script",
"_____no_output_____"
],
[
"Usually you'll want to get some data from PDBsum and analyze it. For the current example in this series of notebooks, I'll cover how to bring in a file of protein-protein interactions and then progress through using that in combination with Python to analyze the results and ultimately compare the results to a different structure.\n\n-----\n\n<div class=\"alert alert-block alert-warning\">\n<p>If you haven't used one of these notebooks before, they're basically web pages in which you can write, edit, and run live code. They're meant to encourage experimentation, so don't feel nervous. Just try running a few cells and see what happens!.</p>\n\n<p>\n Some tips:\n <ul>\n <li>Code cells have boxes around them. When you hover over them an <i class=\"fa-step-forward fa\"></i> icon appears.</li>\n <li>To run a code cell either click the <i class=\"fa-step-forward fa\"></i> icon, or click on the cell and then hit <b>Shift+Enter</b>. The <b>Shift+Enter</b> combo will also move you to the next cell, so it's a quick way to work through the notebook.</li>\n <li>While a cell is running a <b>*</b> appears in the square brackets next to the cell. Once the cell has finished running the asterisk will be replaced with a number.</li>\n <li>In most cases you'll want to start from the top of notebook and work your way down running each cell in turn. Later cells might depend on the results of earlier ones.</li>\n <li>To edit a code cell, just click on it and type stuff. Remember to run the cell once you've finished editing.</li>\n </ul>\n</p>\n</div>\n\n----",
"_____no_output_____"
],
[
"### Retrieving Protein-Protein interface reports/ the list of interactions\n\n#### Getting list of interactions between two proteins under individual entries under PDBsum's 'Prot-prot' tab via command line.\n\nSay example from [here](http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetPage.pl?pdbcode=6ah3&template=interfaces.html&o=RESIDUE&l=3) links to the following as 'List of\ninteractions' in the bottom right of the page:\n\n```text \nhttp://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetIface.pl?pdb=6ah3&chain1=B&chain2=G\n```\n \nThen based on suggestion at top [here](https://stackoverflow.com/a/52363117/8508004) that would be used in a curl command where the items after the `?` in the original URL get placed into quotes and provided following the `--data` flag argument option in the call to `curl`, like so:\n```text\ncurl -L -o data.txt --data \"pdb=6ah3&chain1=B&chain2=G\" http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetIface.pl\n```\n\n**Specifically**, the `--data \"pdb=6ah3&chain1=B&chain2=G\"` is the part coming from the end of the original URL.\n\n\nPutting that into action in Jupyter to fetch for the example the interactions list in a text:",
"_____no_output_____"
]
],
[
[
"!curl -L -o data.txt --data \"pdb=6ah3&chain1=B&chain2=G\" http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetIface.pl",
" % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n100 7063 0 7037 100 26 9033 33 --:--:-- --:--:-- --:--:-- 9055\n"
]
],
[
[
"To prove that the data file has been retieved, we'll show the first 16 lines of it by running the next cell:",
"_____no_output_____"
]
],
[
[
"!head -16 data.txt",
"<PRE>\r\nList of atom-atom interactions across protein-protein interface\r\n---------------------------------------------------------------\r\n<P>\r\n PDB code: 6ah3 Chains B }{ G\r\n ------------------------------\r\n<P>\r\n\r\nHydrogen bonds\r\n--------------\r\n\r\n <----- A T O M 1 -----> <----- A T O M 2 ----->\r\n\r\n Atom Atom Res Res Atom Atom Res Res\r\n no. name name no. Chain no. name name no. Chain Distance\r\n 1. 9937 NZ LYS 326 B <--> 20598 O LYS 122 G 2.47\r\n"
]
],
[
[
"Later in this series of notebooks, I'll demonstrate how to make this step even easier with just the PDB entry id and the chains you are interested in and the later how to loop on this process to get multiple data files for interactions from different structures.",
"_____no_output_____"
],
[
"### Making a Pandas dataframe from the interactions file\n\nTo convert the data to a dataframe, we'll use a script.\n\n If you haven't encountered Pandas dataframes before I suggest you see the first two notebooks that come up with you launch a session from my [blast-binder](https://github.com/fomightez/blast-binder) site. Those first two notebooks cover using the dataframe containing BLAST results some. \n \nTo get that script, you can run the next cell. (It is not included in the repository where this launches from to insure you always get the most current version, which is assumed to be the best available at the time.)",
"_____no_output_____"
]
],
[
[
"!curl -OL https://raw.githubusercontent.com/fomightez/structurework/master/pdbsum-utilities/pdbsum_prot_interactions_list_to_df.py",
" % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n100 23915 100 23915 0 0 35272 0 --:--:-- --:--:-- --:--:-- 35220\n"
]
],
[
[
"We have the script now. And we already have a data file for it to process. To process the data file, run the next command where we use Python to run the script and direct it at the results file, `data.txt`, we made just a few cells ago.",
"_____no_output_____"
]
],
[
[
"%run pdbsum_prot_interactions_list_to_df.py data.txt",
"Provided interactions data read and converted to a dataframe...\n\nA dataframe of the data has been saved as a file\nin a manner where other Python programs can access it (pickled form).\nRESULTING DATAFRAME is stored as ==> 'prot_int_pickled_df.pkl'"
]
],
[
[
"As of writing this, the script we are using outputs a file that is a binary, compact form of the dataframe. (That means it is tiny and not human readable. It is called 'pickled'. Saving in that form may seem odd, but as illustrated [here](#Output-to-more-universal,-table-like-formats) below this is is a very malleable form. And even more pertinent for dealing with data in Jupyter notebooks, there is actually an easier way to interact with this script when in Jupyter notebook that skips saving this intermediate file. So hang on through the long, more trandtional way of doing this before the easier way is introduced. And I saved it in the compact form and not the mroe typical tab-delimited form because we mostly won't go this route and might as well make tiny files while working along to a better route. It is easy to convert back and forth using the pickled form assuming you can match the Pandas/Python versions.)\n\nWe can take that file where the dataframe is pickled, and bring it into active memory in this notebook with another command form the Pandas library. First, we have to import the Pandas library.\nRun the next command to bring the dataframe into active memory. Note the name comes from the name noted when we ran the script in the cell above.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndf = pd.read_pickle(\"prot_int_pickled_df.pkl\")",
"_____no_output_____"
]
],
[
[
"When that last cell ran, you won't notice any output, but something happened. We can look at that dataframe by calling it in a cell.",
"_____no_output_____"
]
],
[
[
"df",
"_____no_output_____"
]
],
[
[
"You'll notice that if the list of data is large, that the Jupyter environment represents just the head and tail to make it more reasonable. There are ways you can have Jupyter display it all which we won't go into here. \n\nInstead we'll start to show some methods of dataframes that make them convenient. For example, you can use the `head` method to see the start like we used on the command line above.",
"_____no_output_____"
]
],
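[
[
"If you do want to see every row, one option (a hedged aside; rendering can be slow for very large tables) is to change the pandas display settings:\n\n```python\n# Show every row of a dataframe when it is displayed.\npd.set_option('display.max_rows', None)\n# Restore the default behaviour afterwards.\npd.reset_option('display.max_rows')\n```",
"_____no_output_____"
]
],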
[
[
"df.head()",
"_____no_output_____"
]
],
[
[
"Now what types of interactions are observed for this pair of interacting protein chains?\n\nTo help answer that, we can group the results by the type column.",
"_____no_output_____"
]
],
[
[
"grouped = df.groupby('type')\nfor type, grouped_df in grouped:\n print(type)\n display(grouped_df)",
"Hydrogen bonds\n"
]
],
[
[
"Same data as earlier but we can cleary see we have Hydrogen bonds, Non-bonded contacts (a.k.a., van der Waals contacts), and salt bridges, and we immediately get a sense of what types of interactions are more abundant.\n\nYou may want to get a sense of what else you can do by examining he first two notebooks that come up with you launch a session from my [blast-binder](https://github.com/fomightez/blast-binder) site. Those first two notebooks cover using the dataframe containing BLAST results some.\n\nShortly, we'll cover how to bring the dataframe we just made into the notebook without dealing with a file intermediate; however, next I'll demonstrate how to save it as text for use elsewhere, such as in Excel.",
"_____no_output_____"
],
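[
"For a quick numeric tally of the same breakdown, the `value_counts` method on the `type` column of the dataframe made above works well:\n\n```python\n# Count how many interactions of each type were detected.\ndf['type'].value_counts()\n```",
"_____no_output_____"
],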
[
"## Output to more universal, table-like formats\n\nI've tried to sell you on the power of the Python/Pandas dataframe, but it isn't for all uses or everyone. However, most everyone is accustomed to dealing with text based tables or even Excel. In fact, a text-based based table perhaps tab or comma-delimited would be the better way to archive the data we are generating here. Python/Pandas makes it easy to go from the dataframe form to these tabular forms. You can even go back later from the table to the dataframe, which may be inportant if you are going to different versions of Python/Pandas as I briefly mentioned parenthetically above.\n\n**First, generating a text-based table.**",
"_____no_output_____"
]
],
[
[
"#Save / write a TSV-formatted (tab-separated values/ tab-delimited) file\ndf.to_csv('pdbsum_data.tsv', sep='\\t',index = False) #add `,header=False` to leave off header, too",
"_____no_output_____"
]
],
[
[
"Because `df.to_csv()` defaults to dealing with csv, you can simply use `df.to_csv('example.csv',index = False)` for comma-delimited (comma-separated) files.\n\nYou can see that worked by looking at the first few lines with the next command. (Feel free to make the number higher or delete the number all together. I restricted it just to first line to make output smaller.)",
"_____no_output_____"
]
],
[
[
"!head -5 pdbsum_data.tsv",
"Atom1 no.\tAtom1 name\tAtom1 Res name\tAtom1 Res no.\tAtom1 Chain\tAtom2 no.\tAtom2 name\tAtom2 Res name\tAtom2 Res no.\tAtom2 Chain\tDistance\ttype\r\n9937\tNZ\tLYS\t326\tB\t20598\tO\tLYS\t122\tG\t2.47\tHydrogen bonds\r\n9591\tO\tCYS\t280\tB\t19928\tCG1\tILE\t29\tG\t3.77\tNon-bonded contacts\r\n9591\tO\tCYS\t280\tB\t19930\tCD1\tILE\t29\tG\t3.42\tNon-bonded contacts\r\n9593\tSG\tCYS\t280\tB\t19872\tNZ\tLYS\t22\tG\t3.81\tNon-bonded contacts\r\n"
]
],
[
[
"If you had need to go back from a tab-separated table to a dataframe, you can run something like in the following cell.",
"_____no_output_____"
]
],
[
[
"reverted_df = pd.read_csv('pdbsum_data.tsv', sep='\\t')\nreverted_df.to_pickle('reverted_df.pkl') # OPTIONAL: pickle that data too",
"_____no_output_____"
]
],
[
[
"For a comma-delimited (CSV) file you'd use `df = pd.read_csv('example.csv')` because `pd.read_csv()` method defaults to comma as the separator (`sep` parameter).\n\nYou can verify that read from the text-based table by viewing it with the next line.",
"_____no_output_____"
]
],
[
[
"reverted_df.head()",
"_____no_output_____"
]
],
[
[
"**Generating an Excel spreadsheet from a dataframe.**\n\nBecause this is an specialized need, there is a special module needed that I didn't bother installing by default and so it needs to be installed before generating the Excel file. Running the next cell will do both.",
"_____no_output_____"
]
],
[
[
"%pip install openpyxl\n# save to excel (KEEPS multiINDEX, and makes sparse to look good in Excel straight out of Python)\ndf.to_excel('pdbsum_data.xlsx') # after openpyxl installed",
"Requirement already satisfied: openpyxl in /srv/conda/envs/notebook/lib/python3.7/site-packages (3.0.6)\nRequirement already satisfied: et-xmlfile in /srv/conda/envs/notebook/lib/python3.7/site-packages (from openpyxl) (1.0.1)\nRequirement already satisfied: jdcal in /srv/conda/envs/notebook/lib/python3.7/site-packages (from openpyxl) (1.4.1)\nNote: you may need to restart the kernel to use updated packages.\n"
]
],
[
[
"You'll need to download the file first to your computer and then view it locally as there is no viewer in the Jupyter environment.\n\nAdiitionally, it is possible to add styles to dataframes and the styles such as shading of cells and coloring of text will be translated to the Excel document made as well.\n\nExcel files can be read in to Pandas dataframes directly without needing to go to a text based intermediate first.",
"_____no_output_____"
]
],
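[
[
"As a small, hypothetical illustration of that styling point (the `Distance` column name comes from the dataframe shown earlier; the particular highlight is an arbitrary choice):\n\n```python\n# Highlight the largest interaction distance and write the styled table to Excel.\nstyled = df.style.highlight_max(subset=['Distance'], color='yellow')\nstyled.to_excel('pdbsum_data_styled.xlsx', engine='openpyxl')\n```",
"_____no_output_____"
]
],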
[
[
"# read Excel\ndf_from_excel = pd.read_excel('pdbsum_data.xlsx',engine='openpyxl') # see https://stackoverflow.com/a/65266270/8508004 where notes xlrd no longer supports xlsx",
"Collecting xlrd\n Downloading xlrd-2.0.1-py2.py3-none-any.whl (96 kB)\n\u001b[K |████████████████████████████████| 96 kB 2.8 MB/s eta 0:00:011\n\u001b[?25hInstalling collected packages: xlrd\nSuccessfully installed xlrd-2.0.1\nNote: you may need to restart the kernel to use updated packages.\n"
]
],
[
[
"That can be viewed to convince yourself it worked by running the next command.",
"_____no_output_____"
]
],
[
[
"df_from_excel.head()",
"_____no_output_____"
]
],
[
[
"Next, we'll cover how to bring the dataframe we just made into the notebook without dealing with a file intermediate.\n\n----\n\n### Making a Pandas dataframe from the interactions file directly in Jupyter\n\nFirst we'll check for the script we'll use and get it if we don't already have it. \n\n(The thinking is once you know what you are doing you may have skipped all the steps above and not have the script you'll need yet. It cannot hurt to check and if it isn't present, bring it here.)",
"_____no_output_____"
]
],
[
[
"# Get a file if not yet retrieved / check if file exists\nimport os\nfile_needed = \"pdbsum_prot_interactions_list_to_df.py\"\nif not os.path.isfile(file_needed):\n !curl -OL https://raw.githubusercontent.com/fomightez/structurework/master/pdbsum-utilities/{file_needed}",
"_____no_output_____"
]
],
[
[
"This is going to rely on approaches very similar to those illustrated [here](https://github.com/fomightez/patmatch-binder/blob/6f7630b2ee061079a72cd117127328fd1abfa6c7/notebooks/PatMatch%20with%20more%20Python.ipynb#Passing-results-data-into-active-memory-without-a-file-intermediate) and [here](https://github.com/fomightez/patmatch-binder/blob/6f7630b2ee061079a72cd117127328fd1abfa6c7/notebooks/Sending%20PatMatch%20output%20directly%20to%20Python.ipynb##Running-Patmatch-and-passing-the-results-to-Python-without-creating-an-output-file-intermediate).\n\nWe obtained the `pdbsum_prot_interactions_list_to_df.py` script in the preparation steps above. However, instead of using it as an external script as we did earlier in this notebook, we want to use the core function of that script within this notebook for the options that involve no pickled-object file intermediate. Similar to the way we imported a lot of other useful modules in the first notebook and a cell above, you can run the next cell to bring in to memory of this notebook's computational environment, the main function associated with the `pdbsum_prot_interactions_list_to_df.py` script, aptly named `pdbsum_prot_interactions_list_to_df`. (As written below the command to do that looks a bit redundant;however, the first from part of the command below actually is referencing the `pdbsum_prot_interactions_list_to_df.py` script, but it doesn't need the `.py` extension because the import only deals with such files.)",
"_____no_output_____"
]
],
[
[
"from pdbsum_prot_interactions_list_to_df import pdbsum_prot_interactions_list_to_df",
"_____no_output_____"
]
],
[
[
"We can demonstrate that worked by calling the function.",
"_____no_output_____"
]
],
[
[
"pdbsum_prot_interactions_list_to_df()",
"_____no_output_____"
]
],
[
[
"If the module was not imported, you'd see `ModuleNotFoundError: No module named 'pdbsum_prot_interactions_list_to_df'`, but instead you should see it saying it is missing `data_file` to act on because you passed it nothing.\n\nAfter importing the main function of that script into this running notebook, you are ready to demonstrate the approach that doesn't require a file intermediates. The imported `pdbsum_prot_interactions_list_to_df` function is used within the computational environment of the notebook and the dataframe produced assigned to a variable in the running the notebook. In the end, the results are in an active dataframe in the notebook without needing to read the pickled dataframe. **Although bear in mind the pickled dataframe still gets made, and it is good to download and keep that pickled dataframe since you'll find it convenient for reading and getting back into an analysis without need for rerunning earlier steps again.**",
"_____no_output_____"
]
],
[
[
"direct_df = pdbsum_prot_interactions_list_to_df(\"data.txt\")\ndirect_df.head()",
"Provided interactions data read and converted to a dataframe...\n\nA dataframe of the data has been saved as a file\nin a manner where other Python programs can access it (pickled form).\nRESULTING DATAFRAME is stored as ==> 'prot_int_pickled_df.pkl'\n\nReturning a dataframe with the information as well."
]
],
[
[
"This may be how you prefer to use the script. Either option exists.\n\n----\n\nContinue on with the next notebook in the series, [Using PDBsum data to highlight changes in protein-protein interactions](Using%20PDBsum%20data%20to%20highlight%20changes%20in%20protein-protein%20interactions.ipynb). That notebook builds on the ground work here to demonstrate how to examine similarities and differences in specific residue-level interactions between the same chains in different, related structures.\n\n----",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7ef2c0cc8bcff19ecfffc4a98022bbbf3122422 | 730 | ipynb | Jupyter Notebook | sp20/2020-03-26-meeting08/2020-03-26-meeting08.ipynb | brandons209/supplementary | 2940da71101d3c4a86002cf2291ec579b699521f | [
"MIT"
] | null | null | null | sp20/2020-03-26-meeting08/2020-03-26-meeting08.ipynb | brandons209/supplementary | 2940da71101d3c4a86002cf2291ec579b699521f | [
"MIT"
] | null | null | null | sp20/2020-03-26-meeting08/2020-03-26-meeting08.ipynb | brandons209/supplementary | 2940da71101d3c4a86002cf2291ec579b699521f | [
"MIT"
] | null | null | null | 20.857143 | 108 | 0.457534 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7ef316eda609a111d0f96deff2688681b96c156 | 21,019 | ipynb | Jupyter Notebook | titanic_classfication.ipynb | jhee-yun/test_machinelearning1 | 9787930996b0f44a155a9c8656ec25783ddf42e0 | [
"Apache-2.0"
] | null | null | null | titanic_classfication.ipynb | jhee-yun/test_machinelearning1 | 9787930996b0f44a155a9c8656ec25783ddf42e0 | [
"Apache-2.0"
] | null | null | null | titanic_classfication.ipynb | jhee-yun/test_machinelearning1 | 9787930996b0f44a155a9c8656ec25783ddf42e0 | [
"Apache-2.0"
] | null | null | null | 31.895296 | 1,323 | 0.421904 | [
[
[
"# !python -m pip install seaborn",
"_____no_output_____"
],
[
"# %load_ext autoreload\n# %autoreload 2",
"_____no_output_____"
],
[
"import seaborn as sns",
"_____no_output_____"
],
[
"df = sns.load_dataset('titanic')\ndf.shape",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 891 entries, 0 to 890\nData columns (total 15 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 survived 891 non-null int64 \n 1 pclass 891 non-null int64 \n 2 sex 891 non-null object \n 3 age 714 non-null float64 \n 4 sibsp 891 non-null int64 \n 5 parch 891 non-null int64 \n 6 fare 891 non-null float64 \n 7 embarked 889 non-null object \n 8 class 891 non-null category\n 9 who 891 non-null object \n 10 adult_male 891 non-null bool \n 11 deck 203 non-null category\n 12 embark_town 889 non-null object \n 13 alive 891 non-null object \n 14 alone 891 non-null bool \ndtypes: bool(2), category(2), float64(2), int64(4), object(5)\nmemory usage: 80.6+ KB\n"
]
],
[
[
"survived, pclass, sibsp, parch, fare",
"_____no_output_____"
]
],
[
[
"X = df[['pclass', 'sibsp', 'parch', 'fare']]\nY = df[['survived']]\nX.shape, Y.shape",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"x_train, x_test, y_train, y_test = train_test_split(X, Y)\nx_train.shape, x_test.shape, y_train.shape, y_test.shape",
"_____no_output_____"
],
[
"from sklearn.linear_model import LogisticRegression",
"_____no_output_____"
],
[
"logR = LogisticRegression()\ntype(logR)",
"_____no_output_____"
],
[
"logR.fit(x_train, y_train)",
"/usr/local/lib/python3.7/dist-packages/sklearn/utils/validation.py:760: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n y = column_or_1d(y, warn=True)\n"
],
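[
"The warning recorded above suggests passing the target as a 1-D array. A hedged alternative with the same variables (it refits an identical model, just without the warning):\n\n```python\n# ravel() flattens the single-column label DataFrame into a 1-D array.\nlogR.fit(x_train, y_train.values.ravel())\n```",
"_____no_output_____"
],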
[
"logR.classes_",
"_____no_output_____"
],
[
"logR.coef_\n# 'pclass', 'sibsp', 'parch', 'fare'",
"_____no_output_____"
],
[
"logR.score(x_train, y_train)",
"_____no_output_____"
],
[
"logR.predict(x_train)",
"_____no_output_____"
],
[
"logR.predict_proba(x_train)",
"_____no_output_____"
],
[
"logR.predict_proba(x_train[10:13])",
"_____no_output_____"
],
[
"0.41873577+0.58126423",
"_____no_output_____"
],
[
"logR.predict(x_train[10:13])",
"_____no_output_____"
],
[
"print('Hello')",
"Hello\n"
],
[
"from sklearn import metrics",
"_____no_output_____"
],
[
"metrics.confusion_matrix(x_train, y_train)",
"_____no_output_____"
],
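[
"A short follow-up sketch with other common classification metrics, using the fitted model and the held-out test split from earlier in this notebook:\n\n```python\nfrom sklearn.metrics import accuracy_score, classification_report\n\n# Evaluate on the test split rather than the training data.\ny_pred = logR.predict(x_test)\nprint(accuracy_score(y_test.values.ravel(), y_pred))\nprint(classification_report(y_test.values.ravel(), y_pred))\n```",
"_____no_output_____"
],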
[
"",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7ef4b6efc793e4f7cefbf352056e7fffce4040e | 249,512 | ipynb | Jupyter Notebook | docs/tutorials/Read_seqfish.ipynb | duypham2108/dev_st | 47adcfa5803eba7549b1185ec69d2317b386d9ff | [
"BSD-3-Clause"
] | null | null | null | docs/tutorials/Read_seqfish.ipynb | duypham2108/dev_st | 47adcfa5803eba7549b1185ec69d2317b386d9ff | [
"BSD-3-Clause"
] | null | null | null | docs/tutorials/Read_seqfish.ipynb | duypham2108/dev_st | 47adcfa5803eba7549b1185ec69d2317b386d9ff | [
"BSD-3-Clause"
] | 1 | 2019-12-12T12:46:55.000Z | 2019-12-12T12:46:55.000Z | 967.100775 | 97,408 | 0.956988 | [
[
[
"# Working with SeqFish data",
"_____no_output_____"
]
],
[
[
"import stlearn as st",
"_____no_output_____"
]
],
[
[
"The data is downloaded from https://www.spatialomics.org/SpatialDB/download.php\n\n| Technique | PMID | Title | Expression | SV genes|\n| ----------- | ----------- | ----------- | ----------- | ----------- |\n|seqFISH|30911168|Transcriptome-scale super-resolved imaging in tissues by RNA seqFISH+\tseqfish_30911168.tar.gz|seqfish_30911168_SVG.tar.gz\n\nRead SeqFish data and we select field 5.",
"_____no_output_____"
]
],
[
[
"data = st.ReadSeqFish(count_matrix_file=\"../Downloads/seqfish_30911168/cortex_svz_counts.matrix\",\n spatial_file=\"../Downloads/seqfish_30911168/cortex_svz_cellcentroids.csv\",\n field=5)",
"D:\\Anaconda3\\envs\\test2\\lib\\site-packages\\anndata-0.7.3-py3.8.egg\\anndata\\_core\\anndata.py:119: ImplicitModificationWarning: Transforming to str index.\n warnings.warn(\"Transforming to str index.\", ImplicitModificationWarning)\n"
]
],
[
[
"Quality checking for the data",
"_____no_output_____"
]
],
[
[
"st.pl.QC_plot(data)",
"_____no_output_____"
]
],
[
[
"Plot gene Nr4a1",
"_____no_output_____"
]
],
[
[
"st.pl.gene_plot(data,genes=\"Nr4a1\")",
"_____no_output_____"
]
],
[
[
"Running Preprocessing for MERFISH data",
"_____no_output_____"
]
],
[
[
"st.pp.filter_genes(data,min_cells=3)\nst.pp.normalize_total(data)\nst.pp.log1p(data)\nst.pp.scale(data)",
"Normalization step is finished in adata.X\nLog transformation step is finished in adata.X\nScale step is finished in adata.X\n"
]
],
[
[
"Running PCA to reduce the dimensions to 50",
"_____no_output_____"
]
],
[
[
"st.em.run_pca(data,n_comps=50,random_state=0)",
"PCA is done! Generated in adata.obsm['X_pca'], adata.uns['pca'] and adata.varm['PCs']\n"
]
],
[
[
"Perform Louvain clustering",
"_____no_output_____"
]
],
[
[
"st.pp.neighbors(data,n_neighbors=25)",
"D:\\Anaconda3\\envs\\test2\\lib\\site-packages\\umap_learn-0.4.3-py3.8.egg\\umap\\spectral.py:4: NumbaDeprecationWarning: No direct replacement for 'numba.targets' available. Visit https://gitter.im/numba/numba-dev to request help. Thanks!\n import numba.targets\n"
],
[
"st.tl.clustering.louvain(data)",
"Applying Louvain clustering ...\nLouvain clustering is done! The labels are stored in adata.obs['louvain']\n"
],
[
"st.pl.cluster_plot(data,use_label=\"louvain\",spot_size=10)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7ef6ae8df756935dc452726afc0bc7ecf00b7a4 | 379,554 | ipynb | Jupyter Notebook | section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb | chauthinh/machine-learning-deployment | ac0dd21ebfc374bebe4ea1ac84a481cfa7c056a0 | [
"BSD-3-Clause"
] | null | null | null | section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb | chauthinh/machine-learning-deployment | ac0dd21ebfc374bebe4ea1ac84a481cfa7c056a0 | [
"BSD-3-Clause"
] | null | null | null | section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb | chauthinh/machine-learning-deployment | ac0dd21ebfc374bebe4ea1ac84a481cfa7c056a0 | [
"BSD-3-Clause"
] | null | null | null | 120.838586 | 11,760 | 0.82481 | [
[
[
"# Machine Learning Pipeline - Feature Engineering\n\nIn the following notebooks, we will go through the implementation of each one of the steps in the Machine Learning Pipeline. \n\nWe will discuss:\n\n1. Data Analysis\n2. **Feature Engineering**\n3. Feature Selection\n4. Model Training\n5. Obtaining Predictions / Scoring\n\n\nWe will use the house price dataset available on [Kaggle.com](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). See below for more details.\n\n===================================================================================================\n\n## Predicting Sale Price of Houses\n\nThe aim of the project is to build a machine learning model to predict the sale price of homes based on different explanatory variables describing aspects of residential houses.\n\n\n### Why is this important? \n\nPredicting house prices is useful to identify fruitful investments, or to determine whether the price advertised for a house is over or under-estimated.\n\n\n### What is the objective of the machine learning model?\n\nWe aim to minimise the difference between the real price and the price estimated by our model. We will evaluate model performance with the:\n\n1. mean squared error (mse)\n2. root squared of the mean squared error (rmse)\n3. r-squared (r2).\n\n\n### How do I download the dataset?\n\n- Visit the [Kaggle Website](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data).\n\n- Remember to **log in**\n\n- Scroll down to the bottom of the page, and click on the link **'train.csv'**, and then click the 'download' blue button towards the right of the screen, to download the dataset.\n\n- The download the file called **'test.csv'** and save it in the directory with the notebooks.\n\n\n**Note the following:**\n\n- You need to be logged in to Kaggle in order to download the datasets.\n- You need to accept the terms and conditions of the competition to download the dataset\n- If you save the file to the directory with the jupyter notebook, then you can run the code as it is written here.",
"_____no_output_____"
],
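[
"For reference, a small sketch of how those three metrics can be computed with scikit-learn later in the pipeline; the `y_true` and `y_pred` arrays here are made-up placeholders, not values from this dataset:\n\n```python\nfrom sklearn.metrics import mean_squared_error, r2_score\nimport numpy as np\n\ny_true = np.array([11.5, 12.0, 12.3])   # placeholder (log) sale prices\ny_pred = np.array([11.6, 11.9, 12.5])   # placeholder model predictions\n\nmse = mean_squared_error(y_true, y_pred)\nrmse = np.sqrt(mse)\nr2 = r2_score(y_true, y_pred)\nprint(mse, rmse, r2)\n```",
"_____no_output_____"
],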
[
"# Reproducibility: Setting the seed\n\nWith the aim to ensure reproducibility between runs of the same notebook, but also between the research and production environment, for each step that includes some element of randomness, it is extremely important that we **set the seed**.",
"_____no_output_____"
]
],
[
[
"# to handle datasets\nimport pandas as pd\nimport numpy as np\n\n# for plotting\nimport matplotlib.pyplot as plt\n\n# for the yeo-johnson transformation\nimport scipy.stats as stats\n\n# to divide train and test set\nfrom sklearn.model_selection import train_test_split\n\n# feature scaling\nfrom sklearn.preprocessing import MinMaxScaler\n\n# to save the trained scaler class\nimport joblib\n\n# to visualise al the columns in the dataframe\npd.pandas.set_option('display.max_columns', None)",
"_____no_output_____"
],
[
"# load dataset\ndata = pd.read_csv('train.csv')\n\n# rows and columns of the data\nprint(data.shape)\n\n# visualise the dataset\ndata.head()",
"(1460, 81)\n"
]
],
[
[
"# Separate dataset into train and test\n\nIt is important to separate our data intro training and testing set. \n\nWhen we engineer features, some techniques learn parameters from data. It is important to learn these parameters only from the train set. This is to avoid over-fitting.\n\nOur feature engineering techniques will learn:\n\n- mean\n- mode\n- exponents for the yeo-johnson\n- category frequency\n- and category to number mappings\n\nfrom the train set.\n\n**Separating the data into train and test involves randomness, therefore, we need to set the seed.**",
"_____no_output_____"
]
],
[
[
"# Let's separate into train and test set\n# Remember to set the seed (random_state for this sklearn function)\n\nX_train, X_test, y_train, y_test = train_test_split(\n data.drop(['Id', 'SalePrice'], axis=1), # predictive variables\n data['SalePrice'], # target\n test_size=0.1, # portion of dataset to allocate to test set\n random_state=0, # we are setting the seed here\n)\n\nX_train.shape, X_test.shape",
"_____no_output_____"
]
],
[
[
"# Feature Engineering\n\nIn the following cells, we will engineer the variables of the House Price Dataset so that we tackle:\n\n1. Missing values\n2. Temporal variables\n3. Non-Gaussian distributed variables\n4. Categorical variables: remove rare labels\n5. Categorical variables: convert strings to numbers\n5. Put the variables in a similar scale",
"_____no_output_____"
],
[
"## Target\n\nWe apply the logarithm",
"_____no_output_____"
]
],
[
[
"y_train = np.log(y_train)\ny_test = np.log(y_test)",
"_____no_output_____"
]
],
[
[
"## Missing values\n\n### Categorical variables\n\nWe will replace missing values with the string \"missing\" in those variables with a lot of missing data. \n\nAlternatively, we will replace missing data with the most frequent category in those variables that contain fewer observations without values. \n\nThis is common practice.",
"_____no_output_____"
]
],
[
[
"# let's identify the categorical variables\n# we will capture those of type object\n\ncat_vars = [var for var in data.columns if data[var].dtype == 'O']\n\n# MSSubClass is also categorical by definition, despite its numeric values\n# (you can find the definitions of the variables in the data_description.txt\n# file available on Kaggle, in the same website where you downloaded the data)\n\n# lets add MSSubClass to the list of categorical variables\ncat_vars = cat_vars + ['MSSubClass']\n\n# cast all variables as categorical\nX_train[cat_vars] = X_train[cat_vars].astype('O')\nX_test[cat_vars] = X_test[cat_vars].astype('O')\n\n# number of categorical variables\nlen(cat_vars)",
"_____no_output_____"
],
[
"# make a list of the categorical variables that contain missing values\n\ncat_vars_with_na = [\n var for var in cat_vars\n if X_train[var].isnull().sum() > 0\n]\n\n# print percentage of missing values per variable\nX_train[cat_vars_with_na ].isnull().mean().sort_values(ascending=False)",
"_____no_output_____"
],
[
"# variables to impute with the string missing\nwith_string_missing = [\n var for var in cat_vars_with_na if X_train[var].isnull().mean() > 0.1]\n\n# variables to impute with the most frequent category\nwith_frequent_category = [\n var for var in cat_vars_with_na if X_train[var].isnull().mean() < 0.1]",
"_____no_output_____"
],
[
"with_string_missing",
"_____no_output_____"
],
[
"# replace missing values with new label: \"Missing\"\n\nX_train[with_string_missing] = X_train[with_string_missing].fillna('Missing')\nX_test[with_string_missing] = X_test[with_string_missing].fillna('Missing')",
"_____no_output_____"
],
[
"for var in with_frequent_category:\n \n # there can be more than 1 mode in a variable\n # we take the first one with [0] \n mode = X_train[var].mode()[0]\n \n print(var, mode)\n \n X_train[var].fillna(mode, inplace=True)\n X_test[var].fillna(mode, inplace=True)",
"MasVnrType None\nBsmtQual TA\nBsmtCond TA\nBsmtExposure No\nBsmtFinType1 Unf\nBsmtFinType2 Unf\nElectrical SBrkr\nGarageType Attchd\nGarageFinish Unf\nGarageQual TA\nGarageCond TA\n"
],
[
"# check that we have no missing information in the engineered variables\n\nX_train[cat_vars_with_na].isnull().sum()",
"_____no_output_____"
],
[
"# check that test set does not contain null values in the engineered variables\n\n[var for var in cat_vars_with_na if X_test[var].isnull().sum() > 0]",
"_____no_output_____"
]
],
[
[
"### Numerical variables\n\nTo engineer missing values in numerical variables, we will:\n\n- add a binary missing indicator variable\n- and then replace the missing values in the original variable with the mean",
"_____no_output_____"
]
],
[
[
"# now let's identify the numerical variables\n\nnum_vars = [\n var for var in X_train.columns if var not in cat_vars and var != 'SalePrice'\n]\n\n# number of numerical variables\nlen(num_vars)",
"_____no_output_____"
],
[
"# make a list with the numerical variables that contain missing values\nvars_with_na = [\n var for var in num_vars\n if X_train[var].isnull().sum() > 0\n]\n\n# print percentage of missing values per variable\nX_train[vars_with_na].isnull().mean()",
"_____no_output_____"
],
[
"# replace missing values as we described above\n\nfor var in vars_with_na:\n\n # calculate the mean using the train set\n mean_val = X_train[var].mean()\n \n print(var, mean_val)\n\n # add binary missing indicator (in train and test)\n X_train[var + '_na'] = np.where(X_train[var].isnull(), 1, 0)\n X_test[var + '_na'] = np.where(X_test[var].isnull(), 1, 0)\n\n # replace missing values by the mean\n # (in train and test)\n X_train[var].fillna(mean_val, inplace=True)\n X_test[var].fillna(mean_val, inplace=True)\n\n# check that we have no more missing values in the engineered variables\nX_train[vars_with_na].isnull().sum()",
"LotFrontage 69.87974098057354\nMasVnrArea 103.7974006116208\nGarageYrBlt 1978.2959677419356\n"
],
[
"# check that test set does not contain null values in the engineered variables\n\n[var for var in vars_with_na if X_test[var].isnull().sum() > 0]",
"_____no_output_____"
],
[
"# check the binary missing indicator variables\n\nX_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()",
"_____no_output_____"
]
],
[
[
"## Temporal variables\n\n### Capture elapsed time\n\nWe learned in the previous notebook, that there are 4 variables that refer to the years in which the house or the garage were built or remodeled. \n\nWe will capture the time elapsed between those variables and the year in which the house was sold:",
"_____no_output_____"
]
],
[
[
"def elapsed_years(df, var):\n # capture difference between the year variable\n # and the year in which the house was sold\n df[var] = df['YrSold'] - df[var]\n return df",
"_____no_output_____"
],
[
"for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:\n X_train = elapsed_years(X_train, var)\n X_test = elapsed_years(X_test, var)",
"_____no_output_____"
],
[
"# now we drop YrSold\nX_train.drop(['YrSold'], axis=1, inplace=True)\nX_test.drop(['YrSold'], axis=1, inplace=True)",
"_____no_output_____"
]
],
[
[
"## Numerical variable transformation\n\n### Logarithmic transformation\n\nIn the previous notebook, we observed that the numerical variables are not normally distributed.\n\nWe will transform with the logarightm the positive numerical variables in order to get a more Gaussian-like distribution.",
"_____no_output_____"
]
],
[
[
"for var in [\"LotFrontage\", \"1stFlrSF\", \"GrLivArea\"]:\n X_train[var] = np.log(X_train[var])\n X_test[var] = np.log(X_test[var])",
"_____no_output_____"
],
[
"# check that test set does not contain null values in the engineered variables\n[var for var in [\"LotFrontage\", \"1stFlrSF\", \"GrLivArea\"] if X_test[var].isnull().sum() > 0]",
"_____no_output_____"
],
[
"# same for train set\n[var for var in [\"LotFrontage\", \"1stFlrSF\", \"GrLivArea\"] if X_train[var].isnull().sum() > 0]",
"_____no_output_____"
]
],
[
[
"### Yeo-Johnson transformation\n\nWe will apply the Yeo-Johnson transformation to LotArea.",
"_____no_output_____"
]
],
[
[
"# the yeo-johnson transformation learns the best exponent to transform the variable\n# it needs to learn it from the train set: \nX_train['LotArea'], param = stats.yeojohnson(X_train['LotArea'])\n\n# and then apply the transformation to the test set with the same\n# parameter: see who this time we pass param as argument to the \n# yeo-johnson\nX_test['LotArea'] = stats.yeojohnson(X_test['LotArea'], lmbda=param)\n\nprint(param)",
"-12.55283001172003\n"
],
[
"# check absence of na in the train set\n[var for var in X_train.columns if X_train[var].isnull().sum() > 0]",
"_____no_output_____"
],
[
"# check absence of na in the test set\n[var for var in X_train.columns if X_test[var].isnull().sum() > 0]",
"_____no_output_____"
]
],
[
[
"### Binarize skewed variables\n\nThere were a few variables very skewed, we would transform those into binary variables.",
"_____no_output_____"
]
],
[
[
"skewed = [\n 'BsmtFinSF2', 'LowQualFinSF', 'EnclosedPorch',\n '3SsnPorch', 'ScreenPorch', 'MiscVal'\n]\n\nfor var in skewed:\n \n # map the variable values into 0 and 1\n X_train[var] = np.where(X_train[var]==0, 0, 1)\n X_test[var] = np.where(X_test[var]==0, 0, 1)",
"_____no_output_____"
]
],
[
[
"## Categorical variables\n\n### Apply mappings\n\nThese are variables which values have an assigned order, related to quality. For more information, check Kaggle website.",
"_____no_output_____"
]
],
[
[
"# re-map strings to numbers, which determine quality\n\nqual_mappings = {'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5, 'Missing': 0, 'NA': 0}\n\nqual_vars = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond',\n 'HeatingQC', 'KitchenQual', 'FireplaceQu',\n 'GarageQual', 'GarageCond',\n ]\n\nfor var in qual_vars:\n X_train[var] = X_train[var].map(qual_mappings)\n X_test[var] = X_test[var].map(qual_mappings)",
"_____no_output_____"
],
[
"exposure_mappings = {'No': 1, 'Mn': 2, 'Av': 3, 'Gd': 4}\n\nvar = 'BsmtExposure'\n\nX_train[var] = X_train[var].map(exposure_mappings)\nX_test[var] = X_test[var].map(exposure_mappings)",
"_____no_output_____"
],
[
"finish_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ': 5, 'GLQ': 6}\n\nfinish_vars = ['BsmtFinType1', 'BsmtFinType2']\n\nfor var in finish_vars:\n X_train[var] = X_train[var].map(finish_mappings)\n X_test[var] = X_test[var].map(finish_mappings)",
"_____no_output_____"
],
[
"garage_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'RFn': 2, 'Fin': 3}\n\nvar = 'GarageFinish'\n\nX_train[var] = X_train[var].map(garage_mappings)\nX_test[var] = X_test[var].map(garage_mappings)",
"_____no_output_____"
],
[
"fence_mappings = {'Missing': 0, 'NA': 0, 'MnWw': 1, 'GdWo': 2, 'MnPrv': 3, 'GdPrv': 4}\n\nvar = 'Fence'\n\nX_train[var] = X_train[var].map(fence_mappings)\nX_test[var] = X_test[var].map(fence_mappings)",
"_____no_output_____"
],
[
"# check absence of na in the train set\n[var for var in X_train.columns if X_train[var].isnull().sum() > 0]",
"_____no_output_____"
]
],
[
[
"### Removing Rare Labels\n\nFor the remaining categorical variables, we will group those categories that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of houses, well be replaced by the string \"Rare\".\n\nTo learn more about how to handle categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.",
"_____no_output_____"
]
],
[
[
"# capture all quality variables\n\nqual_vars = qual_vars + finish_vars + ['BsmtExposure','GarageFinish','Fence']\n\n# capture the remaining categorical variables\n# (those that we did not re-map)\n\ncat_others = [\n var for var in cat_vars if var not in qual_vars\n]\n\nlen(cat_others)",
"_____no_output_____"
],
[
"def find_frequent_labels(df, var, rare_perc):\n \n # function finds the labels that are shared by more than\n # a certain % of the houses in the dataset\n\n df = df.copy()\n\n tmp = df.groupby(var)[var].count() / len(df)\n\n return tmp[tmp > rare_perc].index\n\n\nfor var in cat_others:\n \n # find the frequent categories\n frequent_ls = find_frequent_labels(X_train, var, 0.01)\n \n print(var, frequent_ls)\n print()\n \n # replace rare categories by the string \"Rare\"\n X_train[var] = np.where(X_train[var].isin(\n frequent_ls), X_train[var], 'Rare')\n \n X_test[var] = np.where(X_test[var].isin(\n frequent_ls), X_test[var], 'Rare')",
"MSZoning Index(['FV', 'RH', 'RL', 'RM'], dtype='object', name='MSZoning')\n\nStreet Index(['Pave'], dtype='object', name='Street')\n\nAlley Index(['Grvl', 'Missing', 'Pave'], dtype='object', name='Alley')\n\nLotShape Index(['IR1', 'IR2', 'Reg'], dtype='object', name='LotShape')\n\nLandContour Index(['Bnk', 'HLS', 'Low', 'Lvl'], dtype='object', name='LandContour')\n\nUtilities Index(['AllPub'], dtype='object', name='Utilities')\n\nLotConfig Index(['Corner', 'CulDSac', 'FR2', 'Inside'], dtype='object', name='LotConfig')\n\nLandSlope Index(['Gtl', 'Mod'], dtype='object', name='LandSlope')\n\nNeighborhood Index(['Blmngtn', 'BrDale', 'BrkSide', 'ClearCr', 'CollgCr', 'Crawfor',\n 'Edwards', 'Gilbert', 'IDOTRR', 'MeadowV', 'Mitchel', 'NAmes', 'NWAmes',\n 'NoRidge', 'NridgHt', 'OldTown', 'SWISU', 'Sawyer', 'SawyerW',\n 'Somerst', 'StoneBr', 'Timber'],\n dtype='object', name='Neighborhood')\n\nCondition1 Index(['Artery', 'Feedr', 'Norm', 'PosN', 'RRAn'], dtype='object', name='Condition1')\n\nCondition2 Index(['Norm'], dtype='object', name='Condition2')\n\nBldgType Index(['1Fam', '2fmCon', 'Duplex', 'Twnhs', 'TwnhsE'], dtype='object', name='BldgType')\n\nHouseStyle Index(['1.5Fin', '1Story', '2Story', 'SFoyer', 'SLvl'], dtype='object', name='HouseStyle')\n\nRoofStyle Index(['Gable', 'Hip'], dtype='object', name='RoofStyle')\n\nRoofMatl Index(['CompShg'], dtype='object', name='RoofMatl')\n\nExterior1st Index(['AsbShng', 'BrkFace', 'CemntBd', 'HdBoard', 'MetalSd', 'Plywood',\n 'Stucco', 'VinylSd', 'Wd Sdng', 'WdShing'],\n dtype='object', name='Exterior1st')\n\nExterior2nd Index(['AsbShng', 'BrkFace', 'CmentBd', 'HdBoard', 'MetalSd', 'Plywood',\n 'Stucco', 'VinylSd', 'Wd Sdng', 'Wd Shng'],\n dtype='object', name='Exterior2nd')\n\nMasVnrType Index(['BrkFace', 'None', 'Stone'], dtype='object', name='MasVnrType')\n\nFoundation Index(['BrkTil', 'CBlock', 'PConc', 'Slab'], dtype='object', name='Foundation')\n\nHeating Index(['GasA', 'GasW'], dtype='object', name='Heating')\n\nCentralAir Index(['N', 'Y'], dtype='object', name='CentralAir')\n\nElectrical Index(['FuseA', 'FuseF', 'SBrkr'], dtype='object', name='Electrical')\n\nFunctional Index(['Min1', 'Min2', 'Mod', 'Typ'], dtype='object', name='Functional')\n\nGarageType Index(['Attchd', 'Basment', 'BuiltIn', 'Detchd'], dtype='object', name='GarageType')\n\nPavedDrive Index(['N', 'P', 'Y'], dtype='object', name='PavedDrive')\n\nPoolQC Index(['Missing'], dtype='object', name='PoolQC')\n\nMiscFeature Index(['Missing', 'Shed'], dtype='object', name='MiscFeature')\n\nSaleType Index(['COD', 'New', 'WD'], dtype='object', name='SaleType')\n\nSaleCondition Index(['Abnorml', 'Family', 'Normal', 'Partial'], dtype='object', name='SaleCondition')\n\nMSSubClass Int64Index([20, 30, 50, 60, 70, 75, 80, 85, 90, 120, 160, 190], dtype='int64', name='MSSubClass')\n\n"
]
],
[
[
"### Encoding of categorical variables\n\nNext, we need to transform the strings of the categorical variables into numbers. \n\nWe will do it so that we capture the monotonic relationship between the label and the target.\n\nTo learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.",
"_____no_output_____"
]
],
[
[
"# this function will assign discrete values to the strings of the variables,\n# so that the smaller value corresponds to the category that shows the smaller\n# mean house sale price\n\ndef replace_categories(train, test, y_train, var, target):\n \n tmp = pd.concat([X_train, y_train], axis=1)\n \n # order the categories in a variable from that with the lowest\n # house sale price, to that with the highest\n ordered_labels = tmp.groupby([var])[target].mean().sort_values().index\n\n # create a dictionary of ordered categories to integer values\n ordinal_label = {k: i for i, k in enumerate(ordered_labels, 0)}\n \n print(var, ordinal_label)\n print()\n\n # use the dictionary to replace the categorical strings by integers\n train[var] = train[var].map(ordinal_label)\n test[var] = test[var].map(ordinal_label)",
"_____no_output_____"
],
[
"for var in cat_others:\n replace_categories(X_train, X_test, y_train, var, 'SalePrice')",
"MSZoning {'Rare': 0, 'RM': 1, 'RH': 2, 'RL': 3, 'FV': 4}\n\nStreet {'Rare': 0, 'Pave': 1}\n\nAlley {'Grvl': 0, 'Pave': 1, 'Missing': 2}\n\nLotShape {'Reg': 0, 'IR1': 1, 'Rare': 2, 'IR2': 3}\n\nLandContour {'Bnk': 0, 'Lvl': 1, 'Low': 2, 'HLS': 3}\n\nUtilities {'Rare': 0, 'AllPub': 1}\n\nLotConfig {'Inside': 0, 'FR2': 1, 'Corner': 2, 'Rare': 3, 'CulDSac': 4}\n\nLandSlope {'Gtl': 0, 'Mod': 1, 'Rare': 2}\n\nNeighborhood {'IDOTRR': 0, 'MeadowV': 1, 'BrDale': 2, 'Edwards': 3, 'BrkSide': 4, 'OldTown': 5, 'Sawyer': 6, 'SWISU': 7, 'NAmes': 8, 'Mitchel': 9, 'SawyerW': 10, 'Rare': 11, 'NWAmes': 12, 'Gilbert': 13, 'Blmngtn': 14, 'CollgCr': 15, 'Crawfor': 16, 'ClearCr': 17, 'Somerst': 18, 'Timber': 19, 'StoneBr': 20, 'NridgHt': 21, 'NoRidge': 22}\n\nCondition1 {'Artery': 0, 'Feedr': 1, 'Norm': 2, 'RRAn': 3, 'Rare': 4, 'PosN': 5}\n\nCondition2 {'Rare': 0, 'Norm': 1}\n\nBldgType {'2fmCon': 0, 'Duplex': 1, 'Twnhs': 2, '1Fam': 3, 'TwnhsE': 4}\n\nHouseStyle {'SFoyer': 0, '1.5Fin': 1, 'Rare': 2, '1Story': 3, 'SLvl': 4, '2Story': 5}\n\nRoofStyle {'Gable': 0, 'Rare': 1, 'Hip': 2}\n\nRoofMatl {'CompShg': 0, 'Rare': 1}\n\nExterior1st {'AsbShng': 0, 'Wd Sdng': 1, 'WdShing': 2, 'MetalSd': 3, 'Stucco': 4, 'Rare': 5, 'HdBoard': 6, 'Plywood': 7, 'BrkFace': 8, 'CemntBd': 9, 'VinylSd': 10}\n\nExterior2nd {'AsbShng': 0, 'Wd Sdng': 1, 'MetalSd': 2, 'Wd Shng': 3, 'Stucco': 4, 'Rare': 5, 'HdBoard': 6, 'Plywood': 7, 'BrkFace': 8, 'CmentBd': 9, 'VinylSd': 10}\n\nMasVnrType {'Rare': 0, 'None': 1, 'BrkFace': 2, 'Stone': 3}\n\nFoundation {'Slab': 0, 'BrkTil': 1, 'CBlock': 2, 'Rare': 3, 'PConc': 4}\n\nHeating {'Rare': 0, 'GasW': 1, 'GasA': 2}\n\nCentralAir {'N': 0, 'Y': 1}\n\nElectrical {'Rare': 0, 'FuseF': 1, 'FuseA': 2, 'SBrkr': 3}\n\nFunctional {'Rare': 0, 'Min2': 1, 'Mod': 2, 'Min1': 3, 'Typ': 4}\n\nGarageType {'Rare': 0, 'Detchd': 1, 'Basment': 2, 'Attchd': 3, 'BuiltIn': 4}\n\nPavedDrive {'N': 0, 'P': 1, 'Y': 2}\n\nPoolQC {'Missing': 0, 'Rare': 1}\n\nMiscFeature {'Rare': 0, 'Shed': 1, 'Missing': 2}\n\nSaleType {'COD': 0, 'Rare': 1, 'WD': 2, 'New': 3}\n\nSaleCondition {'Rare': 0, 'Abnorml': 1, 'Family': 2, 'Normal': 3, 'Partial': 4}\n\nMSSubClass {30: 0, 'Rare': 1, 190: 2, 90: 3, 160: 4, 50: 5, 85: 6, 70: 7, 80: 8, 20: 9, 75: 10, 120: 11, 60: 12}\n\n"
],
[
"# check absence of na in the train set\n[var for var in X_train.columns if X_train[var].isnull().sum() > 0]",
"_____no_output_____"
],
[
"# check absence of na in the test set\n[var for var in X_test.columns if X_test[var].isnull().sum() > 0]",
"_____no_output_____"
],
[
"# let me show you what I mean by monotonic relationship\n# between labels and target\n\ndef analyse_vars(train, y_train, var):\n \n # function plots median house sale price per encoded\n # category\n \n tmp = pd.concat([X_train, np.log(y_train)], axis=1)\n \n tmp.groupby(var)['SalePrice'].median().plot.bar()\n plt.title(var)\n plt.ylim(2.2, 2.6)\n plt.ylabel('SalePrice')\n plt.show()\n \nfor var in cat_others:\n analyse_vars(X_train, y_train, var)",
"_____no_output_____"
]
],
[
[
"The monotonic relationship is particularly clear for the variables MSZoning and Neighborhood. Note how, the higher the integer that now represents the category, the higher the mean house sale price.\n\n(remember that the target is log-transformed, that is why the differences seem so small).",
"_____no_output_____"
],
[
"## Feature Scaling\n\nFor use in linear models, features need to be either scaled. We will scale features to the minimum and maximum values:",
"_____no_output_____"
]
],
[
[
"# create scaler\nscaler = MinMaxScaler()\n\n# fit the scaler to the train set\nscaler.fit(X_train) \n\n# transform the train and test set\n\n# sklearn returns numpy arrays, so we wrap the\n# array with a pandas dataframe\n\nX_train = pd.DataFrame(\n scaler.transform(X_train),\n columns=X_train.columns\n)\n\nX_test = pd.DataFrame(\n scaler.transform(X_test),\n columns=X_train.columns\n)",
"_____no_output_____"
],
[
"X_train.head()",
"_____no_output_____"
],
[
"# let's now save the train and test sets for the next notebook!\n\nX_train.to_csv('xtrain.csv', index=False)\nX_test.to_csv('xtest.csv', index=False)\n\ny_train.to_csv('ytrain.csv', index=False)\ny_test.to_csv('ytest.csv', index=False)",
"_____no_output_____"
],
[
"# now let's save the scaler\n\njoblib.dump(scaler, 'minmax_scaler.joblib') ",
"_____no_output_____"
]
],
[
[
"That concludes the feature engineering section.\n\n# Additional Resources\n\n- [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) - Online Course\n- [Packt Feature Engineering Cookbook](https://www.packtpub.com/data/python-feature-engineering-cookbook) - Book\n- [Feature Engineering for Machine Learning: A comprehensive Overview](https://trainindata.medium.com/feature-engineering-for-machine-learning-a-comprehensive-overview-a7ad04c896f8) - Article\n- [Practical Code Implementations of Feature Engineering for Machine Learning with Python](https://towardsdatascience.com/practical-code-implementations-of-feature-engineering-for-machine-learning-with-python-f13b953d4bcd) - Article",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7ef6b29046c1a956207e2f6b9dab5c36c509d13 | 52,325 | ipynb | Jupyter Notebook | 02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb | ViniciusRFerraz/pandas_exercises | 14197b19f966a06f64b6811c42247e62a7160d58 | [
"BSD-3-Clause"
] | null | null | null | 02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb | ViniciusRFerraz/pandas_exercises | 14197b19f966a06f64b6811c42247e62a7160d58 | [
"BSD-3-Clause"
] | null | null | null | 02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb | ViniciusRFerraz/pandas_exercises | 14197b19f966a06f64b6811c42247e62a7160d58 | [
"BSD-3-Clause"
] | null | null | null | 29.396067 | 253 | 0.307845 | [
[
[
"# Fictional Army - Filtering and Sorting",
"_____no_output_____"
],
[
"### Introduction:\n\nThis exercise was inspired by this [page](http://chrisalbon.com/python/)\n\nSpecial thanks to: https://github.com/chrisalbon for sharing the dataset and materials.\n\n### Step 1. Import the necessary libraries",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"### Step 2. This is the data given as a dictionary",
"_____no_output_____"
]
],
[
[
"# Create an example dataframe about a fictional army\nraw_data = {'regiment': ['Nighthawks', 'Nighthawks', 'Nighthawks', 'Nighthawks', 'Dragoons', 'Dragoons', 'Dragoons', 'Dragoons', 'Scouts', 'Scouts', 'Scouts', 'Scouts'],\n 'company': ['1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd','1st', '1st', '2nd', '2nd'],\n 'deaths': [523, 52, 25, 616, 43, 234, 523, 62, 62, 73, 37, 35],\n 'battles': [5, 42, 2, 2, 4, 7, 8, 3, 4, 7, 8, 9],\n 'size': [1045, 957, 1099, 1400, 1592, 1006, 987, 849, 973, 1005, 1099, 1523],\n 'veterans': [1, 5, 62, 26, 73, 37, 949, 48, 48, 435, 63, 345],\n 'readiness': [1, 2, 3, 3, 2, 1, 2, 3, 2, 1, 2, 3],\n 'armored': [1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1],\n 'deserters': [4, 24, 31, 2, 3, 4, 24, 31, 2, 3, 2, 3],\n 'origin': ['Arizona', 'California', 'Texas', 'Florida', 'Maine', 'Iowa', 'Alaska', 'Washington', 'Oregon', 'Wyoming', 'Louisana', 'Georgia']}",
"_____no_output_____"
]
],
[
[
"### Step 3. Create a dataframe and assign it to a variable called army. \n\n#### Don't forget to include the columns names in the order presented in the dictionary ('regiment', 'company', 'deaths'...) so that the column index order is consistent with the solutions. If omitted, pandas will order the columns alphabetically.",
"_____no_output_____"
]
],
[
[
"army = pd.DataFrame(raw_data, columns = ['regiment', 'company', 'deaths', 'battles', 'size', 'veterans', 'readiness', 'armored', 'deserters', 'origin'])",
"_____no_output_____"
]
],
[
[
"### Step 4. Set the 'origin' colum as the index of the dataframe",
"_____no_output_____"
]
],
[
[
"army = army.set_index('origin')\narmy",
"_____no_output_____"
]
],
[
[
"### Step 5. Print only the column veterans",
"_____no_output_____"
]
],
[
[
"army['veterans']",
"_____no_output_____"
]
],
[
[
"### Step 6. Print the columns 'veterans' and 'deaths'",
"_____no_output_____"
]
],
[
[
"army[['veterans', 'deaths']]",
"_____no_output_____"
]
],
[
[
"### Step 7. Print the name of all the columns.",
"_____no_output_____"
]
],
[
[
"army.columns",
"_____no_output_____"
]
],
[
[
"### Step 8. Select the 'deaths', 'size' and 'deserters' columns from Maine and Alaska",
"_____no_output_____"
]
],
[
[
"# Select all rows with the index label \"Maine\" and \"Alaska\"\narmy.loc[['Maine','Alaska'] , [\"deaths\",\"size\",\"deserters\"]]",
"_____no_output_____"
]
],
[
[
"### Step 9. Select the rows 3 to 7 and the columns 3 to 6",
"_____no_output_____"
]
],
[
[
"#\narmy.iloc[3:7, 3:6]",
"_____no_output_____"
]
],
[
[
"### Step 10. Select every row after the fourth row",
"_____no_output_____"
]
],
[
[
"army.iloc[3:]",
"_____no_output_____"
]
],
[
[
"### Step 11. Select every row up to the 4th row",
"_____no_output_____"
]
],
[
[
"army.iloc[:3]",
"_____no_output_____"
]
],
[
[
"### Step 12. Select the 3rd column up to the 7th column",
"_____no_output_____"
]
],
[
[
"# the first : means all\n# after the comma you select the range\n\narmy.iloc[: , 4:7]",
"_____no_output_____"
]
],
[
[
"### Step 13. Select rows where df.deaths is greater than 50",
"_____no_output_____"
]
],
[
[
"army[army['deaths'] > 50]",
"_____no_output_____"
]
],
[
[
"### Step 14. Select rows where df.deaths is greater than 500 or less than 50",
"_____no_output_____"
]
],
[
[
"army[(army['deaths'] > 500) | (army['deaths'] < 50)]",
"_____no_output_____"
]
],
[
[
"### Step 15. Select all the regiments not named \"Dragoons\"",
"_____no_output_____"
]
],
[
[
"army[(army['regiment'] != 'Dragoons')]",
"_____no_output_____"
]
],
[
[
"### Step 16. Select the rows called Texas and Arizona",
"_____no_output_____"
]
],
[
[
"army.loc[['Arizona', 'Texas']]",
"_____no_output_____"
]
],
[
[
"### Step 17. Select the third cell in the row named Arizona",
"_____no_output_____"
]
],
[
[
"army.loc[['Arizona'], ['deaths']]\n\n#OR\n\narmy.iloc[[0], army.columns.get_loc('deaths')]",
"_____no_output_____"
]
],
[
[
"### Step 18. Select the third cell down in the column named deaths",
"_____no_output_____"
]
],
[
[
"army.loc['Texas', 'deaths']\n\n#OR\n\narmy.iloc[[2], army.columns.get_loc('deaths')]\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7ef6b684d8f5d38c49d79b55b65f32c1db4e837 | 2,852 | ipynb | Jupyter Notebook | examples/reference/elements/matplotlib/Spread.ipynb | stonebig/holoviews | d5270c30dd1af38a785452aeac2fbabbe528e892 | [
"BSD-3-Clause"
] | 2 | 2020-08-13T00:11:46.000Z | 2021-01-31T22:13:21.000Z | examples/reference/elements/matplotlib/Spread.ipynb | stonebig/holoviews | d5270c30dd1af38a785452aeac2fbabbe528e892 | [
"BSD-3-Clause"
] | null | null | null | examples/reference/elements/matplotlib/Spread.ipynb | stonebig/holoviews | d5270c30dd1af38a785452aeac2fbabbe528e892 | [
"BSD-3-Clause"
] | null | null | null | 30.666667 | 518 | 0.588008 | [
[
[
"<div class=\"contentcontainer med left\" style=\"margin-left: -50px;\">\n<dl class=\"dl-horizontal\">\n <dt>Title</dt> <dd> Spread Element</dd>\n <dt>Dependencies</dt> <dd>Matplotlib</dd>\n <dt>Backends</dt> <dd><a href='./Spread.ipynb'>Matplotlib</a></dd> <dd><a href='../bokeh/Spread.ipynb'>Bokeh</a></dd>\n</dl>\n</div>",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport holoviews as hv\nhv.extension('matplotlib')",
"_____no_output_____"
]
],
[
[
"``Spread`` elements have the same data format as the [``ErrorBars``](ErrorBars.ipynb) element, namely x- and y-values with associated symmetric or asymmetric errors, but are interpreted as samples from a continuous distribution (just as ``Curve`` is the continuous version of ``Scatter``). These are often paired with an overlaid ``Curve`` to show an average trend along with a corresponding spread of values; see the [Tabular Datasets](../../../user_guide/07-Tabular_Datasets.ipynb) user guide for examples.\n\nNote that as the ``Spread`` element is used to add information to a plot (typically a ``Curve``) the default alpha value is less that one, making it partially transparent. \n\n##### Symmetric",
"_____no_output_____"
],
[
"Given two value dimensions corresponding to the position on the y-axis and the error, ``Spread`` will visualize itself assuming symmetric errors:",
"_____no_output_____"
]
],
[
[
"np.random.seed(42)\nxs = np.linspace(0, np.pi*2, 20)\nerr = 0.2+np.random.rand(len(xs))\nhv.Spread((xs, np.sin(xs), err))",
"_____no_output_____"
]
],
[
[
"##### Asymmetric",
"_____no_output_____"
],
[
"Given three value dimensions corresponding to the position on the y-axis, the negative error and the positive error, ``Spread`` can be used to visualize assymmetric errors:",
"_____no_output_____"
]
],
[
[
"%%opts Spread (facecolor='indianred' alpha=1)\nxs = np.linspace(0, np.pi*2, 20)\nhv.Spread((xs, np.sin(xs), 0.1+np.random.rand(len(xs)), 0.1+np.random.rand(len(xs))),\n vdims=['y', 'yerrneg', 'yerrpos'])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
e7ef8a2bbf2b148a04e5aa43fc75b1422f21af67 | 9,183 | ipynb | Jupyter Notebook | Chapter10/ex9.ipynb | m0baxter/MLBookStuff | 2f2ba275b3fe59b69c4e4f20f355f019e7bd1eac | [
"MIT"
] | null | null | null | Chapter10/ex9.ipynb | m0baxter/MLBookStuff | 2f2ba275b3fe59b69c4e4f20f355f019e7bd1eac | [
"MIT"
] | null | null | null | Chapter10/ex9.ipynb | m0baxter/MLBookStuff | 2f2ba275b3fe59b69c4e4f20f355f019e7bd1eac | [
"MIT"
] | null | null | null | 34.011111 | 124 | 0.483067 | [
[
[
"import numpy as np\nimport tensorflow as tf\n\nfrom datetime import datetime",
"_____no_output_____"
],
[
"mnist = input_data.read_data_sets(\"/tmp/data\")\n\n_, n = mnist.train.images.shape",
"Extracting /tmp/data/train-images-idx3-ubyte.gz\nExtracting /tmp/data/train-labels-idx1-ubyte.gz\nExtracting /tmp/data/t10k-images-idx3-ubyte.gz\nExtracting /tmp/data/t10k-labels-idx1-ubyte.gz\n"
],
[
"now = datetime.utcnow().strftime(\"%Y%m%d%H%M%S\")\nrootLogDir = \"tfLogs\"\nlogDir = \"{0}/run-{1}/\".format(rootLogDir, now)\n\nfileWriter = tf.summary.FileWriter( logDir, tf.get_default_graph() )",
"_____no_output_____"
],
[
"def mnistClassifier( X, y, nOut, nl = 1, nh = 100, alpha = 0.01, momentum = 0.9 ):\n\n if ( nl < 1 ):\n print( \"You need at least one hidden layer.\" )\n return\n\n if ( nh < 1 ):\n print( \"you need at least one neuron.\" )\n return\n\n with tf.name_scope( \"dnn\" ):\n layers = [ tf.layers.dense( X, nh, name = \"hidden1\", activation = tf.nn.relu ) ]\n\n for i in range(2, nl + 1):\n layers.append( tf.layers.dense( layers[-1], nh, name = \"hidden\" + str(i), activation = tf.nn.relu ) )\n\n logits = tf.layers.dense( layers[-1], nOut, name = \"output\" )\n\n with tf.name_scope(\"loss\"):\n crossEnt = tf.nn.sparse_softmax_cross_entropy_with_logits( labels = y, logits = logits)\n loss = tf.reduce_mean( crossEnt, name = \"loss\" )\n \n with tf.name_scope(\"eval\"):\n correct = tf.nn.in_top_k(logits, y, 1)\n accuracy = tf.reduce_mean( tf.cast(correct, tf.float32) )\n \n with tf.name_scope(\"train\"):\n opt = tf.train.MomentumOptimizer( learning_rate = alpha, momentum = momentum)\n training = opt.minimize( loss )\n lossSummary = tf.summary.scalar(\"crossEntropy\", loss)\n \n with tf.name_scope(\"utility\"):\n init = tf.global_variables_initializer()\n saver = tf.train.Saver()\n \n return loss, training, accuracy, lossSummary, init, saver\n",
"_____no_output_____"
],
[
"X = tf.placeholder(tf.float32, shape = (None, n), name = \"X\")\ny = tf.placeholder(tf.int32, shape = (None), name = \"y\")\n\nloss, training, accuracy, lossSummary, init, saver = mnistClassifier( X, y, 10,\n nl = 4,\n nh = 200,\n alpha = 0.01,\n momentum = 0.9 )",
"_____no_output_____"
],
[
"nEpochs = 1000\nbatchSize = 64 #2048\n\nhiVal = 0\npatience = 0\n\nwith tf.Session() as sess:\n\n init.run()\n \n for epoch in range(nEpochs):\n for i in range( mnist.train.num_examples // batchSize ):\n \n batchX ,batchY = mnist.train.next_batch( batchSize )\n sess.run( training, feed_dict = { X : batchX, y : batchY } )\n \n trainAcc = accuracy.eval( feed_dict = { X : batchX, y : batchY } )\n valAcc = accuracy.eval( feed_dict = { X : mnist.validation.images,\n y : mnist.validation.labels } )\n\n print( epoch, \"Training:\", trainAcc, \"Validation:\", valAcc )\n\n if ( valAcc > hiVal ):\n hiVal = valAcc\n patience = 0\n\n else:\n patience += 1\n\n if ( patience >= 10):\n print(\"No imporvement on validation set after {0} epochs. Training competed\".format(patience))\n break\n \n print(\"saving model.\")\n saver.save(sess, \"./model.ckpt\")\n",
"0 Training: 0.96875 Validation: 0.957\n1 Training: 1.0 Validation: 0.9632\n2 Training: 0.984375 Validation: 0.9734\n3 Training: 1.0 Validation: 0.976\n4 Training: 1.0 Validation: 0.9784\n5 Training: 0.984375 Validation: 0.9784\n6 Training: 1.0 Validation: 0.9804\n7 Training: 0.96875 Validation: 0.9806\n8 Training: 1.0 Validation: 0.9826\n9 Training: 1.0 Validation: 0.9796\n10 Training: 0.984375 Validation: 0.9774\n11 Training: 0.984375 Validation: 0.9836\n12 Training: 1.0 Validation: 0.979\n13 Training: 1.0 Validation: 0.9786\n14 Training: 1.0 Validation: 0.9812\n15 Training: 0.984375 Validation: 0.9778\n16 Training: 1.0 Validation: 0.981\n17 Training: 1.0 Validation: 0.9828\n18 Training: 1.0 Validation: 0.9848\n19 Training: 1.0 Validation: 0.9842\n20 Training: 1.0 Validation: 0.985\n21 Training: 1.0 Validation: 0.9846\n22 Training: 1.0 Validation: 0.9848\n23 Training: 1.0 Validation: 0.9848\n24 Training: 1.0 Validation: 0.9846\n25 Training: 1.0 Validation: 0.985\n26 Training: 1.0 Validation: 0.985\n27 Training: 1.0 Validation: 0.9848\n28 Training: 1.0 Validation: 0.985\n29 Training: 1.0 Validation: 0.985\n30 Training: 1.0 Validation: 0.9852\n31 Training: 1.0 Validation: 0.9852\n32 Training: 1.0 Validation: 0.9854\n33 Training: 1.0 Validation: 0.9852\n34 Training: 1.0 Validation: 0.9854\n35 Training: 1.0 Validation: 0.9852\n36 Training: 1.0 Validation: 0.9854\n37 Training: 1.0 Validation: 0.9854\n38 Training: 1.0 Validation: 0.9852\n39 Training: 1.0 Validation: 0.9854\n40 Training: 1.0 Validation: 0.9854\n41 Training: 1.0 Validation: 0.9852\n42 Training: 1.0 Validation: 0.9848\nNo imporvement on validation set after 10 epochs. Training competed\nsaving model.\n"
],
[
"tf.reset_default_graph()\n#sess = tf.Session()\n\nX = tf.placeholder(tf.float32, shape = (None, n), name = \"X\")\ny = tf.placeholder(tf.int32, shape = (None), name = \"y\")\n\nloss, training, accuracy, lossSummary, init, saver = mnistClassifier( X, y, 10,\n nl = 4,\n nh = 200,\n alpha = 0.01,\n momentum = 0.9 )\n\nwith tf.Session() as sess:\n\n saver.restore( sess, \"./model.ckpt\" )\n testAcc = accuracy.eval( feed_dict = { X : mnist.test.images, y : mnist.test.labels })\n\n print( \"Accuracy on test set:\", testAcc )",
"INFO:tensorflow:Restoring parameters from ./model.ckpt\nAccuracy on test set: 0.9823\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7ef8f85087675e33538ec18e5d3ad7c09600723 | 99,641 | ipynb | Jupyter Notebook | introductory-tutorials/intro-to-julia/10. Multiple dispatch.ipynb | ljbelenky/JuliaTutorials | de4a74717e2debebfbddd815848da5292c1755e5 | [
"MIT"
] | null | null | null | introductory-tutorials/intro-to-julia/10. Multiple dispatch.ipynb | ljbelenky/JuliaTutorials | de4a74717e2debebfbddd815848da5292c1755e5 | [
"MIT"
] | null | null | null | introductory-tutorials/intro-to-julia/10. Multiple dispatch.ipynb | ljbelenky/JuliaTutorials | de4a74717e2debebfbddd815848da5292c1755e5 | [
"MIT"
] | null | null | null | 118.620238 | 54,936 | 0.684668 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7efb26f8b4ddf5e95b05144022bf9e0f6284c8d | 13,421 | ipynb | Jupyter Notebook | Bengali-AI/notebooks/Views.ipynb | Nandhagopalan/Struturing_Projects | 0684eae86a62936a65615c34f901433251949696 | [
"MIT"
] | null | null | null | Bengali-AI/notebooks/Views.ipynb | Nandhagopalan/Struturing_Projects | 0684eae86a62936a65615c34f901433251949696 | [
"MIT"
] | null | null | null | Bengali-AI/notebooks/Views.ipynb | Nandhagopalan/Struturing_Projects | 0684eae86a62936a65615c34f901433251949696 | [
"MIT"
] | null | null | null | 63.909524 | 9,504 | 0.829297 | [
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport sys\nimport numpy as np",
"_____no_output_____"
],
[
"sys.path.append('../src/')",
"_____no_output_____"
],
[
"from dataset import BengaliAiDataset",
"_____no_output_____"
],
[
"ds=BengaliAiDataset(folds=[0,1],img_height=137,img_width=236,mean=(0.485,0.456,0.406),\n std=(0.229,0.224,0.225))",
"_____no_output_____"
],
[
"len(ds)",
"_____no_output_____"
],
[
"ix=123\n\nimg=ds[ix]['image']\ngrap_root=ds[ix]['grapheme_root']\nvowel=ds[ix]['vowel_diacritic']\nconsonant=ds[ix]['consonant_diacritic']\n\nplt.imshow(np.transpose(img,[1,2,0]))",
"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n"
],
[
"#import pretrainedmodels",
"_____no_output_____"
],
[
"check=pretrainedmodels.__dict__['resnet34'](pretrained='imagenet')\nprint(\"check the architecture and change the last linear layer\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7efc28a0317348b5edf5212602a5d88ef700afd | 1,288 | ipynb | Jupyter Notebook | my_experiments/VES.ipynb | BloonCorps/IAP2022 | 11a481790878defad0d2974b81ae109168306077 | [
"MIT"
] | null | null | null | my_experiments/VES.ipynb | BloonCorps/IAP2022 | 11a481790878defad0d2974b81ae109168306077 | [
"MIT"
] | null | null | null | my_experiments/VES.ipynb | BloonCorps/IAP2022 | 11a481790878defad0d2974b81ae109168306077 | [
"MIT"
] | null | null | null | 28 | 194 | 0.532609 | [
[
[
"# Solving Statistical Mechanics Using Variational Autoregressive Networks",
"_____no_output_____"
],
[
"Suppose we have some distribution:\n\n$$ p(x)=\\frac{e^{-\\beta E(x)}}{Z} $$\n\nWith the absolute free energy being:\n \n$$F = \\frac{1}{\\beta} \\ln(Z)$$\n\nNow suppose we have a distribution, $q_{\\theta}(x)$, whose parameters $\\theta$ we can optimize such that $q_{\\theta}(x)$ matches the target distribution $p(x)$ as close as possible. \n\n$D_{KL}(q_{\\theta} \\| p)$ can be used as an optimizable function.\n\nHowever, what is valuable is that minimizing $D_{KL}(q_{\\theta} \\| p)$ does not require samples from $p(x)$, or even calculating $p(x)$; notice that:\n\n$D_{KL}(q_{\\theta} \\| p) = \\sum q_{\\theta} \\ln (\\frac{q_{\\theta}(x)}{p(x)}) = \\beta(F_q - F)$ and $F_q = \\frac{1}{\\beta} \\sum q_{\\theta}\n",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown"
]
] |
e7efea88440a7f1b0b9e495a6bfb90bfa398395d | 32,190 | ipynb | Jupyter Notebook | ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | thepycoder/ai-platform-samples | f055b39f77df7b4fe0467c845f1ffff2b68bed3f | [
"Apache-2.0"
] | 1 | 2021-07-01T16:40:16.000Z | 2021-07-01T16:40:16.000Z | ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | amygdala/ai-platform-samples | 62ec18dc30f29eb6bdcfefe229d76e5fab18584d | [
"Apache-2.0"
] | null | null | null | ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | amygdala/ai-platform-samples | 62ec18dc30f29eb6bdcfefe229d76e5fab18584d | [
"Apache-2.0"
] | null | null | null | 30.981713 | 282 | 0.513886 | [
[
[
"# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"<table align=\"left\">\n\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>",
"_____no_output_____"
],
[
"#Vertex AI: Track parameters and metrics for custom training jobs",
"_____no_output_____"
],
[
"## Overview\n\nThis notebook demonstrates how to track metrics and parameters for Vertex AI custom training jobs, and how to perform detailed analysis using this data.\n\n### Dataset\n\nThis example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone\n### Objective\n\nIn this notebook, you will learn how to use Vertex SDK for Python to:\n\n * Track training parameters and prediction metrics for a custom training job.\n * Extract and perform analysis for all parameters and metrics within an Experiment.\n\n### Costs \n\n\nThis tutorial uses billable components of Google Cloud:\n\n* Vertex AI\n* Cloud Storage\n\nLearn about [Vertex AI\npricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage\npricing](https://cloud.google.com/storage/pricing), and use the [Pricing\nCalculator](https://cloud.google.com/products/calculator/)\nto generate a cost estimate based on your projected usage.",
"_____no_output_____"
],
[
"### Set up your local development environment\n\n**If you are using Colab or Google Cloud Notebooks**, your environment already meets\nall the requirements to run this notebook. You can skip this step.",
"_____no_output_____"
],
[
"**Otherwise**, make sure your environment meets this notebook's requirements.\nYou need the following:\n\n* The Google Cloud SDK\n* Git\n* Python 3\n* virtualenv\n* Jupyter notebook running in a virtual environment with Python 3\n\nThe Google Cloud guide to [Setting up a Python development\nenvironment](https://cloud.google.com/python/setup) and the [Jupyter\ninstallation guide](https://jupyter.org/install) provide detailed instructions\nfor meeting these requirements. The following steps provide a condensed set of\ninstructions:\n\n1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)\n\n1. [Install Python 3.](https://cloud.google.com/python/setup#installing_python)\n\n1. [Install\n virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv)\n and create a virtual environment that uses Python 3. Activate the virtual environment.\n\n1. To install Jupyter, run `pip install jupyter` on the\ncommand-line in a terminal shell.\n\n1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.\n\n1. Open this notebook in the Jupyter Notebook Dashboard.",
"_____no_output_____"
],
[
"### Install additional packages\n\nRun the following commands to install the Vertex SDK for Python.",
"_____no_output_____"
]
],
[
[
"import sys\n\nif \"google.colab\" in sys.modules:\n USER_FLAG = \"\"\nelse:\n USER_FLAG = \"--user\"",
"_____no_output_____"
],
[
"!python3 -m pip install {USER_FLAG} google-cloud-aiplatform --upgrade",
"_____no_output_____"
]
],
[
[
"### Restart the kernel\n\nAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages.",
"_____no_output_____"
]
],
[
[
"# Automatically restart kernel after installs\nimport os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)",
"_____no_output_____"
]
],
[
[
"## Before you begin\n\n### Select a GPU runtime\n\n**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select \"Runtime --> Change runtime type > GPU\"**",
"_____no_output_____"
],
[
"### Set up your Google Cloud project\n\n**The following steps are required, regardless of your notebook environment.**\n\n1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n\n1. [Enable the Vertex AI API and Compute Engine API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component).\n\n1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).\n\n1. Enter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.",
"_____no_output_____"
],
[
"#### Set your project ID\n\n**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.",
"_____no_output_____"
]
],
[
[
"import os\n\nPROJECT_ID = \"\"\n\n# Get your Google Cloud project ID from gcloud\nif not os.getenv(\"IS_TESTING\"):\n shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID: \", PROJECT_ID)",
"_____no_output_____"
]
],
[
[
"Otherwise, set your project ID here.",
"_____no_output_____"
]
],
[
[
"if PROJECT_ID == \"\" or PROJECT_ID is None:\n PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}",
"_____no_output_____"
]
],
[
[
"Set gcloud config to your project ID.",
"_____no_output_____"
]
],
[
[
"!gcloud config set project $PROJECT_ID",
"_____no_output_____"
]
],
[
[
"#### Timestamp\n\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.",
"_____no_output_____"
]
],
[
[
"from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")",
"_____no_output_____"
]
],
[
[
"### Authenticate your Google Cloud account\n\n**If you are using Google Cloud Notebooks**, your environment is already\nauthenticated. Skip this step.",
"_____no_output_____"
],
[
"**If you are using Colab**, run the cell below and follow the instructions\nwhen prompted to authenticate your account via oAuth.\n\n**Otherwise**, follow these steps:\n\n1. In the Cloud Console, go to the [**Create service account key**\n page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).\n\n2. Click **Create service account**.\n\n3. In the **Service account name** field, enter a name, and\n click **Create**.\n\n4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type \"Vertex AI\"\ninto the filter box, and select\n **Vertex AI Administrator**. Type \"Storage Object Admin\" into the filter box, and select **Storage Object Admin**.\n\n5. Click *Create*. A JSON file that contains your key downloads to your\nlocal environment.\n\n6. Enter the path to your service account key as the\n`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Google Cloud Notebooks, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''",
"_____no_output_____"
]
],
[
[
"### Create a Cloud Storage bucket\n\n**The following steps are required, regardless of your notebook environment.**\n\n\nWhen you submit a training job using the Cloud SDK, you upload a Python package\ncontaining your training code to a Cloud Storage bucket. Vertex AI runs\nthe code from this package. In this tutorial, Vertex AI also saves the\ntrained model that results from your job in the same bucket. Using this model artifact, you can then\ncreate Vertex AI model and endpoint resources in order to serve\nonline predictions.\n\nSet the name of your Cloud Storage bucket below. It must be unique across all\nCloud Storage buckets.\n\nYou may also change the `REGION` variable, which is used for operations\nthroughout the rest of this notebook. Make sure to [choose a region where Vertex AI services are\navailable](https://cloud.google.com/vertex-ai/docs/general/locations#available_regions). You may\nnot use a Multi-Regional Storage bucket for training with Vertex AI.",
"_____no_output_____"
]
],
[
[
"BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\nREGION = \"[your-region]\" # @param {type:\"string\"}",
"_____no_output_____"
],
[
"if BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"-aip-\" + TIMESTAMP",
"_____no_output_____"
]
],
[
[
"**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.",
"_____no_output_____"
]
],
[
[
"! gsutil mb -l $REGION $BUCKET_NAME",
"_____no_output_____"
]
],
[
[
"Finally, validate access to your Cloud Storage bucket by examining its contents:",
"_____no_output_____"
]
],
[
[
"! gsutil ls -al $BUCKET_NAME",
"_____no_output_____"
]
],
[
[
"### Import libraries and define constants",
"_____no_output_____"
],
[
"Import required libraries.\n",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom google.cloud import aiplatform\nfrom sklearn.metrics import mean_absolute_error, mean_squared_error\nfrom tensorflow.python.keras.utils import data_utils",
"_____no_output_____"
]
],
[
[
"## Initialize Vertex AI and set an _experiment_\n",
"_____no_output_____"
],
[
"Define experiment name.",
"_____no_output_____"
]
],
[
[
"EXPERIMENT_NAME = \"\" # @param {type:\"string\"}",
"_____no_output_____"
]
],
[
[
"If EXEPERIMENT_NAME is not set, set a default one below:",
"_____no_output_____"
]
],
[
[
"if EXPERIMENT_NAME == \"\" or EXPERIMENT_NAME is None:\n EXPERIMENT_NAME = \"my-experiment-\" + TIMESTAMP",
"_____no_output_____"
]
],
[
[
"Initialize the *client* for Vertex AI.",
"_____no_output_____"
]
],
[
[
"aiplatform.init(\n project=PROJECT_ID,\n location=REGION,\n staging_bucket=BUCKET_NAME,\n experiment=EXPERIMENT_NAME,\n)",
"_____no_output_____"
]
],
[
[
"## Tracking parameters and metrics in Vertex AI custom training jobs",
"_____no_output_____"
],
[
"This example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone",
"_____no_output_____"
]
],
[
[
"!wget https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv\n!gsutil cp abalone_train.csv {BUCKET_NAME}/data/\n\ngcs_csv_path = f\"{BUCKET_NAME}/data/abalone_train.csv\"",
"_____no_output_____"
]
],
[
[
"### Create a managed tabular dataset from a CSV\n\nA Managed dataset can be used to create an AutoML model or a custom model. ",
"_____no_output_____"
]
],
[
[
"ds = aiplatform.TabularDataset.create(display_name=\"abalone\", gcs_source=[gcs_csv_path])\n\nds.resource_name",
"_____no_output_____"
]
],
[
[
"### Write the training script\n\nRun the following cell to create the training script that is used in the sample custom training job.",
"_____no_output_____"
]
],
[
[
"%%writefile training_script.py\n\nimport pandas as pd\nimport argparse\nimport os\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--epochs', dest='epochs',\n default=10, type=int,\n help='Number of epochs.')\nparser.add_argument('--num_units', dest='num_units',\n default=64, type=int,\n help='Number of unit for first layer.')\nargs = parser.parse_args()\n# uncomment and bump up replica_count for distributed training\n# strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n# tf.distribute.experimental_set_strategy(strategy)\n\ncol_names = [\"Length\", \"Diameter\", \"Height\", \"Whole weight\", \"Shucked weight\", \"Viscera weight\", \"Shell weight\", \"Age\"]\ntarget = \"Age\"\n\ndef aip_data_to_dataframe(wild_card_path):\n return pd.concat([pd.read_csv(fp.numpy().decode(), names=col_names)\n for fp in tf.data.Dataset.list_files([wild_card_path])])\n\ndef get_features_and_labels(df):\n return df.drop(target, axis=1).values, df[target].values\n\ndef data_prep(wild_card_path):\n return get_features_and_labels(aip_data_to_dataframe(wild_card_path))\n\n\nmodel = tf.keras.Sequential([layers.Dense(args.num_units), layers.Dense(1)])\nmodel.compile(loss='mse', optimizer='adam')\n\nmodel.fit(*data_prep(os.environ[\"AIP_TRAINING_DATA_URI\"]),\n epochs=args.epochs ,\n validation_data=data_prep(os.environ[\"AIP_VALIDATION_DATA_URI\"]))\nprint(model.evaluate(*data_prep(os.environ[\"AIP_TEST_DATA_URI\"])))\n\n# save as Vertex AI Managed model\ntf.saved_model.save(model, os.environ[\"AIP_MODEL_DIR\"])",
"_____no_output_____"
]
],
[
[
"### Launch a custom training job and track its trainig parameters on Vertex AI ML Metadata",
"_____no_output_____"
]
],
[
[
"job = aiplatform.CustomTrainingJob(\n display_name=\"train-abalone-dist-1-replica\",\n script_path=\"training_script.py\",\n container_uri=\"gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest\",\n requirements=[\"gcsfs==0.7.1\"],\n model_serving_container_image_uri=\"gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest\",\n)",
"_____no_output_____"
]
],
[
[
"Start a new experiment run to track training parameters and start the training job. Note that this operation will take around 10 mins.",
"_____no_output_____"
]
],
[
[
"aiplatform.start_run(\"custom-training-run-1\") # Change this to your desired run name\nparameters = {\"epochs\": 10, \"num_units\": 64}\naiplatform.log_params(parameters)\n\nmodel = job.run(\n ds,\n replica_count=1,\n model_display_name=\"abalone-model\",\n args=[f\"--epochs={parameters['epochs']}\", f\"--num_units={parameters['num_units']}\"],\n)",
"_____no_output_____"
]
],
[
[
"### Deploy Model and calculate prediction metrics",
"_____no_output_____"
],
[
"Deploy model to Google Cloud. This operation will take 10-20 mins.",
"_____no_output_____"
]
],
[
[
"endpoint = model.deploy(machine_type=\"n1-standard-4\")",
"_____no_output_____"
]
],
[
[
"Once model is deployed, perform online prediction using the `abalone_test` dataset and calculate prediction metrics.",
"_____no_output_____"
],
[
"Prepare the prediction dataset.",
"_____no_output_____"
]
],
[
[
"def read_data(uri):\n dataset_path = data_utils.get_file(\"auto-mpg.data\", uri)\n col_names = [\n \"Length\",\n \"Diameter\",\n \"Height\",\n \"Whole weight\",\n \"Shucked weight\",\n \"Viscera weight\",\n \"Shell weight\",\n \"Age\",\n ]\n dataset = pd.read_csv(\n dataset_path,\n names=col_names,\n na_values=\"?\",\n comment=\"\\t\",\n sep=\",\",\n skipinitialspace=True,\n )\n return dataset\n\n\ndef get_features_and_labels(df):\n target = \"Age\"\n return df.drop(target, axis=1).values, df[target].values\n\n\ntest_dataset, test_labels = get_features_and_labels(\n read_data(\n \"https://storage.googleapis.com/download.tensorflow.org/data/abalone_test.csv\"\n )\n)",
"_____no_output_____"
]
],
[
[
"Perform online prediction.",
"_____no_output_____"
]
],
[
[
"prediction = endpoint.predict(test_dataset.tolist())\nprediction",
"_____no_output_____"
]
],
[
[
"Calculate and track prediction evaluation metrics.",
"_____no_output_____"
]
],
[
[
"mse = mean_squared_error(test_labels, prediction.predictions)\nmae = mean_absolute_error(test_labels, prediction.predictions)\n\naiplatform.log_metrics({\"mse\": mse, \"mae\": mae})",
"_____no_output_____"
]
],
[
[
"### Extract all parameters and metrics created during this experiment.",
"_____no_output_____"
]
],
[
[
"aiplatform.get_experiment_df()",
"_____no_output_____"
]
],
[
[
"### View data in the Cloud Console",
"_____no_output_____"
],
[
"Parameters and metrics can also be viewed in the Cloud Console. \n",
"_____no_output_____"
]
],
[
[
"print(\"Vertex AI Experiments:\")\nprint(\n f\"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}\"\n)",
"_____no_output_____"
]
],
[
[
"## Cleaning up\n\nTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\nproject](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n\nOtherwise, you can delete the individual resources you created in this tutorial:\nTraining Job\nModel\nCloud Storage Bucket\n\n* Training Job\n* Model\n* Endpoint\n* Cloud Storage Bucket\n",
"_____no_output_____"
]
],
[
[
"delete_training_job = True\ndelete_model = True\ndelete_endpoint = True\n\n# Warning: Setting this to true will delete everything in your bucket\ndelete_bucket = False\n\n# Delete the training job\njob.delete()\n\n# Delete the model\nmodel.delete()\n\n# Delete the endpoint\nendpoint.delete()\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil -m rm -r $BUCKET_NAME",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7efee9c83315f4a75d2477814513ab3dee16009 | 2,245 | ipynb | Jupyter Notebook | content/lessons/04/End-To-End-Example/ETEE-Password-Program.ipynb | MahopacHS/spring-2020-Lamk0810 | 4b870cbd094d5813a0dc92afcbf7b0e37968ba53 | [
"MIT"
] | null | null | null | content/lessons/04/End-To-End-Example/ETEE-Password-Program.ipynb | MahopacHS/spring-2020-Lamk0810 | 4b870cbd094d5813a0dc92afcbf7b0e37968ba53 | [
"MIT"
] | null | null | null | content/lessons/04/End-To-End-Example/ETEE-Password-Program.ipynb | MahopacHS/spring-2020-Lamk0810 | 4b870cbd094d5813a0dc92afcbf7b0e37968ba53 | [
"MIT"
] | null | null | null | 34.015152 | 348 | 0.528285 | [
[
[
"# End-To-End Example: Password Program\n\nPassword Program:\n\n- 5 attempts for the password\n- On correct password, print: “Access Granted”, then end the program \n- On incorrect password “Invalid Password Attempt #” and give the user another try\n- After 5 attempts, print “You are locked out”. Then end the program.\n",
"_____no_output_____"
]
],
[
[
"secret = \"rhubarb\"\nattempts = 0\nwhile True:\n password = input(\"Enter Password: \")\n attempts= attempts + 1\n if password == secret:\n print(\"Access Granted!\")\n break \n print(\"Invalid password attempt #\",attempts)\n if attempts == 5:\n print(\"You are locked out\")\n break",
"Enter Password: sd\nInvalid password attempt # 1\nEnter Password: fds\nInvalid password attempt # 2\nEnter Password: sd\nInvalid password attempt # 3\nEnter Password: d\nInvalid password attempt # 4\nEnter Password: d\nInvalid password attempt # 5\nYou are locked out\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
e7f003ec6e510f789521d605f1b1ef880282e50f | 540,160 | ipynb | Jupyter Notebook | vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb | felixlapalma/diplodatos_2018 | ec36c646ca08902c676e6c947acfa7d328fcf22d | [
"MIT"
] | null | null | null | vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb | felixlapalma/diplodatos_2018 | ec36c646ca08902c676e6c947acfa7d328fcf22d | [
"MIT"
] | null | null | null | vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb | felixlapalma/diplodatos_2018 | ec36c646ca08902c676e6c947acfa7d328fcf22d | [
"MIT"
] | null | null | null | 386.10436 | 200,376 | 0.913963 | [
[
[
"# Final Lab\n\n*Felix Rojo Lapalma*\n\n## Main task\n\nIn this notebook, we will apply transfer learning techniques to finetune the [MobileNet](https://arxiv.org/pdf/1704.04861.pdf) CNN on [Cifar-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset.\n\n## Procedures\n\nIn general, the main steps that we will follow are:\n\n1. Load data, analyze and split in *training*/*validation*/*testing* sets.\n2. Load CNN and analyze architecture.\n3. Adapt this CNN to our problem.\n4. Setup data augmentation techniques.\n5. Add some keras callbacks.\n6. Setup optimization algorithm with their hyperparameters.\n7. Train model!\n8. Choose best model/snapshot.\n9. Evaluate final model on the *testing* set.\n",
"_____no_output_____"
]
],
[
[
"# load libs\nimport os\nimport matplotlib.pyplot as plt\nfrom IPython.display import SVG\n# https://keras.io/applications/#documentation-for-individual-models\nfrom keras.applications.mobilenet import MobileNet\nfrom keras.datasets import cifar10\nfrom keras.models import Model\nfrom keras.utils.vis_utils import model_to_dot\nfrom keras.layers import Dense, GlobalAveragePooling2D,Dropout\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras.utils import plot_model, to_categorical\nfrom sklearn.model_selection import train_test_split\nimport cv2\nimport numpy as np\nimport tensorflow as tf",
"Using TensorFlow backend.\n"
]
],
[
[
"#### cuda",
"_____no_output_____"
]
],
[
[
"cuda_flag=False\n\nif cuda_flag:\n # Setup one GPU for tensorflow (don't be greedy).\n os.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"\n # The GPU id to use, \"0\", \"1\", etc.\n os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\" \n # Limit tensorflow gpu usage.\n # Maybe you should comment this lines if you run tensorflow on CPU.\n config = tf.ConfigProto()\n config.gpu_options.allow_growth = True\n config.gpu_options.per_process_gpu_memory_fraction = 0.3\n sess = tf.Session(config=config)\n",
"_____no_output_____"
]
],
[
[
"## 1. Load data, analyze and split in *training*/*validation*/*testing* sets",
"_____no_output_____"
]
],
[
[
"# Cifar-10 class names\n# We will create a dictionary for each type of label\n# This is a mapping from the int class name to \n# their corresponding string class name\nLABELS = {\n 0: \"airplane\",\n 1: \"automobile\",\n 2: \"bird\",\n 3: \"cat\",\n 4: \"deer\",\n 5: \"dog\",\n 6: \"frog\",\n 7: \"horse\",\n 8: \"ship\",\n 9: \"truck\"\n}\n\n# Load dataset from keras\n(x_train_data, y_train_data), (x_test_data, y_test_data) = cifar10.load_data()\n\n############\n# [COMPLETE] \n# Add some prints here to see the loaded data dimensions\n############\n\nprint(\"Cifar-10 x_train shape: {}\".format(x_train_data.shape))\nprint(\"Cifar-10 y_train shape: {}\".format(y_train_data.shape))\nprint(\"Cifar-10 x_test shape: {}\".format(x_test_data.shape))\nprint(\"Cifar-10 y_test shape: {}\".format(y_test_data.shape))",
"Cifar-10 x_train shape: (50000, 32, 32, 3)\nCifar-10 y_train shape: (50000, 1)\nCifar-10 x_test shape: (10000, 32, 32, 3)\nCifar-10 y_test shape: (10000, 1)\n"
],
[
"# from https://www.cs.toronto.edu/~kriz/cifar.html\n# The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. \n# The classes are completely mutually exclusive. There is no overlap between automobiles and trucks. \"Automobile\" includes sedans, SUVs, things of that sort. \"Truck\" includes only big trucks. Neither includes pickup trucks. \n# Some constants\nIMG_ROWS = 32\nIMG_COLS = 32\nNUM_CLASSES = 10\nRANDOM_STATE = 2018\n############\n# [COMPLETE] \n# Analyze the amount of images for each class\n# Plot some images to explore how they look\n############\nfrom genlib import get_classes_distribution,plot_label_per_class\nfor y,yt in zip([y_train_data.flatten(),y_test_data.flatten()],['Train','Test']):\n print('{:>15s}'.format(yt))\n get_classes_distribution(y,LABELS)\n plot_label_per_class(y,LABELS)",
" Train\nairplane : 5000 or 10.00%\nautomobile : 5000 or 10.00%\nbird : 5000 or 10.00%\ncat : 5000 or 10.00%\ndeer : 5000 or 10.00%\ndog : 5000 or 10.00%\nfrog : 5000 or 10.00%\nhorse : 5000 or 10.00%\nship : 5000 or 10.00%\ntruck : 5000 or 10.00%\n"
]
],
[
[
"Todo parece ir de acuerdo a la documentación. Veamos las imagenes,",
"_____no_output_____"
]
],
[
[
"from genlib import sample_images_data,plot_sample_images\nfor xy,yt in zip([(x_train_data,y_train_data.flatten()),(x_test_data,y_test_data.flatten())],['Train','Test']):\n print('{:>15s}'.format(yt))\n train_sample_images, train_sample_labels = sample_images_data(*xy,LABELS)\n plot_sample_images(train_sample_images, train_sample_labels,LABELS)",
" Train\nTotal number of sample images to plot: 40\n"
],
[
"############\n# [COMPLETE] \n# Split training set in train/val sets\n# Use the sampling method that you want\n############\n#init seed\nnp.random.seed(seed=RANDOM_STATE)\n\nfull_set_flag=False # True: uses all images / False only a subset specified by TRAIN Samples and Val Frac\nVAL_FRAC=0.2\nTRAIN_SIZE_BFV=x_train_data.shape[0]\nTRAIN_FRAC=(1-VAL_FRAC)\n# calc\nTRAIN_SAMPLES_FULL=int(TRAIN_FRAC*TRAIN_SIZE_BFV) # if full_set_flag==True\nTRAIN_SAMPLES_RED=20000 # if full_set_flag==False\nVAL_SAMPLES_RED=int(VAL_FRAC*TRAIN_SAMPLES_RED) # if full_set_flag==False\n \nif full_set_flag:\n # Esta forma parece servir si barremos todo el set sino...\n #\n # Get Index\n train_idxs = np.random.choice(np.arange(TRAIN_SIZE_BFV), size=TRAIN_SAMPLES_FULL, replace=False)\n val_idx=np.array([x for x in np.arange(TRAIN_SIZE_BFV) if x not in train_idxs]) \nelse:\n train_idxs = np.random.choice(np.arange(TRAIN_SIZE_BFV), size=TRAIN_SAMPLES_RED, replace=False)\n val_idx=np.random.choice(train_idxs, size=VAL_SAMPLES_RED, replace=False)\n \n# Split\nx_val_data = x_train_data[val_idx, :, :, :]\ny_val_data = y_train_data[val_idx]\nx_train_data = x_train_data[train_idxs, :, :, :]\ny_train_data = y_train_data[train_idxs]\n####",
"_____no_output_____"
],
[
"####\nprint(\"Cifar-10 x_train shape: {}\".format(x_train_data.shape))\nprint(\"Cifar-10 y_train shape: {}\".format(y_train_data.shape))\nprint(\"Cifar-10 x_val shape: {}\".format(x_val_data.shape))\nprint(\"Cifar-10 y_val shape: {}\".format(y_val_data.shape))\nprint(\"Cifar-10 x_test shape: {}\".format(x_test_data.shape))\nprint(\"Cifar-10 y_test shape: {}\".format(y_test_data.shape))",
"Cifar-10 x_train shape: (20000, 32, 32, 3)\nCifar-10 y_train shape: (20000, 1)\nCifar-10 x_val shape: (4000, 32, 32, 3)\nCifar-10 y_val shape: (4000, 1)\nCifar-10 x_test shape: (10000, 32, 32, 3)\nCifar-10 y_test shape: (10000, 1)\n"
]
],
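[
[
"As an alternative to the index-based sampling above, the already-imported `train_test_split` can produce a stratified split in a single call. A minimal sketch, shown for reference only (the split built above is the one used in the rest of this notebook):",
"_____no_output_____"
]
],
[
[
"# alternative sketch (for reference only): stratified train/val split using the\n# already-imported sklearn train_test_split; the notebook keeps the split above\nx_tr_alt, x_val_alt, y_tr_alt, y_val_alt = train_test_split(\n    x_train_data, y_train_data,\n    test_size=VAL_FRAC,\n    stratify=y_train_data.ravel(),\n    random_state=RANDOM_STATE)\n\nprint(\"Alt train shape: {} / Alt val shape: {}\".format(x_tr_alt.shape, x_val_alt.shape))",
"_____no_output_____"
]
],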
[
[
"Veamos si quedaron balanceados Train y Validation",
"_____no_output_____"
]
],
[
[
"for y,yt in zip([y_train_data.flatten(),y_val_data.flatten()],['Train','Validation']):\n print('{:>15s}'.format(yt))\n get_classes_distribution(y,LABELS)\n plot_label_per_class(y,LABELS)",
" Train\nairplane : 1950 or 9.75%\nautomobile : 1985 or 9.93%\nbird : 2012 or 10.06%\ncat : 1993 or 9.96%\ndeer : 1943 or 9.71%\ndog : 1994 or 9.97%\nfrog : 2028 or 10.14%\nhorse : 2030 or 10.15%\nship : 2012 or 10.06%\ntruck : 2053 or 10.27%\n"
],
[
"# In order to use the MobileNet CNN pre-trained on imagenet, we have\n# to resize our images to have one of the following static square shape: [(128, 128),\n# (160, 160), (192, 192), or (224, 224)].\n# If we try to resize all the dataset this will not fit on memory, so we have to save all\n# the images to disk, and then when loading those images, our datagenerator will resize them\n# to the desired shape on-the-fly.\n\n############\n# [COMPLETE] \n# Use the above function to save all your data, e.g.:\n# save_to_disk(x_train, y_train, 'train', 'cifar10_images')\n# save_to_disk(x_val, y_val, 'val', 'cifar10_images')\n# save_to_disk(x_test, y_test, 'test', 'cifar10_images')\n############\n\nsave_image_flag=False # To avoid saving images every time!!!\n\nif save_image_flag:\n from genlib import save_to_disk\n save_to_disk(x_train_data, y_train_data, 'train', output_dir='cifar10_images')\n save_to_disk(x_val_data, y_val_data, 'val', output_dir='cifar10_images')\n save_to_disk(x_test_data, y_test_data, 'test', output_dir='cifar10_images')",
"_____no_output_____"
]
],
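[
[
"A minimal sketch of the on-the-fly resizing described above: an `ImageDataGenerator` reads the saved images from disk and resizes them to the network input size while producing batches. The directory layout (one sub-folder per class under `cifar10_images/train`) is assumed to be the one produced by `save_to_disk`:",
"_____no_output_____"
]
],
[
[
"# minimal sketch: stream the saved images from disk, resizing them on-the-fly\n# to the MobileNet input size (assumes save_to_disk wrote one sub-folder per\n# class under cifar10_images/train)\nsketch_datagen = ImageDataGenerator(rescale=1. / 255)\n\nsketch_train_gen = sketch_datagen.flow_from_directory(\n    'cifar10_images/train',\n    target_size=(128, 128),  # resized here, never all loaded in memory at once\n    batch_size=32,\n    class_mode='categorical')",
"_____no_output_____"
]
],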
[
[
"## 2. Load CNN and analyze architecture",
"_____no_output_____"
]
],
[
[
"#Model\nNO_EPOCHS = 25\nBATCH_SIZE = 32\nNET_IMG_ROWS = 128\nNET_IMG_COLS = 128",
"_____no_output_____"
],
[
"############\n# [COMPLETE] \n# Use the MobileNet class from Keras to load your base model, pre-trained on imagenet.\n# We wan't to load the pre-trained weights, but without the classification layer.\n# Check the notebook '3_transfer-learning' or https://keras.io/applications/#mobilenet to get more\n# info about how to load this network properly.\n############\n#Note that this model only supports the data format 'channels_last' (height, width, channels).\n#The default input size for this model is 224x224.\n\nbase_model = MobileNet(input_shape=(NET_IMG_ROWS, NET_IMG_COLS, 3), # Input image size\n weights='imagenet', # Use imagenet pre-trained weights\n include_top=False, # Drop classification layer\n pooling='avg') # Global AVG pooling for the \n # output feature vector",
"_____no_output_____"
]
],
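[
[
"Before adapting the network, the loaded base model can be inspected directly; the short check below only prints the layer count and the standard Keras summary, nothing is modified:",
"_____no_output_____"
]
],
[
[
"# quick look at the pre-trained base network: layer count plus the\n# standard textual summary of the architecture\nprint(\"MobileNet base layers: {}\".format(len(base_model.layers)))\nbase_model.summary()",
"_____no_output_____"
]
],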
[
[
"## 3. Adapt this CNN to our problem",
"_____no_output_____"
]
],
[
[
"############\n# [COMPLETE] \n# Having the CNN loaded, now we have to add some layers to adapt this network to our\n# classification problem.\n# We can choose to finetune just the new added layers, some particular layers or all the layer of the\n# model. Play with different settings and compare the results.\n############\n\n# get the output feature vector from the base model\nx = base_model.output\n# let's add a fully-connected layer\nx = Dense(1024, activation='relu')(x)\n# Add Drop Out Layer\nx=Dropout(0.5)(x)\n# and a logistic layer\npredictions = Dense(NUM_CLASSES, activation='softmax')(x)\n\n# this is the model we will train\nmodel = Model(inputs=base_model.input, outputs=predictions)",
"_____no_output_____"
],
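[
"# [sketch] one of the finetuning settings mentioned above: train only the newly\n# added Dense layers by freezing the pre-trained MobileNet weights.\n# Kept disabled (freeze_base = False) so the summary below still reports\n# all layers as trainable.\nfreeze_base = False\n\nif freeze_base:\n    for layer in base_model.layers:\n        layer.trainable = False",
"_____no_output_____"
],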
[
"# Initial Model Summary\nmodel.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) (None, 128, 128, 3) 0 \n_________________________________________________________________\nconv1_pad (ZeroPadding2D) (None, 129, 129, 3) 0 \n_________________________________________________________________\nconv1 (Conv2D) (None, 64, 64, 32) 864 \n_________________________________________________________________\nconv1_bn (BatchNormalization (None, 64, 64, 32) 128 \n_________________________________________________________________\nconv1_relu (ReLU) (None, 64, 64, 32) 0 \n_________________________________________________________________\nconv_dw_1 (DepthwiseConv2D) (None, 64, 64, 32) 288 \n_________________________________________________________________\nconv_dw_1_bn (BatchNormaliza (None, 64, 64, 32) 128 \n_________________________________________________________________\nconv_dw_1_relu (ReLU) (None, 64, 64, 32) 0 \n_________________________________________________________________\nconv_pw_1 (Conv2D) (None, 64, 64, 64) 2048 \n_________________________________________________________________\nconv_pw_1_bn (BatchNormaliza (None, 64, 64, 64) 256 \n_________________________________________________________________\nconv_pw_1_relu (ReLU) (None, 64, 64, 64) 0 \n_________________________________________________________________\nconv_pad_2 (ZeroPadding2D) (None, 65, 65, 64) 0 \n_________________________________________________________________\nconv_dw_2 (DepthwiseConv2D) (None, 32, 32, 64) 576 \n_________________________________________________________________\nconv_dw_2_bn (BatchNormaliza (None, 32, 32, 64) 256 \n_________________________________________________________________\nconv_dw_2_relu (ReLU) (None, 32, 32, 64) 0 \n_________________________________________________________________\nconv_pw_2 (Conv2D) (None, 32, 32, 128) 8192 \n_________________________________________________________________\nconv_pw_2_bn (BatchNormaliza (None, 32, 32, 128) 512 \n_________________________________________________________________\nconv_pw_2_relu (ReLU) (None, 32, 32, 128) 0 \n_________________________________________________________________\nconv_dw_3 (DepthwiseConv2D) (None, 32, 32, 128) 1152 \n_________________________________________________________________\nconv_dw_3_bn (BatchNormaliza (None, 32, 32, 128) 512 \n_________________________________________________________________\nconv_dw_3_relu (ReLU) (None, 32, 32, 128) 0 \n_________________________________________________________________\nconv_pw_3 (Conv2D) (None, 32, 32, 128) 16384 \n_________________________________________________________________\nconv_pw_3_bn (BatchNormaliza (None, 32, 32, 128) 512 \n_________________________________________________________________\nconv_pw_3_relu (ReLU) (None, 32, 32, 128) 0 \n_________________________________________________________________\nconv_pad_4 (ZeroPadding2D) (None, 33, 33, 128) 0 \n_________________________________________________________________\nconv_dw_4 (DepthwiseConv2D) (None, 16, 16, 128) 1152 \n_________________________________________________________________\nconv_dw_4_bn (BatchNormaliza (None, 16, 16, 128) 512 \n_________________________________________________________________\nconv_dw_4_relu (ReLU) (None, 16, 16, 128) 0 \n_________________________________________________________________\nconv_pw_4 (Conv2D) (None, 16, 16, 256) 32768 
\n_________________________________________________________________\nconv_pw_4_bn (BatchNormaliza (None, 16, 16, 256) 1024 \n_________________________________________________________________\nconv_pw_4_relu (ReLU) (None, 16, 16, 256) 0 \n_________________________________________________________________\nconv_dw_5 (DepthwiseConv2D) (None, 16, 16, 256) 2304 \n_________________________________________________________________\nconv_dw_5_bn (BatchNormaliza (None, 16, 16, 256) 1024 \n_________________________________________________________________\nconv_dw_5_relu (ReLU) (None, 16, 16, 256) 0 \n_________________________________________________________________\nconv_pw_5 (Conv2D) (None, 16, 16, 256) 65536 \n_________________________________________________________________\nconv_pw_5_bn (BatchNormaliza (None, 16, 16, 256) 1024 \n_________________________________________________________________\nconv_pw_5_relu (ReLU) (None, 16, 16, 256) 0 \n_________________________________________________________________\nconv_pad_6 (ZeroPadding2D) (None, 17, 17, 256) 0 \n_________________________________________________________________\nconv_dw_6 (DepthwiseConv2D) (None, 8, 8, 256) 2304 \n_________________________________________________________________\nconv_dw_6_bn (BatchNormaliza (None, 8, 8, 256) 1024 \n_________________________________________________________________\nconv_dw_6_relu (ReLU) (None, 8, 8, 256) 0 \n_________________________________________________________________\nconv_pw_6 (Conv2D) (None, 8, 8, 512) 131072 \n_________________________________________________________________\nconv_pw_6_bn (BatchNormaliza (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_pw_6_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_dw_7 (DepthwiseConv2D) (None, 8, 8, 512) 4608 \n_________________________________________________________________\nconv_dw_7_bn (BatchNormaliza (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_dw_7_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_pw_7 (Conv2D) (None, 8, 8, 512) 262144 \n_________________________________________________________________\nconv_pw_7_bn (BatchNormaliza (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_pw_7_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_dw_8 (DepthwiseConv2D) (None, 8, 8, 512) 4608 \n_________________________________________________________________\nconv_dw_8_bn (BatchNormaliza (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_dw_8_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_pw_8 (Conv2D) (None, 8, 8, 512) 262144 \n_________________________________________________________________\nconv_pw_8_bn (BatchNormaliza (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_pw_8_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_dw_9 (DepthwiseConv2D) (None, 8, 8, 512) 4608 \n_________________________________________________________________\nconv_dw_9_bn (BatchNormaliza (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_dw_9_relu (ReLU) (None, 8, 8, 512) 0 
\n_________________________________________________________________\nconv_pw_9 (Conv2D) (None, 8, 8, 512) 262144 \n_________________________________________________________________\nconv_pw_9_bn (BatchNormaliza (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_pw_9_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_dw_10 (DepthwiseConv2D) (None, 8, 8, 512) 4608 \n_________________________________________________________________\nconv_dw_10_bn (BatchNormaliz (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_dw_10_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_pw_10 (Conv2D) (None, 8, 8, 512) 262144 \n_________________________________________________________________\nconv_pw_10_bn (BatchNormaliz (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_pw_10_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_dw_11 (DepthwiseConv2D) (None, 8, 8, 512) 4608 \n_________________________________________________________________\nconv_dw_11_bn (BatchNormaliz (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_dw_11_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_pw_11 (Conv2D) (None, 8, 8, 512) 262144 \n_________________________________________________________________\nconv_pw_11_bn (BatchNormaliz (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_pw_11_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_pad_12 (ZeroPadding2D) (None, 9, 9, 512) 0 \n_________________________________________________________________\nconv_dw_12 (DepthwiseConv2D) (None, 4, 4, 512) 4608 \n_________________________________________________________________\nconv_dw_12_bn (BatchNormaliz (None, 4, 4, 512) 2048 \n_________________________________________________________________\nconv_dw_12_relu (ReLU) (None, 4, 4, 512) 0 \n_________________________________________________________________\nconv_pw_12 (Conv2D) (None, 4, 4, 1024) 524288 \n_________________________________________________________________\nconv_pw_12_bn (BatchNormaliz (None, 4, 4, 1024) 4096 \n_________________________________________________________________\nconv_pw_12_relu (ReLU) (None, 4, 4, 1024) 0 \n_________________________________________________________________\nconv_dw_13 (DepthwiseConv2D) (None, 4, 4, 1024) 9216 \n_________________________________________________________________\nconv_dw_13_bn (BatchNormaliz (None, 4, 4, 1024) 4096 \n_________________________________________________________________\nconv_dw_13_relu (ReLU) (None, 4, 4, 1024) 0 \n_________________________________________________________________\nconv_pw_13 (Conv2D) (None, 4, 4, 1024) 1048576 \n_________________________________________________________________\nconv_pw_13_bn (BatchNormaliz (None, 4, 4, 1024) 4096 \n_________________________________________________________________\nconv_pw_13_relu (ReLU) (None, 4, 4, 1024) 0 \n_________________________________________________________________\nglobal_average_pooling2d_1 ( (None, 1024) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 1024) 1049600 
\n_________________________________________________________________\ndropout_1 (Dropout) (None, 1024) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 10) 10250 \n=================================================================\nTotal params: 4,288,714\nTrainable params: 4,266,826\nNon-trainable params: 21,888\n_________________________________________________________________\n"
],
[
"model_png=False\nif model_png:\n plot_model(model, to_file='model.png')\n SVG(model_to_dot(model).create(prog='dot', format='svg'))",
"_____no_output_____"
],
[
"# let's visualize layer names and layer indices to see how many layers\n# we should freeze:\nfor i, layer in enumerate(model.layers):\n print(i, layer.name)",
"0 input_1\n1 conv1_pad\n2 conv1\n3 conv1_bn\n4 conv1_relu\n5 conv_dw_1\n6 conv_dw_1_bn\n7 conv_dw_1_relu\n8 conv_pw_1\n9 conv_pw_1_bn\n10 conv_pw_1_relu\n11 conv_pad_2\n12 conv_dw_2\n13 conv_dw_2_bn\n14 conv_dw_2_relu\n15 conv_pw_2\n16 conv_pw_2_bn\n17 conv_pw_2_relu\n18 conv_dw_3\n19 conv_dw_3_bn\n20 conv_dw_3_relu\n21 conv_pw_3\n22 conv_pw_3_bn\n23 conv_pw_3_relu\n24 conv_pad_4\n25 conv_dw_4\n26 conv_dw_4_bn\n27 conv_dw_4_relu\n28 conv_pw_4\n29 conv_pw_4_bn\n30 conv_pw_4_relu\n31 conv_dw_5\n32 conv_dw_5_bn\n33 conv_dw_5_relu\n34 conv_pw_5\n35 conv_pw_5_bn\n36 conv_pw_5_relu\n37 conv_pad_6\n38 conv_dw_6\n39 conv_dw_6_bn\n40 conv_dw_6_relu\n41 conv_pw_6\n42 conv_pw_6_bn\n43 conv_pw_6_relu\n44 conv_dw_7\n45 conv_dw_7_bn\n46 conv_dw_7_relu\n47 conv_pw_7\n48 conv_pw_7_bn\n49 conv_pw_7_relu\n50 conv_dw_8\n51 conv_dw_8_bn\n52 conv_dw_8_relu\n53 conv_pw_8\n54 conv_pw_8_bn\n55 conv_pw_8_relu\n56 conv_dw_9\n57 conv_dw_9_bn\n58 conv_dw_9_relu\n59 conv_pw_9\n60 conv_pw_9_bn\n61 conv_pw_9_relu\n62 conv_dw_10\n63 conv_dw_10_bn\n64 conv_dw_10_relu\n65 conv_pw_10\n66 conv_pw_10_bn\n67 conv_pw_10_relu\n68 conv_dw_11\n69 conv_dw_11_bn\n70 conv_dw_11_relu\n71 conv_pw_11\n72 conv_pw_11_bn\n73 conv_pw_11_relu\n74 conv_pad_12\n75 conv_dw_12\n76 conv_dw_12_bn\n77 conv_dw_12_relu\n78 conv_pw_12\n79 conv_pw_12_bn\n80 conv_pw_12_relu\n81 conv_dw_13\n82 conv_dw_13_bn\n83 conv_dw_13_relu\n84 conv_pw_13\n85 conv_pw_13_bn\n86 conv_pw_13_relu\n87 global_average_pooling2d_1\n88 dense_1\n89 dropout_1\n90 dense_2\n"
],
[
"# En esta instancia no pretendemos entrenar todas sino las ultimas agregadas \nfor layer in model.layers[:88]:\n layer.trainable = False\nfor layer in model.layers[88:]:\n layer.trainable = True",
"_____no_output_____"
],
[
"model.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) (None, 128, 128, 3) 0 \n_________________________________________________________________\nconv1_pad (ZeroPadding2D) (None, 129, 129, 3) 0 \n_________________________________________________________________\nconv1 (Conv2D) (None, 64, 64, 32) 864 \n_________________________________________________________________\nconv1_bn (BatchNormalization (None, 64, 64, 32) 128 \n_________________________________________________________________\nconv1_relu (ReLU) (None, 64, 64, 32) 0 \n_________________________________________________________________\nconv_dw_1 (DepthwiseConv2D) (None, 64, 64, 32) 288 \n_________________________________________________________________\nconv_dw_1_bn (BatchNormaliza (None, 64, 64, 32) 128 \n_________________________________________________________________\nconv_dw_1_relu (ReLU) (None, 64, 64, 32) 0 \n_________________________________________________________________\nconv_pw_1 (Conv2D) (None, 64, 64, 64) 2048 \n_________________________________________________________________\nconv_pw_1_bn (BatchNormaliza (None, 64, 64, 64) 256 \n_________________________________________________________________\nconv_pw_1_relu (ReLU) (None, 64, 64, 64) 0 \n_________________________________________________________________\nconv_pad_2 (ZeroPadding2D) (None, 65, 65, 64) 0 \n_________________________________________________________________\nconv_dw_2 (DepthwiseConv2D) (None, 32, 32, 64) 576 \n_________________________________________________________________\nconv_dw_2_bn (BatchNormaliza (None, 32, 32, 64) 256 \n_________________________________________________________________\nconv_dw_2_relu (ReLU) (None, 32, 32, 64) 0 \n_________________________________________________________________\nconv_pw_2 (Conv2D) (None, 32, 32, 128) 8192 \n_________________________________________________________________\nconv_pw_2_bn (BatchNormaliza (None, 32, 32, 128) 512 \n_________________________________________________________________\nconv_pw_2_relu (ReLU) (None, 32, 32, 128) 0 \n_________________________________________________________________\nconv_dw_3 (DepthwiseConv2D) (None, 32, 32, 128) 1152 \n_________________________________________________________________\nconv_dw_3_bn (BatchNormaliza (None, 32, 32, 128) 512 \n_________________________________________________________________\nconv_dw_3_relu (ReLU) (None, 32, 32, 128) 0 \n_________________________________________________________________\nconv_pw_3 (Conv2D) (None, 32, 32, 128) 16384 \n_________________________________________________________________\nconv_pw_3_bn (BatchNormaliza (None, 32, 32, 128) 512 \n_________________________________________________________________\nconv_pw_3_relu (ReLU) (None, 32, 32, 128) 0 \n_________________________________________________________________\nconv_pad_4 (ZeroPadding2D) (None, 33, 33, 128) 0 \n_________________________________________________________________\nconv_dw_4 (DepthwiseConv2D) (None, 16, 16, 128) 1152 \n_________________________________________________________________\nconv_dw_4_bn (BatchNormaliza (None, 16, 16, 128) 512 \n_________________________________________________________________\nconv_dw_4_relu (ReLU) (None, 16, 16, 128) 0 \n_________________________________________________________________\nconv_pw_4 (Conv2D) (None, 16, 16, 256) 32768 
\n_________________________________________________________________\nconv_pw_4_bn (BatchNormaliza (None, 16, 16, 256) 1024 \n_________________________________________________________________\nconv_pw_4_relu (ReLU) (None, 16, 16, 256) 0 \n_________________________________________________________________\nconv_dw_5 (DepthwiseConv2D) (None, 16, 16, 256) 2304 \n_________________________________________________________________\nconv_dw_5_bn (BatchNormaliza (None, 16, 16, 256) 1024 \n_________________________________________________________________\nconv_dw_5_relu (ReLU) (None, 16, 16, 256) 0 \n_________________________________________________________________\nconv_pw_5 (Conv2D) (None, 16, 16, 256) 65536 \n_________________________________________________________________\nconv_pw_5_bn (BatchNormaliza (None, 16, 16, 256) 1024 \n_________________________________________________________________\nconv_pw_5_relu (ReLU) (None, 16, 16, 256) 0 \n_________________________________________________________________\nconv_pad_6 (ZeroPadding2D) (None, 17, 17, 256) 0 \n_________________________________________________________________\nconv_dw_6 (DepthwiseConv2D) (None, 8, 8, 256) 2304 \n_________________________________________________________________\nconv_dw_6_bn (BatchNormaliza (None, 8, 8, 256) 1024 \n_________________________________________________________________\nconv_dw_6_relu (ReLU) (None, 8, 8, 256) 0 \n_________________________________________________________________\nconv_pw_6 (Conv2D) (None, 8, 8, 512) 131072 \n_________________________________________________________________\nconv_pw_6_bn (BatchNormaliza (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_pw_6_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_dw_7 (DepthwiseConv2D) (None, 8, 8, 512) 4608 \n_________________________________________________________________\nconv_dw_7_bn (BatchNormaliza (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_dw_7_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_pw_7 (Conv2D) (None, 8, 8, 512) 262144 \n_________________________________________________________________\nconv_pw_7_bn (BatchNormaliza (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_pw_7_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_dw_8 (DepthwiseConv2D) (None, 8, 8, 512) 4608 \n_________________________________________________________________\nconv_dw_8_bn (BatchNormaliza (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_dw_8_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_pw_8 (Conv2D) (None, 8, 8, 512) 262144 \n_________________________________________________________________\nconv_pw_8_bn (BatchNormaliza (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_pw_8_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_dw_9 (DepthwiseConv2D) (None, 8, 8, 512) 4608 \n_________________________________________________________________\nconv_dw_9_bn (BatchNormaliza (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_dw_9_relu (ReLU) (None, 8, 8, 512) 0 
\n_________________________________________________________________\nconv_pw_9 (Conv2D) (None, 8, 8, 512) 262144 \n_________________________________________________________________\nconv_pw_9_bn (BatchNormaliza (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_pw_9_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_dw_10 (DepthwiseConv2D) (None, 8, 8, 512) 4608 \n_________________________________________________________________\nconv_dw_10_bn (BatchNormaliz (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_dw_10_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_pw_10 (Conv2D) (None, 8, 8, 512) 262144 \n_________________________________________________________________\nconv_pw_10_bn (BatchNormaliz (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_pw_10_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_dw_11 (DepthwiseConv2D) (None, 8, 8, 512) 4608 \n_________________________________________________________________\nconv_dw_11_bn (BatchNormaliz (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_dw_11_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_pw_11 (Conv2D) (None, 8, 8, 512) 262144 \n_________________________________________________________________\nconv_pw_11_bn (BatchNormaliz (None, 8, 8, 512) 2048 \n_________________________________________________________________\nconv_pw_11_relu (ReLU) (None, 8, 8, 512) 0 \n_________________________________________________________________\nconv_pad_12 (ZeroPadding2D) (None, 9, 9, 512) 0 \n_________________________________________________________________\nconv_dw_12 (DepthwiseConv2D) (None, 4, 4, 512) 4608 \n_________________________________________________________________\nconv_dw_12_bn (BatchNormaliz (None, 4, 4, 512) 2048 \n_________________________________________________________________\nconv_dw_12_relu (ReLU) (None, 4, 4, 512) 0 \n_________________________________________________________________\nconv_pw_12 (Conv2D) (None, 4, 4, 1024) 524288 \n_________________________________________________________________\nconv_pw_12_bn (BatchNormaliz (None, 4, 4, 1024) 4096 \n_________________________________________________________________\nconv_pw_12_relu (ReLU) (None, 4, 4, 1024) 0 \n_________________________________________________________________\nconv_dw_13 (DepthwiseConv2D) (None, 4, 4, 1024) 9216 \n_________________________________________________________________\nconv_dw_13_bn (BatchNormaliz (None, 4, 4, 1024) 4096 \n_________________________________________________________________\nconv_dw_13_relu (ReLU) (None, 4, 4, 1024) 0 \n_________________________________________________________________\nconv_pw_13 (Conv2D) (None, 4, 4, 1024) 1048576 \n_________________________________________________________________\nconv_pw_13_bn (BatchNormaliz (None, 4, 4, 1024) 4096 \n_________________________________________________________________\nconv_pw_13_relu (ReLU) (None, 4, 4, 1024) 0 \n_________________________________________________________________\nglobal_average_pooling2d_1 ( (None, 1024) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 1024) 1049600 
\n_________________________________________________________________\ndropout_1 (Dropout) (None, 1024) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 10) 10250 \n=================================================================\nTotal params: 4,288,714\nTrainable params: 1,059,850\nNon-trainable params: 3,228,864\n_________________________________________________________________\n"
]
],
[
[
"## 4. Setup data augmentation techniques",
"_____no_output_____"
]
],
[
[
"############\n# [COMPLETE] \n# Use data augmentation to train your model.\n# Use the Keras ImageDataGenerator class for this porpouse.\n# Note: Given that we want to load our images from disk, instead of using \n# ImageDataGenerator.flow method, we have to use ImageDataGenerator.flow_from_directory \n# method in the following way:\n# generator_train = dataget_train.flow_from_directory('resized_images/train', \n# target_size=(128, 128), batch_size=32)\n# generator_val = dataget_train.flow_from_directory('resized_images/val', \n# target_size=(128, 128), batch_size=32)\n# Note that we have to resize our images to finetune the MobileNet CNN, this is done using \n# the target_size argument in flow_from_directory. Remember to set the target_size to one of\n# the valid listed here: [(128, 128), (160, 160), (192, 192), or (224, 224)].\n############\ndata_get=ImageDataGenerator()\ngenerator_train = data_get.flow_from_directory(directory='cifar10_images/train',\n target_size=(128, 128), batch_size=BATCH_SIZE)\ngenerator_val = data_get.flow_from_directory(directory='cifar10_images/val', \n target_size=(128, 128), batch_size=BATCH_SIZE)\n",
"Found 40000 images belonging to 10 classes.\nFound 10000 images belonging to 10 classes.\n"
]
],
[
[
"## 5. Add some keras callbacks",
"_____no_output_____"
]
],
[
[
"############\n# [COMPLETE] \n# Load and set some Keras callbacks here!\n############\n\nEXP_ID='experiment_003/'\n\nfrom keras.callbacks import ModelCheckpoint, TensorBoard\n\nif not os.path.exists(EXP_ID):\n os.makedirs(EXP_ID)\n\ncallbacks = [\n ModelCheckpoint(filepath=os.path.join(EXP_ID, 'weights.{epoch:02d}-{val_loss:.2f}.hdf5'),\n monitor='val_loss', \n verbose=1, \n save_best_only=False, \n save_weights_only=False, \n mode='auto'),\n TensorBoard(log_dir=os.path.join(EXP_ID, 'logs'), \n write_graph=True, \n write_images=False)\n]\n",
"_____no_output_____"
]
],
[
[
"## 6. Setup optimization algorithm with their hyperparameters",
"_____no_output_____"
]
],
[
[
"############\n# [COMPLETE] \n# Choose some optimization algorithm and explore different hyperparameters.\n# Compile your model.\n############\nfrom keras.optimizers import SGD\nfrom keras.losses import categorical_crossentropy\n#model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), \n# loss='categorical_crossentropy',\n# metrics=['accuracy'])\n\n\nmodel.compile(loss=categorical_crossentropy,\n optimizer='adam',\n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"## 7. Train model!",
"_____no_output_____"
]
],
[
[
"generator_train.n",
"_____no_output_____"
],
[
"############\n# [COMPLETE] \n# Use fit_generator to train your model.\n# e.g.:\n# model.fit_generator(\n# generator_train,\n# epochs=50,\n# validation_data=generator_val,\n# steps_per_epoch=generator_train.n // 32,\n# validation_steps=generator_val.n // 32)\n############\nif full_set_flag:\n steps_per_epoch=generator_train.n // BATCH_SIZE\n validation_steps=generator_val.n // BATCH_SIZE\nelse:\n steps_per_epoch=TRAIN_SAMPLES_RED // BATCH_SIZE\n validation_steps=VAL_SAMPLES_RED // BATCH_SIZE \n\n\nmodel.fit_generator(generator_train,\n epochs=NO_EPOCHS,\n validation_data=generator_val,\n steps_per_epoch=steps_per_epoch,\n validation_steps=validation_steps,\n callbacks=callbacks)",
"Epoch 1/25\n625/625 [==============================] - 1911s 3s/step - loss: 0.7648 - acc: 0.7352 - val_loss: 2.4989 - val_acc: 0.2167\n\nEpoch 00001: saving model to experiment_003/weights.01-2.50.hdf5\nEpoch 2/25\n625/625 [==============================] - 1927s 3s/step - loss: 0.7447 - acc: 0.7426 - val_loss: 2.7681 - val_acc: 0.1904\n\nEpoch 00002: saving model to experiment_003/weights.02-2.77.hdf5\nEpoch 3/25\n625/625 [==============================] - 1902s 3s/step - loss: 0.6890 - acc: 0.7630 - val_loss: 2.9040 - val_acc: 0.2019\n\nEpoch 00003: saving model to experiment_003/weights.03-2.90.hdf5\nEpoch 4/25\n625/625 [==============================] - 1933s 3s/step - loss: 0.6982 - acc: 0.7597 - val_loss: 2.9734 - val_acc: 0.1787\n\nEpoch 00004: saving model to experiment_003/weights.04-2.97.hdf5\nEpoch 5/25\n625/625 [==============================] - 1914s 3s/step - loss: 0.6404 - acc: 0.7810 - val_loss: 2.3613 - val_acc: 0.2074\n\nEpoch 00005: saving model to experiment_003/weights.05-2.36.hdf5\nEpoch 6/25\n625/625 [==============================] - 1903s 3s/step - loss: 0.6643 - acc: 0.7724 - val_loss: 2.6470 - val_acc: 0.2183\n\nEpoch 00006: saving model to experiment_003/weights.06-2.65.hdf5\nEpoch 7/25\n625/625 [==============================] - 1924s 3s/step - loss: 0.6096 - acc: 0.7885 - val_loss: 2.4154 - val_acc: 0.2025\n\nEpoch 00007: saving model to experiment_003/weights.07-2.42.hdf5\nEpoch 8/25\n625/625 [==============================] - 1935s 3s/step - loss: 0.6471 - acc: 0.7776 - val_loss: 2.5618 - val_acc: 0.2140\n\nEpoch 00008: saving model to experiment_003/weights.08-2.56.hdf5\nEpoch 9/25\n625/625 [==============================] - 2020s 3s/step - loss: 0.5878 - acc: 0.7964 - val_loss: 3.1497 - val_acc: 0.1823\n\nEpoch 00009: saving model to experiment_003/weights.09-3.15.hdf5\nEpoch 10/25\n625/625 [==============================] - 1981s 3s/step - loss: 0.6049 - acc: 0.7921 - val_loss: 3.1617 - val_acc: 0.1673\n\nEpoch 00010: saving model to experiment_003/weights.10-3.16.hdf5\nEpoch 11/25\n624/625 [============================>.] - ETA: 5s - loss: 0.5667 - acc: 0.8012 "
]
],
[
[
"## 8. Choose best model/snapshot",
"_____no_output_____"
]
],
[
[
"############\n# [COMPLETE] \n# Analyze and compare your results. Choose the best model and snapshot, \n# justify your election. \n############\n",
"_____no_output_____"
]
],
[
[
"## 9. Evaluate final model on the *testing* set",
"_____no_output_____"
]
],
[
[
"############\n# [COMPLETE] \n# Evaluate your model on the testing set.\n############\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7f00d1aafb608e6c77ca2a626633e287ac02f5c | 48,909 | ipynb | Jupyter Notebook | tutorials/SingleQubitGates/SingleQubitGates.ipynb | JohanC68/QuantumKatas | 11eea1da7e5b493d141a0a35889032a126022f05 | [
"MIT"
] | null | null | null | tutorials/SingleQubitGates/SingleQubitGates.ipynb | JohanC68/QuantumKatas | 11eea1da7e5b493d141a0a35889032a126022f05 | [
"MIT"
] | null | null | null | tutorials/SingleQubitGates/SingleQubitGates.ipynb | JohanC68/QuantumKatas | 11eea1da7e5b493d141a0a35889032a126022f05 | [
"MIT"
] | 1 | 2020-12-10T16:54:22.000Z | 2020-12-10T16:54:22.000Z | 53.923925 | 565 | 0.574823 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7f00d8fa9c76e298ab633753e14b52d20232e2f | 295,451 | ipynb | Jupyter Notebook | Project-2/FinalProjectFeatureSelection.ipynb | JasonCZH4/SCNU-CS-2018-DataMining | aba4cb2045d70808a7fa2af75600d7e66b5c0151 | [
"MIT"
] | 2 | 2021-07-04T04:34:51.000Z | 2021-12-23T01:42:39.000Z | Project-2/FinalProjectFeatureSelection.ipynb | charfole/SCNU-CS-2018-DataMining | aba4cb2045d70808a7fa2af75600d7e66b5c0151 | [
"MIT"
] | null | null | null | Project-2/FinalProjectFeatureSelection.ipynb | charfole/SCNU-CS-2018-DataMining | aba4cb2045d70808a7fa2af75600d7e66b5c0151 | [
"MIT"
] | 1 | 2021-07-04T04:37:04.000Z | 2021-07-04T04:37:04.000Z | 38.783276 | 168 | 0.2691 | [
[
[
"# 导入库",
"_____no_output_____"
]
],
[
[
"import pandas as pd\r\nimport numpy as np\r\nfrom sklearn.svm import LinearSVR, LinearSVC\r\nfrom sklearn.svm import *\r\nfrom sklearn.linear_model import Lasso, LogisticRegression, LinearRegression\r\nfrom sklearn.tree import DecisionTreeRegressor,DecisionTreeClassifier\r\nfrom sklearn.ensemble import RandomForestRegressor, RandomForestClassifier, GradientBoostingRegressor, GradientBoostingClassifier\r\nfrom sklearn.feature_selection import SelectFromModel\r\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\r\nfrom sklearn.decomposition import PCA,LatentDirichletAllocation\r\nfrom sklearn.metrics import *\r\nfrom sklearn.model_selection import train_test_split\r\nfrom sklearn.pipeline import Pipeline\r\nfrom sklearn.preprocessing import StandardScaler",
"_____no_output_____"
]
],
[
[
"# 读取数据集",
"_____no_output_____"
]
],
[
[
"filePath = './data/138rows_after.xlsx'\ndataFrame = pd.read_excel(filePath)\ndataArray = np.array(dataFrame)\ndataFrame",
"_____no_output_____"
]
],
[
[
"# 获取标签列",
"_____no_output_____"
]
],
[
[
"name = [column for column in dataFrame]\r\nname = name[5:]\r\npd.DataFrame(name)",
"_____no_output_____"
]
],
[
[
"# 查看数据规模",
"_____no_output_____"
]
],
[
[
"X_withLabel = dataArray[:92,5:]\r\nX_all = dataArray[:,5:] \r\ny_data = dataArray[:92,3]\r\ny_label= dataArray[:92,4].astype(int)\r\nprint(\"有标签数据的规模:\",X_withLabel.shape)\r\nprint(\"所有数据的规模:\",X_all.shape)\r\nprint(\"回归标签的规模:\",y_data.shape)\r\nprint(\"分类标签的规模:\",y_label.shape)",
"有标签数据的规模: (92, 76)\n所有数据的规模: (138, 76)\n回归标签的规模: (92,)\n分类标签的规模: (92,)\n"
]
],
[
[
"# 回归",
"_____no_output_____"
],
[
"## 利用Lasso进行特征选择",
"_____no_output_____"
]
],
[
[
"lasso = Lasso(alpha = 0.5,max_iter=5000).fit(X_withLabel, y_data)\r\nmodelLasso = SelectFromModel(lasso, prefit=True)\r\nX_Lasso = modelLasso.transform(X_withLabel)\r\n\r\nLassoIndexMask = modelLasso.get_support() # 获取筛选的mask\r\nvalue = X_withLabel[:,LassoIndexMask].tolist() # 被筛选出来的列的值\r\nLassoIndexMask = LassoIndexMask.tolist() \r\nLassoIndexTrue = []\r\nLassoIndexFalse = []\r\n\r\nfor i in range(len(LassoIndexMask)): # 记录下被筛选的indicator的序号\r\n if (LassoIndexMask[i]==True):\r\n LassoIndexTrue.append(i)\r\n if (LassoIndexMask[i]==False):\r\n LassoIndexFalse.append(i)\r\nprint(\"被筛选后剩下的特征:\")\r\nfor i in range(len(LassoIndexTrue)):\r\n print(i+1,\":\",name[LassoIndexTrue[i]])\r\nprint(\"\\n被筛选后去掉的特征:\")\r\nfor i in range(len(LassoIndexFalse)):\r\n print(i+1,\":\",name[LassoIndexFalse[i]])\r\n\r\ndataFrameOfLassoRegressionFeature = dataFrame\r\nfor i in range(len(LassoIndexFalse)):\r\n dataFrameOfLassoRegressionFeature = dataFrameOfLassoRegressionFeature.drop([name[LassoIndexFalse[i]]],axis=1)\r\ndataFrameOfLassoRegressionFeature.to_excel('/content/drive/MyDrive/DataMining/final/LassoFeatureSelectionOfData.xlsx')\r\ndataFrameOfLassoRegressionFeature",
"被筛选后剩下的特征:\n1 : FP1-A1 θ 节律, µV\n2 : FP1-A1 α 节律, µV\n3 : FP2-A2 δ 节律,µV\n4 : FP2-A2 θ 节律, µV\n5 : FP2-A2 α 节律, µV\n6 : FP2-A2 β(LF)节律, µV\n7 : F3-A1 α 节律, µV\n8 : F4-A2 α 节律, µV\n9 : FZ-A2 δ 节律,µV\n10 : C3-A1 α 节律, µV\n11 : C4-A2 θ 节律, µV\n12 : C4-A2 α 节律, µV\n13 : C4-A2 β(LF)节律, µV\n14 : CZ-A1 α 节律, µV\n15 : P3-A1 δ 节律,µV\n16 : P4-A2 α 节律, µV\n17 : P4-A2 β(LF)节律, µV\n18 : PZ-A2 δ 节律,µV\n19 : PZ-A2 α 节律, µV\n20 : PZ-A2 β(LF)节律, µV\n21 : O1-A1 δ 节律,µV\n22 : O1-A1 θ 节律, µV\n23 : O1-A1 α 节律, µV\n24 : O2-A2 δ 节律,µV\n25 : O2-A2 θ 节律, µV\n26 : F7-A1 δ 节律,µV\n27 : F8-A2 δ 节律,µV\n28 : T3-A1 θ 节律, µV\n29 : T3-A1 α 节律, µV\n30 : T3-A1 β(LF)节律, µV\n31 : T4-A2 δ 节律,µV\n32 : T4-A2 α 节律, µV\n33 : T4-A2 β(LF)节律, µV\n34 : T5-A1 δ 节律,µV\n35 : T5-A1 θ 节律, µV\n36 : T5-A1 α 节律, µV\n37 : T6-A2 θ 节律, µV\n38 : T6-A2 α 节律, µV\n39 : T6-A2 β(LF)节律, µV\n\n被筛选后去掉的特征:\n1 : FP1-A1 δ 节律,µV\n2 : FP1-A1 β(LF)节律, µV\n3 : F3-A1 δ 节律,µV\n4 : F3-A1 θ 节律, µV\n5 : F3-A1 β(LF)节律, µV\n6 : F4-A2 δ 节律,µV\n7 : F4-A2 θ 节律, µV\n8 : F4-A2 β(LF)节律, µV\n9 : FZ-A2 θ 节律, µV\n10 : FZ-A2 α 节律, µV\n11 : FZ-A2 β(LF)节律, µV\n12 : C3-A1 δ 节律,µV\n13 : C3-A1 θ 节律, µV\n14 : C3-A1 β(LF)节律, µV\n15 : C4-A2 δ 节律,µV\n16 : CZ-A1 δ 节律,µV\n17 : CZ-A1 θ 节律, µV\n18 : CZ-A1 β(LF)节律, µV\n19 : P3-A1 θ 节律, µV\n20 : P3-A1 α 节律, µV\n21 : P3-A1 β(LF)节律, µV\n22 : P4-A2 δ 节律,µV\n23 : P4-A2 θ 节律, µV\n24 : PZ-A2 θ 节律, µV\n25 : O1-A1 β(LF)节律, µV\n26 : O2-A2 α 节律, µV\n27 : O2-A2 β(LF)节律, µV\n28 : F7-A1 θ 节律, µV\n29 : F7-A1 α 节律, µV\n30 : F7-A1 β(LF)节律, µV\n31 : F8-A2 θ 节律, µV\n32 : F8-A2 α 节律, µV\n33 : F8-A2 β(LF)节律, µV\n34 : T3-A1 δ 节律,µV\n35 : T4-A2 θ 节律, µV\n36 : T5-A1 β(LF)节律, µV\n37 : T6-A2 δ 节律,µV\n"
]
],
[
[
"## 利用SVR进行特征选择",
"_____no_output_____"
]
],
[
[
"lsvr = LinearSVR(C=10,max_iter=10000,loss='squared_epsilon_insensitive',dual=False).fit(X_withLabel, y_data)\r\nmodelLSVR = SelectFromModel(lsvr, prefit=True)\r\nX_LSVR = modelLSVR.transform(X_withLabel)\r\n\r\nSVRIndexMask = modelLSVR.get_support() # 获取筛选的mask\r\nvalue = X_withLabel[:,SVRIndexMask].tolist() # 被筛选出来的列的值\r\nSVRIndexMask = SVRIndexMask.tolist() \r\nSVRIndexTrue = []\r\nSVRIndexFalse = []\r\n\r\nfor i in range(len(SVRIndexMask)): # 记录下被筛选的indicator的序号\r\n if (SVRIndexMask[i]==True):\r\n SVRIndexTrue.append(i)\r\n if (SVRIndexMask[i]==False):\r\n SVRIndexFalse.append(i)\r\nprint(\"被筛选后剩下的特征:\")\r\nfor i in range(len(SVRIndexTrue)):\r\n print(i+1,\":\",name[SVRIndexTrue[i]])\r\nprint(\"\\n被筛选后去掉的特征:\")\r\nfor i in range(len(SVRIndexFalse)):\r\n print(i+1,\":\",name[SVRIndexFalse[i]])\r\n\r\ndataFrameOfLSVRegressionFeature = dataFrame\r\nfor i in range(len(SVRIndexFalse)):\r\n dataFrameOfLSVRegressionFeature = dataFrameOfLSVRegressionFeature.drop([name[SVRIndexFalse[i]]],axis=1)\r\ndataFrameOfLSVRegressionFeature.to_excel('/content/drive/MyDrive/DataMining/final/LSVRFeatureSelectionOfLabel.xlsx')\r\ndataFrameOfLSVRegressionFeature",
"被筛选后剩下的特征:\n1 : FP1-A1 θ 节律, µV\n2 : FP1-A1 β(LF)节律, µV\n3 : FP2-A2 δ 节律,µV\n4 : FP2-A2 θ 节律, µV\n5 : FP2-A2 β(LF)节律, µV\n6 : F3-A1 θ 节律, µV\n7 : F4-A2 β(LF)节律, µV\n8 : C3-A1 β(LF)节律, µV\n9 : CZ-A1 θ 节律, µV\n10 : CZ-A1 β(LF)节律, µV\n11 : P3-A1 δ 节律,µV\n12 : P3-A1 θ 节律, µV\n13 : P3-A1 α 节律, µV\n14 : P4-A2 δ 节律,µV\n15 : P4-A2 θ 节律, µV\n16 : P4-A2 α 节律, µV\n17 : P4-A2 β(LF)节律, µV\n18 : O1-A1 θ 节律, µV\n19 : O1-A1 β(LF)节律, µV\n20 : O2-A2 θ 节律, µV\n21 : O2-A2 β(LF)节律, µV\n22 : F7-A1 θ 节律, µV\n23 : F7-A1 β(LF)节律, µV\n24 : F8-A2 δ 节律,µV\n25 : F8-A2 α 节律, µV\n26 : F8-A2 β(LF)节律, µV\n27 : T4-A2 β(LF)节律, µV\n28 : T5-A1 β(LF)节律, µV\n29 : T6-A2 δ 节律,µV\n30 : T6-A2 θ 节律, µV\n\n被筛选后去掉的特征:\n1 : FP1-A1 δ 节律,µV\n2 : FP1-A1 α 节律, µV\n3 : FP2-A2 α 节律, µV\n4 : F3-A1 δ 节律,µV\n5 : F3-A1 α 节律, µV\n6 : F3-A1 β(LF)节律, µV\n7 : F4-A2 δ 节律,µV\n8 : F4-A2 θ 节律, µV\n9 : F4-A2 α 节律, µV\n10 : FZ-A2 δ 节律,µV\n11 : FZ-A2 θ 节律, µV\n12 : FZ-A2 α 节律, µV\n13 : FZ-A2 β(LF)节律, µV\n14 : C3-A1 δ 节律,µV\n15 : C3-A1 θ 节律, µV\n16 : C3-A1 α 节律, µV\n17 : C4-A2 δ 节律,µV\n18 : C4-A2 θ 节律, µV\n19 : C4-A2 α 节律, µV\n20 : C4-A2 β(LF)节律, µV\n21 : CZ-A1 δ 节律,µV\n22 : CZ-A1 α 节律, µV\n23 : P3-A1 β(LF)节律, µV\n24 : PZ-A2 δ 节律,µV\n25 : PZ-A2 θ 节律, µV\n26 : PZ-A2 α 节律, µV\n27 : PZ-A2 β(LF)节律, µV\n28 : O1-A1 δ 节律,µV\n29 : O1-A1 α 节律, µV\n30 : O2-A2 δ 节律,µV\n31 : O2-A2 α 节律, µV\n32 : F7-A1 δ 节律,µV\n33 : F7-A1 α 节律, µV\n34 : F8-A2 θ 节律, µV\n35 : T3-A1 δ 节律,µV\n36 : T3-A1 θ 节律, µV\n37 : T3-A1 α 节律, µV\n38 : T3-A1 β(LF)节律, µV\n39 : T4-A2 δ 节律,µV\n40 : T4-A2 θ 节律, µV\n41 : T4-A2 α 节律, µV\n42 : T5-A1 δ 节律,µV\n43 : T5-A1 θ 节律, µV\n44 : T5-A1 α 节律, µV\n45 : T6-A2 α 节律, µV\n46 : T6-A2 β(LF)节律, µV\n"
]
],
[
[
"## 利用树进行特征选择",
"_____no_output_____"
]
],
[
[
"decisionTree = DecisionTreeRegressor(min_samples_leaf=1,random_state=1).fit(X_withLabel, y_data)\r\nmodelDecisionTree = SelectFromModel(decisionTree, prefit=True)\r\nX_DecisionTree = modelDecisionTree.transform(X_withLabel)\r\n\r\ndecisionTreeIndexMask = modelDecisionTree.get_support() # 获取筛选的mask\r\nvalue = X_withLabel[:,LassoIndexMask].tolist() # 被筛选出来的列的值\r\ndecisionTreeIndexMask = decisionTreeIndexMask.tolist() \r\ndecisionTreeIndexTrue = []\r\ndecisionTreeIndexFalse = []\r\n\r\nfor i in range(len(decisionTreeIndexMask)): # 记录下被筛选的indicator的序号\r\n if (decisionTreeIndexMask[i]==True):\r\n decisionTreeIndexTrue.append(i)\r\n if (decisionTreeIndexMask[i]==False):\r\n decisionTreeIndexFalse.append(i)\r\nprint(\"被筛选后剩下的特征:\")\r\nfor i in range(len(decisionTreeIndexTrue)):\r\n print(i+1,\":\",name[decisionTreeIndexTrue[i]])\r\nprint(\"\\n被筛选后去掉的特征:\")\r\nfor i in range(len(decisionTreeIndexFalse)):\r\n print(i+1,\":\",name[decisionTreeIndexFalse[i]])\r\n\r\ndataFrameOfDecisionTreeRegressionFeature = dataFrame\r\nfor i in range(len(decisionTreeIndexFalse)):\r\n dataFrameOfDecisionTreeRegressionFeature = dataFrameOfDecisionTreeRegressionFeature.drop([name[decisionTreeIndexFalse[i]]],axis=1)\r\ndataFrameOfDecisionTreeRegressionFeature.to_excel('/content/drive/MyDrive/DataMining/final/DecisionTreeFeatureSelectionOfData.xlsx')\r\ndataFrameOfDecisionTreeRegressionFeature",
"被筛选后剩下的特征:\n1 : F4-A2 θ 节律, µV\n2 : F4-A2 α 节律, µV\n3 : FZ-A2 θ 节律, µV\n4 : FZ-A2 β(LF)节律, µV\n5 : C3-A1 θ 节律, µV\n6 : C3-A1 β(LF)节律, µV\n7 : CZ-A1 β(LF)节律, µV\n8 : P3-A1 δ 节律,µV\n9 : P3-A1 β(LF)节律, µV\n10 : PZ-A2 α 节律, µV\n11 : O2-A2 δ 节律,µV\n12 : O2-A2 α 节律, µV\n13 : F8-A2 δ 节律,µV\n14 : T3-A1 θ 节律, µV\n15 : T5-A1 β(LF)节律, µV\n16 : T6-A2 α 节律, µV\n\n被筛选后去掉的特征:\n1 : FP1-A1 δ 节律,µV\n2 : FP1-A1 θ 节律, µV\n3 : FP1-A1 α 节律, µV\n4 : FP1-A1 β(LF)节律, µV\n5 : FP2-A2 δ 节律,µV\n6 : FP2-A2 θ 节律, µV\n7 : FP2-A2 α 节律, µV\n8 : FP2-A2 β(LF)节律, µV\n9 : F3-A1 δ 节律,µV\n10 : F3-A1 θ 节律, µV\n11 : F3-A1 α 节律, µV\n12 : F3-A1 β(LF)节律, µV\n13 : F4-A2 δ 节律,µV\n14 : F4-A2 β(LF)节律, µV\n15 : FZ-A2 δ 节律,µV\n16 : FZ-A2 α 节律, µV\n17 : C3-A1 δ 节律,µV\n18 : C3-A1 α 节律, µV\n19 : C4-A2 δ 节律,µV\n20 : C4-A2 θ 节律, µV\n21 : C4-A2 α 节律, µV\n22 : C4-A2 β(LF)节律, µV\n23 : CZ-A1 δ 节律,µV\n24 : CZ-A1 θ 节律, µV\n25 : CZ-A1 α 节律, µV\n26 : P3-A1 θ 节律, µV\n27 : P3-A1 α 节律, µV\n28 : P4-A2 δ 节律,µV\n29 : P4-A2 θ 节律, µV\n30 : P4-A2 α 节律, µV\n31 : P4-A2 β(LF)节律, µV\n32 : PZ-A2 δ 节律,µV\n33 : PZ-A2 θ 节律, µV\n34 : PZ-A2 β(LF)节律, µV\n35 : O1-A1 δ 节律,µV\n36 : O1-A1 θ 节律, µV\n37 : O1-A1 α 节律, µV\n38 : O1-A1 β(LF)节律, µV\n39 : O2-A2 θ 节律, µV\n40 : O2-A2 β(LF)节律, µV\n41 : F7-A1 δ 节律,µV\n42 : F7-A1 θ 节律, µV\n43 : F7-A1 α 节律, µV\n44 : F7-A1 β(LF)节律, µV\n45 : F8-A2 θ 节律, µV\n46 : F8-A2 α 节律, µV\n47 : F8-A2 β(LF)节律, µV\n48 : T3-A1 δ 节律,µV\n49 : T3-A1 α 节律, µV\n50 : T3-A1 β(LF)节律, µV\n51 : T4-A2 δ 节律,µV\n52 : T4-A2 θ 节律, µV\n53 : T4-A2 α 节律, µV\n54 : T4-A2 β(LF)节律, µV\n55 : T5-A1 δ 节律,µV\n56 : T5-A1 θ 节律, µV\n57 : T5-A1 α 节律, µV\n58 : T6-A2 δ 节律,µV\n59 : T6-A2 θ 节律, µV\n60 : T6-A2 β(LF)节律, µV\n"
]
],
[
[
"## 利用随机森林进行特征选择",
"_____no_output_____"
]
],
[
[
"randomForest = RandomForestRegressor().fit(X_withLabel, y_data)\r\nmodelrandomForest = SelectFromModel(randomForest, prefit=True)\r\nX_randomForest = modelrandomForest.transform(X_withLabel)\r\n\r\nrandomForestIndexMask = modelrandomForest.get_support() # 获取筛选的mask\r\nvalue = X_withLabel[:,randomForestIndexMask].tolist() # 被筛选出来的列的值\r\nrandomForestIndexMask = randomForestIndexMask.tolist() \r\nrandomForestIndexTrue = []\r\nrandomForestIndexFalse = []\r\n\r\nfor i in range(len(randomForestIndexMask)): # 记录下被筛选的indicator的序号\r\n if (randomForestIndexMask[i]==True):\r\n randomForestIndexTrue.append(i)\r\n if (randomForestIndexMask[i]==False):\r\n randomForestIndexFalse.append(i)\r\nprint(\"被筛选后剩下的特征:\")\r\nfor i in range(len(randomForestIndexTrue)):\r\n print(i+1,\":\",name[randomForestIndexTrue[i]])\r\nprint(\"\\n被筛选后去掉的特征:\")\r\nfor i in range(len(randomForestIndexFalse)):\r\n print(i+1,\":\",name[randomForestIndexFalse[i]])\r\n\r\ndataFrameOfRandomForestRegressionFeature = dataFrame\r\nfor i in range(len(randomForestIndexFalse)):\r\n dataFrameOfRandomForestRegressionFeature = dataFrameOfRandomForestRegressionFeature.drop([name[randomForestIndexFalse[i]]],axis=1)\r\ndataFrameOfRandomForestRegressionFeature.to_excel('/content/drive/MyDrive/DataMining/final/RandomForestFeatureSelectionOfData.xlsx')\r\ndataFrameOfRandomForestRegressionFeature",
"被筛选后剩下的特征:\n1 : FP1-A1 θ 节律, µV\n2 : FP1-A1 α 节律, µV\n3 : FP2-A2 θ 节律, µV\n4 : FP2-A2 β(LF)节律, µV\n5 : F3-A1 θ 节律, µV\n6 : F4-A2 θ 节律, µV\n7 : C3-A1 θ 节律, µV\n8 : C4-A2 δ 节律,µV\n9 : C4-A2 θ 节律, µV\n10 : P3-A1 δ 节律,µV\n11 : P4-A2 θ 节律, µV\n12 : PZ-A2 β(LF)节律, µV\n13 : O1-A1 θ 节律, µV\n14 : O2-A2 δ 节律,µV\n15 : O2-A2 θ 节律, µV\n16 : O2-A2 β(LF)节律, µV\n17 : F7-A1 θ 节律, µV\n18 : F8-A2 δ 节律,µV\n19 : F8-A2 θ 节律, µV\n20 : F8-A2 α 节律, µV\n21 : T3-A1 θ 节律, µV\n22 : T3-A1 β(LF)节律, µV\n23 : T4-A2 δ 节律,µV\n24 : T4-A2 θ 节律, µV\n25 : T4-A2 β(LF)节律, µV\n26 : T5-A1 θ 节律, µV\n27 : T5-A1 β(LF)节律, µV\n28 : T6-A2 θ 节律, µV\n29 : T6-A2 β(LF)节律, µV\n\n被筛选后去掉的特征:\n1 : FP1-A1 δ 节律,µV\n2 : FP1-A1 β(LF)节律, µV\n3 : FP2-A2 δ 节律,µV\n4 : FP2-A2 α 节律, µV\n5 : F3-A1 δ 节律,µV\n6 : F3-A1 α 节律, µV\n7 : F3-A1 β(LF)节律, µV\n8 : F4-A2 δ 节律,µV\n9 : F4-A2 α 节律, µV\n10 : F4-A2 β(LF)节律, µV\n11 : FZ-A2 δ 节律,µV\n12 : FZ-A2 θ 节律, µV\n13 : FZ-A2 α 节律, µV\n14 : FZ-A2 β(LF)节律, µV\n15 : C3-A1 δ 节律,µV\n16 : C3-A1 α 节律, µV\n17 : C3-A1 β(LF)节律, µV\n18 : C4-A2 α 节律, µV\n19 : C4-A2 β(LF)节律, µV\n20 : CZ-A1 δ 节律,µV\n21 : CZ-A1 θ 节律, µV\n22 : CZ-A1 α 节律, µV\n23 : CZ-A1 β(LF)节律, µV\n24 : P3-A1 θ 节律, µV\n25 : P3-A1 α 节律, µV\n26 : P3-A1 β(LF)节律, µV\n27 : P4-A2 δ 节律,µV\n28 : P4-A2 α 节律, µV\n29 : P4-A2 β(LF)节律, µV\n30 : PZ-A2 δ 节律,µV\n31 : PZ-A2 θ 节律, µV\n32 : PZ-A2 α 节律, µV\n33 : O1-A1 δ 节律,µV\n34 : O1-A1 α 节律, µV\n35 : O1-A1 β(LF)节律, µV\n36 : O2-A2 α 节律, µV\n37 : F7-A1 δ 节律,µV\n38 : F7-A1 α 节律, µV\n39 : F7-A1 β(LF)节律, µV\n40 : F8-A2 β(LF)节律, µV\n41 : T3-A1 δ 节律,µV\n42 : T3-A1 α 节律, µV\n43 : T4-A2 α 节律, µV\n44 : T5-A1 δ 节律,µV\n45 : T5-A1 α 节律, µV\n46 : T6-A2 δ 节律,µV\n47 : T6-A2 α 节律, µV\n"
]
],
[
[
"## 利用GBDT进行特征选择",
"_____no_output_____"
]
],
[
[
"GBDTRegressor = GradientBoostingRegressor().fit(X_withLabel, y_data)\r\nmodelGBDTRegressor = SelectFromModel(GBDTRegressor, prefit=True)\r\nX_GBDTRegressor = modelGBDTRegressor.transform(X_withLabel)\r\n\r\nGBDTRegressorIndexMask = modelGBDTRegressor.get_support() # 获取筛选的mask\r\nvalue = X_withLabel[:,GBDTRegressorIndexMask].tolist() # 被筛选出来的列的值\r\nGBDTRegressorIndexMask = GBDTRegressorIndexMask.tolist() \r\nGBDTRegressorIndexTrue = []\r\nGBDTRegressorIndexFalse = []\r\n\r\nfor i in range(len(GBDTRegressorIndexMask)): # 记录下被筛选的indicator的序号\r\n if (GBDTRegressorIndexMask[i]==True):\r\n GBDTRegressorIndexTrue.append(i)\r\n if (GBDTRegressorIndexMask[i]==False):\r\n GBDTRegressorIndexFalse.append(i)\r\nprint(\"被筛选后剩下的特征:\")\r\nfor i in range(len(GBDTRegressorIndexTrue)):\r\n print(i+1,\":\",name[GBDTRegressorIndexTrue[i]])\r\nprint(\"\\n被筛选后去掉的特征:\")\r\nfor i in range(len(GBDTRegressorIndexFalse)):\r\n print(i+1,\":\",name[GBDTRegressorIndexFalse[i]])\r\n\r\ndataFrameOfGBDTRegressionFeature = dataFrame\r\nfor i in range(len(GBDTRegressorIndexFalse)):\r\n dataFrameOfGBDTRegressionFeature = dataFrameOfGBDTRegressionFeature.drop([name[GBDTRegressorIndexFalse[i]]],axis=1)\r\ndataFrameOfGBDTRegressionFeature.to_excel('/content/drive/MyDrive/DataMining/final/GBDTRegressorFeatureSelectionOfData.xlsx')\r\ndataFrameOfGBDTRegressionFeature",
"被筛选后剩下的特征:\n1 : FP2-A2 θ 节律, µV\n2 : FP2-A2 β(LF)节律, µV\n3 : F3-A1 θ 节律, µV\n4 : C3-A1 δ 节律,µV\n5 : C3-A1 θ 节律, µV\n6 : C4-A2 δ 节律,µV\n7 : C4-A2 θ 节律, µV\n8 : CZ-A1 θ 节律, µV\n9 : P3-A1 δ 节律,µV\n10 : P3-A1 α 节律, µV\n11 : P4-A2 θ 节律, µV\n12 : P4-A2 α 节律, µV\n13 : PZ-A2 α 节律, µV\n14 : PZ-A2 β(LF)节律, µV\n15 : O1-A1 θ 节律, µV\n16 : O2-A2 δ 节律,µV\n17 : O2-A2 θ 节律, µV\n18 : O2-A2 β(LF)节律, µV\n19 : F8-A2 δ 节律,µV\n20 : F8-A2 α 节律, µV\n21 : F8-A2 β(LF)节律, µV\n22 : T3-A1 θ 节律, µV\n23 : T4-A2 δ 节律,µV\n24 : T4-A2 θ 节律, µV\n25 : T4-A2 β(LF)节律, µV\n26 : T6-A2 θ 节律, µV\n27 : T6-A2 β(LF)节律, µV\n\n被筛选后去掉的特征:\n1 : FP1-A1 δ 节律,µV\n2 : FP1-A1 θ 节律, µV\n3 : FP1-A1 α 节律, µV\n4 : FP1-A1 β(LF)节律, µV\n5 : FP2-A2 δ 节律,µV\n6 : FP2-A2 α 节律, µV\n7 : F3-A1 δ 节律,µV\n8 : F3-A1 α 节律, µV\n9 : F3-A1 β(LF)节律, µV\n10 : F4-A2 δ 节律,µV\n11 : F4-A2 θ 节律, µV\n12 : F4-A2 α 节律, µV\n13 : F4-A2 β(LF)节律, µV\n14 : FZ-A2 δ 节律,µV\n15 : FZ-A2 θ 节律, µV\n16 : FZ-A2 α 节律, µV\n17 : FZ-A2 β(LF)节律, µV\n18 : C3-A1 α 节律, µV\n19 : C3-A1 β(LF)节律, µV\n20 : C4-A2 α 节律, µV\n21 : C4-A2 β(LF)节律, µV\n22 : CZ-A1 δ 节律,µV\n23 : CZ-A1 α 节律, µV\n24 : CZ-A1 β(LF)节律, µV\n25 : P3-A1 θ 节律, µV\n26 : P3-A1 β(LF)节律, µV\n27 : P4-A2 δ 节律,µV\n28 : P4-A2 β(LF)节律, µV\n29 : PZ-A2 δ 节律,µV\n30 : PZ-A2 θ 节律, µV\n31 : O1-A1 δ 节律,µV\n32 : O1-A1 α 节律, µV\n33 : O1-A1 β(LF)节律, µV\n34 : O2-A2 α 节律, µV\n35 : F7-A1 δ 节律,µV\n36 : F7-A1 θ 节律, µV\n37 : F7-A1 α 节律, µV\n38 : F7-A1 β(LF)节律, µV\n39 : F8-A2 θ 节律, µV\n40 : T3-A1 δ 节律,µV\n41 : T3-A1 α 节律, µV\n42 : T3-A1 β(LF)节律, µV\n43 : T4-A2 α 节律, µV\n44 : T5-A1 δ 节律,µV\n45 : T5-A1 θ 节律, µV\n46 : T5-A1 α 节律, µV\n47 : T5-A1 β(LF)节律, µV\n48 : T6-A2 δ 节律,µV\n49 : T6-A2 α 节律, µV\n"
]
],
[
[
"# 分类",
"_____no_output_____"
],
[
"## 利用Lasso进行特征选择",
"_____no_output_____"
]
],
[
[
"lasso = Lasso(alpha = 0.3,max_iter=5000).fit(X_withLabel, y_label)\r\nmodelLasso = SelectFromModel(lasso, prefit=True)\r\nX_Lasso = modelLasso.transform(X_withLabel)\r\n\r\nLassoIndexMask = modelLasso.get_support() # 获取筛选的mask\r\nvalue = X_withLabel[:,LassoIndexMask].tolist() # 被筛选出来的列的值\r\nLassoIndexMask = LassoIndexMask.tolist() \r\nLassoIndexTrue = []\r\nLassoIndexFalse = []\r\n\r\nfor i in range(len(LassoIndexMask)): # 记录下被筛选的indicator的序号\r\n if (LassoIndexMask[i]==True):\r\n LassoIndexTrue.append(i)\r\n if (LassoIndexMask[i]==False):\r\n LassoIndexFalse.append(i)\r\nprint(\"被筛选后剩下的特征:\")\r\nfor i in range(len(LassoIndexTrue)):\r\n print(i+1,\":\",name[LassoIndexTrue[i]])\r\nprint(\"\\n被筛选后去掉的特征:\")\r\nfor i in range(len(LassoIndexFalse)):\r\n print(i+1,\":\",name[LassoIndexFalse[i]])\r\n\r\ndataFrameOfLassoClassificationFeature = dataFrame\r\nfor i in range(len(LassoIndexFalse)):\r\n dataFrameOfLassoClassificationFeature = dataFrameOfLassoClassificationFeature.drop([name[LassoIndexFalse[i]]],axis=1)\r\ndataFrameOfLassoClassificationFeature.to_excel('/content/drive/MyDrive/DataMining/final/LassoFeatureSelectionOfLabel.xlsx')\r\ndataFrameOfLassoClassificationFeature",
"被筛选后剩下的特征:\n1 : FP1-A1 α 节律, µV\n2 : FZ-A2 δ 节律,µV\n3 : C4-A2 δ 节律,µV\n4 : CZ-A1 α 节律, µV\n5 : P3-A1 δ 节律,µV\n6 : P4-A2 α 节律, µV\n7 : PZ-A2 δ 节律,µV\n8 : O2-A2 δ 节律,µV\n9 : F7-A1 δ 节律,µV\n10 : F7-A1 α 节律, µV\n11 : T3-A1 α 节律, µV\n12 : T4-A2 δ 节律,µV\n13 : T4-A2 α 节律, µV\n14 : T5-A1 δ 节律,µV\n\n被筛选后去掉的特征:\n1 : FP1-A1 δ 节律,µV\n2 : FP1-A1 θ 节律, µV\n3 : FP1-A1 β(LF)节律, µV\n4 : FP2-A2 δ 节律,µV\n5 : FP2-A2 θ 节律, µV\n6 : FP2-A2 α 节律, µV\n7 : FP2-A2 β(LF)节律, µV\n8 : F3-A1 δ 节律,µV\n9 : F3-A1 θ 节律, µV\n10 : F3-A1 α 节律, µV\n11 : F3-A1 β(LF)节律, µV\n12 : F4-A2 δ 节律,µV\n13 : F4-A2 θ 节律, µV\n14 : F4-A2 α 节律, µV\n15 : F4-A2 β(LF)节律, µV\n16 : FZ-A2 θ 节律, µV\n17 : FZ-A2 α 节律, µV\n18 : FZ-A2 β(LF)节律, µV\n19 : C3-A1 δ 节律,µV\n20 : C3-A1 θ 节律, µV\n21 : C3-A1 α 节律, µV\n22 : C3-A1 β(LF)节律, µV\n23 : C4-A2 θ 节律, µV\n24 : C4-A2 α 节律, µV\n25 : C4-A2 β(LF)节律, µV\n26 : CZ-A1 δ 节律,µV\n27 : CZ-A1 θ 节律, µV\n28 : CZ-A1 β(LF)节律, µV\n29 : P3-A1 θ 节律, µV\n30 : P3-A1 α 节律, µV\n31 : P3-A1 β(LF)节律, µV\n32 : P4-A2 δ 节律,µV\n33 : P4-A2 θ 节律, µV\n34 : P4-A2 β(LF)节律, µV\n35 : PZ-A2 θ 节律, µV\n36 : PZ-A2 α 节律, µV\n37 : PZ-A2 β(LF)节律, µV\n38 : O1-A1 δ 节律,µV\n39 : O1-A1 θ 节律, µV\n40 : O1-A1 α 节律, µV\n41 : O1-A1 β(LF)节律, µV\n42 : O2-A2 θ 节律, µV\n43 : O2-A2 α 节律, µV\n44 : O2-A2 β(LF)节律, µV\n45 : F7-A1 θ 节律, µV\n46 : F7-A1 β(LF)节律, µV\n47 : F8-A2 δ 节律,µV\n48 : F8-A2 θ 节律, µV\n49 : F8-A2 α 节律, µV\n50 : F8-A2 β(LF)节律, µV\n51 : T3-A1 δ 节律,µV\n52 : T3-A1 θ 节律, µV\n53 : T3-A1 β(LF)节律, µV\n54 : T4-A2 θ 节律, µV\n55 : T4-A2 β(LF)节律, µV\n56 : T5-A1 θ 节律, µV\n57 : T5-A1 α 节律, µV\n58 : T5-A1 β(LF)节律, µV\n59 : T6-A2 δ 节律,µV\n60 : T6-A2 θ 节律, µV\n61 : T6-A2 α 节律, µV\n62 : T6-A2 β(LF)节律, µV\n"
]
],
[
[
"## 利用SVC进行特征选择",
"_____no_output_____"
]
],
[
[
"lsvc = LinearSVC(C=10,max_iter=10000,dual=False).fit(X_withLabel, y_label.ravel())\r\nmodelLSVC = SelectFromModel(lsvc, prefit=True)\r\nX_LSVR = modelLSVR.transform(X_withLabel)\r\n\r\nSVCIndexMask = modelLSVC.get_support() # 获取筛选的mask\r\nvalue = X_withLabel[:,SVCIndexMask].tolist() # 被筛选出来的列的值\r\nSVCIndexMask = SVCIndexMask.tolist() \r\nSVCIndexTrue = []\r\nSVCIndexFalse = []\r\n\r\nfor i in range(len(SVCIndexMask)): # 记录下被筛选的indicator的序号\r\n if (SVCIndexMask[i]==True):\r\n SVCIndexTrue.append(i)\r\n if (SVCIndexMask[i]==False):\r\n SVCIndexFalse.append(i)\r\nprint(\"被筛选后剩下的特征:\")\r\nfor i in range(len(SVCIndexTrue)):\r\n print(i+1,\":\",name[SVCIndexTrue[i]])\r\nprint(\"\\n被筛选后去掉的特征:\")\r\nfor i in range(len(SVCIndexFalse)):\r\n print(i+1,\":\",name[SVCIndexFalse[i]])\r\n\r\ndataFrameOfLSVClassificationFeature = dataFrame\r\nfor i in range(len(SVCIndexFalse)):\r\n dataFrameOfLSVClassificationFeature = dataFrameOfLSVClassificationFeature.drop([name[SVCIndexFalse[i]]],axis=1)\r\ndataFrameOfLSVClassificationFeature.to_excel('/content/drive/MyDrive/DataMining/final/LSVCFeatureSelectionOfLabel.xlsx')\r\ndataFrameOfLSVClassificationFeature",
"被筛选后剩下的特征:\n1 : FP1-A1 θ 节律, µV\n2 : FP2-A2 θ 节律, µV\n3 : FP2-A2 α 节律, µV\n4 : FP2-A2 β(LF)节律, µV\n5 : FZ-A2 β(LF)节律, µV\n6 : C3-A1 θ 节律, µV\n7 : C3-A1 β(LF)节律, µV\n8 : C4-A2 δ 节律,µV\n9 : C4-A2 θ 节律, µV\n10 : C4-A2 α 节律, µV\n11 : CZ-A1 δ 节律,µV\n12 : CZ-A1 θ 节律, µV\n13 : CZ-A1 α 节律, µV\n14 : P3-A1 β(LF)节律, µV\n15 : P4-A2 θ 节律, µV\n16 : P4-A2 β(LF)节律, µV\n17 : PZ-A2 β(LF)节律, µV\n18 : O2-A2 δ 节律,µV\n19 : O2-A2 θ 节律, µV\n20 : O2-A2 α 节律, µV\n21 : F7-A1 θ 节律, µV\n22 : F7-A1 α 节律, µV\n23 : F7-A1 β(LF)节律, µV\n24 : F8-A2 β(LF)节律, µV\n25 : T4-A2 δ 节律,µV\n26 : T4-A2 θ 节律, µV\n27 : T4-A2 α 节律, µV\n28 : T4-A2 β(LF)节律, µV\n29 : T5-A1 θ 节律, µV\n30 : T6-A2 δ 节律,µV\n31 : T6-A2 α 节律, µV\n\n被筛选后去掉的特征:\n1 : FP1-A1 δ 节律,µV\n2 : FP1-A1 α 节律, µV\n3 : FP1-A1 β(LF)节律, µV\n4 : FP2-A2 δ 节律,µV\n5 : F3-A1 δ 节律,µV\n6 : F3-A1 θ 节律, µV\n7 : F3-A1 α 节律, µV\n8 : F3-A1 β(LF)节律, µV\n9 : F4-A2 δ 节律,µV\n10 : F4-A2 θ 节律, µV\n11 : F4-A2 α 节律, µV\n12 : F4-A2 β(LF)节律, µV\n13 : FZ-A2 δ 节律,µV\n14 : FZ-A2 θ 节律, µV\n15 : FZ-A2 α 节律, µV\n16 : C3-A1 δ 节律,µV\n17 : C3-A1 α 节律, µV\n18 : C4-A2 β(LF)节律, µV\n19 : CZ-A1 β(LF)节律, µV\n20 : P3-A1 δ 节律,µV\n21 : P3-A1 θ 节律, µV\n22 : P3-A1 α 节律, µV\n23 : P4-A2 δ 节律,µV\n24 : P4-A2 α 节律, µV\n25 : PZ-A2 δ 节律,µV\n26 : PZ-A2 θ 节律, µV\n27 : PZ-A2 α 节律, µV\n28 : O1-A1 δ 节律,µV\n29 : O1-A1 θ 节律, µV\n30 : O1-A1 α 节律, µV\n31 : O1-A1 β(LF)节律, µV\n32 : O2-A2 β(LF)节律, µV\n33 : F7-A1 δ 节律,µV\n34 : F8-A2 δ 节律,µV\n35 : F8-A2 θ 节律, µV\n36 : F8-A2 α 节律, µV\n37 : T3-A1 δ 节律,µV\n38 : T3-A1 θ 节律, µV\n39 : T3-A1 α 节律, µV\n40 : T3-A1 β(LF)节律, µV\n41 : T5-A1 δ 节律,µV\n42 : T5-A1 α 节律, µV\n43 : T5-A1 β(LF)节律, µV\n44 : T6-A2 θ 节律, µV\n45 : T6-A2 β(LF)节律, µV\n"
]
],
[
[
"## 利用树进行特征选择",
"_____no_output_____"
]
],
[
[
"decisionTree = DecisionTreeClassifier(random_state=1).fit(X_withLabel, y_label)\r\nmodelDecisionTree = SelectFromModel(decisionTree, prefit=True)\r\nX_DecisionTree = modelDecisionTree.transform(X_withLabel)\r\n\r\ndecisionTreeIndexMask = modelDecisionTree.get_support() # 获取筛选的mask\r\nvalue = X_withLabel[:,LassoIndexMask].tolist() # 被筛选出来的列的值\r\ndecisionTreeIndexMask = decisionTreeIndexMask.tolist() \r\ndecisionTreeIndexTrue = []\r\ndecisionTreeIndexFalse = []\r\n\r\nfor i in range(len(decisionTreeIndexMask)): # 记录下被筛选的indicator的序号\r\n if (decisionTreeIndexMask[i]==True):\r\n decisionTreeIndexTrue.append(i)\r\n if (decisionTreeIndexMask[i]==False):\r\n decisionTreeIndexFalse.append(i)\r\nprint(\"被筛选后剩下的特征:\")\r\nfor i in range(len(decisionTreeIndexTrue)):\r\n print(i+1,\":\",name[decisionTreeIndexTrue[i]])\r\nprint(\"\\n被筛选后去掉的特征:\")\r\nfor i in range(len(decisionTreeIndexFalse)):\r\n print(i+1,\":\",name[decisionTreeIndexFalse[i]])\r\n\r\ndataFrameOfDecisionTreeClassificationFeature = dataFrame\r\nfor i in range(len(decisionTreeIndexFalse)):\r\n dataFrameOfDecisionTreeClassificationFeature = dataFrameOfDecisionTreeClassificationFeature.drop([name[decisionTreeIndexFalse[i]]],axis=1)\r\ndataFrameOfDecisionTreeClassificationFeature.to_excel('/content/drive/MyDrive/DataMining/final/DecisionTreeFeatureSelectionOfLabel.xlsx')\r\ndataFrameOfDecisionTreeClassificationFeature",
"被筛选后剩下的特征:\n1 : FP1-A1 δ 节律,µV\n2 : FP1-A1 α 节律, µV\n3 : F3-A1 θ 节律, µV\n4 : C3-A1 θ 节律, µV\n5 : CZ-A1 δ 节律,µV\n6 : CZ-A1 β(LF)节律, µV\n7 : P3-A1 α 节律, µV\n8 : PZ-A2 β(LF)节律, µV\n9 : O2-A2 δ 节律,µV\n10 : O2-A2 β(LF)节律, µV\n11 : F7-A1 θ 节律, µV\n12 : T4-A2 δ 节律,µV\n13 : T5-A1 α 节律, µV\n14 : T6-A2 α 节律, µV\n\n被筛选后去掉的特征:\n1 : FP1-A1 θ 节律, µV\n2 : FP1-A1 β(LF)节律, µV\n3 : FP2-A2 δ 节律,µV\n4 : FP2-A2 θ 节律, µV\n5 : FP2-A2 α 节律, µV\n6 : FP2-A2 β(LF)节律, µV\n7 : F3-A1 δ 节律,µV\n8 : F3-A1 α 节律, µV\n9 : F3-A1 β(LF)节律, µV\n10 : F4-A2 δ 节律,µV\n11 : F4-A2 θ 节律, µV\n12 : F4-A2 α 节律, µV\n13 : F4-A2 β(LF)节律, µV\n14 : FZ-A2 δ 节律,µV\n15 : FZ-A2 θ 节律, µV\n16 : FZ-A2 α 节律, µV\n17 : FZ-A2 β(LF)节律, µV\n18 : C3-A1 δ 节律,µV\n19 : C3-A1 α 节律, µV\n20 : C3-A1 β(LF)节律, µV\n21 : C4-A2 δ 节律,µV\n22 : C4-A2 θ 节律, µV\n23 : C4-A2 α 节律, µV\n24 : C4-A2 β(LF)节律, µV\n25 : CZ-A1 θ 节律, µV\n26 : CZ-A1 α 节律, µV\n27 : P3-A1 δ 节律,µV\n28 : P3-A1 θ 节律, µV\n29 : P3-A1 β(LF)节律, µV\n30 : P4-A2 δ 节律,µV\n31 : P4-A2 θ 节律, µV\n32 : P4-A2 α 节律, µV\n33 : P4-A2 β(LF)节律, µV\n34 : PZ-A2 δ 节律,µV\n35 : PZ-A2 θ 节律, µV\n36 : PZ-A2 α 节律, µV\n37 : O1-A1 δ 节律,µV\n38 : O1-A1 θ 节律, µV\n39 : O1-A1 α 节律, µV\n40 : O1-A1 β(LF)节律, µV\n41 : O2-A2 θ 节律, µV\n42 : O2-A2 α 节律, µV\n43 : F7-A1 δ 节律,µV\n44 : F7-A1 α 节律, µV\n45 : F7-A1 β(LF)节律, µV\n46 : F8-A2 δ 节律,µV\n47 : F8-A2 θ 节律, µV\n48 : F8-A2 α 节律, µV\n49 : F8-A2 β(LF)节律, µV\n50 : T3-A1 δ 节律,µV\n51 : T3-A1 θ 节律, µV\n52 : T3-A1 α 节律, µV\n53 : T3-A1 β(LF)节律, µV\n54 : T4-A2 θ 节律, µV\n55 : T4-A2 α 节律, µV\n56 : T4-A2 β(LF)节律, µV\n57 : T5-A1 δ 节律,µV\n58 : T5-A1 θ 节律, µV\n59 : T5-A1 β(LF)节律, µV\n60 : T6-A2 δ 节律,µV\n61 : T6-A2 θ 节律, µV\n62 : T6-A2 β(LF)节律, µV\n"
]
],
[
[
"## 利用随机森林进行特征选择",
"_____no_output_____"
]
],
[
[
"randomForest = RandomForestRegressor().fit(X_withLabel, y_label)\r\nmodelrandomForest = SelectFromModel(randomForest, prefit=True)\r\nX_randomForest = modelrandomForest.transform(X_withLabel)\r\n\r\nrandomForestIndexMask = modelrandomForest.get_support() # 获取筛选的mask\r\nvalue = X_withLabel[:,randomForestIndexMask].tolist() # 被筛选出来的列的值\r\nrandomForestIndexMask = randomForestIndexMask.tolist() \r\nrandomForestIndexTrue = []\r\nrandomForestIndexFalse = []\r\n\r\nfor i in range(len(randomForestIndexMask)): # 记录下被筛选的indicator的序号\r\n if (randomForestIndexMask[i]==True):\r\n randomForestIndexTrue.append(i)\r\n if (randomForestIndexMask[i]==False):\r\n randomForestIndexFalse.append(i)\r\nprint(\"被筛选后剩下的特征:\")\r\nfor i in range(len(randomForestIndexTrue)):\r\n print(i+1,\":\",name[randomForestIndexTrue[i]])\r\nprint(\"\\n被筛选后去掉的特征:\")\r\nfor i in range(len(randomForestIndexFalse)):\r\n print(i+1,\":\",name[randomForestIndexFalse[i]])\r\n\r\ndataFrameOfRandomForestClassificationFeature = dataFrame\r\nfor i in range(len(randomForestIndexFalse)):\r\n dataFrameOfRandomForestClassificationFeature = dataFrameOfRandomForestClassificationFeature.drop([name[randomForestIndexFalse[i]]],axis=1)\r\ndataFrameOfRandomForestClassificationFeature.to_excel('/content/drive/MyDrive/DataMining/final/RandomForestFeatureSelectionOfLabel.xlsx')\r\ndataFrameOfRandomForestClassificationFeature",
"被筛选后剩下的特征:\n1 : FP1-A1 θ 节律, µV\n2 : FP2-A2 β(LF)节律, µV\n3 : F4-A2 α 节律, µV\n4 : F4-A2 β(LF)节律, µV\n5 : FZ-A2 β(LF)节律, µV\n6 : C3-A1 β(LF)节律, µV\n7 : C4-A2 δ 节律,µV\n8 : C4-A2 θ 节律, µV\n9 : C4-A2 α 节律, µV\n10 : CZ-A1 α 节律, µV\n11 : P3-A1 δ 节律,µV\n12 : P3-A1 α 节律, µV\n13 : P3-A1 β(LF)节律, µV\n14 : P4-A2 δ 节律,µV\n15 : P4-A2 θ 节律, µV\n16 : P4-A2 α 节律, µV\n17 : PZ-A2 β(LF)节律, µV\n18 : O2-A2 δ 节律,µV\n19 : O2-A2 β(LF)节律, µV\n20 : F7-A1 θ 节律, µV\n21 : F8-A2 α 节律, µV\n22 : F8-A2 β(LF)节律, µV\n23 : T3-A1 θ 节律, µV\n24 : T4-A2 δ 节律,µV\n25 : T4-A2 θ 节律, µV\n26 : T4-A2 α 节律, µV\n27 : T4-A2 β(LF)节律, µV\n28 : T5-A1 δ 节律,µV\n29 : T5-A1 θ 节律, µV\n30 : T5-A1 β(LF)节律, µV\n31 : T6-A2 θ 节律, µV\n32 : T6-A2 α 节律, µV\n33 : T6-A2 β(LF)节律, µV\n\n被筛选后去掉的特征:\n1 : FP1-A1 δ 节律,µV\n2 : FP1-A1 α 节律, µV\n3 : FP1-A1 β(LF)节律, µV\n4 : FP2-A2 δ 节律,µV\n5 : FP2-A2 θ 节律, µV\n6 : FP2-A2 α 节律, µV\n7 : F3-A1 δ 节律,µV\n8 : F3-A1 θ 节律, µV\n9 : F3-A1 α 节律, µV\n10 : F3-A1 β(LF)节律, µV\n11 : F4-A2 δ 节律,µV\n12 : F4-A2 θ 节律, µV\n13 : FZ-A2 δ 节律,µV\n14 : FZ-A2 θ 节律, µV\n15 : FZ-A2 α 节律, µV\n16 : C3-A1 δ 节律,µV\n17 : C3-A1 θ 节律, µV\n18 : C3-A1 α 节律, µV\n19 : C4-A2 β(LF)节律, µV\n20 : CZ-A1 δ 节律,µV\n21 : CZ-A1 θ 节律, µV\n22 : CZ-A1 β(LF)节律, µV\n23 : P3-A1 θ 节律, µV\n24 : P4-A2 β(LF)节律, µV\n25 : PZ-A2 δ 节律,µV\n26 : PZ-A2 θ 节律, µV\n27 : PZ-A2 α 节律, µV\n28 : O1-A1 δ 节律,µV\n29 : O1-A1 θ 节律, µV\n30 : O1-A1 α 节律, µV\n31 : O1-A1 β(LF)节律, µV\n32 : O2-A2 θ 节律, µV\n33 : O2-A2 α 节律, µV\n34 : F7-A1 δ 节律,µV\n35 : F7-A1 α 节律, µV\n36 : F7-A1 β(LF)节律, µV\n37 : F8-A2 δ 节律,µV\n38 : F8-A2 θ 节律, µV\n39 : T3-A1 δ 节律,µV\n40 : T3-A1 α 节律, µV\n41 : T3-A1 β(LF)节律, µV\n42 : T5-A1 α 节律, µV\n43 : T6-A2 δ 节律,µV\n"
]
],
[
[
"## 利用GBDT进行特征选择",
"_____no_output_____"
]
],
[
[
"GBDTClassifier = GradientBoostingClassifier().fit(X_withLabel, y_label)\r\nmodelGBDTClassifier = SelectFromModel(GBDTClassifier, prefit=True)\r\nX_GBDTClassifier = modelGBDTClassifier.transform(X_withLabel)\r\n\r\nGBDTClassifierIndexMask = modelGBDTClassifier.get_support() # 获取筛选的mask\r\nvalue = X_withLabel[:,GBDTClassifierIndexMask].tolist() # 被筛选出来的列的值\r\nGBDTClassifierIndexMask = GBDTClassifierIndexMask.tolist() \r\nGBDTClassifierIndexTrue = []\r\nGBDTClassifierIndexFalse = []\r\n\r\nfor i in range(len(GBDTClassifierIndexMask)): # 记录下被筛选的indicator的序号\r\n if (GBDTClassifierIndexMask[i]==True):\r\n GBDTClassifierIndexTrue.append(i)\r\n if (GBDTClassifierIndexMask[i]==False):\r\n GBDTClassifierIndexFalse.append(i)\r\nprint(\"被筛选后剩下的特征:\")\r\nfor i in range(len(GBDTClassifierIndexTrue)):\r\n print(i+1,\":\",name[GBDTClassifierIndexTrue[i]])\r\nprint(\"\\n被筛选后去掉的特征:\")\r\nfor i in range(len(GBDTClassifierIndexFalse)):\r\n print(i+1,\":\",name[GBDTClassifierIndexFalse[i]])\r\n\r\ndataFrameOfGBDTClassificationFeature = dataFrame\r\nfor i in range(len(GBDTClassifierIndexFalse)):\r\n dataFrameOfGBDTClassificationFeature = dataFrameOfGBDTClassificationFeature.drop([name[GBDTClassifierIndexFalse[i]]],axis=1)\r\ndataFrameOfGBDTClassificationFeature.to_excel('/content/drive/MyDrive/DataMining/final/GBDTClassifierFeatureSelectionOfLabel.xlsx')\r\ndataFrameOfGBDTClassificationFeature",
"被筛选后剩下的特征:\n1 : FP1-A1 α 节律, µV\n2 : FP2-A2 θ 节律, µV\n3 : FP2-A2 β(LF)节律, µV\n4 : C4-A2 θ 节律, µV\n5 : P3-A1 α 节律, µV\n6 : P4-A2 α 节律, µV\n7 : P4-A2 β(LF)节律, µV\n8 : PZ-A2 β(LF)节律, µV\n9 : O2-A2 δ 节律,µV\n10 : F7-A1 δ 节律,µV\n11 : F8-A2 δ 节律,µV\n12 : F8-A2 β(LF)节律, µV\n13 : T3-A1 θ 节律, µV\n14 : T4-A2 δ 节律,µV\n15 : T4-A2 θ 节律, µV\n16 : T5-A1 α 节律, µV\n\n被筛选后去掉的特征:\n1 : FP1-A1 δ 节律,µV\n2 : FP1-A1 θ 节律, µV\n3 : FP1-A1 β(LF)节律, µV\n4 : FP2-A2 δ 节律,µV\n5 : FP2-A2 α 节律, µV\n6 : F3-A1 δ 节律,µV\n7 : F3-A1 θ 节律, µV\n8 : F3-A1 α 节律, µV\n9 : F3-A1 β(LF)节律, µV\n10 : F4-A2 δ 节律,µV\n11 : F4-A2 θ 节律, µV\n12 : F4-A2 α 节律, µV\n13 : F4-A2 β(LF)节律, µV\n14 : FZ-A2 δ 节律,µV\n15 : FZ-A2 θ 节律, µV\n16 : FZ-A2 α 节律, µV\n17 : FZ-A2 β(LF)节律, µV\n18 : C3-A1 δ 节律,µV\n19 : C3-A1 θ 节律, µV\n20 : C3-A1 α 节律, µV\n21 : C3-A1 β(LF)节律, µV\n22 : C4-A2 δ 节律,µV\n23 : C4-A2 α 节律, µV\n24 : C4-A2 β(LF)节律, µV\n25 : CZ-A1 δ 节律,µV\n26 : CZ-A1 θ 节律, µV\n27 : CZ-A1 α 节律, µV\n28 : CZ-A1 β(LF)节律, µV\n29 : P3-A1 δ 节律,µV\n30 : P3-A1 θ 节律, µV\n31 : P3-A1 β(LF)节律, µV\n32 : P4-A2 δ 节律,µV\n33 : P4-A2 θ 节律, µV\n34 : PZ-A2 δ 节律,µV\n35 : PZ-A2 θ 节律, µV\n36 : PZ-A2 α 节律, µV\n37 : O1-A1 δ 节律,µV\n38 : O1-A1 θ 节律, µV\n39 : O1-A1 α 节律, µV\n40 : O1-A1 β(LF)节律, µV\n41 : O2-A2 θ 节律, µV\n42 : O2-A2 α 节律, µV\n43 : O2-A2 β(LF)节律, µV\n44 : F7-A1 θ 节律, µV\n45 : F7-A1 α 节律, µV\n46 : F7-A1 β(LF)节律, µV\n47 : F8-A2 θ 节律, µV\n48 : F8-A2 α 节律, µV\n49 : T3-A1 δ 节律,µV\n50 : T3-A1 α 节律, µV\n51 : T3-A1 β(LF)节律, µV\n52 : T4-A2 α 节律, µV\n53 : T4-A2 β(LF)节律, µV\n54 : T5-A1 δ 节律,µV\n55 : T5-A1 θ 节律, µV\n56 : T5-A1 β(LF)节律, µV\n57 : T6-A2 δ 节律,µV\n58 : T6-A2 θ 节律, µV\n59 : T6-A2 α 节律, µV\n60 : T6-A2 β(LF)节律, µV\n"
]
],
[
[
"# 测试选取的特征",
"_____no_output_____"
],
[
"## 读入PCA和LDA降维后的数据",
"_____no_output_____"
],
[
"## 获取特征选取后的数据",
"_____no_output_____"
]
],
[
[
"RegressionFeatureSelection = [dataFrameOfLassoRegressionFeature,dataFrameOfLSVRegressionFeature,dataFrameOfDecisionTreeRegressionFeature,\r\n dataFrameOfRandomForestRegressionFeature,dataFrameOfGBDTRegressionFeature]\r\n\r\nClassificationFeatureSelection = [dataFrameOfLassoClassificationFeature,dataFrameOfLSVClassificationFeature,dataFrameOfDecisionTreeClassificationFeature,\r\n dataFrameOfRandomForestClassificationFeature,dataFrameOfGBDTClassificationFeature]",
"_____no_output_____"
]
],
[
[
"## 筛选回归的特征",
"_____no_output_____"
]
],
[
[
"allMSEResult=[]\r\nallr2Result=[]\r\n\r\nprint(\"LR测试结果\")\r\nfor i in range(len(RegressionFeatureSelection)):\r\n tempArray = np.array(RegressionFeatureSelection[i])[:92,:]\r\n temp_X = tempArray[:,5:]\r\n temp_y = tempArray[:,3]\r\n train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\n clf=LinearRegression()\r\n clf.fit(train_X,train_y)\r\n pred_y = clf.predict(test_X)\r\n if(i==0):\r\n tempMSE=[]\r\n tempr2=[]\r\n tempMSE.append(mean_squared_error(test_y,pred_y))\r\n tempr2.append(r2_score(test_y,pred_y))\r\n if(i==len(RegressionFeatureSelection)-1):\r\n allMSEResult.append(min(tempMSE))\r\n allr2Result.append(max(tempr2))\r\n print('Mean squared error: %.2f'\r\n % mean_squared_error(test_y, pred_y))\r\n print('Coefficient of determination: %.2f'\r\n % r2_score(test_y, pred_y))\r\n\r\nprint(\"\\nSVR测试结果\")\r\nfor i in range(len(RegressionFeatureSelection)):\r\n tempArray = np.array(RegressionFeatureSelection[i])[:92,:]\r\n temp_X = tempArray[:,5:]\r\n temp_y = tempArray[:,3]\r\n train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\n clf=SVR()\r\n clf.fit(train_X,train_y)\r\n pred_y = clf.predict(test_X)\r\n if(i==0):\r\n tempMSE=[]\r\n tempr2=[]\r\n tempMSE.append(mean_squared_error(test_y,pred_y))\r\n tempr2.append(r2_score(test_y,pred_y))\r\n if(i==len(RegressionFeatureSelection)-1):\r\n allMSEResult.append(min(tempMSE))\r\n allr2Result.append(max(tempr2))\r\n print('Mean squared error: %.2f'\r\n % mean_squared_error(test_y, pred_y))\r\n print('Coefficient of determination: %.2f'\r\n % r2_score(test_y, pred_y))\r\n \r\nprint(\"\\n决策树测试结果\")\r\nfor i in range(len(RegressionFeatureSelection)):\r\n tempArray = np.array(RegressionFeatureSelection[i])[:92,:]\r\n temp_X = tempArray[:,5:]\r\n temp_y = tempArray[:,3]\r\n train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\n clf=DecisionTreeRegressor(random_state=4)\r\n clf.fit(train_X,train_y)\r\n pred_y = clf.predict(test_X)\r\n if(i==0):\r\n tempMSE=[]\r\n tempr2=[]\r\n tempMSE.append(mean_squared_error(test_y,pred_y))\r\n tempr2.append(r2_score(test_y,pred_y))\r\n if(i==len(RegressionFeatureSelection)-1):\r\n allMSEResult.append(min(tempMSE))\r\n allr2Result.append(max(tempr2))\r\n print('Mean squared error: %.2f'\r\n % mean_squared_error(test_y, pred_y))\r\n print('Coefficient of determination: %.2f'\r\n % r2_score(test_y, pred_y))\r\n\r\nprint(\"\\nGBDT测试结果\")\r\nfor i in range(len(RegressionFeatureSelection)):\r\n tempArray = np.array(RegressionFeatureSelection[i])[:92,:]\r\n temp_X = tempArray[:,5:]\r\n temp_y = tempArray[:,3]\r\n train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\n clf=GradientBoostingRegressor(random_state=4)\r\n clf.fit(train_X,train_y)\r\n pred_y = clf.predict(test_X)\r\n if(i==0):\r\n tempMSE=[]\r\n tempr2=[]\r\n tempMSE.append(mean_squared_error(test_y,pred_y))\r\n tempr2.append(r2_score(test_y,pred_y))\r\n if(i==len(RegressionFeatureSelection)-1):\r\n allMSEResult.append(min(tempMSE))\r\n allr2Result.append(max(tempr2))\r\n print('Mean squared error: %.2f'\r\n % mean_squared_error(test_y, pred_y))\r\n print('Coefficient of determination: %.2f'\r\n % r2_score(test_y, pred_y))\r\n \r\nprint(\"\\n随机森林测试结果\")\r\nfor i in range(len(RegressionFeatureSelection)):\r\n tempArray = np.array(RegressionFeatureSelection[i])[:92,:]\r\n temp_X = tempArray[:,5:]\r\n temp_y = tempArray[:,3]\r\n train_X,test_X,train_y,test_y = 
train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\n clf=RandomForestRegressor(random_state=4)\r\n clf.fit(train_X,train_y)\r\n pred_y = clf.predict(test_X)\r\n if(i==0):\r\n tempMSE=[]\r\n tempr2=[]\r\n tempMSE.append(mean_squared_error(test_y,pred_y))\r\n tempr2.append(r2_score(test_y,pred_y))\r\n if(i==len(RegressionFeatureSelection)-1):\r\n allMSEResult.append(min(tempMSE))\r\n allr2Result.append(max(tempr2))\r\n print('Mean squared error: %.2f'\r\n % mean_squared_error(test_y, pred_y))\r\n print('Coefficient of determination: %.2f'\r\n % r2_score(test_y, pred_y))\r\n \r\nmodelNamelist = ['LR','SVR','决策树','GBDT','随机森林']\r\nfor i in range(5):\r\n if(i==0):\r\n print()\r\n print(modelNamelist[i]+\"测试结果\")\r\n print('Best MSE -',i+1,': %.2f'\r\n % (allMSEResult)[i])\r\n print('Best R2-Score -',i+1,': %.2f\\n'\r\n % (allr2Result)[i])",
"_____no_output_____"
]
],
[
[
"## 原始特征回归表现",
"_____no_output_____"
]
],
[
[
"print(\"LR测试结果\")\r\ntempArray = dataArray[:92,:]\r\ntemp_X = tempArray[:,5:]\r\ntemp_y = tempArray[:,3].astype(int)\r\ntrain_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\nclf=LinearRegression()\r\nclf.fit(train_X,train_y)\r\npred_y = clf.predict(test_X)\r\nprint('Mean squared error: %.2f'\r\n % mean_squared_error(test_y, pred_y))\r\nprint('R2-Score: %.2f'\r\n % r2_score(test_y, pred_y))\r\n\r\nprint(\"\\nSVR测试结果\")\r\ntempArray = dataArray[:92,:]\r\ntemp_X = tempArray[:,5:]\r\ntemp_y = tempArray[:,3].astype(int)\r\ntrain_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\nclf=SVR()\r\nclf.fit(train_X,train_y)\r\npred_y = clf.predict(test_X)\r\nprint('Mean squared error: %.2f'\r\n % mean_squared_error(test_y, pred_y))\r\nprint('R2-Score: %.2f'\r\n % r2_score(test_y, pred_y))\r\n\r\nprint(\"\\n决策树测试结果\")\r\ntempArray = dataArray[:92,:]\r\ntemp_X = tempArray[:,5:]\r\ntemp_y = tempArray[:,3].astype(int)\r\ntrain_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\nclf=DecisionTreeRegressor(random_state=0)\r\nclf.fit(train_X,train_y)\r\npred_y = clf.predict(test_X)\r\nprint('Mean squared error: %.2f'\r\n % mean_squared_error(test_y, pred_y))\r\nprint('R2-Score: %.2f'\r\n % r2_score(test_y, pred_y))\r\n\r\nprint(\"\\nGBDT测试结果\")\r\ntempArray = dataArray[:92,:]\r\ntemp_X = tempArray[:,5:]\r\ntemp_y = tempArray[:,3].astype(int)\r\ntrain_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\nclf=GradientBoostingRegressor(random_state=0)\r\nclf.fit(train_X,train_y)\r\npred_y = clf.predict(test_X)\r\nprint('Mean squared error: %.2f'\r\n % mean_squared_error(test_y, pred_y))\r\nprint('R2-Score: %.2f'\r\n % r2_score(test_y, pred_y))\r\n\r\nprint(\"\\n随机森林测试结果\")\r\ntempArray = dataArray[:92,:]\r\ntemp_X = tempArray[:,5:]\r\ntemp_y = tempArray[:,3].astype(int)\r\ntrain_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\nclf=RandomForestRegressor(random_state=0)\r\nclf.fit(train_X,train_y)\r\npred_y = clf.predict(test_X)\r\nprint('Mean squared error: %.2f'\r\n % mean_squared_error(test_y, pred_y))\r\nprint('R2-Score: %.2f'\r\n % r2_score(test_y, pred_y))",
"_____no_output_____"
]
],
[
[
"## 筛选分类的特征",
"_____no_output_____"
]
],
[
[
"allAccuracyResult=[]\r\nallF1Result=[]\r\nprint(\"LR测试结果\")\r\nfor i in range(len(ClassificationFeatureSelection)):\r\n tempArray = np.array(ClassificationFeatureSelection[i])[:92,:]\r\n temp_X = tempArray[:,5:]\r\n temp_y = tempArray[:,4].astype(int)\r\n train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\n clf=LogisticRegression(max_iter=10000)\r\n clf.fit(train_X,train_y)\r\n pred_y = clf.predict(test_X)\r\n if(i==0):\r\n tempAccuracy=[]\r\n tempF1=[]\r\n tempAccuracy.append(accuracy_score(test_y,pred_y))\r\n tempF1.append(f1_score(test_y,pred_y))\r\n if(i==len(ClassificationFeatureSelection)-1):\r\n allAccuracyResult.append(max(tempAccuracy))\r\n allF1Result.append(max(tempF1))\r\n print('Accuracy: %.2f'\r\n % accuracy_score(test_y, pred_y))\r\n print('F1-Score: %.2f\\n'\r\n % f1_score(test_y, pred_y))\r\n\r\nprint(\"\\nSVC测试结果\")\r\nfor i in range(len(ClassificationFeatureSelection)):\r\n tempArray = np.array(ClassificationFeatureSelection[i])[:92,:]\r\n temp_X = tempArray[:,5:]\r\n temp_y = tempArray[:,4].astype(int)\r\n train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\n clf=SVC()\r\n clf.fit(train_X,train_y)\r\n pred_y = clf.predict(test_X)\r\n if(i==0):\r\n tempAccuracy=[]\r\n tempF1=[]\r\n tempAccuracy.append(accuracy_score(test_y,pred_y))\r\n tempF1.append(f1_score(test_y,pred_y))\r\n if(i==len(ClassificationFeatureSelection)-1):\r\n allAccuracyResult.append(max(tempAccuracy))\r\n allF1Result.append(max(tempF1))\r\n print('Accuracy: %.2f'\r\n % accuracy_score(test_y, pred_y))\r\n print('F1-Score: %.2f\\n'\r\n % f1_score(test_y, pred_y))\r\n \r\nprint(\"\\n决策树测试结果\")\r\nfor i in range(len(ClassificationFeatureSelection)):\r\n tempArray = np.array(ClassificationFeatureSelection[i])[:92,:]\r\n temp_X = tempArray[:,5:]\r\n temp_y = tempArray[:,4].astype(int)\r\n train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\n clf=DecisionTreeClassifier(random_state=0)\r\n clf.fit(train_X,train_y)\r\n pred_y = clf.predict(test_X)\r\n if(i==0):\r\n tempAccuracy=[]\r\n tempF1=[]\r\n tempAccuracy.append(accuracy_score(test_y,pred_y))\r\n tempF1.append(f1_score(test_y,pred_y))\r\n if(i==len(ClassificationFeatureSelection)-1):\r\n allAccuracyResult.append(max(tempAccuracy))\r\n allF1Result.append(max(tempF1))\r\n print('Accuracy: %.2f'\r\n % accuracy_score(test_y, pred_y))\r\n print('F1-Score: %.2f\\n'\r\n % f1_score(test_y, pred_y))\r\n\r\nprint(\"\\nGBDT测试结果\")\r\nfor i in range(len(ClassificationFeatureSelection)):\r\n tempArray = np.array(ClassificationFeatureSelection[i])[:92,:]\r\n temp_X = tempArray[:,5:]\r\n temp_y = tempArray[:,4].astype(int)\r\n train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\n clf=GradientBoostingClassifier(random_state=0)\r\n clf.fit(train_X,train_y)\r\n pred_y = clf.predict(test_X)\r\n if(i==0):\r\n tempAccuracy=[]\r\n tempF1=[]\r\n tempAccuracy.append(accuracy_score(test_y,pred_y))\r\n tempF1.append(f1_score(test_y,pred_y))\r\n if(i==len(ClassificationFeatureSelection)-1):\r\n allAccuracyResult.append(max(tempAccuracy))\r\n allF1Result.append(max(tempF1))\r\n print('Accuracy: %.2f'\r\n % accuracy_score(test_y, pred_y))\r\n print('F1-Score: %.2f\\n'\r\n % f1_score(test_y, pred_y))\r\n \r\nprint(\"\\n随机森林测试结果\")\r\nfor i in range(len(ClassificationFeatureSelection)):\r\n tempArray = np.array(ClassificationFeatureSelection[i])[:92,:]\r\n temp_X = tempArray[:,5:]\r\n temp_y = 
tempArray[:,4].astype(int)\r\n train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\n clf=RandomForestClassifier(random_state=0)\r\n clf.fit(train_X,train_y)\r\n pred_y = clf.predict(test_X)\r\n if(i==0):\r\n tempAccuracy=[]\r\n tempF1=[]\r\n tempAccuracy.append(accuracy_score(test_y,pred_y))\r\n tempF1.append(f1_score(test_y,pred_y))\r\n if(i==len(ClassificationFeatureSelection)-1):\r\n allAccuracyResult.append(max(tempAccuracy))\r\n allF1Result.append(max(tempF1))\r\n print('Accuracy: %.2f'\r\n % accuracy_score(test_y, pred_y))\r\n print('F1-Score: %.2f\\n'\r\n % f1_score(test_y, pred_y))\r\n\r\nmodelNamelist = ['LR','SVR','决策树','GBDT','随机森林']\r\nfor i in range(5):\r\n if(i==0):\r\n print()\r\n print(modelNamelist[i]+\"测试结果\")\r\n print('Best Accuracy -',i+1,': %.2f'\r\n % (allAccuracyResult)[i])\r\n print('Best F1-Score -',i+1,': %.2f\\n'\r\n % (allF1Result)[i])\r\n",
"_____no_output_____"
]
],
[
[
"## 原始特征分类表现",
"_____no_output_____"
]
],
[
[
"print(\"LR测试结果\")\r\ntempArray = dataArray[:92,:]\r\ntemp_X = tempArray[:,5:]\r\ntemp_y = tempArray[:,4].astype(int)\r\ntrain_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\nclf=LogisticRegression(max_iter=10000)\r\nclf.fit(train_X,train_y)\r\npred_y = clf.predict(test_X)\r\nprint('Accuracy: %.2f'\r\n % accuracy_score(test_y, pred_y))\r\nprint('F1-Score: %.2f'\r\n % f1_score(test_y, pred_y))\r\n\r\nprint(\"\\nSVR测试结果\")\r\ntempArray = dataArray[:92,:]\r\ntemp_X = tempArray[:,5:]\r\ntemp_y = tempArray[:,4].astype(int)\r\ntrain_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\nclf=SVC()\r\nclf.fit(train_X,train_y)\r\npred_y = clf.predict(test_X)\r\nprint('Accuracy: %.2f'\r\n % accuracy_score(test_y, pred_y))\r\nprint('F1-Score: %.2f'\r\n % f1_score(test_y, pred_y))\r\n\r\nprint(\"\\n决策树测试结果\")\r\ntempArray = dataArray[:92,:]\r\ntemp_X = tempArray[:,5:]\r\ntemp_y = tempArray[:,4].astype(int)\r\ntrain_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\nclf=DecisionTreeClassifier(random_state=0)\r\nclf.fit(train_X,train_y)\r\npred_y = clf.predict(test_X)\r\nprint('Accuracy: %.2f'\r\n % accuracy_score(test_y, pred_y))\r\nprint('F1-Score: %.2f'\r\n % f1_score(test_y, pred_y))\r\n\r\nprint(\"\\nGBDT测试结果\")\r\ntempArray = dataArray[:92,:]\r\ntemp_X = tempArray[:,5:]\r\ntemp_y = tempArray[:,4].astype(int)\r\ntrain_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\nclf=GradientBoostingClassifier(random_state=0)\r\nclf.fit(train_X,train_y)\r\npred_y = clf.predict(test_X)\r\nprint('Accuracy: %.2f'\r\n % accuracy_score(test_y, pred_y))\r\nprint('F1-Score: %.2f'\r\n % f1_score(test_y, pred_y))\r\n\r\nprint(\"\\n随机森林测试结果\")\r\ntempArray = dataArray[:92,:]\r\ntemp_X = tempArray[:,5:]\r\ntemp_y = tempArray[:,4].astype(int)\r\ntrain_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)\r\nclf=RandomForestClassifier(random_state=0)\r\nclf.fit(train_X,train_y)\r\npred_y = clf.predict(test_X)\r\nprint('Accuracy: %.2f'\r\n % accuracy_score(test_y, pred_y))\r\nprint('F1-Score: %.2f'\r\n % f1_score(test_y, pred_y))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7f029cfe9e3cfe9913500999f0785c3a736c719 | 45,112 | ipynb | Jupyter Notebook | class02a_igraph_R.ipynb | curiositymap/Networks-in-Computational-Biology | c7734cf2c03c7a794ab6990d433b1614c1837b58 | [
"Apache-2.0"
] | 11 | 2020-09-17T14:59:30.000Z | 2022-03-29T16:35:39.000Z | class02a_igraph_R.ipynb | curiositymap/Networks-in-Computational-Biology | c7734cf2c03c7a794ab6990d433b1614c1837b58 | [
"Apache-2.0"
] | null | null | null | class02a_igraph_R.ipynb | curiositymap/Networks-in-Computational-Biology | c7734cf2c03c7a794ab6990d433b1614c1837b58 | [
"Apache-2.0"
] | 5 | 2020-03-12T19:21:56.000Z | 2022-03-28T08:23:58.000Z | 121.268817 | 32,680 | 0.828715 | [
[
[
"# CSX46: Class session 2\n## *Introduction to the igraph package and the Pathway Commons network in SIF format*\n\n### Objective: load a network of human molecular interactions and create three igraph `Graph` objects from it (one for protein-protein interactions, one for metabolism interactions, and one for directed protein-protein interactions)",
"_____no_output_____"
],
[
"OK, we are going to read in the Pathway Commons data in SIF format. Recall that a SIF file is a tab-separated value file. You can find the file as `shared/pathway_commons.sif`. Load it into a data frame `pcdf` using the built-in function `read.table`. Don't forget to specify that the separator is the tab `\\t`, and that there is no quoting allowed (`quote=\"\"`). Use the `col.names` argument to name the three columns `species1`, `interaction_type`, and `species2`. Make sure to specify that there is no header and that `stringsAsFactors=FALSE`.\n\nFor help on using `read.table`, just type ?read.table\n\nNote: for each row, the `interaction_type` column contains one of 11 different interaction types (identified by a string, like `interacts-with` or `controls-production-of`). ",
"_____no_output_____"
]
],
[
[
"pcdf <- read.table(\"shared/pathway_commons.sif\",\n sep=\"\\t\",\n quote=\"\",\n comment.char=\"\",\n stringsAsFactors=FALSE,\n header=FALSE,\n col.names=c(\"species1\",\"interaction_type\",\"species2\"))\n",
"_____no_output_____"
]
],
[
[
"Let's take a peek at `pcdf` using the `head` function:",
"_____no_output_____"
]
],
[
[
"head(pcdf)",
"_____no_output_____"
],
[
"library(igraph)\n\ninteraction_types_ppi <- c(\"interacts-with\",\n \"in-complex-with\",\n \"neighbor-of\")\n\ninteraction_types_metab <- c(\"controls-production-of\",\n \"consumption-controlled-by\",\n \"controls-production-of\",\n \"controls-transport-of-chemical\")\n\ninteraction_types_ppd <- c(\"catalysis-precedes\",\n \"controls-phosphorylation-of\",\n \"controls-state-change-of\",\n \"controls-transport-of\",\n \"controls-expression-of\")",
"\nAttaching package: ‘igraph’\n\n\nThe following objects are masked from ‘package:stats’:\n\n decompose, spectrum\n\n\nThe following object is masked from ‘package:base’:\n\n union\n\n\n"
]
],
[
[
"Subset data frame `pcdf` to obtain only the rows whose interactions are in `interaction_types_ppi`, and select only columns 1 and 3:",
"_____no_output_____"
]
],
[
[
"pcdf_ppi <- pcdf[pcdf$interaction_type %in% interaction_types_ppi,c(1,3)]",
"_____no_output_____"
]
],
[
[
"Use the `igraph` function `graph_from_data_farme` to build a network from the edge-list data in `pcdf_ppi`; use `print` to see a summary of the graph:",
"_____no_output_____"
]
],
[
[
"graph_ppi <- graph_from_data_frame(pcdf_ppi,\n directed=FALSE)\nprint(graph_ppi)",
"IGRAPH ba9e496 UN-- 17020 523498 -- \n+ attr: name (v/c)\n+ edges from ba9e496 (vertex names):\n [1] A1BG--ABCC6 A1BG--ANXA7 A1BG--CDKN1A A1BG--CRISP3 A1BG--GDPD1 \n [6] A1BG--GRB2 A1BG--GRB7 A1BG--HNF4A A1BG--ONECUT1 A1BG--PIK3CA \n[11] A1BG--PIK3R1 A1BG--PRDX4 A1BG--PTPN11 A1BG--SETD7 A1BG--SMN1 \n[16] A1BG--SMN2 A1BG--SNCA A1BG--SOS1 A1BG--TK1 A1CF--ACBD3 \n[21] A1CF--ACLY A1CF--APOBEC1 A1CF--APOBEC1 A1CF--ATF2 A1CF--CELF2 \n[26] A1CF--CTNNB1 A1CF--E2F1 A1CF--E2F3 A1CF--E2F4 A1CF--FHL3 \n[31] A1CF--HNF1A A1CF--HNF4A A1CF--JUN A1CF--KAT5 A1CF--KHSRP \n[36] A1CF--MBD2 A1CF--MBD3 A1CF--NRF1 A1CF--RBL2 A1CF--REL \n+ ... omitted several edges\n"
]
],
[
[
"Do the same for the metabolic network:",
"_____no_output_____"
]
],
[
[
"pcdf_metab <- pcdf[pcdf$interaction_type %in% interaction_types_metab, c(1,3)]\ngraph_metab <- graph_from_data_frame(pcdf_metab,\n directed=TRUE)\nprint(graph_metab)",
"IGRAPH 77472bf DN-- 7620 38145 -- \n+ attr: name (v/c)\n+ edges from 77472bf (vertex names):\n [1] A4GALT->CHEBI:17659 A4GALT->CHEBI:17950 A4GALT->CHEBI:18307\n [4] A4GALT->CHEBI:18313 A4GALT->CHEBI:58223 A4GALT->CHEBI:67119\n [7] A4GNT ->CHEBI:17659 A4GNT ->CHEBI:58223 AAAS ->CHEBI:1604 \n[10] AAAS ->CHEBI:2274 AACS ->CHEBI:13705 AACS ->CHEBI:15345\n[13] AACS ->CHEBI:17369 AACS ->CHEBI:18361 AACS ->CHEBI:29888\n[16] AACS ->CHEBI:57286 AACS ->CHEBI:57287 AACS ->CHEBI:57288\n[19] AACS ->CHEBI:57392 AACS ->CHEBI:58280 AADAC ->CHEBI:17790\n[22] AADAC ->CHEBI:40574 AADAC ->CHEBI:4743 AADAC ->CHEBI:85505\n+ ... omitted several edges\n"
]
],
[
[
"Do the same for the directed protein-protein interactions:",
"_____no_output_____"
]
],
[
[
"pcdf_ppd <- pcdf[pcdf$interaction_type %in% interaction_types_ppd, c(1,3)]\ngraph_ppd <- graph_from_data_frame(pcdf_ppd,\n directed=TRUE)\nprint(graph_ppd)",
"IGRAPH DN-- 16063 359713 -- \n+ attr: name (v/c), interaction_type (e/c)\nIGRAPH DN-- 16063 359713 -- \n+ attr: name (v/c), interaction_type (e/c)\n+ edges (vertex names):\n [1] A1BG ->A2M A1BG ->AKT1 A1BG ->AKT1 A2M ->APOA1 \n [5] A2M ->CDC42 A2M ->RAC1 A2M ->RAC2 A2M ->RAC3 \n [9] A2M ->RHOA A2M ->RHOBTB1 A2M ->RHOBTB2 A2M ->RHOB \n[13] A2M ->RHOC A2M ->RHOD A2M ->RHOF A2M ->RHOG \n[17] A2M ->RHOH A2M ->RHOJ A2M ->RHOQ A2M ->RHOT1 \n[21] A2M ->RHOT2 A2M ->RHOU A2M ->RHOV A4GALT->ABO \n[25] A4GALT->AK3 A4GALT->B3GALNT1 A4GALT->B3GALT1 A4GALT->B3GALT2 \n[29] A4GALT->B3GALT4 A4GALT->B3GALT5 A4GALT->B3GALT6 A4GALT->B3GAT2 \n+ ... omitted several edges\n"
]
],
[
[
"Question: of the three networks that you just created, which has the most edges?",
"_____no_output_____"
],
[
"Next, we need to create a small graph. Let's make a three-vertex undirected graph from an edge-list. Let's connect all vertices to all other vertices: 1<->2, 2<->3, 3<->1. We'll once again use graph_from_data_farme to do this:",
"_____no_output_____"
]
],
[
[
"testgraph <- graph_from_data_frame(data.frame(c(1,2,3), c(2,3,1)), directed=FALSE)",
"_____no_output_____"
]
],
[
[
"Now let's plot the small test graph:",
"_____no_output_____"
]
],
[
[
"plot(testgraph)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7f02ab92845ae16e119fa775c6f2c740354a85d | 36,185 | ipynb | Jupyter Notebook | Trial_Run.ipynb | RWJohns/ComputerVision_Steel | 0ed56f8d2169f31253286cd4c834aa27c37fd489 | [
"MIT"
] | null | null | null | Trial_Run.ipynb | RWJohns/ComputerVision_Steel | 0ed56f8d2169f31253286cd4c834aa27c37fd489 | [
"MIT"
] | null | null | null | Trial_Run.ipynb | RWJohns/ComputerVision_Steel | 0ed56f8d2169f31253286cd4c834aa27c37fd489 | [
"MIT"
] | null | null | null | 50.117729 | 1,675 | 0.578499 | [
[
[
"import os\nimport numpy as np\nimport pandas as pd\n\nimport os\nimport cv2\nfrom pathlib import Path\n\nfrom skimage.io import imsave, imread\n\nimport tensorflow as tf\n\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras.layers import Input, concatenate, Conv2D, MaxPooling2D, Conv2DTranspose\nfrom tensorflow.keras.optimizers import Adam\nfrom tensorflow.keras.callbacks import ModelCheckpoint\nfrom tensorflow.keras import backend as K\nfrom tensorflow.keras.models import load_model\n\nfrom tensorflow.python.framework import ops\nops.reset_default_graph()\n",
"_____no_output_____"
],
[
"#This csv loads the info that will become the masks which is in run-length encoding\ntrain_df = pd.read_csv('~/Data/Metis/Steel/train.csv')\ntrain_df.shape",
"_____no_output_____"
],
[
"data_path = \"/Users/robjohns/Data/Metis/Steel/train_images/\"\ntrain_data_path = os.path.join(data_path)\nimages = os.listdir(train_data_path)\nprint(len(images))",
"12568\n"
],
[
"def name_and_mask(start_idx):\n #in data set, each images has 4 rows, this grabs all 4 and makes sure image name matches\n \n col = start_idx\n img_names = [str(i).split(\"_\")[0] for i in train_df.iloc[col:col+4, 0].values]\n if not (img_names[0] == img_names[1] == img_names[2] == img_names[3]):\n raise ValueError\n \n # This takes the 4 values of tagged pixels for each of the 4 defect tags \n #for the current image\n labels = train_df.iloc[col:col+4, 1]\n \n #makes an empty mask that is 256x1600 pixels with 4 layers for each pixel\n mask = np.zeros((256, 1600, 4), dtype=np.uint8)\n\n \n #\n for idx, label in enumerate(labels.values):\n \n# 4 times, once for each layer, the mask label is processed\n# the output will leave all 0's for the mask we made above if there is no code\n# or it will be converted to changing the mask on that layer\n \n if label is not np.nan:\n mask_label = np.zeros(1600*256, dtype=np.uint8)\n label = label.split(\" \")\n\n#makes a list out of non-zero labels, alternating between positions and lengths\n \n positions = map(int, label[0::2])\n length = map(int, label[1::2])\n \n#makes lists of positions and lengths by iterating every other, \n#and forces them to become int \n \n for pos, le in zip(positions, length):\n mask_label[pos-1:pos+le-1] = 1\n mask[:, :, idx] = mask_label.reshape(256, 1600, order='F')\n# the positions called in label are turned to 1 in the mask for this layer\n \n return img_names[0], mask",
"_____no_output_____"
],
[
"#Make a full table of 4 masks per image with dims [numpix,vertpix,horpix,masks(4)]",
"_____no_output_____"
],
[
"#Build Masks\n\nyname=[]\ny=[]\nfor idx in range(0,128,4):\n pix,masks=name_and_mask(idx)\n indy=int(idx/4)\n y.append(masks.astype(np.float32))\n yname.append(pix)\ny=np.stack(y, axis=0) \ny_train=y\n\n\ny_test=[]\nfor idx in range(0,128,4):\n pix,masks=name_and_mask(idx)\n indy=int(idx/4)\n y_test.append(masks.astype(np.float32))\ny_test=np.stack(y_test, axis=0)",
"_____no_output_____"
],
[
"#process image files into numpy arrays \nx=[]\nfor idx in range(0,128,4): \n name, mask = name_and_mask(idx)\n abs_path = \"/Users/robjohns/Data/Metis/Steel/train_images/\"\n filename=abs_path+name\n impath = Path(filename)\n img = cv2.imread(filename)\n x.append(img.astype(np.float32))\nx=np.stack(x, axis=0) \nx_train=x\nx_train=x_train/255\n\nx_test=[]\nfor idx in range(0,128,4): \n name, mask = name_and_mask(idx)\n abs_path = \"/Users/robjohns/Data/Metis/Steel/train_images/\"\n filename=abs_path+name\n impath = Path(filename)\n img = cv2.imread(filename)\n x_test.append(img.astype(np.float32))\nx_test=np.stack(x_test, axis=0)\nx_test=x_test/255",
"_____no_output_____"
],
[
"x_test.shape",
"_____no_output_____"
],
[
"\nim_idx_list = []\nim_idx_list_onlydf = []\nim_idx_list_nodf = []\n\n\nfor col in range(0, len(train_df), 4):\n img_names = [str(i).split(\"_\")[0] for i in train_df.iloc[col:col+4, 0].values]\n if not (img_names[0] == img_names[1] == img_names[2] == img_names[3]):\n raise ValueError\n \n \n \n labels = train_df.iloc[col:col+4, 1]\n if labels.isna().all():\n im_idx_list_nodf.append(col)\n im_idx_list.append(col)\n \n \n elif (labels.isna() == [False, True, True, True]).all():\n im_idx_list.append(col)\n im_idx_list_onlydf.append(col)\n \n elif (labels.isna() == [True, False, True, True]).all():\n im_idx_list.append(col)\n im_idx_list_onlydf.append(col)\n \n elif (labels.isna() == [True, True, False, True]).all():\n im_idx_list.append(col)\n im_idx_list_onlydf.append(col)\n \n elif (labels.isna() == [True, True, True, False]).all():\n im_idx_list.append(col)\n im_idx_list_onlydf.append(col)\n \n else:\n im_idx_list.append(col)\n im_idx_list_onlydf.append(col)",
"_____no_output_____"
],
[
"class TFSCGen(keras.utils.Sequence):\n \"\"\"Generator class for Tensorflow Speech Competition data\n \n args:\n - gen_path (patlib.Path): a path pointing to a directory containing training examples\n examples should be stored in format `gen_path/[class_name]/[file_name].wav`\n - batch_size (int): size of batches to return\n - shuffle (bool): whether or not data is to be shuffled between batches\n \n \"\"\"\n \n \n def __init__(self, gen_path=im_idx_list, batch_size = 32, shuffle=True):\n \n \n self.gen_files = im_idx_list\n \n self.batch_size = batch_size\n self.shuffle = shuffle\n \n if self.shuffle:\n random.shuffle(self.gen_files)\n\n def __len__(self):\n \"\"\"returns the number of examples\"\"\"\n \n return int(np.ceil(len(self.gen_files) / float(self.batch_size)))\n \n def on_epoch_end(self):\n \"\"\"shuffles data after an epoch runs (but only if self.shuffle is set)\"\"\"\n \n if self.shuffle:\n random.shuffle(self.gen_files)\n \n\n def __getitem__(self, idx):\n \"\"\"function to return the batch given the batch index\n \n args:\n idx (int): this is the batch index generated by keras\n \n \"\"\"\n \n \n start_idx = idx*self.batch_size\n batch_rows = self.gen_files[start_idx:start_idx+self.batch_size]\n \n x=[]\n for idx in batch_rows: \n name, mask = name_and_mask(idx)\n abs_path = \"/Users/robjohns/Data/Metis/Steel/train_images/\"\n #need to generalize this for tests\n \n filename=abs_path+name\n impath = Path(filename)\n img = cv2.imread(filename)\n x.append(img.astype(np.float32))\n x=np.stack(x, axis=0) \n x=x/255\n \n \n y=[]\n for idx in batch_rows:\n pix,masks=name_and_mask(idx)\n indy=int(idx/4)\n y.append(masks.astype(np.float32))\n yname.append(pix)\n y=np.stack(y, axis=0) \n \n \n \n return x,y",
"_____no_output_____"
],
[
"#put TFSCGen() where x,y would be in model call\n\n# example: conv_model.fit(TFSCGen(), validation_data=TFSCGen(tfsc_val),\n epochs=100,\n callbacks=[\n keras.callbacks.ReduceLROnPlateau(patience=3, verbose=True), \n keras.callbacks.EarlyStopping(patience=8, restore_best_weights=True, verbose=True)\n ]) \n",
"_____no_output_____"
],
[
"#do test train split here, you need to send x_train, y_train, x_test, and y_test to model",
"_____no_output_____"
],
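[
"# A minimal sketch (not from the original notebook) of the train/test split described in\n# the comment above. Assumption: x and y are the stacked image and mask arrays built in\n# the earlier cells (x is still unscaled here, so we divide by 255 as the notebook does).\n# The original notebook simply reuses the same 32 samples for both the train and test sets.\nfrom sklearn.model_selection import train_test_split\n\nx_tr, x_te, y_tr, y_te = train_test_split(x / 255.0, y, test_size=0.2, random_state=42)\nprint(x_tr.shape, x_te.shape, y_tr.shape, y_te.shape)\n# To feed train_and_predict() below, these could be assigned to x_train, y_train, x_test, y_test.",
"_____no_output_____"
],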
[
"len(x),len(y),len(yname)",
"_____no_output_____"
],
[
"np.save('imgs_masks.npy', y)\nnp.save('imgs.npy', x)\nnp.save('imgs_names.npy', yname)",
"_____no_output_____"
],
[
"train_df.shape",
"_____no_output_____"
],
[
"50272/4",
"_____no_output_____"
],
[
"def dice_coef(y_true, y_pred):\n smooth = 1.\n y_true_f = K.flatten(y_true)\n y_pred_f = K.flatten(y_pred)\n intersection = K.sum(y_true_f * y_pred_f)\n return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)\n\n\ndef dice_coef_loss(y_true, y_pred):\n return -dice_coef(y_true, y_pred)",
"_____no_output_____"
],
[
"# unet from https://github.com/jocicmarko/ultrasound-nerve-segmentation",
"_____no_output_____"
],
[
"def get_unet():\n inputs = Input((256, 1600, 3))\n conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)\n conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)\n pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)\n\n conv2 = Conv2D(48, (3, 3), activation='relu', padding='same')(pool1)\n conv2 = Conv2D(48, (3, 3), activation='relu', padding='same')(conv2)\n pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)\n\n conv3 = Conv2D(72, (3, 3), activation='relu', padding='same')(pool2)\n conv3 = Conv2D(72, (3, 3), activation='relu', padding='same')(conv3)\n pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)\n\n conv4 = Conv2D(108, (3, 3), activation='relu', padding='same')(pool3)\n conv4 = Conv2D(108, (3, 3), activation='relu', padding='same')(conv4)\n pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)\n\n conv5 = Conv2D(162, (3, 3), activation='relu', padding='same')(pool4)\n conv5 = Conv2D(162, (3, 3), activation='relu', padding='same')(conv5)\n\n up6 = concatenate([Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(conv5), conv4], axis=3)\n conv6 = Conv2D(108, (3, 3), activation='relu', padding='same')(up6)\n conv6 = Conv2D(108, (3, 3), activation='relu', padding='same')(conv6)\n\n up7 = concatenate([Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(conv6), conv3], axis=3)\n conv7 = Conv2D(72, (3, 3), activation='relu', padding='same')(up7)\n conv7 = Conv2D(72, (3, 3), activation='relu', padding='same')(conv7)\n\n up8 = concatenate([Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(conv7), conv2], axis=3)\n conv8 = Conv2D(48, (3, 3), activation='relu', padding='same')(up8)\n conv8 = Conv2D(48, (3, 3), activation='relu', padding='same')(conv8)\n\n up9 = concatenate([Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(conv8), conv1], axis=3)\n conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(up9)\n conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv9)\n\n conv10 = Conv2D(4, (1, 1), activation='sigmoid')(conv9)\n\n model = Model(inputs=[inputs], outputs=[conv10])\n\n model.compile(optimizer=Adam(lr=1e-5), loss=dice_coef_loss, metrics=[dice_coef])\n\n return model\n",
"_____no_output_____"
],
[
"unet_model = get_unet()\n\nunet_model.summary()\n#check dimensions match expected output. ",
"Model: \"model_2\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_3 (InputLayer) [(None, 256, 1600, 3 0 \n__________________________________________________________________________________________________\nconv2d_38 (Conv2D) (None, 256, 1600, 32 896 input_3[0][0] \n__________________________________________________________________________________________________\nconv2d_39 (Conv2D) (None, 256, 1600, 32 9248 conv2d_38[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_8 (MaxPooling2D) (None, 128, 800, 32) 0 conv2d_39[0][0] \n__________________________________________________________________________________________________\nconv2d_40 (Conv2D) (None, 128, 800, 48) 13872 max_pooling2d_8[0][0] \n__________________________________________________________________________________________________\nconv2d_41 (Conv2D) (None, 128, 800, 48) 20784 conv2d_40[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_9 (MaxPooling2D) (None, 64, 400, 48) 0 conv2d_41[0][0] \n__________________________________________________________________________________________________\nconv2d_42 (Conv2D) (None, 64, 400, 72) 31176 max_pooling2d_9[0][0] \n__________________________________________________________________________________________________\nconv2d_43 (Conv2D) (None, 64, 400, 72) 46728 conv2d_42[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_10 (MaxPooling2D) (None, 32, 200, 72) 0 conv2d_43[0][0] \n__________________________________________________________________________________________________\nconv2d_44 (Conv2D) (None, 32, 200, 108) 70092 max_pooling2d_10[0][0] \n__________________________________________________________________________________________________\nconv2d_45 (Conv2D) (None, 32, 200, 108) 105084 conv2d_44[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_11 (MaxPooling2D) (None, 16, 100, 108) 0 conv2d_45[0][0] \n__________________________________________________________________________________________________\nconv2d_46 (Conv2D) (None, 16, 100, 162) 157626 max_pooling2d_11[0][0] \n__________________________________________________________________________________________________\nconv2d_47 (Conv2D) (None, 16, 100, 162) 236358 conv2d_46[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_8 (Conv2DTrans (None, 32, 200, 256) 166144 conv2d_47[0][0] \n__________________________________________________________________________________________________\nconcatenate_8 (Concatenate) (None, 32, 200, 364) 0 conv2d_transpose_8[0][0] \n conv2d_45[0][0] \n__________________________________________________________________________________________________\nconv2d_48 (Conv2D) (None, 32, 200, 108) 353916 concatenate_8[0][0] \n__________________________________________________________________________________________________\nconv2d_49 (Conv2D) (None, 32, 200, 108) 105084 conv2d_48[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_9 (Conv2DTrans (None, 64, 400, 128) 55424 conv2d_49[0][0] 
\n__________________________________________________________________________________________________\nconcatenate_9 (Concatenate) (None, 64, 400, 200) 0 conv2d_transpose_9[0][0] \n conv2d_43[0][0] \n__________________________________________________________________________________________________\nconv2d_50 (Conv2D) (None, 64, 400, 72) 129672 concatenate_9[0][0] \n__________________________________________________________________________________________________\nconv2d_51 (Conv2D) (None, 64, 400, 72) 46728 conv2d_50[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_10 (Conv2DTran (None, 128, 800, 64) 18496 conv2d_51[0][0] \n__________________________________________________________________________________________________\nconcatenate_10 (Concatenate) (None, 128, 800, 112 0 conv2d_transpose_10[0][0] \n conv2d_41[0][0] \n__________________________________________________________________________________________________\nconv2d_52 (Conv2D) (None, 128, 800, 48) 48432 concatenate_10[0][0] \n__________________________________________________________________________________________________\nconv2d_53 (Conv2D) (None, 128, 800, 48) 20784 conv2d_52[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_11 (Conv2DTran (None, 256, 1600, 32 6176 conv2d_53[0][0] \n__________________________________________________________________________________________________\nconcatenate_11 (Concatenate) (None, 256, 1600, 64 0 conv2d_transpose_11[0][0] \n conv2d_39[0][0] \n__________________________________________________________________________________________________\nconv2d_54 (Conv2D) (None, 256, 1600, 32 18464 concatenate_11[0][0] \n__________________________________________________________________________________________________\nconv2d_55 (Conv2D) (None, 256, 1600, 32 9248 conv2d_54[0][0] \n__________________________________________________________________________________________________\nconv2d_56 (Conv2D) (None, 256, 1600, 4) 132 conv2d_55[0][0] \n==================================================================================================\nTotal params: 1,670,564\nTrainable params: 1,670,564\nNon-trainable params: 0\n__________________________________________________________________________________________________\n"
],
[
"def train_and_predict():\n\n #'Loading and preprocessing train data\n \n \n imgs_train, imgs_mask_train = x_train, y_train\n\n \n \n \n #imgs_train = imgs_train/255\n \n mean = np.mean(imgs_train) # mean for data centering\n std = np.std(imgs_train) # std for data normalization\n\n #imgs_train -= mean\n #imgs_train /= std\n\n \n\n \n #Creating and compiling model\n \n \n model = get_unet()\n model_checkpoint = ModelCheckpoint('weights.h5', monitor='val_loss', save_best_only=True)\n\n \n # Fitting model\n \n model.fit(imgs_train, imgs_mask_train, batch_size=32, epochs=2, verbose=1, shuffle=True,\n validation_split=0,\n callbacks=[model_checkpoint])\n\n \n #Loading and preprocessing test data\n \n imgs_test, imgs_id_test = x_test, y_test\n \n\n imgs_test # /= 255.\n #imgs_test -= mean\n #imgs_test /= std\n\n \n # Loading saved weights.\n model.load_weights('weights.h5')\n\n \n #Predicting masks on test data\n \n imgs_mask_test = model.predict(imgs_test, verbose=1)\n \n #convert masks to a table of run length encoding\n \n ",
"_____no_output_____"
],
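[
"# A minimal sketch (an assumption, not code from the original notebook) of the final step\n# mentioned in train_and_predict(): converting predicted masks back to run-length encoding.\n# It mirrors the decoding in name_and_mask(): 1-based start positions over a column-major\n# (order='F') flattening of each 256x1600 mask layer.\ndef mask_to_rle(mask_2d, threshold=0.5):\n    # Binarize and flatten column-major so encoding matches the reshape(order='F') used when decoding.\n    pixels = (mask_2d.flatten(order='F') > threshold).astype(np.uint8)\n    # Pad with zeros so runs touching the borders are detected as well.\n    padded = np.concatenate([[0], pixels, [0]])\n    boundaries = np.where(padded[1:] != padded[:-1])[0] + 1  # 1-based run boundaries\n    starts, ends = boundaries[0::2], boundaries[1::2]\n    lengths = ends - starts\n    return ' '.join(str(v) for pair in zip(starts, lengths) for v in pair)\n\n# Hypothetical usage on one predicted image and defect class:\n# rle = mask_to_rle(imgs_mask_test[0, :, :, 0])",
"_____no_output_____"
],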
[
"train_and_predict()",
"Train on 32 samples\nEpoch 1/2\nWARNING:tensorflow:Can save best model only with val_loss available, skipping.\n32/32 [==============================] - 185s 6s/sample - loss: -0.0088 - dice_coef: 0.0088\nEpoch 2/2\nWARNING:tensorflow:Can save best model only with val_loss available, skipping.\n32/32 [==============================] - 157s 5s/sample - loss: -0.0098 - dice_coef: 0.0098\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7f03dde214ec629e5a1c74a3a1de09db8ce0587 | 861,832 | ipynb | Jupyter Notebook | _notebooks/2022-02-04-data-analysis-course-project.ipynb | sandeshkatakam/My-Machine_learning-Blog | 2d71f3bcac3662617b54d6b90a46c85a6ebc6830 | [
"Apache-2.0"
] | 1 | 2022-02-01T11:58:52.000Z | 2022-02-01T11:58:52.000Z | _notebooks/2022-02-04-data-analysis-course-project.ipynb | sandeshkatakam/My-Machine_learning-Blog | 2d71f3bcac3662617b54d6b90a46c85a6ebc6830 | [
"Apache-2.0"
] | 5 | 2022-02-01T12:00:39.000Z | 2022-02-18T03:44:00.000Z | _notebooks/2022-02-04-data-analysis-course-project.ipynb | sandeshkatakam/My-Machine_learning-Blog | 2d71f3bcac3662617b54d6b90a46c85a6ebc6830 | [
"Apache-2.0"
] | null | null | null | 202.736297 | 89,526 | 0.875418 | [
[
[
"# Axis Bank Stock Data Analysis Project Blog Post\n> Data Analysis of axis bank stock market time-series dataset.\n\n- toc: true \n- badges: true\n- comments: true\n- categories: [jupyter]\n- image: images/stockdataimg.jpg",
"_____no_output_____"
],
[
"## AxisBank Stock Data Analysis\n\nThe project is based on the dataset I obtained from kaggle. The Analysis I am performing is on the 'AXISBANK' stock market data from 2019-2021.AXISBANK is one of the stocks listed in NIFTY50 index. The NIFTY 50 is a benchmark Indian stock market index that represents the weighted average of 50 of the largest Indian companies listed on the National Stock Exchange. It is one of the two main stock indices used in India, the other being the BSE SENSEX. The Analysis is performed on the stock quote data of \"AXIS BANK\" from the dataset of NIFTY50 Stock Market data obtained from kaggle repo. \n\nAxis Bank Limited, formerly known as UTI Bank (1993–2007), is an Indian banking and financial services company headquartered in Mumbai, Maharashtra.It sells financial services to large and mid-size companies, SMEs and retail businesses.\n\nThe bank was founded on 3 December 1993 as UTI Bank, opening its registered office in Ahmedabad and a corporate office in Mumbai. The bank was promoted jointly by the Administrator of the Unit Trust of India (UTI), Life Insurance Corporation of India (LIC), General Insurance Corporation, National Insurance Company, The New India Assurance Company, The Oriental Insurance Corporation and United India Insurance Company. The first branch was inaugurated on 2 April 1994 in Ahmedabad by Manmohan Singh, then finance minister of India \\\nI chose this dataset because of the importance of NIFTY50 listed stocks on Indian economy. In most ways the NIFTY50 presents how well the Indian capital markets are doing.\n",
"_____no_output_____"
],
[
"## Downloading the Dataset\n\nIn this section of the Jupyter notebook we are going to download an interesting data set from kaggle dataset repositories. We are using python library called OpenDatasets for downloading from kaggle. While downloading we are asked for kaggle user id and API token key for accessing the dataset from kaggle. Kaggle is a platform used for obtaining datasets and various other datascience tasks. ",
"_____no_output_____"
]
],
[
[
"!pip install jovian opendatasets --upgrade --quiet",
"_____no_output_____"
]
],
[
[
"Let's begin by downloading the data, and listing the files within the dataset.",
"_____no_output_____"
]
],
[
[
"# Change this\ndataset_url = 'https://www.kaggle.com/rohanrao/nifty50-stock-market-data'",
"_____no_output_____"
],
[
"import opendatasets as od\nod.download(dataset_url)",
"Skipping, found downloaded files in \"./nifty50-stock-market-data\" (use force=True to force download)\n"
]
],
[
[
"The dataset has been downloaded and extracted.",
"_____no_output_____"
]
],
[
[
"# Change this\ndata_dir = './nifty50-stock-market-data'",
"_____no_output_____"
],
[
"import os\nos.listdir(data_dir)",
"_____no_output_____"
]
],
[
[
"Let us save and upload our work to Jovian before continuing.",
"_____no_output_____"
]
],
[
[
"project_name = \"nifty50-stockmarket-data\" # change this (use lowercase letters and hyphens only)",
"_____no_output_____"
],
[
"!pip install jovian --upgrade -q",
"_____no_output_____"
],
[
"import jovian",
"_____no_output_____"
],
[
"jovian.commit(project=project_name)",
"_____no_output_____"
]
],
[
[
"## Data Preparation and Cleaning\n\nData Preparation and Cleansing constitutes the first part of the Data Analysis project for any dataset. We do this process inorder to obtain retain valuable data from the data frame, one that is relevant for our analysis. The process is also used to remove erroneous values from the dataset(ex. NaN to 0). After the preparation of data and cleansing, the data can be used for analysis.</br>\nIn our dataframe we have a lot of non-releavant information, so we are going to drop few columns in the dataframe and fix some of the elements in data frame for better analysis. We are also going to change the Date column into DateTime format which can be further used to group the data by months/year.\n\n",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n",
"_____no_output_____"
],
[
"axis_df= pd.read_csv(data_dir + \"/AXISBANK.csv\")",
"_____no_output_____"
],
[
"axis_df.info()\n",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 5306 entries, 0 to 5305\nData columns (total 15 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Date 5306 non-null object \n 1 Symbol 5306 non-null object \n 2 Series 5306 non-null object \n 3 Prev Close 5306 non-null float64\n 4 Open 5306 non-null float64\n 5 High 5306 non-null float64\n 6 Low 5306 non-null float64\n 7 Last 5306 non-null float64\n 8 Close 5306 non-null float64\n 9 VWAP 5306 non-null float64\n 10 Volume 5306 non-null int64 \n 11 Turnover 5306 non-null float64\n 12 Trades 2456 non-null float64\n 13 Deliverable Volume 4797 non-null float64\n 14 %Deliverble 4797 non-null float64\ndtypes: float64(11), int64(1), object(3)\nmemory usage: 621.9+ KB\n"
],
[
"axis_df.describe()\n",
"_____no_output_____"
],
[
"axis_df",
"_____no_output_____"
],
[
"axis_df['Symbol'] = np.where(axis_df['Symbol'] == 'UTIBANK', 'AXISBANK', axis_df['Symbol'])\naxis_df",
"_____no_output_____"
],
[
"axis_new_df = axis_df.drop(['Last','Series', 'VWAP', 'Trades','Deliverable Volume','%Deliverble'], axis=1)\n\naxis_new_df",
"_____no_output_____"
],
[
"def getIndexes(dfObj, value):\n ''' Get index positions of value in dataframe i.e. dfObj.'''\n listOfPos = list()\n # Get bool dataframe with True at positions where the given value exists\n result = dfObj.isin([value])\n # Get list of columns that contains the value\n seriesObj = result.any()\n columnNames = list(seriesObj[seriesObj == True].index)\n # Iterate over list of columns and fetch the rows indexes where value exists\n for col in columnNames:\n rows = list(result[col][result[col] == True].index)\n for row in rows:\n listOfPos.append((row, col))\n # Return a list of tuples indicating the positions of value in the dataframe\n return listOfPos\n",
"_____no_output_____"
],
[
"listOfPosition_axis = getIndexes(axis_df, '2019-01-01')\nlistOfPosition_axis",
"_____no_output_____"
],
[
"axis_new_df.drop(axis_new_df.loc[0:4728].index, inplace = True)",
"_____no_output_____"
],
[
"axis_new_df",
"_____no_output_____"
]
],
[
[
"## Summary of the operations done till now:\n1. we have taken a csv file containing stock data of AXIS BANK from the data set of nifty50 stocks and performed data cleansing operations on them.</br>\n2. Originally, the data from the data set is noticed as stock price quotations from the year 2001 but for our analysis we have taken data for the years 2019-2021</br>\n3. Then we have dropped the columns that are not relevant for our analysis by using pandas dataframe operations.",
"_____no_output_____"
]
],
[
[
"axis_new_df.reset_index(drop=True, inplace=True)\naxis_new_df",
"_____no_output_____"
],
[
"axis_new_df['Date'] = pd.to_datetime(axis_new_df['Date']) # we changed the Dates into Datetime format from the object format\naxis_new_df.info() ",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 577 entries, 0 to 576\nData columns (total 9 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Date 577 non-null datetime64[ns]\n 1 Symbol 577 non-null object \n 2 Prev Close 577 non-null float64 \n 3 Open 577 non-null float64 \n 4 High 577 non-null float64 \n 5 Low 577 non-null float64 \n 6 Close 577 non-null float64 \n 7 Volume 577 non-null int64 \n 8 Turnover 577 non-null float64 \ndtypes: datetime64[ns](1), float64(6), int64(1), object(1)\nmemory usage: 40.7+ KB\n"
],
[
"axis_new_df['Daily Lag'] = axis_new_df['Close'].shift(1) # Added a new column Daily Lag to calculate daily returns of the stock\naxis_new_df['Daily Returns'] = (axis_new_df['Daily Lag']/axis_new_df['Close']) -1\n",
"_____no_output_____"
],
[
"axis_dailyret_df = axis_new_df.drop(['Prev Close', 'Open','High', 'Low','Close','Daily Lag'], axis = 1)",
"_____no_output_____"
],
[
"axis_dailyret_df",
"_____no_output_____"
],
[
"import jovian",
"_____no_output_____"
],
[
"jovian.commit()",
"_____no_output_____"
]
],
[
[
"## Exploratory Analysis and Visualization\n\n\n#### Here we compute the mean, max/min stock quotes of the stock AXISBANK. We specifically compute the mean of the Daily returns column. we are going to do the analysis by first converting the index datewise to month wise to have a good consolidated dataframe to analyze in broad timeline. we are going to divide the data frame into three for the years 2019, 2020, 2021 respectively, in order to analyze the yearly performance of the stock.\n",
"_____no_output_____"
],
[
"Let's begin by importing`matplotlib.pyplot` and `seaborn`.",
"_____no_output_____"
]
],
[
[
"import seaborn as sns\nimport matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nsns.set_style('darkgrid')\nmatplotlib.rcParams['font.size'] = 10\nmatplotlib.rcParams['figure.figsize'] = (15, 5)\nmatplotlib.rcParams['figure.facecolor'] = '#00000000'",
"_____no_output_____"
]
],
[
[
"Here we are going to explore the daily Returns column by plotting a line graph of daily returns v/s Months. Now we can see that daily returns are growing across months in the years 2019-2021.",
"_____no_output_____"
]
],
[
[
"\naxis_dailyret_plot=axis_dailyret_df.groupby(axis_dailyret_df['Date'].dt.strftime('%B'))['Daily Returns'].sum().sort_values()\nplt.plot(axis_dailyret_plot)",
"_____no_output_____"
],
[
"axis_new_df['Year'] = pd.DatetimeIndex(axis_new_df['Date']).year\naxis_new_df\n",
"_____no_output_____"
],
[
"axis2019_df = axis_new_df[axis_new_df.Year == 2019 ]\naxis2020_df = axis_new_df[axis_new_df.Year == 2020 ]\naxis2021_df = axis_new_df[axis_new_df.Year == 2021 ]",
"_____no_output_____"
],
[
"axis2019_df.reset_index(drop = True, inplace = True)\naxis2019_df",
"_____no_output_____"
],
[
"axis2020_df.reset_index(drop = True, inplace = True)\naxis2020_df",
"_____no_output_____"
],
[
"axis2021_df.reset_index(drop=True, inplace=True)\naxis2021_df",
"_____no_output_____"
]
],
[
[
"## Summary of above exploratory Analysis:\nIn the above code cells, we performed plotting of the data by exploring a column from the data. We have divided the DataFrame into three data frames containing the stock quote data from year-wise i.e., for the years 2019, 2020, 2021. For dividing the DataFrame year-wise we have added a new column called 'Year' which is generated from the DataTime values of the column \"Date\".\n\n\n",
"_____no_output_____"
]
],
[
[
"axis_range_df = axis_dailyret_df['Daily Returns'].max() - axis_dailyret_df['Daily Returns'].min()\naxis_range_df",
"_____no_output_____"
],
[
"axis_mean_df = axis_dailyret_df['Daily Returns'].mean()\naxis_mean_df",
"_____no_output_____"
]
],
[
[
"In the above two code cells, we have computed the range i.e. the difference between maximum and minimum value of the column. We have also calculated the mean of the daily returns of the Axis Bank stock.",
"_____no_output_____"
],
[
"## Exploratory Analysis of stock quotes year-wise for Axis Bank:\nIn this section we have plotted the Closing values of the stock throughout the year for the years 2019,2020,2021. We have only partial data for 2021(i.e. till Apr 2021). We have also done a plot to compare the performance throughout the year for the years 2019 and 2020(since we had full data for the respective years).\n",
"_____no_output_____"
]
],
[
[
"plt.plot(axis2019_df['Date'],axis2019_df['Close'] )\nplt.title('Closing Values of stock for the year 2019')\nplt.xlabel(None)\nplt.ylabel('Closing price of the stock')",
"_____no_output_____"
],
[
"plt.plot(axis2020_df['Date'],axis2020_df['Close'])\nplt.title('Closing Values of stock for the year 2020')\nplt.xlabel(None)\nplt.ylabel('Closing price of the stock')",
"_____no_output_____"
],
[
"plt.plot(axis2021_df['Date'],axis2021_df['Close'])\nplt.title('Closing Values of stock for the year 2021 Till April Month')\nplt.xlabel(None)\nplt.ylabel('Closing price of the stock')",
"_____no_output_____"
]
],
[
[
"**TODO** - Explore one or more columns by plotting a graph below, and add some explanation about it",
"_____no_output_____"
]
],
[
[
"plt.style.use('fivethirtyeight')\nplt.plot(axis2019_df['Date'], axis2019_df['Close'],linewidth=3, label = '2019')\nplt.plot(axis2020_df[\"Date\"],axis2020_df['Close'],linewidth=3, label = '2020')\nplt.legend(loc='best' )\nplt.title('Closing Values of stock for the years 2019 and 2020')\nplt.xlabel(None)\nplt.ylabel('Closing price of the stock')\n",
"_____no_output_____"
],
[
"print(plt.style.available)",
"['Solarize_Light2', '_classic_test_patch', 'bmh', 'classic', 'dark_background', 'fast', 'fivethirtyeight', 'ggplot', 'grayscale', 'seaborn', 'seaborn-bright', 'seaborn-colorblind', 'seaborn-dark', 'seaborn-dark-palette', 'seaborn-darkgrid', 'seaborn-deep', 'seaborn-muted', 'seaborn-notebook', 'seaborn-paper', 'seaborn-pastel', 'seaborn-poster', 'seaborn-talk', 'seaborn-ticks', 'seaborn-white', 'seaborn-whitegrid', 'tableau-colorblind10']\n"
]
],
[
[
"Let us save and upload our work to Jovian before continuing",
"_____no_output_____"
]
],
[
[
"import jovian",
"_____no_output_____"
],
[
"jovian.commit()",
"_____no_output_____"
]
],
[
[
"## Asking and Answering Questions\n\nIn this section, we are going to answer some of the questions regarding the dataset using various data analysis libraries like Numpy, Pandas, Matplotlib and seaborn. By using the tools we can see how useful the libraries come in handy while doing Inference on a dataset.\n\n",
"_____no_output_____"
],
[
 Instructions (delete this cell)">
"Each question below is answered either by computing results with Numpy/Pandas or by plotting graphs with Matplotlib/Seaborn, reusing the columns and year-wise data frames prepared above.\n\n",
"_____no_output_____"
],
[
"### Q1: What was the change in price and volume of the stock traded overtime?",
"_____no_output_____"
]
],
[
[
"plt.plot(axis2019_df['Date'], axis2019_df['Close'],linewidth=3, label = '2019')\nplt.plot(axis2020_df[\"Date\"],axis2020_df['Close'],linewidth=3, label = '2020')\nplt.plot(axis2021_df[\"Date\"], axis2021_df['Close'],linewidth = 3, label = '2021')\nplt.legend(loc='best' )\nplt.title('Closing Price of stock for the years 2019-2021(Till April)')\nplt.xlabel(None)\nplt.ylabel('Closing price of the stock')",
"_____no_output_____"
],
[
"print('The Maximum closing price of the stock during 2019-2021 is',axis_new_df['Close'].max())\nprint('The Minimum closing price of the stock during 2019-2021 is',axis_new_df['Close'].min())\nprint('The Index for the Maximum closing price in the dataframe is',getIndexes(axis_new_df, axis_new_df['Close'].max()))\nprint('The Index for the Minimum closing price in the dataframe is',getIndexes(axis_new_df, axis_new_df['Close'].min()))\nprint(axis_new_df.iloc[104])\nprint(axis_new_df.iloc[303])\n",
"The Maximum closing price of the stock during 2019-2021 is 822.8\nThe Minimum closing price of the stock during 2019-2021 is 303.15\nThe Index for the Maximum closing price in the dataframe is [(105, 'Prev Close'), (104, 'Close'), (105, 'Daily Lag')]\nThe Index for the Minimum closing price in the dataframe is [(304, 'Prev Close'), (303, 'Close'), (304, 'Daily Lag')]\nDate 2019-06-04 00:00:00\nSymbol AXISBANK\nPrev Close 812.65\nOpen 807.55\nHigh 827.75\nLow 805.5\nClose 822.8\nVolume 9515354\nTurnover 778700415970000.0\nDaily Lag 812.65\nDaily Returns -0.012336\nYear 2019\nName: 104, dtype: object\nDate 2020-03-24 00:00:00\nSymbol AXISBANK\nPrev Close 308.65\nOpen 331.95\nHigh 337.5\nLow 291.0\nClose 303.15\nVolume 50683611\nTurnover 1578313503950000.0\nDaily Lag 308.65\nDaily Returns 0.018143\nYear 2020\nName: 303, dtype: object\n"
]
],
[
[
"* As we can see from the above one of the two plots there was a dip in the closing price during the year 2020. The Maximum Closing price occurred on 2019-06-04(Close = 822.8). The lowest of closing price during the years occurred on 2020-03-24(Close = 303.15). This can say that the start of the pandemic has caused the steep down curve for the stock's closing price.",
"_____no_output_____"
]
],
[
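[
"# An alternative lookup (a sketch, not part of the original analysis): instead of the custom\n# getIndexes() helper plus hard-coded iloc positions, pandas can locate the rows with the\n# highest/lowest closing price directly. Assumption: axis_new_df is the cleaned DataFrame built above.\nhighest_close = axis_new_df.loc[axis_new_df['Close'].idxmax()]\nlowest_close = axis_new_df.loc[axis_new_df['Close'].idxmin()]\nprint('Highest close of', highest_close['Close'], 'on', highest_close['Date'].date())\nprint('Lowest close of', lowest_close['Close'], 'on', lowest_close['Date'].date())",
"_____no_output_____"
],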
[
"plt.plot(axis2019_df[\"Date\"],axis2019_df[\"Volume\"],linewidth=2, label = '2019')\nplt.plot(axis2020_df[\"Date\"],axis2020_df[\"Volume\"],linewidth=2, label = '2020')\nplt.plot(axis2021_df[\"Date\"],axis2021_df[\"Volume\"],linewidth=2, label = '2021')\nplt.legend(loc='best')\nplt.title('Volume of stock traded in the years 2019-2021(till April)')\nplt.ylabel('Volume')\nplt.xlabel(None)\n",
"_____no_output_____"
],
[
"print('The Maximum volume of the stock traded during 2019-2021 is',axis_new_df['Volume'].max())\nprint('The Minimum volume of the stock traded during 2019-2021 is',axis_new_df['Volume'].min())\nprint('The Index for the Maximum volume stock traded in the dataframe is',getIndexes(axis_new_df, axis_new_df['Volume'].max()))\nprint('The Index for the Minimum volume stock traded in the dataframe is',getIndexes(axis_new_df, axis_new_df['Volume'].min()))\nprint(axis_new_df.iloc[357])\nprint(axis_new_df.iloc[200])",
"The Maximum volume of the stock traded during 2019-2021 is 96190274\nThe Minimum volume of the stock traded during 2019-2021 is 965772\nThe Index for the Maximum volume stock traded in the dataframe is [(357, 'Volume')]\nThe Index for the Minimum volume stock traded in the dataframe is [(200, 'Volume')]\nDate 2020-06-16 00:00:00\nSymbol AXISBANK\nPrev Close 389.6\nOpen 404.9\nHigh 405.0\nLow 360.4\nClose 381.55\nVolume 96190274\nTurnover 3654065942305001.0\nDaily Lag 389.6\nDaily Returns 0.021098\nYear 2020\nName: 357, dtype: object\nDate 2019-10-27 00:00:00\nSymbol AXISBANK\nPrev Close 708.6\nOpen 711.0\nHigh 715.05\nLow 708.55\nClose 710.1\nVolume 965772\nTurnover 68696126654999.992188\nDaily Lag 708.6\nDaily Returns -0.002112\nYear 2019\nName: 200, dtype: object\n"
]
],
[
[
"As we can see from the above graph a lot of volume of trade happened during 2020. That means the stock was transacted a lot during the year 2020. The highest Volumed of stock is traded on 2020-06-16(Volume =96190274) and the Minimum volume of the stock traded during 2019-2021 is on 2019-10-27(Volume = 965772)",
"_____no_output_____"
],
[
"### Q2: What was the daily return of the stock on average?\n\nThe daily return measures the price change in a stock's price as a percentage of the previous day's closing price. A positive return means the stock has grown in value, while a negative return means it has lost value. we will also attempt to calculate the maximum daily return of the stock during 2019-2021.",
"_____no_output_____"
]
],
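  [
   [
    "*A minimal sketch (not necessarily how this notebook's own `Daily Returns` column was built):* the daily return compares each closing price with the previous one, and pandas' `pct_change` computes exactly this quantity from the `Close` column:\n\n```python\n# daily return = (Close_t - Close_{t-1}) / Close_{t-1}\ndaily_returns = axis_new_df['Close'].pct_change()\nprint(daily_returns.head())\n```",
    "_____no_output_____"
   ]
  ],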
[
[
"#axis_new_df['Daily Returns'].plot(title='Axis Bank Daily Returns')\nplt.plot(axis_new_df['Date'],axis_new_df['Daily Returns'], linewidth=2 ,label = 'Daily Returns')\nplt.legend(loc='best' )\nplt.title('Daily Returns of stock for the years 2019-2021(Till April)')\nplt.xlabel(None)\nplt.ylabel('Daily Returns of the stock')",
"_____no_output_____"
],
[
"plt.plot(axis_new_df['Date'],axis_new_df['Daily Returns'], linestyle='--', marker='o')\nplt.title('Daily Returns of stock for the years 2019-2021(Till April)')\nplt.xlabel(None)\nplt.ylabel('Daily Returns of the stock')",
"_____no_output_____"
],
[
"print('The Maximum daily return during the years 2020 is',axis_new_df['Daily Returns'].max())\nindex = getIndexes(axis_new_df, axis_new_df['Daily Returns'].max())\naxis_new_df.iloc[302]",
"The Maximum daily return during the years 2020 is 0.3871699335817269\n"
],
[
"def getIndexes(dfObj, value):\n ''' Get index positions of value in dataframe i.e. dfObj.'''\n listOfPos = list()\n # Get bool dataframe with True at positions where the given value exists\n result = dfObj.isin([value])\n # Get list of columns that contains the value\n seriesObj = result.any()\n columnNames = list(seriesObj[seriesObj == True].index)\n # Iterate over list of columns and fetch the rows indexes where value exists\n for col in columnNames:\n rows = list(result[col][result[col] == True].index)\n for row in rows:\n listOfPos.append((row, col))\n # Return a list of tuples indicating the positions of value in the dataframe\n return listOfPos",
"_____no_output_____"
]
],
[
[
"As we can see from the plot there were high daily returns for the stock around late March 2020 and then there was ups and downs from April- July 2020 . we can see that the most changes in daily returns occurred during April 2020 - July 2020 and at other times the daily returns were almost flat. The maximum daily returns for the stock during 2019-2021 occurred on 2020-03-23(observed from the pandas table above).",
"_____no_output_____"
]
],
[
[
"Avgdailyret_2019 =axis2019_df['Daily Returns'].sum()/len(axis2019_df['Daily Returns'])\nAvgdailyret_2020 =axis2020_df['Daily Returns'].sum()/len(axis2020_df['Daily Returns'])\nAvgdailyret_2021 =axis2021_df['Daily Returns'].sum()/len(axis2021_df['Daily Returns'])\n\n# create a dataset\ndata_dailyret = {'2019': Avgdailyret_2019, '2020':Avgdailyret_2020, '2021':Avgdailyret_2021}\nYears = list(data_dailyret.keys())\nAvgdailyret = list(data_dailyret.values())\n\n# plotting a bar chart\nplt.figure(figsize=(10, 7))\nplt.bar(Years, Avgdailyret, color ='maroon',width = 0.3)\nplt.xlabel(\"Years\")\nplt.ylabel(\"Average Daily Returns of the Stock Traded\")\nplt.title(\"Average Daily Returns of the Stock over the years 2019-2021(Till April) (in 10^7)\")\nplt.show()\n ",
"_____no_output_____"
],
[
"plt.figure(figsize=(12, 7))\nsns.distplot(axis_new_df['Daily Returns'].dropna(), bins=100, color='purple')\nplt.title(' Histogram of Daily Returns')\nplt.tight_layout()",
"/opt/conda/lib/python3.9/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n"
]
],
[
[
"### Q3: What is the Average Trading volume of the stock for past three years?",
"_____no_output_____"
]
],
[
[
"Avgvol_2019 =axis2019_df['Volume'].sum()/len(axis2019_df['Volume'])\nAvgvol_2020 =axis2020_df['Volume'].sum()/len(axis2020_df['Volume'])\nAvgvol_2021 =axis2021_df['Volume'].sum()/len(axis2021_df['Volume'])\n# create a dataset\ndata_volume = {'2019': Avgvol_2019, '2020':Avgvol_2020, '2021':Avgvol_2021}\nYears = list(data_volume.keys())\nAvgVol = list(data_volume.values())\n# plotting a bar chart\nplt.figure(figsize=(13, 7))\nplt.bar(Years, AvgVol, color ='maroon',width = 0.3)\nplt.xlabel(\"Years\")\nplt.ylabel(\"Average Volume of the Stock Traded\")\nplt.title(\"Average Trading volume of the Stock over the years 2019-2021(Till April) (in 10^7)\")\nplt.show()\n ",
"_____no_output_____"
]
],
[
[
"From the above plot we can say that more volume of the Axis Bank stock is traded during the year 2020. We can see a significant rise in the trading volume of the stock from 2019 to 2020. ",
"_____no_output_____"
],
[
"### Q4: What is the Average Closing price of the stock for past three years?",
"_____no_output_____"
]
],
[
[
"Avgclose_2019 =axis2019_df['Close'].sum()/len(axis2019_df['Close'])\nAvgclose_2020 =axis2020_df['Close'].sum()/len(axis2020_df['Close'])\nAvgclose_2021 =axis2021_df['Close'].sum()/len(axis2021_df['Close'])\n# create a dataset\ndata_volume = {'2019': Avgclose_2019, '2020':Avgclose_2020, '2021':Avgclose_2021}\nYears = list(data_volume.keys())\nAvgClose = list(data_volume.values())\n# plotting a bar chart\nplt.figure(figsize=(13, 7))\nplt.bar(Years, AvgClose, color ='maroon',width = 0.3)\nplt.xlabel(\"Years\")\nplt.ylabel(\"Average Closding Price of the Stock Traded\")\nplt.title(\"Average Closing price of the Stock over the years 2019-2021(Till April) (in 10^7)\")\nplt.show()\n ",
"_____no_output_____"
]
],
[
[
"We have seen the Trading Volume of the stock is more during the year 2020. In contrast, the Year 2020 has the lowest average closing price among the other two. But for the years 2019 and 2021 the Average closing price is almost same, there is not much change in the value.",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"Let us save and upload our work to Jovian before continuing.",
"_____no_output_____"
]
],
[
[
"import jovian",
"_____no_output_____"
],
[
"jovian.commit()",
"_____no_output_____"
]
],
[
[
"## Inferences and Conclusion\n\nInferences : The above data analysis is done on the data set of stock quotes for AXIS BANK during the years 2019-2021. From the Analysis we can say that during the year 2020 there has been a lot of unsteady growth, there has been rise in the volume of stock traded on the exchange, that means there has been a lot of transactions of the stock. The stock has seen a swift traffic in buy/sell during the year 2020 and has fallen back to normal in the year 2021. In contrast to the volume of the stock the closing price of the stock has decreased during the year 2020, which can be concluded as the volume of the stock traded has no relation to the price change of the stock(while most people think there can be a correlation among the two values). The price decrease for the stock may have been due to the pandemic rise in India during the year 2020. ",
"_____no_output_____"
]
],
[
[
"import jovian",
"_____no_output_____"
],
[
"jovian.commit()",
"_____no_output_____"
]
],
[
[
"## References and Future Work\n\nFuture Ideas for the Analyis:\n* I am planning to go forward with this basic Analysis of the AXISBANK stock quotes and build a Machine Learning model predicting the future stock prices.\n* I plan to automate the Data Analysis process for every stock in the NIFTY50 Index by defining reusable functions and automating the Analysis procedures.\n* Study more strong correlations between the different quotes of the stock and analyze how and why they are related in that fashion. \n\nREFRENCES/LINKS USED FOR THIS PROJECT :\n* https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html\n* https://stackoverflow.com/questions/16683701/in-pandas-how-to-get-the-index-of-a-known-value\n* https://towardsdatascience.com/working-with-datetime-in-pandas-dataframe-663f7af6c587\n* https://thispointer.com/python-find-indexes-of-an-element-in-pandas-dataframe/\n* https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#timeseries-friendly-merging\n* https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html\n* https://towardsdatascience.com/financial-analytics-exploratory-data-analysis-of-stock-data-d98cbadf98b9\n* https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.transpose.html\n* https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.set_index.html\n* https://pandas.pydata.org/docs/reference/api/pandas.merge.html\n* https://stackoverflow.com/questions/14661701/how-to-drop-a-list-of-rows-from-pandas-dataframe\n* https://www.interviewqs.com/ddi-code-snippets/extract-month-year-pandas\n* https://stackoverflow.com/questions/18172851/deleting-dataframe-row-in-pandas-based-on-column-value\n* https://queirozf.com/entries/matplotlib-examples-displaying-and-configuring-legends\n* https://jakevdp.github.io/PythonDataScienceHandbook/04.06-customizing-legends.html\n* https://matplotlib.org/stable/tutorials/intermediate/legend_guide.html\n* https://matplotlib.org/devdocs/gallery/subplots_axes_and_figures/subplots_demo.html\n* https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplots.html\n* https://stackoverflow.com/questions/332289/how-do-you-change-the-size-of-figures-drawn-with-matplotlib\n* https://www.investopedia.com/articles/investing/093014/stock-quotes-explained.asp\n* https://stackoverflow.com/questions/44908383/how-can-i-group-by-month-from-a-datefield-using-python-pandas\n* https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.hist.html\n* https://note.nkmk.me/en/python-pandas-dataframe-rename/\n* https://stackoverflow.com/questions/24748848/pandas-find-the-maximum-range-in-all-the-columns-of-dataframe\n* https://stackoverflow.com/questions/29233283/plotting-multiple-lines-in-different-colors-with-pandas-dataframe\n* https://jakevdp.github.io/PythonDataScienceHandbook/04.14-visualization-with-seaborn.html\n* https://www.geeksforgeeks.org/python-pandas-extracting-rows-using-loc/",
"_____no_output_____"
]
],
[
[
"import jovian",
"_____no_output_____"
],
[
"jovian.commit()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7f04c23bd22fd09e7ab8e2fb08305be76d545b8 | 13,941 | ipynb | Jupyter Notebook | Array Interview Question.ipynb | sillygod/ds_and_algorithm | 4beff02c80220baece8bbfa778586b833fcc6d6f | [
"MIT"
] | 1 | 2021-04-06T10:06:21.000Z | 2021-04-06T10:06:21.000Z | Array Interview Question.ipynb | sillygod/ds_and_algorithm | 4beff02c80220baece8bbfa778586b833fcc6d6f | [
"MIT"
] | null | null | null | Array Interview Question.ipynb | sillygod/ds_and_algorithm | 4beff02c80220baece8bbfa778586b833fcc6d6f | [
"MIT"
] | null | null | null | 24.036207 | 193 | 0.456782 | [
[
[
"# Array Interview Question\n\n\n### Anagram Check\n\nanagram是一種字的轉換,使用相同的字母以任意順序重新組成不同的字,之中有任意空白都可以例如, \"apple\" -> \"ap e lp\"\n",
"_____no_output_____"
]
],
[
[
"def anagram(s1, s2):\n l_bound = ord('0')\n r_bound = ord('z')\n appeared = [0]*(r_bound - l_bound)\n \n for letter in s1:\n if letter != ' ':\n mapping = ord(letter) - l_bound\n appeared[mapping] += 1\n\n for letter in s2:\n if letter != ' ':\n mapping = ord(letter) - l_bound\n appeared[mapping] -= 1\n if appeared[mapping] < 0:\n return False\n \n for ele in appeared:\n if ele != 0:\n return False\n \n return True\n",
"_____no_output_____"
],
[
"import unittest\n\n\nclass TestAnagram(unittest.TestCase):\n \n def test(self, solve):\n \n self.assertEqual(solve('go go go','gggooo'), True)\n self.assertEqual(solve('abc','cba'), True)\n self.assertEqual(solve('hi man','hi man'), True)\n self.assertEqual(solve('aabbcc','aabbc'), False)\n self.assertEqual(solve('123','1 2'), False)\n print('success')\n \n\nt = TestAnagram('test') # need to provide the method name, default is runTest\nt.test(anagram)",
"success\n"
]
],
[
[
"個人這邊這解法可能會不夠完善,因為僅僅是針對魚數字字母的陣列mapping,但是萬一有符號就不知道要怎辦了,所以當然是可以用dict來解掉這煩人的問題拉,只是想說這是屬於array類別的問題,就故意只用array解",
"_____no_output_____"
],
[
"### Array Pair Sum\n\n給予一個數字陣列,找出所有特定的數字配對的加起來為特定值k\nex.\n\n```python\n\npair_sum([1,3,2,2], 4)\n\n(1,3)\n(2,2)\n\n今天是要回傳有幾個配對就好,所以是回傳數字2\n```",
"_____no_output_____"
]
],
[
[
"def pair_sum(arr,k):\n res = [False]*len(arr)\n \n for i in range(len(arr)-1):\n for j in range(i+1,len(arr)):\n if arr[i] + arr[j] == k:\n res[i] = True\n res[j] = True\n \n pair_count = [1 for ele in res if ele == True]\n \n return len(pair_count)//2",
"_____no_output_____"
]
],
[
[
"上面效率會是$ Big O(n^2) $,但是如果可以使用dict或是set的話,就可以把效率壓到 $ BigO(n) $,因為 `n in dict` 這樣的查找只需 $ BigO(1) $,在array找尋你要的值是要花費 $ BigO(n) $,下面我們就來換成用set or dict來實作",
"_____no_output_____"
]
],
[
[
"def pair_sum_set_version(arr, k):\n to_seek = set()\n output = set()\n \n for num in arr:\n \n target = k - num\n \n if target not in to_seek:\n to_seek.add(num)\n else:\n output.add((min(num, target), max(num, target)))\n \n return len(output)",
"_____no_output_____"
],
[
"class TestPairSum(unittest.TestCase):\n \n def test(self, solve):\n \n self.assertEqual(solve([1,9,2,8,3,7,4,6,5,5,13,14,11,13,-1],10),6)\n self.assertEqual(solve([1,2,3,1],3),1)\n self.assertEqual(solve([1,3,2,2],4),2)\n print('success')\n \nt = TestPairSum()\nt.test(pair_sum_set_version)",
"success\n"
]
],
[
[
"### finding missing element\n\n這題是會給予你兩個array,第二個array是從第一個array隨機刪除一個元素後,並且進行洗亂的動作,然後今天你的任務就是要去找那個消失的元素",
"_____no_output_____"
]
],
[
[
"def finder(ary, ary2):\n table = {}\n \n for ele in ary:\n if ele in table:\n table[ele] += 1\n else:\n table[ele] = 1\n \n for ele in ary2:\n if ele in table:\n table[ele] -= 1\n else:\n return ele\n \n for k, v in table.items():\n if v != 0:\n return k",
"_____no_output_____"
]
],
[
[
"上面這個邏輯,如果是先用ary2去做表紀錄的話邏輯上會更加簡潔,也會少了最後一步\n\n```python\n\nfor ele in ary2:\n table[ele] = 1\n\nfor ele in ary1:\n if (ele not in table) or (table[ele] == 0):\n return ele\n else:\n table[ele] -= 1\n\n```\n\n這個解法算是最快的,因為如果使用排序的話最少都會要 $ n \\log n $,排序就是loop他去找不一樣的元素而已。\n\n\n另外有個天殺的聰明解法,這我真的沒想到就是使用XOR,讓我們先來看看code\nxor ( exclude or ) 具有排他性的or,就是or只要兩者之一有true結果就會是true,但是兩個都是true對於程式會是一種ambiguous,因此exclude這種情況,所以xor就是one or the other but not both\n\n\n$ A \\vee B $ but not $ A \\wedge B $\n\n直接從語意上翻譯成數學就是像下面\n\n$$ A \\oplus B = (A \\vee B) \\wedge \\neg ( A \\wedge B) $$\n\n\n總之呢! 因為xor的特性,若是兩個完全一樣的ary,你將會發現最後結果會是0\n\n```python\n\ndef finder_xor(arr1, arr2): \n result=0 \n \n # Perform an XOR between the numbers in the arrays\n for num in arr1+arr2: \n result^=num \n print result\n \n return result \n \n```\n\n",
"_____no_output_____"
]
],
[
[
"class TestFinder(unittest.TestCase):\n \n def test(self, solve):\n \n self.assertEqual(solve([5,5,7,7],[5,7,7]),5)\n self.assertEqual(solve([1,2,3,4,5,6,7],[3,7,2,1,4,6]),5)\n self.assertEqual(solve([9,8,7,6,5,4,3,2,1],[9,8,7,5,4,3,2,1]),6)\n print('success')\n \nt = TestFinder()\nt.test(finder)",
"success\n"
]
],
[
[
"### largest continuous sum\n\n題目會給予你一個陣列,你的任務就是要去從裡面發現哪種連續數字的總和會是最大值,不一定是全部數字加起來是最大,因為裡面會有負數,有可能是從某某位置開始的連續X個數子總和才是最大。\n",
"_____no_output_____"
]
],
[
[
"def lar_con_sum(ary):\n \n if len(ary) == 0:\n return 0\n \n max_sum = cur_sum = ary[0]\n \n for num in ary[1:]:\n cur_sum = max(cur_sum+num, num)\n max_sum = max(cur_sum, max_sum)\n \n return max_sum\n \n ",
"_____no_output_____"
]
],
[
[
"這題的思緒是,長度n的連續數字最大和,一定是從長度n-1連續數字最大和來的\n\n所以今天從index=0時來看,因為元素只有一個這時候就是他本身為最大值,當index=1時,我們就要來比較ele[0]+ele[1]和ele[0] <- 當前最大值的比較,比較這兩者然後取最大的,需要注意的是,我們需要暫存目前的sum,因為這是拿來判斷後面遇到負數狀時況,計算另一個最大值的點,此時另一個最大值(cur_sum)仍然會與之前最大值去比較(max_sum),",
"_____no_output_____"
]
],
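  [
   [
    "A small instrumented sketch (same logic as `lar_con_sum` above, with the running values printed; the array is just an illustrative example):\n\n```python\nnums = [1, 2, -10, 5, 6]\nmax_sum = cur_sum = nums[0]\nfor num in nums[1:]:\n    cur_sum = max(cur_sum + num, num)  # extend the current run or restart at num\n    max_sum = max(cur_sum, max_sum)    # remember the best run seen so far\n    print(num, cur_sum, max_sum)\n# max_sum ends up as 11, coming from the run 5 + 6\n```",
    "_____no_output_____"
   ]
  ],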
[
[
"class TestLargestConSum(unittest.TestCase):\n \n def test(self, solve):\n \n self.assertEqual(solve([1,2,-1,3,4,-1]),9)\n self.assertEqual(solve([1,2,-1,3,4,10,10,-10,-1]),29)\n self.assertEqual(solve([-1,1]),1)\n self.assertEqual(solve([1,2,-10,5,6]), 11)\n print('success')\n \nt = TestLargestConSum()\nt.test(lar_con_sum)",
"success\n"
]
],
[
[
"#### Sentence Reversal\n\n給予一個字串,然後反轉單字順序,例如: 'here it is' -> 'is it here'",
"_____no_output_____"
]
],
[
[
"def sentenceReversal(str1):\n str1 = str1.strip()\n words = str1.split() \n \n result = ''\n \n for i in range(len(words)):\n result += ' '+words[len(words)-i-1]\n \n return result.strip()\n ",
"_____no_output_____"
],
[
"class TestSentenceReversal(unittest.TestCase):\n \n def test(self, solve):\n self.assertEqual(solve(' space before'),'before space')\n self.assertEqual(solve('space after '),'after space')\n self.assertEqual(solve(' Hello John how are you '),'you are how John Hello')\n self.assertEqual(solve('1'),'1')\n print('success')\n \nt = TestSentenceReversal()\nt.test(sentenceReversal)",
"success\n"
]
],
[
[
"值得注意的是python string split這個方法,不帶參數的話,預設是做strip的事然後分割,跟你使用 split(' ')得到的結果會不一樣,另外面試時可能要使用比較基本的方式來實作這題,也就是少用python trick的方式。",
"_____no_output_____"
],
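  [
   "A quick sketch of the difference between `split()` and `split(' ')` (not part of the original solution):\n\n```python\ns = '  Hello   John  '\nprint(s.split())     # ['Hello', 'John'] -- splits on runs of whitespace and drops the edges\nprint(s.split(' '))  # keeps an empty string for every extra space\n```",
   "_____no_output_____"
  ],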
[
"#### string compression\n\n給予一串字串,轉換成數字加字母的標記法,雖然覺得這個壓縮怪怪的,因為無法保留字母順序",
"_____no_output_____"
]
],
[
[
"def compression(str1):\n mapping = {}\n letter_order = [False]\n result = ''\n \n for ele in str1:\n if ele != letter_order[-1]:\n letter_order.append(ele)\n \n if ele not in mapping:\n mapping[ele] = 1\n else:\n mapping[ele] += 1\n \n for key in letter_order[1:]:\n result += '{}{}'.format(key, mapping[key])\n \n return result",
"_____no_output_____"
],
[
"class TestCompression(unittest.TestCase):\n \n def test(self, solve):\n self.assertEqual(solve(''), '')\n self.assertEqual(solve('AABBCC'), 'A2B2C2')\n self.assertEqual(solve('AAABCCDDDDD'), 'A3B1C2D5')\n print('success')\n \nt = TestCompression()\nt.test(compression)",
"success\n"
]
],
[
[
"#### unique characters in string\n\n給予一串字串並判斷他是否全部不同的字母\n",
"_____no_output_____"
]
],
[
[
"def uni_char(str1):\n mapping = {}\n \n for letter in str1:\n if letter in mapping:\n return False\n else:\n mapping[letter] = True\n \n return True\n\ndef uni_char2(str1):\n return len(set(str1)) == len(str1)",
"_____no_output_____"
],
[
"class TestUniChar(unittest.TestCase):\n \n def test(self, solve):\n self.assertEqual(solve(''), True)\n self.assertEqual(solve('goo'), False)\n self.assertEqual(solve('abcdefg'), True)\n print('success')\n \nt = TestUniChar()\nt.test(uni_char2)",
"success\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7f05da0a6507fbb7c68d9f152ad46ec65580eda | 13,614 | ipynb | Jupyter Notebook | convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb | armhzjz/deep-learning-v2-pytorch | cedd30851aba8241a76d5278ce69286058d99fb1 | [
"MIT"
] | null | null | null | convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb | armhzjz/deep-learning-v2-pytorch | cedd30851aba8241a76d5278ce69286058d99fb1 | [
"MIT"
] | null | null | null | convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb | armhzjz/deep-learning-v2-pytorch | cedd30851aba8241a76d5278ce69286058d99fb1 | [
"MIT"
] | null | null | null | 34.465823 | 349 | 0.572572 | [
[
[
"# Multi-Layer Perceptron, MNIST\n---\nIn this notebook, we will train an MLP to classify images from the [MNIST database](http://yann.lecun.com/exdb/mnist/) hand-written digit database.\n\nThe process will be broken down into the following steps:\n>1. Load and visualize the data\n2. Define a neural network\n3. Train the model\n4. Evaluate the performance of our trained model on a test dataset!\n\nBefore we begin, we have to import the necessary libraries for working with data and PyTorch.",
"_____no_output_____"
]
],
[
[
"# import libraries\nimport torch\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"---\n## Load and Visualize the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)\n\nDownloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the `batch_size` if you want to load more data at a time.\n\nThis cell will create DataLoaders for each of our datasets.",
"_____no_output_____"
]
],
[
[
"from torchvision import datasets\nimport torchvision.transforms as transforms\n\n# number of subprocesses to use for data loading\nnum_workers = 0\n# how many samples per batch to load\nbatch_size = 20\n\n# convert data to torch.FloatTensor\ntransform = transforms.ToTensor()\n\n# choose the training and test datasets\ntrain_data = datasets.MNIST(root='data', train=True,\n download=True, transform=transform)\ntest_data = datasets.MNIST(root='data', train=False,\n download=True, transform=transform)\n\n# prepare data loaders\ntrain_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,\n num_workers=num_workers)\ntest_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, \n num_workers=num_workers)",
"_____no_output_____"
]
],
[
[
"### Visualize a Batch of Training Data\n\nThe first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n%matplotlib inline\n \n# obtain one batch of training images\ndataiter = iter(train_loader)\nimages, labels = dataiter.next()\nimages = images.numpy()\n\n# plot the images in the batch, along with the corresponding labels\nfig = plt.figure(figsize=(25, 4))\nfor idx in np.arange(20):\n ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])\n ax.imshow(np.squeeze(images[idx]), cmap='gray')\n # print out the correct label for each image\n # .item() gets the value contained in a Tensor\n ax.set_title(str(labels[idx].item()))",
"_____no_output_____"
]
],
[
[
"### View an Image in More Detail",
"_____no_output_____"
]
],
[
[
"img = np.squeeze(images[1])\n\nfig = plt.figure(figsize = (12,12)) \nax = fig.add_subplot(111)\nax.imshow(img, cmap='gray')\nwidth, height = img.shape\nthresh = img.max()/2.5\nfor x in range(width):\n for y in range(height):\n val = round(img[x][y],2) if img[x][y] !=0 else 0\n ax.annotate(str(val), xy=(y,x),\n horizontalalignment='center',\n verticalalignment='center',\n color='white' if img[x][y]<thresh else 'black')",
"_____no_output_____"
]
],
[
[
"---\n## Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)\n\nThe architecture will be responsible for seeing as input a 784-dim Tensor of pixel values for each image, and producing a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. This particular example uses two hidden layers and dropout to avoid overfitting.",
"_____no_output_____"
]
],
[
[
"import torch.nn as nn\nimport torch.nn.functional as F\n\n## TODO: Define the NN architecture\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n # linear layer (784 -> 1 hidden node)\n self.fc1 = nn.Linear(28 * 28, 1)\n\n def forward(self, x):\n # flatten image input\n x = x.view(-1, 28 * 28)\n # add hidden layer, with relu activation function\n x = F.relu(self.fc1(x))\n return x\n\n# initialize the NN\nmodel = Net()\nprint(model)",
"_____no_output_____"
]
],
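  [
   [
    "One possible completion of the TODO above (a sketch, not the only valid answer): two hidden layers with ReLU activations, dropout between them, and a final layer with 10 outputs, one per digit class, as described in the architecture notes. The layer sizes here are illustrative choices.\n\n```python\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass Net(nn.Module):\n    def __init__(self):\n        super(Net, self).__init__()\n        self.fc1 = nn.Linear(28 * 28, 512)  # first hidden layer\n        self.fc2 = nn.Linear(512, 512)      # second hidden layer\n        self.fc3 = nn.Linear(512, 10)       # 10 output classes\n        self.dropout = nn.Dropout(0.2)      # dropout to reduce overfitting\n\n    def forward(self, x):\n        x = x.view(-1, 28 * 28)             # flatten the image\n        x = self.dropout(F.relu(self.fc1(x)))\n        x = self.dropout(F.relu(self.fc2(x)))\n        x = self.fc3(x)                     # raw class scores, to be paired with cross-entropy loss\n        return x\n\nmodel = Net()\nprint(model)\n```",
    "_____no_output_____"
   ]
  ],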
[
[
"### Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)\n\nIt's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross entropy function applies a softmax funtion to the output layer *and* then calculates the log loss.",
"_____no_output_____"
]
],
[
[
"## TODO: Specify loss and optimization functions\n\n# specify loss function\ncriterion = None\n\n# specify optimizer\noptimizer = None",
"_____no_output_____"
]
],
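  [
   [
    "One reasonable completion of the TODO above (a sketch; other optimizers and learning rates work too). Cross-entropy loss matches the recommendation given earlier, and it expects the raw class scores returned by the network:\n\n```python\nimport torch.nn as nn\nimport torch.optim as optim\n\n# cross-entropy loss for multi-class classification\ncriterion = nn.CrossEntropyLoss()\n\n# stochastic gradient descent over the model's parameters\noptimizer = optim.SGD(model.parameters(), lr=0.01)\n```",
    "_____no_output_____"
   ]
  ],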
[
[
"---\n## Train the Network\n\nThe steps for training/learning from a batch of data are described in the comments below:\n1. Clear the gradients of all optimized variables\n2. Forward pass: compute predicted outputs by passing inputs to the model\n3. Calculate the loss\n4. Backward pass: compute gradient of the loss with respect to model parameters\n5. Perform a single optimization step (parameter update)\n6. Update average training loss\n\nThe following loop trains for 30 epochs; feel free to change this number. For now, we suggest somewhere between 20-50 epochs. As you train, take a look at how the values for the training loss decrease over time. We want it to decrease while also avoiding overfitting the training data. ",
"_____no_output_____"
]
],
[
[
"# number of epochs to train the model\nn_epochs = 30 # suggest training between 20-50 epochs\n\nmodel.train() # prep model for training\n\nfor epoch in range(n_epochs):\n # monitor training loss\n train_loss = 0.0\n \n ###################\n # train the model #\n ###################\n for data, target in train_loader:\n # clear the gradients of all optimized variables\n optimizer.zero_grad()\n # forward pass: compute predicted outputs by passing inputs to the model\n output = model(data)\n # calculate the loss\n loss = criterion(output, target)\n # backward pass: compute gradient of the loss with respect to model parameters\n loss.backward()\n # perform a single optimization step (parameter update)\n optimizer.step()\n # update running training loss\n train_loss += loss.item()*data.size(0)\n \n # print training statistics \n # calculate average loss over an epoch\n train_loss = train_loss/len(train_loader.sampler)\n\n print('Epoch: {} \\tTraining Loss: {:.6f}'.format(\n epoch+1, \n train_loss\n ))",
"_____no_output_____"
]
],
[
[
"---\n## Test the Trained Network\n\nFinally, we test our best model on previously unseen **test data** and evaluate it's performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and take a look at how this model performs on each class as well as looking at its overall loss and accuracy.\n\n#### `model.eval()`\n\n`model.eval(`) will set all the layers in your model to evaluation mode. This affects layers like dropout layers that turn \"off\" nodes during training with some probability, but should allow every node to be \"on\" for evaluation!",
"_____no_output_____"
]
],
[
[
"# initialize lists to monitor test loss and accuracy\ntest_loss = 0.0\nclass_correct = list(0. for i in range(10))\nclass_total = list(0. for i in range(10))\n\nmodel.eval() # prep model for *evaluation*\n\nfor data, target in test_loader:\n # forward pass: compute predicted outputs by passing inputs to the model\n output = model(data)\n # calculate the loss\n loss = criterion(output, target)\n # update test loss \n test_loss += loss.item()*data.size(0)\n # convert output probabilities to predicted class\n _, pred = torch.max(output, 1)\n # compare predictions to true label\n correct = np.squeeze(pred.eq(target.data.view_as(pred)))\n # calculate test accuracy for each object class\n for i in range(len(target)):\n label = target.data[i]\n class_correct[label] += correct[i].item()\n class_total[label] += 1\n\n# calculate and print avg test loss\ntest_loss = test_loss/len(test_loader.sampler)\nprint('Test Loss: {:.6f}\\n'.format(test_loss))\n\nfor i in range(10):\n if class_total[i] > 0:\n print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (\n str(i), 100 * class_correct[i] / class_total[i],\n np.sum(class_correct[i]), np.sum(class_total[i])))\n else:\n print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))\n\nprint('\\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (\n 100. * np.sum(class_correct) / np.sum(class_total),\n np.sum(class_correct), np.sum(class_total)))",
"_____no_output_____"
]
],
[
[
"### Visualize Sample Test Results\n\nThis cell displays test images and their labels in this format: `predicted (ground-truth)`. The text will be green for accurately classified examples and red for incorrect predictions.",
"_____no_output_____"
]
],
[
[
"# obtain one batch of test images\ndataiter = iter(test_loader)\nimages, labels = dataiter.next()\n\n# get sample outputs\noutput = model(images)\n# convert output probabilities to predicted class\n_, preds = torch.max(output, 1)\n# prep images for display\nimages = images.numpy()\n\n# plot the images in the batch, along with predicted and true labels\nfig = plt.figure(figsize=(25, 4))\nfor idx in np.arange(20):\n ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])\n ax.imshow(np.squeeze(images[idx]), cmap='gray')\n ax.set_title(\"{} ({})\".format(str(preds[idx].item()), str(labels[idx].item())),\n color=(\"green\" if preds[idx]==labels[idx] else \"red\"))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7f05fad4919d26b6180caf8d4fe30d375c1c33e | 123,866 | ipynb | Jupyter Notebook | climate_starter.ipynb | nebiatabuhay/sqlalchemy-challenge | 12bddc414d06267d78a6cd33481f20f6cc0e760f | [
"ADSL"
] | null | null | null | climate_starter.ipynb | nebiatabuhay/sqlalchemy-challenge | 12bddc414d06267d78a6cd33481f20f6cc0e760f | [
"ADSL"
] | null | null | null | climate_starter.ipynb | nebiatabuhay/sqlalchemy-challenge | 12bddc414d06267d78a6cd33481f20f6cc0e760f | [
"ADSL"
] | null | null | null | 247.732 | 96,008 | 0.922804 | [
[
[
"%matplotlib inline\nfrom matplotlib import style\nstyle.use('fivethirtyeight')\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport datetime as dt",
"_____no_output_____"
]
],
[
[
"# Reflect Tables into SQLAlchemy ORM",
"_____no_output_____"
]
],
[
[
"# Python SQL toolkit and Object Relational Mapper\nimport sqlalchemy\nfrom sqlalchemy.ext.automap import automap_base\nfrom sqlalchemy.orm import Session\nfrom sqlalchemy import create_engine, func",
"_____no_output_____"
],
[
"# create engine to hawaii.sqlite\nengine = create_engine('sqlite:///Resources/hawaii.sqlite')",
"_____no_output_____"
],
[
"# reflect an existing database into a new model\nBase = automap_base()\n# reflect the tables\nBase.prepare(engine, reflect=True)\n\n",
"_____no_output_____"
],
[
"# View all of the classes that automap found\nBase.classes.keys()",
"_____no_output_____"
],
[
"# Save references to each table\nMeasurement = Base.classes.measurement\nStation = Base.classes.station",
"_____no_output_____"
],
[
"# Create our session (link) from Python to the DB\nsession = Session(engine)",
"_____no_output_____"
]
],
[
[
"# Exploratory Precipitation Analysis",
"_____no_output_____"
]
],
[
[
"# Find the most recent date in the data set.\nmax_date = session.query(func.max(func.strftime(\"%Y-%m-%d\", Measurement.date))).limit(5).all()\nmax_date[0][0]",
"_____no_output_____"
],
[
"# Design a query to retrieve the last 12 months of precipitation data and plot the results. \n# Starting from the most recent data point in the database. \n\n# Calculate the date one year from the last date in data set.\n\n\n# Perform a query to retrieve the data and precipitation scores\n\n\n# Save the query results as a Pandas DataFrame and set the index to the date column\n\n\n# Sort the dataframe by date\n\n\n# Use Pandas Plotting with Matplotlib to plot the data\n\n\nprecipitation_data = session.query(func.strftime(\"%Y-%m-%d\", Measurement.date), Measurement.prcp).\\\n filter(func.strftime(\"%Y-%m-%d\", Measurement.date) >= dt.date(2016, 8, 23)).all()\n\n# Save the query results as a Pandas DataFrame and set the index to the date column\nprecipitation_df = pd.DataFrame(precipitation_data, columns = ['date', 'precipitation'])\n\n#set index\nprecipitation_df.set_index('date', inplace = True)\n\nprecipitation_df = precipitation_df.sort_values(by='date')\nprecipitation_df.head()\n\nfig, ax = plt.subplots(figsize = (20, 10))\nprecipitation_df.plot(ax = ax, x_compat = True)\n\n#title and labels\nax.set_xlabel('Date')\nax.set_ylabel('Precipitation (in.)')\nax.set_title(\"Year Long Precipitation\")\n\nplt.savefig(\"Images/precipitation.png\")\n\n#plot\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
],
[
"# Use Pandas to calcualte the summary statistics for the precipitation data\nprecipitation_df.describe()",
"_____no_output_____"
]
],
[
[
"# Exploratory Station Analysis",
"_____no_output_____"
]
],
[
[
"# Design a query to calculate the total number stations in the dataset\nstations = session.query(Station.id).distinct().count()\nstations",
"_____no_output_____"
],
[
"# Design a query to find the most active stations (i.e. what stations have the most rows?)\n# List the stations and the counts in descending order.\nstation_counts = (session.query(Measurement.station, func.count(Measurement.station))\n .group_by(Measurement.station)\n .order_by(func.count(Measurement.station).desc())\n .all())\nstation_counts",
"_____no_output_____"
],
[
"# Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.\nmost_active_station = 'USC00519281'\ntemps = session.query(func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs)).\\\n filter(Measurement.station == most_active_station).all()\ntemps",
"_____no_output_____"
],
[
"# Using the most active station id\n# Query the last 12 months of temperature observation data for this station and plot the results as a histogram\ntemp_observation = session.query(Measurement.date, Measurement.tobs).filter(Measurement.station == most_active_station).\\\n filter(func.strftime(\"%Y-%m-%d\", Measurement.date) >= dt.date(2016, 8, 23)).all()\n\n#save as a data frame\ntemp_observation_df = pd.DataFrame(temp_observation, columns = ['date', 'temperature'])\n\nfig, ax = plt.subplots()\ntemp_observation_df.plot.hist(bins = 12, ax = ax)\n\n#labels\nax.set_xlabel('Temperature')\nax.set_ylabel('Frequency')\n\n#save figure\nplt.savefig(\"Images/yearly_plot.png\")\n\n#plot\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Close session",
"_____no_output_____"
]
],
[
[
"# Close Session\nsession.close()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7f06add86e505d28616f830616692f0d7d9e24e | 214,261 | ipynb | Jupyter Notebook | lab2/Part1_MNIST.ipynb | AnthonyLapadula/introtodeeplearning | 176a24d6116a52cdfede343b9319a56ce0dd7585 | [
"MIT"
] | null | null | null | lab2/Part1_MNIST.ipynb | AnthonyLapadula/introtodeeplearning | 176a24d6116a52cdfede343b9319a56ce0dd7585 | [
"MIT"
] | null | null | null | lab2/Part1_MNIST.ipynb | AnthonyLapadula/introtodeeplearning | 176a24d6116a52cdfede343b9319a56ce0dd7585 | [
"MIT"
] | null | null | null | 227.212089 | 134,064 | 0.904803 | [
[
[
"<table align=\"center\">\n <td align=\"center\"><a target=\"_blank\" href=\"http://introtodeeplearning.com\">\n <img src=\"https://i.ibb.co/Jr88sn2/mit.png\" style=\"padding-bottom:5px;\" />\n Visit MIT Deep Learning</a></td>\n <td align=\"center\"><a target=\"_blank\" href=\"https://colab.research.google.com/github/aamini/introtodeeplearning/blob/master/lab2/Part1_MNIST.ipynb\">\n <img src=\"https://i.ibb.co/2P3SLwK/colab.png\" style=\"padding-bottom:5px;\" />Run in Google Colab</a></td>\n <td align=\"center\"><a target=\"_blank\" href=\"https://github.com/aamini/introtodeeplearning/blob/master/lab2/Part1_MNIST.ipynb\">\n <img src=\"https://i.ibb.co/xfJbPmL/github.png\" height=\"70px\" style=\"padding-bottom:5px;\" />View Source on GitHub</a></td>\n</table>\n\n# Copyright Information",
"_____no_output_____"
]
],
[
[
"# Copyright 2022 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.\n# \n# Licensed under the MIT License. You may not use this file except in compliance\n# with the License. Use and/or modification of this code outside of 6.S191 must\n# reference:\n#\n# © MIT 6.S191: Introduction to Deep Learning\n# http://introtodeeplearning.com\n#",
"_____no_output_____"
]
],
[
[
"# Laboratory 2: Computer Vision\n\n# Part 1: MNIST Digit Classification\n\nIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.\n\nFirst, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.",
"_____no_output_____"
]
],
[
[
"# Import Tensorflow 2.0\n#%tensorflow_version 2.x\nimport tensorflow as tf \n\n#!pip install mitdeeplearning\nimport mitdeeplearning as mdl\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport random\nfrom tqdm import tqdm\n\n# Check that we are using a GPU, if not switch runtimes\n# using Runtime > Change Runtime Type > GPU\nassert len(tf.config.list_physical_devices('GPU')) > 0",
"_____no_output_____"
]
],
[
[
"## 1.1 MNIST dataset \n\nLet's download and load the dataset and display a few random samples from it:",
"_____no_output_____"
]
],
[
[
"mnist = tf.keras.datasets.mnist\n(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\ntrain_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)\ntrain_labels = (train_labels).astype(np.int64)\ntest_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)\ntest_labels = (test_labels).astype(np.int64)",
"_____no_output_____"
]
],
[
[
"Our training set is made up of 28x28 grayscale images of handwritten digits. \n\nLet's visualize what some of these images and their corresponding training labels look like.",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(10,10))\nrandom_inds = np.random.choice(60000,36)\nfor i in range(36):\n plt.subplot(6,6,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n image_ind = random_inds[i]\n plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)\n plt.xlabel(train_labels[image_ind])",
"_____no_output_____"
]
],
[
[
"## 1.2 Neural Network for Handwritten Digit Classification\n\nWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:\n\n\n",
"_____no_output_____"
],
[
"### Fully connected neural network architecture\nTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. \n\nIn this next block, you'll define the fully connected layers of this simple work.",
"_____no_output_____"
]
],
[
[
"def build_fc_model():\n fc_model = tf.keras.Sequential([\n # First define a Flatten layer\n tf.keras.layers.Flatten(),\n\n # '''TODO: Define the activation function for the first fully connected (Dense) layer.'''\n tf.keras.layers.Dense(128, activation=tf.nn.relu),\n\n # '''TODO: Define the second Dense layer to output the classification probabilities'''\n #'''TODO: Dense layer to output classification probabilities'''\n tf.keras.layers.Dense(128, activation=tf.nn.softmax)\n \n ])\n return fc_model\n\nmodel = build_fc_model()",
"_____no_output_____"
]
],
[
[
"As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.**",
"_____no_output_____"
],
[
"Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.\n\nAfter the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.\n\nThat defines our fully connected model! ",
"_____no_output_____"
],
[
"\n\n### Compile the model\n\nBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:\n\n* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will \"steer\" the model in the right direction.\n* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.\n* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.\n\nWe'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).\n\nYou'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model. ",
"_____no_output_____"
]
],
[
[
"'''TODO: Experiment with different optimizers and learning rates. How do these affect\n the accuracy of the trained model? Which optimizers and/or learning rates yield\n the best performance?'''\nmodel.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1), \n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"### Train the model\n\nWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. \n\nIn Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model\n",
"_____no_output_____"
]
],
[
[
"# Define the batch size and the number of epochs to use during training\nBATCH_SIZE = 64\nEPOCHS = 5\n\nmodel.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)",
"Epoch 1/5\n938/938 [==============================] - 5s 6ms/step - loss: 0.4299 - accuracy: 0.8817\nEpoch 2/5\n938/938 [==============================] - 5s 5ms/step - loss: 0.2194 - accuracy: 0.9376\nEpoch 3/5\n938/938 [==============================] - 5s 5ms/step - loss: 0.1639 - accuracy: 0.9537\nEpoch 4/5\n938/938 [==============================] - 5s 5ms/step - loss: 0.1322 - accuracy: 0.9625\nEpoch 5/5\n938/938 [==============================] - 5s 5ms/step - loss: 0.1107 - accuracy: 0.9682\n"
]
],
[
[
"As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.01, this fully connected model should achieve an accuracy of approximatley 0.97 (or 97%) on the training data.",
"_____no_output_____"
],
[
"### Evaluate accuracy on the test dataset\n\nNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. \n\nUse the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!",
"_____no_output_____"
]
],
[
[
"'''TODO: Use the evaluate method to test the model!'''\ntest_loss, test_acc = model.evaluate(\n x=test_images,\n y=test_labels,\n batch_size=BATCH_SIZE)#,\n #verbose=1,\n #sample_weight=None,\n #steps=None,\n #callbacks=None,\n #max_queue_size=10,\n #workers=1,\n #use_multiprocessing=False,\n #return_dict=False,\n #**kwargs\n#)\n\nprint('Test accuracy:', test_acc)",
"157/157 [==============================] - 1s 5ms/step - loss: 0.1066 - accuracy: 0.9694\nTest accuracy: 0.9693999886512756\n"
]
],
[
[
"You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. \n\nWhat is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...\n\n",
"_____no_output_____"
],
[
"## 1.3 Convolutional Neural Network (CNN) for handwritten digit classification",
"_____no_output_____"
],
[
"As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:\n\n",
"_____no_output_____"
],
[
"### Define the CNN model\n\nWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.",
"_____no_output_____"
]
],
[
[
"def build_cnn_model():\n cnn_model = tf.keras.Sequential([\n\n # TODO: Define the first convolutional layer\n tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu), \n\n # TODO: Define the first max pooling layer\n ##tf.keras.layers.MaxPool2D('''TODO'''),\n tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),\n\n # TODO: Define the second convolutional layer\n ##tf.keras.layers.Conv2D('''TODO'''),\n tf.keras.layers.Conv2D(filters=36, kernel_size=(3,3), activation=tf.nn.relu), \n\n # TODO: Define the second max pooling layer\n ##tf.keras.layers.MaxPool2D('''TODO'''),\n tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),\n\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation=tf.nn.relu),\n\n # TODO: Define the last Dense layer to output the classification \n # probabilities. Pay attention to the activation needed a probability\n # output\n #'''TODO: Dense layer to output classification probabilities'''\n tf.keras.layers.Dense(10, activation=tf.keras.activations.softmax)\n\n ])\n \n return cnn_model\n \ncnn_model = build_cnn_model()\n# Initialize the model by passing some data through\ncnn_model.predict(train_images[[0]])\n# Print the summary of the layers in the model.\nprint(cnn_model.summary())",
"2022-03-28 14:34:43.418149: I tensorflow/stream_executor/cuda/cuda_dnn.cc:368] Loaded cuDNN version 8303\n"
]
],
[
[
"### Train and test the CNN model\n\nNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:",
"_____no_output_____"
]
],
[
[
"'''TODO: Define the compile operation with your optimizer and learning rate of choice'''\ncnn_model.compile(optimizer=tf.keras.optimizers.Adam(), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO",
"_____no_output_____"
]
],
[
[
"As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.",
"_____no_output_____"
]
],
[
[
"'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''\ncnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)",
"Epoch 1/5\n938/938 [==============================] - 7s 7ms/step - loss: 0.1806 - accuracy: 0.9467\nEpoch 2/5\n938/938 [==============================] - 6s 7ms/step - loss: 0.0578 - accuracy: 0.9819\nEpoch 3/5\n938/938 [==============================] - 6s 7ms/step - loss: 0.0395 - accuracy: 0.9878\nEpoch 4/5\n938/938 [==============================] - 7s 7ms/step - loss: 0.0300 - accuracy: 0.9906\nEpoch 5/5\n938/938 [==============================] - 7s 7ms/step - loss: 0.0232 - accuracy: 0.9924\n"
]
],
[
[
"Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:",
"_____no_output_____"
]
],
[
[
"'''TODO: Use the evaluate method to test the model!'''\ntest_loss, test_acc = model.evaluate(\n x=test_images,\n y=test_labels,\n batch_size=BATCH_SIZE)\n\nprint('Test accuracy:', test_acc)",
"157/157 [==============================] - 1s 5ms/step - loss: 0.1066 - accuracy: 0.9694\nTest accuracy: 0.9693999886512756\n"
]
],
[
[
"What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? ",
"_____no_output_____"
],
[
"### Make predictions with the CNN model\n\nWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.\n",
"_____no_output_____"
]
],
[
[
"predictions = cnn_model.predict(test_images)",
"_____no_output_____"
]
],
[
[
"With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:",
"_____no_output_____"
]
],
[
[
"predictions[0]",
"_____no_output_____"
]
],
[
[
"As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's \"confidence\" that the image corresponds to each of the 10 different digits. \n\nLet's look at the digit that has the highest confidence for the first image in the test dataset:",
"_____no_output_____"
]
],
[
[
"'''TODO: identify the digit with the highest confidence prediction for the first\n image in the test dataset. '''\nprediction = np.argmax(predictions[0])\n\nprint(prediction)",
"7\n"
]
],
[
[
"So, the model is most confident that this image is a \"???\". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:",
"_____no_output_____"
]
],
[
[
"print(\"Label of this digit is:\", test_labels[0])\nplt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)",
"Label of this digit is: 7\n"
]
],
[
[
"It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:",
"_____no_output_____"
]
],
[
[
"#@title Change the slider to look at the model's predictions! { run: \"auto\" }\n\nimage_index = 79 #@param {type:\"slider\", min:0, max:100, step:1}\nplt.subplot(1,2,1)\nmdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)\nplt.subplot(1,2,2)\nmdl.lab2.plot_value_prediction(image_index, predictions, test_labels)",
"_____no_output_____"
]
],
[
[
"We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are grey. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!",
"_____no_output_____"
]
],
[
[
"# Plots the first X test images, their predicted label, and the true label\n# Color correct predictions in blue, incorrect predictions in red\nnum_rows = 5\nnum_cols = 4\nnum_images = num_rows*num_cols\nplt.figure(figsize=(2*2*num_cols, 2*num_rows))\nfor i in range(num_images):\n plt.subplot(num_rows, 2*num_cols, 2*i+1)\n mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)\n plt.subplot(num_rows, 2*num_cols, 2*i+2)\n mdl.lab2.plot_value_prediction(i, predictions, test_labels)\n",
"_____no_output_____"
]
],
[
[
"## 1.4 Training the model 2.0\n\nEarlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over training model, which could be useful in other contexts. \n\nAs an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.\n\nWe'll use this framework to train our `cnn_model` using stochastic gradient descent.",
"_____no_output_____"
]
],
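Before the full training loop in the next cell, here is a minimal sketch of the `tf.GradientTape` pattern in isolation. The function and variable names are illustrative assumptions rather than part of the lab code; it assumes a Keras model that outputs class probabilities, a batch of inputs `x`, integer labels `y`, and an optimizer.

```python
import tensorflow as tf

def train_step(model, optimizer, x, y):
    # Record the forward pass so gradients can be taken afterwards
    with tf.GradientTape() as tape:
        probs = model(x)  # forward pass: per-class probabilities
        loss = tf.keras.backend.sparse_categorical_crossentropy(y, probs)
    # Gradients of the loss with respect to every trainable weight seen by the tape
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return tf.reduce_mean(loss)
```

The full loop below follows the same steps, with batching, loss smoothing, and live plotting added around them.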
[
[
"# Rebuild the CNN model\ncnn_model = build_cnn_model()\n\nbatch_size = 12\nloss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss\nplotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')\noptimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer\n\nif hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists\n\nfor idx in tqdm(range(0, train_images.shape[0], batch_size)):\n # First grab a batch of training data and convert the input images to tensors\n (images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])\n images = tf.convert_to_tensor(images, dtype=tf.float32)\n\n # GradientTape to record differentiation operations\n with tf.GradientTape() as tape:\n #'''TODO: feed the images into the model and obtain the predictions'''\n logits = # TODO\n\n #'''TODO: compute the categorical cross entropy loss\n loss_value = tf.keras.backend.sparse_categorical_crossentropy() # TODO\n\n loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record\n plotter.plot(loss_history.get())\n\n # Backpropagation\n '''TODO: Use the tape to compute the gradient against all parameters in the CNN model.\n Use cnn_model.trainable_variables to access these parameters.''' \n grads = # TODO\n optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))\n",
"_____no_output_____"
]
],
[
[
"## 1.5 Conclusion\nIn this part of the lab, you had the chance to play with different MNIST classifiers with different architectures (fully-connected layers only, CNN), and experiment with how different hyperparameters affect accuracy (learning rate, etc.). The next part of the lab explores another application of CNNs, facial detection, and some drawbacks of AI systems in real world applications, like issues of bias. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7f070148de11f160907a3a5d4b8b71ca6b18e1a | 223,817 | ipynb | Jupyter Notebook | cnn/accent_classification/letters/v1_0/cnn.ipynb | lkrsnik/accetuation | 02724147f88aa034487c7922eb75e0fc321aa93f | [
"MIT"
] | null | null | null | cnn/accent_classification/letters/v1_0/cnn.ipynb | lkrsnik/accetuation | 02724147f88aa034487c7922eb75e0fc321aa93f | [
"MIT"
] | null | null | null | cnn/accent_classification/letters/v1_0/cnn.ipynb | lkrsnik/accetuation | 02724147f88aa034487c7922eb75e0fc321aa93f | [
"MIT"
] | null | null | null | 78.642656 | 294 | 0.519255 | [
[
[
"# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n# text in Western (Windows 1252)\n\nimport pickle\nimport numpy as np\n# import StringIO\nimport math\nfrom keras import optimizers, metrics\nfrom keras.models import Model\nfrom keras.layers import Dense, Dropout, Input\nfrom keras.layers.merge import concatenate\nfrom keras import regularizers\nfrom keras.layers.convolutional import Conv1D\nfrom keras.layers.convolutional import MaxPooling1D\nfrom keras.constraints import maxnorm\nfrom keras.layers import Flatten\nfrom keras.optimizers import SGD\nfrom keras.models import load_model\n# from keras import backend as Input\nnp.random.seed(7)",
"Using Theano backend.\n"
],
[
"# %run ../../../prepare_data.py\n\nimport sys\nsys.path.insert(0, '../../../')\nfrom prepare_data import *",
"_____no_output_____"
],
[
"# %run ../../../prepare_data.py\n# X_train, X_other_features_train, y_train, X_test, X_other_features_test, y_test, X_validate, X_other_features_validate, y_validate = generate_full_matrix_inputs('../../internal_representations/inputs/content_shuffle_vector.h5', '../../internal_representations/inputs/shuffle_vector')\n# save_inputs('../../internal_representations/inputs/shuffled_letters_train.h5', X_train, y_train, other_features = X_other_features_train)\n# save_inputs('../../internal_representations/inputs/shuffled_letters_test.h5', X_test, y_test, other_features = X_other_features_test)\n# save_inputs('../../internal_representations/inputs/shuffled_letters_validate.h5', X_validate, y_validate, other_features = X_other_features_validate)\n# X_train, X_other_features_train, y_train = load_inputs('../../internal_representations/inputs/shuffled_letters_train.h5', other_features=True)\n# X_test, X_other_features_test, y_test = load_inputs('../../internal_representations/inputs/shuffled_letters_test.h5', other_features=True)\n# X_validate, X_other_features_validate, y_validate = load_inputs('../../internal_representations/inputs/shuffled_letters_validate.h5', other_features=True)\n\ndata = Data('l', accent_classification=True)\ndata.generate_data('letters_accent_classification_train',\n 'letters_accent_classification_test',\n 'letters_accent_classification_validate', force_override=False)",
"LOADING DATA...\nLOAD SUCCESSFUL!\n"
],
[
"gen2 = data.generator('train', 16)\n# test1 = next(gen1)\n# test2 = next(gen2)",
"_____no_output_____"
],
[
"test2 = next(gen2)",
"_____no_output_____"
],
[
"test2 = next(gen2)\npos = 0\nprint(len(feature_dictionary))\nprint(np.array(test2[0][0]).shape)\nprint(np.array(test2[0][1]).shape)\nprint(len(test2[0]))\nprint(len(test2))\nfor el in test2[0][0]:\n# print(el)\n print (data.decode_x(el, dictionary))\n# print(data.decode_x_other_features(feature_dictionary, [el]))\n\n# for el in test2[1]:\n# print(el)",
"BBB\n10\n(16, 23, 36)\n(16, 150)\n2\n2\ninvitagen\niremirp\niksneko\njnavolimop\nmišjentsačilevjan\nmišjentsačilevjan\nhišjenbordopjan\nhišjenbordopjan\nenitesjavd\namonobarab\nenejlugo\namtsonlanoiseforp\nanavokilboen\nanavokilboen\nebžalot\ninvejark\n"
],
[
"def count_vowels(content, accented_vowels):\n num_all_vowels = 0\n for el in content:\n for m in list(el[3]):\n if m in accented_vowels:\n num_all_vowels += 1\n return num_all_vowels\n\ncount_vowels(content, accented_vowels)",
"_____no_output_____"
],
[
"# print (X_train.shape)\n# print (X_test.shape)\n# print (X_validate.shape)\npos = 0\n# print (decode_input(X_train[pos], dictionary))\n# print (decode_X_features(feature_dictionary, [X_other_features_train[pos]]))\n# print(decode_position(y_train[pos]))\n# print('------------------------------------')\nprint (data.x_train.shape)\nprint (data.x_test.shape)\nprint (data.x_validate.shape)\n# pos = 2\nprint (data.decode_x(data.x_train[pos], dictionary))\nprint (data.decode_x_other_features(feature_dictionary, [data.x_other_features_train[pos]]))\nprint(data.decode_y(data.y_train[pos]))\ndata.y_train[pos]",
"(430151, 23, 36)\n(52058, 23, 36)\n(54222, 23, 36)\našjenlautkajan\nAgsnpn-\nAgsnpn-\n[2, 5]\n"
],
[
"# num_examples = count_vowels(content, accented_vowels) # training set size\nnum_examples = 592886\nnn_output_dim = 13\nnn_hdim = 516\nbatch_size = 16\nactual_epoch = 60\nnum_fake_epoch = 20\n\n\n# num_examples = len(data.x_train) # training set size\n# nn_output_dim = 11\n# nn_hdim = 516\n# batch_size = 16\n# actual_epoch = 1\n# num_fake_epoch = 20\n\n\n\nconv_input_shape=(23, 36)\nothr_input = (150, )\n\nconv_input = Input(shape=conv_input_shape, name='conv_input')\nx_conv = Conv1D(69, (3), padding='same', activation='relu')(conv_input)\nx_conv = Conv1D(43, (3), padding='same', activation='relu')(x_conv)\nx_conv = MaxPooling1D(pool_size=2)(x_conv)\nx_conv = Flatten()(x_conv)\n# x_conv = Dense(516, activation='relu', kernel_constraint=maxnorm(3))(x_conv)\n\nothr_input = Input(shape=othr_input, name='othr_input')\n# x_othr = Dense(256, input_dim=167, activation='relu')(othr_input)\n# x_othr = Dropout(0.3)(x_othr)\n# x_othr = Dense(512, activation='relu')(othr_input)\n# x_othr = Dropout(0.3)(x_othr)\n# x_othr = Dense(256, activation='relu')(othr_input)\n\nx = concatenate([x_conv, othr_input])\n# x = Dense(1024, input_dim=(516 + 256), activation='relu')(x)\nx = Dense(256, activation='relu')(x)\nx = Dropout(0.3)(x)\nx = Dense(512, activation='relu')(x)\nx = Dropout(0.3)(x)\nx = Dense(256, activation='relu')(x)\nx = Dropout(0.2)(x)\nx = Dense(nn_output_dim, activation='sigmoid')(x)\n\n\n\n\nmodel = Model(inputs=[conv_input, othr_input], outputs=x)\nopt = optimizers.Adam(lr=1E-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08)\nmodel.compile(loss='binary_crossentropy', optimizer=opt, metrics=[actual_accuracy,])\n# model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])",
"INFO (theano.gof.compilelock): Waiting for existing lock by unknown process (I am process '1158')\nINFO (theano.gof.compilelock): To manually release the lock, delete /home/lukakrsnik/.theano/compiledir_Linux-4.4--generic-x86_64-with-debian-stretch-sid-x86_64-3.5.2-64/lock_dir\n"
],
[
"history = model.fit_generator(data.generator('train', batch_size), \n num_examples/(batch_size * num_fake_epoch), \n epochs=actual_epoch*num_fake_epoch, \n validation_data=data.generator('test', batch_size), \n validation_steps=num_examples/(batch_size * num_fake_epoch))",
"Epoch 1/1200\n1853/1852 [==============================] - 58s - loss: 0.2146 - actual_accuracy: 0.0882 - val_loss: 0.1655 - val_actual_accuracy: 0.2487\nEpoch 2/1200\n1853/1852 [==============================] - 49s - loss: 0.1476 - actual_accuracy: 0.3643 - val_loss: 0.1162 - val_actual_accuracy: 0.5555\nEpoch 3/1200\n1853/1852 [==============================] - 63s - loss: 0.1066 - actual_accuracy: 0.5835 - val_loss: 0.0803 - val_actual_accuracy: 0.7170\nEpoch 4/1200\n1853/1852 [==============================] - 56s - loss: 0.0769 - actual_accuracy: 0.7316 - val_loss: 0.0569 - val_actual_accuracy: 0.8231\nEpoch 5/1200\n1853/1852 [==============================] - 55s - loss: 0.0610 - actual_accuracy: 0.8010 - val_loss: 0.0448 - val_actual_accuracy: 0.8683\nEpoch 6/1200\n1853/1852 [==============================] - 64s - loss: 0.0508 - actual_accuracy: 0.8404 - val_loss: 0.0396 - val_actual_accuracy: 0.8841\nEpoch 7/1200\n1853/1852 [==============================] - 62s - loss: 0.0464 - actual_accuracy: 0.8561 - val_loss: 0.0340 - val_actual_accuracy: 0.9053\nEpoch 8/1200\n1853/1852 [==============================] - 55s - loss: 0.0404 - actual_accuracy: 0.8767 - val_loss: 0.0313 - val_actual_accuracy: 0.9138\nEpoch 9/1200\n1853/1852 [==============================] - 51s - loss: 0.0366 - actual_accuracy: 0.8850 - val_loss: 0.0307 - val_actual_accuracy: 0.9120\nEpoch 10/1200\n1853/1852 [==============================] - 56s - loss: 0.0337 - actual_accuracy: 0.8963 - val_loss: 0.0277 - val_actual_accuracy: 0.9254\nEpoch 11/1200\n1853/1852 [==============================] - 53s - loss: 0.0322 - actual_accuracy: 0.9015 - val_loss: 0.0269 - val_actual_accuracy: 0.9238\nEpoch 12/1200\n1853/1852 [==============================] - 60s - loss: 0.0292 - actual_accuracy: 0.9072 - val_loss: 0.0260 - val_actual_accuracy: 0.9308\nEpoch 13/1200\n1853/1852 [==============================] - 63s - loss: 0.0272 - actual_accuracy: 0.9161 - val_loss: 0.0247 - val_actual_accuracy: 0.9338\nEpoch 14/1200\n1853/1852 [==============================] - 64s - loss: 0.0265 - actual_accuracy: 0.9200 - val_loss: 0.0243 - val_actual_accuracy: 0.9360\nEpoch 15/1200\n1853/1852 [==============================] - 55s - loss: 0.0245 - actual_accuracy: 0.9239 - val_loss: 0.0230 - val_actual_accuracy: 0.9375\nEpoch 16/1200\n1853/1852 [==============================] - 56s - loss: 0.0238 - actual_accuracy: 0.9269 - val_loss: 0.0230 - val_actual_accuracy: 0.9393\nEpoch 17/1200\n1853/1852 [==============================] - 57s - loss: 0.0227 - actual_accuracy: 0.9319 - val_loss: 0.0230 - val_actual_accuracy: 0.9402.\nEpoch 18/1200\n1853/1852 [==============================] - 54s - loss: 0.0210 - actual_accuracy: 0.9353 - val_loss: 0.0225 - val_actual_accuracy: 0.9430\nEpoch 19/1200\n1853/1852 [==============================] - 52s - loss: 0.0200 - actual_accuracy: 0.9389 - val_loss: 0.0223 - val_actual_accuracy: 0.9410\nEpoch 20/1200\n1853/1852 [==============================] - 58s - loss: 0.0201 - actual_accuracy: 0.9393 - val_loss: 0.0209 - val_actual_accuracy: 0.9448\nEpoch 21/1200\n1853/1852 [==============================] - 55s - loss: 0.0183 - actual_accuracy: 0.9432 - val_loss: 0.0240 - val_actual_accuracy: 0.9395\nEpoch 22/1200\n1853/1852 [==============================] - 52s - loss: 0.0189 - actual_accuracy: 0.9417 - val_loss: 0.0222 - val_actual_accuracy: 0.9446\nEpoch 23/1200\n1853/1852 [==============================] - 53s - loss: 0.0176 - actual_accuracy: 0.9458 - val_loss: 0.0244 - val_actual_accuracy: 
0.9328\nEpoch 24/1200\n1853/1852 [==============================] - 54s - loss: 0.0168 - actual_accuracy: 0.9486 - val_loss: 0.0225 - val_actual_accuracy: 0.9432\nEpoch 25/1200\n1853/1852 [==============================] - 57s - loss: 0.0165 - actual_accuracy: 0.9511 - val_loss: 0.0212 - val_actual_accuracy: 0.9453\nEpoch 26/1200\n1853/1852 [==============================] - 57s - loss: 0.0155 - actual_accuracy: 0.9525 - val_loss: 0.0217 - val_actual_accuracy: 0.9470\nEpoch 27/1200\n1853/1852 [==============================] - 55s - loss: 0.0155 - actual_accuracy: 0.9523 - val_loss: 0.0210 - val_actual_accuracy: 0.9492\nEpoch 28/1200\n1853/1852 [==============================] - 56s - loss: 0.0151 - actual_accuracy: 0.9546 - val_loss: 0.0226 - val_actual_accuracy: 0.9402\nEpoch 29/1200\n1853/1852 [==============================] - 57s - loss: 0.0143 - actual_accuracy: 0.9565 - val_loss: 0.0215 - val_actual_accuracy: 0.9515\nEpoch 30/1200\n1853/1852 [==============================] - 57s - loss: 0.0140 - actual_accuracy: 0.9564 - val_loss: 0.0232 - val_actual_accuracy: 0.9465\nEpoch 31/1200\n1853/1852 [==============================] - 55s - loss: 0.0137 - actual_accuracy: 0.9592 - val_loss: 0.0219 - val_actual_accuracy: 0.9482\nEpoch 32/1200\n1853/1852 [==============================] - 60s - loss: 0.0129 - actual_accuracy: 0.9607 - val_loss: 0.0225 - val_actual_accuracy: 0.9471\nEpoch 33/1200\n1853/1852 [==============================] - 59s - loss: 0.0131 - actual_accuracy: 0.9614 - val_loss: 0.0222 - val_actual_accuracy: 0.9533\nEpoch 34/1200\n1853/1852 [==============================] - 57s - loss: 0.0123 - actual_accuracy: 0.9629 - val_loss: 0.0214 - val_actual_accuracy: 0.9498\nEpoch 35/1200\n1853/1852 [==============================] - 57s - loss: 0.0124 - actual_accuracy: 0.9632 - val_loss: 0.0222 - val_actual_accuracy: 0.9491\nEpoch 36/1200\n1853/1852 [==============================] - 56s - loss: 0.0117 - actual_accuracy: 0.9649 - val_loss: 0.0235 - val_actual_accuracy: 0.9453\nEpoch 37/1200\n1853/1852 [==============================] - 57s - loss: 0.0110 - actual_accuracy: 0.9667 - val_loss: 0.0256 - val_actual_accuracy: 0.9442\nEpoch 38/1200\n1853/1852 [==============================] - 59s - loss: 0.0119 - actual_accuracy: 0.9648 - val_loss: 0.0220 - val_actual_accuracy: 0.9510\nEpoch 39/1200\n1853/1852 [==============================] - 63s - loss: 0.0114 - actual_accuracy: 0.9658 - val_loss: 0.0249 - val_actual_accuracy: 0.9557\nEpoch 40/1200\n1853/1852 [==============================] - 67s - loss: 0.0107 - actual_accuracy: 0.9684 - val_loss: 0.0210 - val_actual_accuracy: 0.9521\nEpoch 41/1200\n1853/1852 [==============================] - 57s - loss: 0.0107 - actual_accuracy: 0.9684 - val_loss: 0.0250 - val_actual_accuracy: 0.9487\nEpoch 42/1200\n1853/1852 [==============================] - 59s - loss: 0.0105 - actual_accuracy: 0.9689 - val_loss: 0.0223 - val_actual_accuracy: 0.9526\nEpoch 43/1200\n1853/1852 [==============================] - 61s - loss: 0.0106 - actual_accuracy: 0.9695 - val_loss: 0.0226 - val_actual_accuracy: 0.9527.969 - ETA: 0s - loss: 0.0106 - actual_accura\nEpoch 44/1200\n1853/1852 [==============================] - 59s - loss: 0.0099 - actual_accuracy: 0.9701 - val_loss: 0.0226 - val_actual_accuracy: 0.9549\nEpoch 45/1200\n1853/1852 [==============================] - 59s - loss: 0.0098 - actual_accuracy: 0.9725 - val_loss: 0.0213 - val_actual_accuracy: 0.9572\nEpoch 46/1200\n1853/1852 [==============================] - 57s - loss: 0.0095 - 
actual_accuracy: 0.9720 - val_loss: 0.0243 - val_actual_accuracy: 0.9482\nEpoch 47/1200\n1853/1852 [==============================] - 58s - loss: 0.0100 - actual_accuracy: 0.9697 - val_loss: 0.0200 - val_actual_accuracy: 0.9560\nEpoch 48/1200\n1853/1852 [==============================] - 60s - loss: 0.0088 - actual_accuracy: 0.9741 - val_loss: 0.0232 - val_actual_accuracy: 0.9566\nEpoch 49/1200\n1853/1852 [==============================] - 54s - loss: 0.0092 - actual_accuracy: 0.9735 - val_loss: 0.0225 - val_actual_accuracy: 0.9542\nEpoch 50/1200\n1853/1852 [==============================] - 61s - loss: 0.0085 - actual_accuracy: 0.9742 - val_loss: 0.0217 - val_actual_accuracy: 0.9554\nEpoch 51/1200\n1853/1852 [==============================] - 54s - loss: 0.0085 - actual_accuracy: 0.9748 - val_loss: 0.0220 - val_actual_accuracy: 0.9572\nEpoch 52/1200\n1853/1852 [==============================] - 65s - loss: 0.0083 - actual_accuracy: 0.9753 - val_loss: 0.0264 - val_actual_accuracy: 0.9494\nEpoch 53/1200\n1853/1852 [==============================] - 56s - loss: 0.0080 - actual_accuracy: 0.9753 - val_loss: 0.0266 - val_actual_accuracy: 0.9458\nEpoch 54/1200\n1853/1852 [==============================] - 60s - loss: 0.0088 - actual_accuracy: 0.9749 - val_loss: 0.0244 - val_actual_accuracy: 0.9544\nEpoch 55/1200\n1853/1852 [==============================] - 60s - loss: 0.0085 - actual_accuracy: 0.9756 - val_loss: 0.0229 - val_actual_accuracy: 0.9529\nEpoch 56/1200\n1853/1852 [==============================] - 56s - loss: 0.0080 - actual_accuracy: 0.9772 - val_loss: 0.0244 - val_actual_accuracy: 0.9557\nEpoch 57/1200\n1853/1852 [==============================] - 60s - loss: 0.0079 - actual_accuracy: 0.9772 - val_loss: 0.0236 - val_actual_accuracy: 0.9556\nEpoch 58/1200\n1853/1852 [==============================] - 59s - loss: 0.0082 - actual_accuracy: 0.9760 - val_loss: 0.0233 - val_actual_accuracy: 0.9542\nEpoch 59/1200\n1853/1852 [==============================] - 63s - loss: 0.0076 - actual_accuracy: 0.9779 - val_loss: 0.0247 - val_actual_accuracy: 0.9538\nEpoch 60/1200\n1853/1852 [==============================] - 62s - loss: 0.0074 - actual_accuracy: 0.9789 - val_loss: 0.0219 - val_actual_accuracy: 0.9574\nEpoch 61/1200\n1853/1852 [==============================] - 62s - loss: 0.0077 - actual_accuracy: 0.9782 - val_loss: 0.0255 - val_actual_accuracy: 0.9570\nEpoch 62/1200\n1853/1852 [==============================] - 58s - loss: 0.0071 - actual_accuracy: 0.9791 - val_loss: 0.0221 - val_actual_accuracy: 0.9576\nEpoch 63/1200\n1853/1852 [==============================] - 57s - loss: 0.0075 - actual_accuracy: 0.9774 - val_loss: 0.0257 - val_actual_accuracy: 0.9530\nEpoch 64/1200\n1853/1852 [==============================] - 60s - loss: 0.0074 - actual_accuracy: 0.9789 - val_loss: 0.0253 - val_actual_accuracy: 0.9527\nEpoch 65/1200\n1853/1852 [==============================] - 58s - loss: 0.0072 - actual_accuracy: 0.9787 - val_loss: 0.0257 - val_actual_accuracy: 0.9566\nEpoch 66/1200\n1853/1852 [==============================] - 57s - loss: 0.0073 - actual_accuracy: 0.9795 - val_loss: 0.0241 - val_actual_accuracy: 0.9570\nEpoch 67/1200\n1853/1852 [==============================] - 55s - loss: 0.0067 - actual_accuracy: 0.9806 - val_loss: 0.0252 - val_actual_accuracy: 0.9596\nEpoch 68/1200\n1853/1852 [==============================] - 56s - loss: 0.0066 - actual_accuracy: 0.9809 - val_loss: 0.0263 - val_actual_accuracy: 0.9554\nEpoch 69/1200\n1853/1852 [==============================] - 58s - 
loss: 0.0066 - actual_accuracy: 0.9799 - val_loss: 0.0270 - val_actual_accuracy: 0.9548\nEpoch 70/1200\n1853/1852 [==============================] - 65s - loss: 0.0071 - actual_accuracy: 0.9806 - val_loss: 0.0270 - val_actual_accuracy: 0.9561\nEpoch 71/1200\n1853/1852 [==============================] - 59s - loss: 0.0069 - actual_accuracy: 0.9804 - val_loss: 0.0227 - val_actual_accuracy: 0.9608\nEpoch 72/1200\n1853/1852 [==============================] - 58s - loss: 0.0065 - actual_accuracy: 0.9812 - val_loss: 0.0265 - val_actual_accuracy: 0.9540\nEpoch 73/1200\n1853/1852 [==============================] - 57s - loss: 0.0062 - actual_accuracy: 0.9817 - val_loss: 0.0247 - val_actual_accuracy: 0.9570\nEpoch 74/1200\n1853/1852 [==============================] - 58s - loss: 0.0064 - actual_accuracy: 0.9818 - val_loss: 0.0279 - val_actual_accuracy: 0.9540\nEpoch 75/1200\n1853/1852 [==============================] - 60s - loss: 0.0063 - actual_accuracy: 0.9809 - val_loss: 0.0224 - val_actual_accuracy: 0.9631\nEpoch 76/1200\n1853/1852 [==============================] - 59s - loss: 0.0066 - actual_accuracy: 0.9813 - val_loss: 0.0256 - val_actual_accuracy: 0.9587\nEpoch 77/1200\n1853/1852 [==============================] - 60s - loss: 0.0065 - actual_accuracy: 0.9807 - val_loss: 0.0245 - val_actual_accuracy: 0.9599\nEpoch 78/1200\n1853/1852 [==============================] - 60s - loss: 0.0061 - actual_accuracy: 0.9819 - val_loss: 0.0264 - val_actual_accuracy: 0.9608\nEpoch 79/1200\n1853/1852 [==============================] - 61s - loss: 0.0059 - actual_accuracy: 0.9830 - val_loss: 0.0254 - val_actual_accuracy: 0.9596\nEpoch 80/1200\n1853/1852 [==============================] - 60s - loss: 0.0059 - actual_accuracy: 0.9827 - val_loss: 0.0273 - val_actual_accuracy: 0.9548\nEpoch 81/1200\n1853/1852 [==============================] - 58s - loss: 0.0061 - actual_accuracy: 0.9823 - val_loss: 0.0263 - val_actual_accuracy: 0.9580\nEpoch 82/1200\n1853/1852 [==============================] - 57s - loss: 0.0062 - actual_accuracy: 0.9821 - val_loss: 0.0240 - val_actual_accuracy: 0.9607\nEpoch 83/1200\n1853/1852 [==============================] - 60s - loss: 0.0058 - actual_accuracy: 0.9833 - val_loss: 0.0260 - val_actual_accuracy: 0.9602\nEpoch 84/1200\n1853/1852 [==============================] - 61s - loss: 0.0053 - actual_accuracy: 0.9838 - val_loss: 0.0284 - val_actual_accuracy: 0.9538\nEpoch 85/1200\n1853/1852 [==============================] - 59s - loss: 0.0059 - actual_accuracy: 0.9827 - val_loss: 0.0257 - val_actual_accuracy: 0.9579\nEpoch 86/1200\n1853/1852 [==============================] - 58s - loss: 0.0059 - actual_accuracy: 0.9829 - val_loss: 0.0252 - val_actual_accuracy: 0.9595\nEpoch 87/1200\n1853/1852 [==============================] - 60s - loss: 0.0056 - actual_accuracy: 0.9850 - val_loss: 0.0260 - val_actual_accuracy: 0.9573\nEpoch 88/1200\n1853/1852 [==============================] - 59s - loss: 0.0056 - actual_accuracy: 0.9841 - val_loss: 0.0240 - val_actual_accuracy: 0.9635\nEpoch 89/1200\n1853/1852 [==============================] - 60s - loss: 0.0059 - actual_accuracy: 0.9835 - val_loss: 0.0290 - val_actual_accuracy: 0.9559\nEpoch 90/1200\n1853/1852 [==============================] - 60s - loss: 0.0060 - actual_accuracy: 0.9830 - val_loss: 0.0251 - val_actual_accuracy: 0.9591\nEpoch 91/1200\n1853/1852 [==============================] - 61s - loss: 0.0049 - actual_accuracy: 0.9857 - val_loss: 0.0262 - val_actual_accuracy: 0.9613\nEpoch 92/1200\n1853/1852 
[==============================] - 57s - loss: 0.0053 - actual_accuracy: 0.9856 - val_loss: 0.0237 - val_actual_accuracy: 0.9647\nEpoch 93/1200\n1853/1852 [==============================] - 59s - loss: 0.0051 - actual_accuracy: 0.9855 - val_loss: 0.0243 - val_actual_accuracy: 0.9596\nEpoch 94/1200\n1853/1852 [==============================] - 58s - loss: 0.0055 - actual_accuracy: 0.9834 - val_loss: 0.0263 - val_actual_accuracy: 0.9589\nEpoch 95/1200\n1853/1852 [==============================] - 57s - loss: 0.0048 - actual_accuracy: 0.9862 - val_loss: 0.0251 - val_actual_accuracy: 0.9601\nEpoch 96/1200\n1853/1852 [==============================] - 57s - loss: 0.0054 - actual_accuracy: 0.9837 - val_loss: 0.0267 - val_actual_accuracy: 0.9617\nEpoch 97/1200\n1853/1852 [==============================] - 56s - loss: 0.0053 - actual_accuracy: 0.9845 - val_loss: 0.0254 - val_actual_accuracy: 0.9618\nEpoch 98/1200\n1853/1852 [==============================] - 56s - loss: 0.0052 - actual_accuracy: 0.9852 - val_loss: 0.0277 - val_actual_accuracy: 0.9590\nEpoch 99/1200\n1853/1852 [==============================] - 56s - loss: 0.0051 - actual_accuracy: 0.9856 - val_loss: 0.0252 - val_actual_accuracy: 0.9602\nEpoch 100/1200\n1853/1852 [==============================] - 57s - loss: 0.0047 - actual_accuracy: 0.9868 - val_loss: 0.0294 - val_actual_accuracy: 0.9584\nEpoch 101/1200\n1853/1852 [==============================] - 61s - loss: 0.0056 - actual_accuracy: 0.9841 - val_loss: 0.0268 - val_actual_accuracy: 0.9602\nEpoch 102/1200\n1853/1852 [==============================] - 57s - loss: 0.0052 - actual_accuracy: 0.9855 - val_loss: 0.0276 - val_actual_accuracy: 0.9616\nEpoch 103/1200\n1853/1852 [==============================] - 58s - loss: 0.0046 - actual_accuracy: 0.9870 - val_loss: 0.0309 - val_actual_accuracy: 0.9546\nEpoch 104/1200\n1853/1852 [==============================] - 57s - loss: 0.0048 - actual_accuracy: 0.9862 - val_loss: 0.0252 - val_actual_accuracy: 0.9611\nEpoch 105/1200\n1853/1852 [==============================] - 59s - loss: 0.0050 - actual_accuracy: 0.9861 - val_loss: 0.0271 - val_actual_accuracy: 0.9604\nEpoch 106/1200\n1853/1852 [==============================] - 52s - loss: 0.0050 - actual_accuracy: 0.9855 - val_loss: 0.0240 - val_actual_accuracy: 0.9624\nEpoch 107/1200\n"
],
[
"name = '60_epoch'\nmodel.save(name + '.h5')\noutput = open(name + '_history.pkl', 'wb')\npickle.dump(history.history, output)\noutput.close()",
"_____no_output_____"
],
[
"content = data._read_content('../../../data/SlovarIJS_BESEDE_utf8.lex')\ndictionary, max_word, max_num_vowels, vowels, accented_vowels = data._create_dict(content)\nfeature_dictionary = data._create_feature_dictionary()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7f07143d6d6858da63543b841b66e7e47d1be77 | 11,754 | ipynb | Jupyter Notebook | notebooks/noise_map_generator_example.py.ipynb | freiberg-roman/uwb-proto | effda911b5571d2abd29ff0e8a696e279d482e8d | [
"MIT"
] | null | null | null | notebooks/noise_map_generator_example.py.ipynb | freiberg-roman/uwb-proto | effda911b5571d2abd29ff0e8a696e279d482e8d | [
"MIT"
] | null | null | null | notebooks/noise_map_generator_example.py.ipynb | freiberg-roman/uwb-proto | effda911b5571d2abd29ff0e8a696e279d482e8d | [
"MIT"
] | null | null | null | 32.92437 | 116 | 0.517186 | [
[
[
"# Sketch of UWB pipeline\n\nThis notebook contains the original sketch of the uwb implementation which is availible in the uwb package.\nCode in the package is mostly reworked and devide in modules. For usage of the package please check out\nthe other notebook in the directory.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.datasets import make_blobs\nfrom sklearn.cluster import DBSCAN\nfrom itertools import product\nfrom scipy.stats import multivariate_normal\nfrom functools import reduce",
"_____no_output_____"
],
[
"def multi_dim_noise(grid_dims, amount, step, std=10, means=(1,5)):\n prod = reduce((lambda x,y: x*y), grid_dims)\n samples = np.zeros(grid_dims + [amount , len(grid_dims)])\n clusters = np.random.randint(\n means[0], means[1] + 1, size=grid_dims\n )\n \n grid = []\n for dim in grid_dims:\n grid.append(((np.arange(dim) + 1) * step))\n\n mean = np.array(np.meshgrid(*grid, indexing=\"ij\")).reshape(prod, len(grid_dims))\n noise = np.random.randn(means[1], prod, len(grid_dims)) * std\n centers = (noise + mean).reshape([means[1]] + grid_dims + [len(grid_dims)])\n \n # transpose hack for selection\n roll_idx = np.roll(np.arange(centers.ndim),-1).tolist()\n centers = np.transpose(centers, roll_idx)\n\n for idxs in product(*[range(i) for i in grid_dims]):\n print(idxs)\n samples[idxs] = make_blobs(\n n_samples=amount, centers=(centers[idxs][:, 0:clusters[idxs]]).T\n )[0]\n return samples",
"_____no_output_____"
],
[
"def generate_noise(width, length, amount, step, std=10, means=(1,5)):\n samples = np.zeros((width, length, amount, 2))\n\n clusters = np.random.randint(\n means[0], means[1] + 1, size=(width, length)\n )\n\n # calculate centers\n grid_width = (np.arange(width) + 1) * step\n grid_length = (np.arange(length) + 1) * step\n mean = np.array(\n [\n np.repeat(grid_width, len(grid_length)),\n np.tile(grid_length, len(grid_width)),\n ]\n ).T\n noise = np.random.randn(means[1], width * length, 2) * std\n centers = (noise + mean).reshape((means[1], width, length, 2))\n\n for i in range(width):\n for j in range(length):\n samples[i, j, :] = make_blobs(\n n_samples=amount, centers=centers[0 : clusters[i, j], i, j, :]\n )[0]\n\n return samples, (grid_width, grid_length)",
"_____no_output_____"
],
[
"np.random.seed(0)\ndata, map_grid = generate_noise(3, 3, 50, 10)\nmulti_dim_noise([4,2,5], 50, 10)",
"_____no_output_____"
],
[
"plt.plot(data[0,0,:,0], data[0,0,:,1], 'o') # example of 5 clusters in position 0,0\nplt.show()",
"_____no_output_____"
],
[
"def generate_map(noise, eps=2, min_samples=3):\n db = DBSCAN(eps=eps, min_samples=min_samples).fit(noise)\n core_samples_mask = np.zeros_like(db.labels_, dtype=bool)\n core_samples_mask[db.core_sample_indices_] = True\n labels = db.labels_\n\n # Number of clusters in labels, ignoring noise if present.\n n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)\n n_noise_ = list(labels).count(-1)\n return labels, core_samples_mask, n_clusters_",
"_____no_output_____"
],
[
"def plot_clusters(X, labels, core_sapmles_mask, n_clusters_):\n \n unique_labels = set(labels)\n colors = [plt.cm.Spectral(each)\n for each in np.linspace(0, 1, len(unique_labels))]\n for k, col in zip(unique_labels, colors):\n if k == -1:\n # Black used for noise.\n col = [0, 0, 0, 1]\n\n class_member_mask = (labels == k)\n\n xy = X[class_member_mask & core_samples_mask]\n plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),\n markeredgecolor='k', markersize=14)\n\n xy = X[class_member_mask & ~core_samples_mask]\n plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),\n markeredgecolor='k', markersize=6)\n plt.title('Estimated number of clusters: %d' % n_clusters_)",
"_____no_output_____"
],
[
"labels = np.zeros((3, 3, 50), dtype=int)\nfor x,y in product(range(3), range(3)):\n labels[x,y,:], core_samples_mask, n_clusters_ = generate_map(data[x,y,:,:])\n plot_clusters(data[x,y,:,:], labels[x,y,:], core_samples_mask, n_clusters_)\n plt.show()",
"_____no_output_____"
],
[
"# estimate parameters\n# this is quite slow but calculation is perfomed only once per map generation\n\nparams = [[[] for i in range(3)] for i in range(3)]\n\nfor x,y in product(range(3), range(3)):\n used_data = 50 - list(labels[x,y]).count(-1)\n \n for i in range(np.max(labels[x,y,:]) + 1):\n mask = labels[x,y] == i\n mean_noise = data[x,y,mask,:].mean(axis=0) - np.array([(x+1) * 10,(y+1) * 10])\n cov_noise = np.cov(data[x,y,mask,:].T)\n weight = mask.sum() / used_data\n params[x][y].append((mean_noise, cov_noise, weight))\n\nprint(params)",
"_____no_output_____"
],
[
"# dynamics model\n\nwalk = []\nstart_state = np.array([[20, 20, 0, 0]], dtype=float)\nwalk.append(start_state)\n\ndef transition_function(current_state, x_range=(10, 40), y_range=(10, 40), std=1):\n \"\"\"Performs a one step transition assuming sensing interval of one\n \n Format of current_state = [x,y,x',y'] + first dimension is batch size\n \"\"\"\n next_state = np.copy(current_state)\n next_state[: ,0:2] += current_state[:, 2:4]\n next_state[: ,2:4] += np.random.randn(2) * std\n \n \n next_state[: ,0] = np.clip(next_state[: ,0], x_range[0], x_range[1])\n next_state[: ,1] = np.clip(next_state[: ,1], y_range[0], y_range[1])\n return next_state\n\nnext_state = transition_function(start_state)\nwalk.append(next_state)\nfor i in range(100):\n next_state = transition_function(next_state)\n walk.append(next_state)\nwalk = np.array(walk)\nprint(walk.shape)\nplt.plot(walk[:,0,0], walk[:,0, 1])\nplt.show()",
"_____no_output_____"
],
[
"# measurement noise map augmented particle filter\n\ndef find_nearest_map_position(x,y, map_grid):\n x_pos = np.searchsorted(map_grid[0], x)\n y_pos = np.searchsorted(map_grid[1], y, side=\"right\")\n\n x_valid = (x_pos != 0) & (x_pos < len(map_grid[0]))\n x_pos = np.clip(x_pos, 0, len(map_grid[0]) - 1)\n x_dist_right = map_grid[0][x_pos] - x\n x_dist_left = x - map_grid[0][x_pos - 1]\n x_pos[x_valid & (x_dist_right > x_dist_left)] -= 1\n \n y_valid = (y_pos != 0) & (y_pos < len(map_grid[1]))\n y_pos = np.clip(y_pos, 0, len(map_grid[1]) - 1)\n y_dist_right = map_grid[1][y_pos] - y\n y_dist_left = y - map_grid[0][y_pos - 1]\n y_pos[y_valid & (y_dist_right > y_dist_left)] -= 1 \n \n return x_pos, y_pos\n\n\ndef reweight_samples(x, z, w, params, map_grip):\n x_pos, y_pos = find_nearest_map_position(x[:,0], x[:,1], map_grid)\n new_weights = np.zeros_like(w)\n \n for i, (x_p, y_p) in enumerate(zip(x_pos, y_pos)):\n for gm in params[x_p][y_p]:\n # calculating p(z|x) for GM\n mean, cov, weight = gm\n new_weights[i] += multivariate_normal.pdf(z[i, 0:2] ,mean=mean, cov=cov) * weight * w[i]\n denorm = np.sum(new_weights)\n return new_weights / denorm\n \n \nprint(map_grid)\nx = np.array([9, 10, 11, 14, 16, 24, 31, 30, 29, 15])\ny = np.array([9, 10, 11, 14, 16, 24, 31, 30, 29, 15])\nw = np.ones(10) * 0.1\nprint(find_nearest_map_position(\n x,\n y,\n map_grid\n))\nx_noise = np.random.randn(10)\ny_noise = np.random.randn(10)\nparticles = np.stack((x, y, x_noise, y_noise)).T\ntransitioned_particles = transition_function(particles)\n\nn_w = reweight_samples(particles, transitioned_particles, w, params, map_grid)\n",
"_____no_output_____"
],
[
"# compute metrics for resampling\n\ndef compute_ESS(x, w):\n M = len(x)\n CV = 1/M * np.sum((w*M-1)**2)\n return M / (1 + CV)\n\nprint(compute_ESS(particles, w))\nprint(compute_ESS(particles, n_w)) # needs to be resampled",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7f08c8ec34da11c5cba3b022b53454af7ec45e9 | 6,306 | ipynb | Jupyter Notebook | docs/notebooks/mca.ipynb | davidbailey/dpd | 29bce937e34afa2161788a5c4a911e590a388229 | [
"MIT"
] | 6 | 2020-08-13T22:21:25.000Z | 2021-09-15T19:12:51.000Z | docs/notebooks/mca.ipynb | davidbailey/dpd | 29bce937e34afa2161788a5c4a911e590a388229 | [
"MIT"
] | 3 | 2018-01-25T09:11:01.000Z | 2020-12-22T17:31:24.000Z | docs/notebooks/mca.ipynb | davidbailey/dpd | 29bce937e34afa2161788a5c4a911e590a388229 | [
"MIT"
] | null | null | null | 30.028571 | 358 | 0.399619 | [
[
[
"# Multiple-criteria Analysis",
"_____no_output_____"
]
],
[
[
"from dpd.mca import MultipleCriteriaAnalysis\nfrom dpd.d3 import radar_chart\nfrom IPython.core.display import HTML",
"_____no_output_____"
],
[
"attributes = [\"Cost\", \"Time\", \"Comfort\"]\nalternatives = [\"Tram\", \"Bus\"]\n\nmca = MultipleCriteriaAnalysis(attributes, alternatives)\nmca.mca[\"Tram\"][\"Cost\"] = 200\nmca.mca[\"Bus\"][\"Cost\"] = 100\nmca.mca[\"Tram\"][\"Time\"] = 50\nmca.mca[\"Bus\"][\"Time\"] = 100\nmca.mca[\"Tram\"][\"Comfort\"] = 800\nmca.mca[\"Bus\"][\"Comfort\"] = 500\n\nmca.mca",
"_____no_output_____"
],
[
"legend_options, d, title = mca.to_d3_radar_chart()\nHTML(radar_chart(legend_options, d, title))",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7f091f74438d2ffd4ec0f35bacc871869360630 | 4,621 | ipynb | Jupyter Notebook | 08. Spark/TF-IDF_wine data.ipynb | vm1729/Openthink | 63b3ae65c85b88bdffb71f5789a7799ae389fc82 | [
"MIT"
] | 1 | 2021-01-14T14:32:20.000Z | 2021-01-14T14:32:20.000Z | 08. Spark/TF-IDF_wine data.ipynb | vm1729/Openthink | 63b3ae65c85b88bdffb71f5789a7799ae389fc82 | [
"MIT"
] | 2 | 2020-09-12T10:45:18.000Z | 2020-09-12T10:45:18.000Z | 08. Spark/TF-IDF_wine data.ipynb | vm1729/Openthink | 63b3ae65c85b88bdffb71f5789a7799ae389fc82 | [
"MIT"
] | 2 | 2020-04-21T15:09:00.000Z | 2020-12-01T07:27:10.000Z | 29.812903 | 90 | 0.401212 | [
[
[
"import pandas as pd\ndata=pd.read_csv('wine.csv')",
"_____no_output_____"
],
[
"data.head(2)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
e7f092e16e9221b399f460e457bae48d92f0200a | 67,249 | ipynb | Jupyter Notebook | transformers_doc/pytorch/task_summary.ipynb | Magiccircuit/TreeMix | eea02aea8973aa206eaa9c496f0a83ec2b3e69f5 | [
"MIT"
] | null | null | null | transformers_doc/pytorch/task_summary.ipynb | Magiccircuit/TreeMix | eea02aea8973aa206eaa9c496f0a83ec2b3e69f5 | [
"MIT"
] | null | null | null | transformers_doc/pytorch/task_summary.ipynb | Magiccircuit/TreeMix | eea02aea8973aa206eaa9c496f0a83ec2b3e69f5 | [
"MIT"
] | null | null | null | 39.604829 | 862 | 0.517078 | [
[
[
"<a href=\"https://colab.research.google.com/github/Magiccircuit/TreeMix/blob/main/transformers_doc/pytorch/task_summary.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"## Example of TreeMix",
"_____no_output_____"
],
[
"NOTE : This page was originally used by HuggingFace to illustrate the summary of various tasks ([original page](https://colab.research.google.com/github/huggingface/notebooks/blob/master/transformers_doc/pytorch/task_summary.ipynb#scrollTo=XJEVX6F9rQdI)), we use it to show the examples we illustrate in our paper. We follow the orginal settings and just change the sentence in to predict. This is a sequence classification model trained on full SST2 datasets.",
"_____no_output_____"
]
],
[
[
"# Transformers installation\n! pip install transformers datasets\n# To install from source instead of the last release, comment the command above and uncomment the following one.\n# ! pip install git+https://github.com/huggingface/transformers.git\n",
"Collecting transformers\n Downloading transformers-4.12.2-py3-none-any.whl (3.1 MB)\n\u001b[K |████████████████████████████████| 3.1 MB 4.9 MB/s \n\u001b[?25hCollecting datasets\n Downloading datasets-1.14.0-py3-none-any.whl (290 kB)\n\u001b[K |████████████████████████████████| 290 kB 42.2 MB/s \n\u001b[?25hRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers) (3.3.0)\nCollecting huggingface-hub>=0.0.17\n Downloading huggingface_hub-0.0.19-py3-none-any.whl (56 kB)\n\u001b[K |████████████████████████████████| 56 kB 4.1 MB/s \n\u001b[?25hRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers) (2.23.0)\nCollecting sacremoses\n Downloading sacremoses-0.0.46-py3-none-any.whl (895 kB)\n\u001b[K |████████████████████████████████| 895 kB 56.3 MB/s \n\u001b[?25hRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (1.19.5)\nRequirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from transformers) (4.8.1)\nRequirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from transformers) (21.0)\nCollecting pyyaml>=5.1\n Downloading PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)\n\u001b[K |████████████████████████████████| 596 kB 57.1 MB/s \n\u001b[?25hRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers) (4.62.3)\nRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (2019.12.20)\nCollecting tokenizers<0.11,>=0.10.1\n Downloading tokenizers-0.10.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.3 MB)\n\u001b[K |████████████████████████████████| 3.3 MB 30.8 MB/s \n\u001b[?25hRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from huggingface-hub>=0.0.17->transformers) (3.7.4.3)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->transformers) (2.4.7)\nCollecting xxhash\n Downloading xxhash-2.0.2-cp37-cp37m-manylinux2010_x86_64.whl (243 kB)\n\u001b[K |████████████████████████████████| 243 kB 57.5 MB/s \n\u001b[?25hRequirement already satisfied: dill in /usr/local/lib/python3.7/dist-packages (from datasets) (0.3.4)\nRequirement already satisfied: multiprocess in /usr/local/lib/python3.7/dist-packages (from datasets) (0.70.12.2)\nCollecting aiohttp\n Downloading aiohttp-3.8.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)\n\u001b[K |████████████████████████████████| 1.1 MB 57.7 MB/s \n\u001b[?25hCollecting fsspec[http]>=2021.05.0\n Downloading fsspec-2021.10.1-py3-none-any.whl (125 kB)\n\u001b[K |████████████████████████████████| 125 kB 60.6 MB/s \n\u001b[?25hRequirement already satisfied: pyarrow!=4.0.0,>=1.0.0 in /usr/local/lib/python3.7/dist-packages (from datasets) (3.0.0)\nRequirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from datasets) (1.1.5)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2.10)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (1.24.3)\nRequirement already satisfied: chardet<4,>=3.0.2 in 
/usr/local/lib/python3.7/dist-packages (from requests->transformers) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2021.5.30)\nRequirement already satisfied: charset-normalizer<3.0,>=2.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp->datasets) (2.0.7)\nCollecting aiosignal>=1.1.2\n Downloading aiosignal-1.2.0-py3-none-any.whl (8.2 kB)\nCollecting yarl<2.0,>=1.0\n Downloading yarl-1.7.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (271 kB)\n\u001b[K |████████████████████████████████| 271 kB 58.0 MB/s \n\u001b[?25hCollecting asynctest==0.13.0\n Downloading asynctest-0.13.0-py3-none-any.whl (26 kB)\nRequirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp->datasets) (21.2.0)\nCollecting frozenlist>=1.1.1\n Downloading frozenlist-1.2.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (192 kB)\n\u001b[K |████████████████████████████████| 192 kB 57.0 MB/s \n\u001b[?25hCollecting multidict<7.0,>=4.5\n Downloading multidict-5.2.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (160 kB)\n\u001b[K |████████████████████████████████| 160 kB 57.1 MB/s \n\u001b[?25hCollecting async-timeout<5.0,>=4.0.0a3\n Downloading async_timeout-4.0.0a3-py3-none-any.whl (9.5 kB)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers) (3.6.0)\nRequirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->datasets) (2.8.2)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->datasets) (2018.9)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas->datasets) (1.15.0)\nRequirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (7.1.2)\nRequirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (1.0.1)\nInstalling collected packages: multidict, frozenlist, yarl, asynctest, async-timeout, aiosignal, pyyaml, fsspec, aiohttp, xxhash, tokenizers, sacremoses, huggingface-hub, transformers, datasets\n Attempting uninstall: pyyaml\n Found existing installation: PyYAML 3.13\n Uninstalling PyYAML-3.13:\n Successfully uninstalled PyYAML-3.13\nSuccessfully installed aiohttp-3.8.0 aiosignal-1.2.0 async-timeout-4.0.0a3 asynctest-0.13.0 datasets-1.14.0 frozenlist-1.2.0 fsspec-2021.10.1 huggingface-hub-0.0.19 multidict-5.2.0 pyyaml-6.0 sacremoses-0.0.46 tokenizers-0.10.3 transformers-4.12.2 xxhash-2.0.2 yarl-1.7.0\n"
]
],
[
[
"## Sequence Classification",
"_____no_output_____"
],
[
"Sequence classification is the task of classifying sequences according to a given number of classes. An example of\nsequence classification is the GLUE dataset, which is entirely based on that task. If you would like to fine-tune a\nmodel on a GLUE sequence classification task, you may leverage the :prefix_link:*run_glue.py\n<examples/pytorch/text-classification/run_glue.py>*, :prefix_link:*run_tf_glue.py\n<examples/tensorflow/text-classification/run_tf_glue.py>*, :prefix_link:*run_tf_text_classification.py\n<examples/tensorflow/text-classification/run_tf_text_classification.py>* or :prefix_link:*run_xnli.py\n<examples/pytorch/text-classification/run_xnli.py>* scripts.\n\nHere is an example of using pipelines to do sentiment analysis: identifying if a sequence is positive or negative. It\nleverages a fine-tuned model on sst2, which is a GLUE task.\n\nThis returns a label (\"POSITIVE\" or \"NEGATIVE\") alongside a score, as follows:",
"_____no_output_____"
]
],
[
[
"from transformers import pipeline\nclassifier = pipeline(\"sentiment-analysis\")\nresult = classifier(\"This film is good and every one loves it\")[0]\nprint(f\"label: {result['label']}, with score: {round(result['score'], 4)}\")\nresult = classifier(\"The film is poor and I do not like it\")[0]\nprint(f\"label: {result['label']}, with score: {round(result['score'], 4)}\")\nresult = classifier(\"This film is good but I do not like it\")[0]\nprint(f\"label: {result['label']}, with score: {round(result['score'], 4)}\")\n",
"No model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)\n"
]
],
[
[
"The first two examples are correctly classified.However, the last one, created by combining frag-ments from the first two, is wrongly classified.The model fails in last example because it can’trecognize the two fragments in the first two andinstead assigns a probability to the entire sentence,indicating the model’s poor compositional gener-alization capability. We humans can distinguishthe two parts in the last example. Although theyare contradictory, from the perspective of senti-ment classification, we will assign a certain scoreto both positive and negative, and the negativescore should be higher, so the ideal score maybepositive 40% and negative 60%. Unfortunately,such soft labels are difficult to appear in the actuallabeling process, although such complex examplesare countless in real life.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7f0ae052df02a20b846229eae7e3e89ab3df4b9 | 16,775 | ipynb | Jupyter Notebook | CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb | acox84/crc_python_crash_course | 03592961af3e18a3022cc4cbb01729317530f601 | [
"MIT"
] | null | null | null | CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb | acox84/crc_python_crash_course | 03592961af3e18a3022cc4cbb01729317530f601 | [
"MIT"
] | null | null | null | CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb | acox84/crc_python_crash_course | 03592961af3e18a3022cc4cbb01729317530f601 | [
"MIT"
] | null | null | null | 23.998569 | 540 | 0.564054 | [
[
[
"# Selecting Data from a Data Frame, Plotting, and Indexes",
"_____no_output_____"
],
[
"### Selecting Data",
"_____no_output_____"
],
[
"Import Pandas and Load in the Data from **practicedata.csv**. Call the dataframe 'df'. Show the first 5 lines of the table.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndf = pd.read_csv('practicedata.csv') # overwrite this yourself\ndf.head(5)",
"_____no_output_____"
]
],
[
[
"Lets walk through how to select data by row and column number using .iloc[row#, column#]",
"_____no_output_____"
]
],
[
[
"# Let's select the first row of the table\nfirst_row = df.iloc[0,:]\nfirst_row",
"_____no_output_____"
],
[
"#now let's try selecting the first column of the table\nfirst_column = df.iloc[:, 0]\n#let's print the first five rows of the column\nfirst_column[:5]",
"_____no_output_____"
]
],
[
[
"Notice a few things: 1st - we select parts of a dataframe by its numeric position in the table using .iloc followed by two values in square brackets. 2nd - We can use ':' to indicate that we want all of a row or column. 3rd - The values in the square brackets are [row, column].",
"_____no_output_____"
],
[
"Our old friend from lists is back: **slicing**. We can slice in much the same way as lists:",
"_____no_output_____"
]
],
[
[
"upper_corner_of_table = df.iloc[:5,:5]\nupper_corner_of_table",
"_____no_output_____"
],
[
"another_slice = df.iloc[:5, 5:14]\nanother_slice",
"_____no_output_____"
]
],
[
[
"Now let's select a column by its name",
"_____no_output_____"
]
],
[
[
"oil_prod = df['OIL_PROD'] #simply put the column name as a string in square brackets\noil_prod[:8]",
"_____no_output_____"
]
],
[
[
"Let's select multiple columns",
"_____no_output_____"
]
],
[
[
"production_streams = df[['OIL_PROD', 'WATER_PROD', 'OWG_PROD']] # notice that we passed a list of columns\nproduction_streams.head(5)",
"_____no_output_____"
]
],
[
[
"Let's select data by **index**",
"_____no_output_____"
]
],
[
[
"first_rows = df.loc[0:5] # to select by row index, we pass the index(es) we want to select with .loc[row index]\nfirst_rows",
"_____no_output_____"
]
],
[
[
"We can also use loc to select rows and columns at the same time using .loc[row index, column index]",
"_____no_output_____"
]
],
[
[
"production_streams = df.loc[0:5, ['OIL_PROD', 'WATER_PROD', 'OWG_PROD']]\nproduction_streams",
"_____no_output_____"
]
],
[
[
"Note that you can't mix positional selection and index selection (.iloc vs .loc)",
"_____no_output_____"
]
],
[
[
"error = df.loc[0:5, 0:5]",
"_____no_output_____"
]
],
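If the intent was the upper-left corner of the table, there are two clean ways to write it without mixing the indexers; a quick sketch using this table's columns:

```python
# Stay fully positional (the end position is exclusive)
df.iloc[0:6, 0:5]

# Or stay fully label-based by turning column positions into labels first
# (.loc is inclusive of the end label, so 0:5 gives six rows here)
df.loc[0:5, df.columns[0:5]]
```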
[
[
"When you are selecting data in data frames there is a lot of potential to change the **type** of your data let's see what the output types are of the various selection methods.",
"_____no_output_____"
]
],
[
[
"print(type(df))\nprint(type(df.iloc[:,0]))",
"_____no_output_____"
]
],
[
[
"Notice how the type changes when we select the first column? A Pandas series is similar to a python dictionary, but there are important differences. You can call numerous functions like mean, sum, etc on a pandas series and unlike dictionaries pandas series have an index instead of keys and allows for different values to be associated with the same 'key' or in this case an index value. Let's try this with a row instead of a column.",
"_____no_output_____"
]
],
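A quick aside on that last point, since it matters later: unlike a dictionary, a series happily stores several values under the same index label. A small illustrative example, not from the course data:

```python
import pandas as pd

s = pd.Series([10, 20, 30], index=['a', 'a', 'b'])
print(s['a'])  # both rows labelled 'a' come back (a Series, not a scalar)
print(s['b'])  # a single match returns a scalar
```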
[
[
"print(type(df.iloc[0, :]))",
"_____no_output_____"
]
],
[
[
"Rows also become series when a single one is selected. Let's try summing the water production really quick.",
"_____no_output_____"
]
],
[
[
"print(df['WATER_PROD'].sum())",
"_____no_output_____"
]
],
[
[
"Try summing the oil and water production together below",
"_____no_output_____"
],
[
"Lastly, lets see what type we get when we select multiple rows/columns",
"_____no_output_____"
]
],
[
[
"print(type(df.iloc[:5,:5]))",
"_____no_output_____"
]
],
[
[
"If we select multiple rows/columns we keep our dataframe type. This can be important in code as dataframes and series behave differently. This is a particular problem if you have an index with unexpected duplicate values and you are selecting something by index expecting a series, but you get multiple rows and have a dataframe instead.",
"_____no_output_____"
],
[
"Fast Scalar Selection of a 'cell' of a dataframe",
"_____no_output_____"
],
[
"There is are special functions for selecting scalar values in pandas. These functions are .at[] and .iat[]. These functions are much faster (60-70% faster) than .loc or .iloc when selecting a scalar value. Let's try them out.",
"_____no_output_____"
]
],
[
[
"#select the first value\nprint(df.iat[0,0])\nprint(df.at[0,'API_NO14'])",
"_____no_output_____"
]
],
[
[
"Notice that it works the same as .loc and .iloc, the only difference is that you must select one value.",
"_____no_output_____"
]
],
[
[
"print(df.iat[0:5, 1]) # gives an error since I tried to select more than one value",
"_____no_output_____"
]
],
[
[
"Adding columns",
"_____no_output_____"
]
],
[
[
"# you can add a column by assigning it a starting value\ndf['column of zeros'] = 0\n# you can also create a column by adding columns (or doing anything else that results in a column of the same size)\ndf['GROSS_LIQUID'] = df['OIL_PROD'] + df['WATER_PROD']\ndf.iloc[0:2, 30:]",
"_____no_output_____"
]
],
[
[
"Removing Columns",
"_____no_output_____"
]
],
[
[
"# remove columns via the .drop function\ndf = df.drop(['column of zeros'], axis=1)\ndf.iloc[0:2, 30:]",
"_____no_output_____"
]
],
[
[
"Selecting Data with conditionals (booleans)",
"_____no_output_____"
]
],
[
[
"# We can use conditional statements (just like with if and while statements to find whether conditions are true/false in our data)\n# Let's find the months with no oil in wells\nmonths_with_zero_oil = df['OIL_PROD'] == 0 \nprint(months_with_zero_oil.sum())\nprint(len(months_with_zero_oil))",
"_____no_output_____"
],
[
"# What does months with zero oil look like?\nmonths_with_zero_oil[:5]",
"_____no_output_____"
],
[
"# Lets try to make a column out of months with zero oil\ndf['zero_oil'] = months_with_zero_oil\ndf.iloc[0:2, 30:]",
"_____no_output_____"
],
[
"# Let's make the value of zero oil 10,000 whenever there is zero oil (no reason, just for fun)\ntemp = df[months_with_zero_oil]\ntemp['zero_oil'] = 10000.00\ntemp.head(3)",
"_____no_output_____"
]
],
[
[
"Notice the warning we recieved about setting data on a 'slice' of a dataframe. This is because when you select a piece of a dataframe, it doesn't (by default at least) create a new dataframe, it shows you a 'view' of the original data. This is true even if we assign that piece to a new variable like we did above. When we set the zero oil column to 10000, this could also affect the original dataframe. This is why the warning was given because this may or may not be what we want. Let's see if the original dataframe was affected...",
"_____no_output_____"
]
],
[
[
"df[months_with_zero_oil].head(5)",
"_____no_output_____"
],
[
"temp.head(5)",
"_____no_output_____"
]
],
[
[
"In this case we were protected from making changes to original dataframe, what if we want to change the original dataframe?",
"_____no_output_____"
]
],
[
[
"# Let's try this instead\ndf.loc[months_with_zero_oil,'zero_oil'] = 10000.00\ndf[months_with_zero_oil].head(5)",
"_____no_output_____"
]
],
[
[
"That got it! We were able to set values in the original dataframe using the 'boolean' series of months with zero oil.",
"_____no_output_____"
],
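[
"If you want an independent table you can edit freely without touching df, take an explicit copy. This also avoids the SettingWithCopyWarning seen earlier. A minimal sketch:\n\n```python\n# .copy() returns a new DataFrame, so assignments here never reach the original df\nindependent = df[months_with_zero_oil].copy()\nindependent['zero_oil'] = 10000.00\nindependent.head(3)\n```",
"_____no_output_____"
],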
[
"Finding, Changing, and Setting Data",
"_____no_output_____"
]
],
[
[
"# Find a column in a dataframe\nif 'API_NO14' in df:\n print('got it')\nelse:\n print('nope, not in there')",
"_____no_output_____"
],
[
"# If a column name is in a dataframe, get it\nfor column in df:\n print(column)",
"_____no_output_____"
],
[
"# Search through the rows of a table\ncount = 0\nfor row in df.iterrows():\n count += 1\n print(row)\n if count == 1:\n break",
"_____no_output_____"
]
],
[
[
"Notice that 'row' is **tuple** with the row index at 0 and the row series at 1",
"_____no_output_____"
]
],
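[
[
"Because each item from .iterrows() is an (index, Series) pair, it is often cleaner to unpack it directly in the loop. A minimal sketch:\n\n```python\nfor idx, row_data in df.iterrows():\n    print(idx) # the row index label\n    print(row_data['OIL_PROD']) # a field of the row, accessed by column name\n    break # stop after the first row\n```",
"_____no_output_____"
]
],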
[
[
"# Let's change WATER_INJ to 1 for the first row\ncount = 0\nfor row in df.iterrows():\n df.loc[row[0], 'WATER_INJ'] = 1\n count += 1\n if count == 1:\n break\ndf[['WATER_INJ']].head(1)",
"_____no_output_____"
]
],
[
[
"### Exercise: Fix the apis in the table\n",
"_____no_output_____"
],
[
"All the apis have been converted to numbers and are missing the leading zero, can you add it back in and convert them to strings in a new column?",
"_____no_output_____"
],
[
"### Plotting Data",
"_____no_output_____"
],
[
"First we need to import matplotlib and set jupyter notebook to display the plots",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"Let's select data related to the well: 04029270170000\n and plot it",
"_____no_output_____"
]
],
[
[
"# Let's plot using the original API_NO14 Column for now\ndf.loc[df['API_NO14'] == 4029270170000, 'OIL_PROD'].plot()",
"_____no_output_____"
],
[
"# Those numbers are not super helpful, lets try a new index\n\n# lets copy the dataframe\nsorted_df = df.copy()\n# Convert dates to a 'datetime' type instead of string\nsorted_df['PROD_INJ_DATE'] = pd.to_datetime(df['PROD_INJ_DATE'])\n# Then we sort by production/injection date\nsorted_df = sorted_df.sort_values(by=['PROD_INJ_DATE'])\n# Then we set the row index to be API # and Date\nsorted_df.set_index(['API_NO14', 'PROD_INJ_DATE'], inplace=True, drop=False)\nsorted_df.head(2)\n",
"_____no_output_____"
],
[
"# Lets select the well we want to plot by api #\nplot_df = sorted_df.loc[4029270170000]\nplot_df.head(5)\n",
"_____no_output_____"
],
[
"# Now let's try plotting again\nplot_df['OIL_PROD'].plot()",
"_____no_output_____"
]
],
[
[
"Let's manipulate the plot and try different options",
"_____no_output_____"
]
],
[
[
"plot_df['OIL_PROD'].plot(logy=True)",
"_____no_output_____"
],
[
"plot_df[['OIL_PROD', 'WATER_PROD']].plot(sharey=True, logy=True)",
"_____no_output_____"
]
],
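[
[
"The pandas .plot() call returns a matplotlib Axes object, so the figure can be customized further with ordinary matplotlib calls. A minimal sketch; the unit in the y-label is an assumption about this dataset:\n\n```python\nax = plot_df['OIL_PROD'].plot(logy=True)\nax.set_xlabel('Production date')\nax.set_ylabel('Oil production, bbl/month') # assumed monthly volumes\nax.set_title('Well 04029270170000')\n```",
"_____no_output_____"
]
],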
[
[
"All of the available options can be found here: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html",
"_____no_output_____"
],
[
"### Excercise",
"_____no_output_____"
],
[
"Covert production to bbl/day and plot it for a well of your choice from the table",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
e7f0b13ee5e3cc073aaed44f059e0eb0c1bc85e1 | 735,080 | ipynb | Jupyter Notebook | code/17.networkx.ipynb | nju-teaching/computational-communication | b95bca72bcfbe412fef15df9f3f057e398be7e34 | [
"MIT"
] | 7 | 2016-03-16T12:11:39.000Z | 2018-05-03T16:42:08.000Z | code/17.networkx.ipynb | nju-teaching/computational-communication | b95bca72bcfbe412fef15df9f3f057e398be7e34 | [
"MIT"
] | 5 | 2016-03-18T02:03:35.000Z | 2016-05-04T10:20:52.000Z | code/17.networkx.ipynb | nju-teaching/computational-communication | b95bca72bcfbe412fef15df9f3f057e398be7e34 | [
"MIT"
] | 12 | 2016-03-16T12:12:13.000Z | 2017-04-03T09:25:39.000Z | 460.28804 | 127,434 | 0.935332 | [
[
[
"### 网络科学理论\n***\n***\n# 网络科学:使用NetworkX分析复杂网络\n***\n***\n\n王成军 \n\[email protected]\n\n计算传播网 http://computational-communication.com",
"_____no_output_____"
],
[
"http://networkx.readthedocs.org/en/networkx-1.11/tutorial/",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport networkx as nx\nimport matplotlib.cm as cm\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"import networkx as nx\n\nG=nx.Graph() # G = nx.DiGraph() # 有向网络\n# 添加(孤立)节点\nG.add_node(\"spam\")\n# 添加节点和链接\nG.add_edge(1,2)\n\nprint(G.nodes())\n\nprint(G.edges())",
"[1, 2, 'spam']\n[(1, 2)]\n"
],
[
"# 绘制网络\nnx.draw(G, with_labels = True)",
"_____no_output_____"
]
],
[
[
"# WWW Data download \n\nhttp://www3.nd.edu/~networks/resources.htm\n\nWorld-Wide-Web: [README] [DATA]\nRéka Albert, Hawoong Jeong and Albert-László Barabási:\nDiameter of the World Wide Web Nature 401, 130 (1999) [ PDF ]\n\n# 作业:\n\n- 下载www数据\n- 构建networkx的网络对象g(提示:有向网络)\n- 将www数据添加到g当中\n- 计算网络中的节点数量和链接数量\n",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nn = 0\nwith open ('/Users/chengjun/bigdata/www.dat.gz.txt') as f:\n for line in f:\n n += 1\n #if n % 10**4 == 0:\n #flushPrint(n)\n x, y = line.rstrip().split(' ')\n G.add_edge(x,y)",
"_____no_output_____"
],
[
"nx.info(G)",
"_____no_output_____"
]
],
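[
[
"The assignment above hints at a directed network. The same loading loop works with nx.DiGraph, which preserves link direction; this is a minimal sketch (the file path is the one used above, and G_www_directed is just an illustrative variable name):\n\n```python\nG_www_directed = nx.DiGraph()\nwith open('/Users/chengjun/bigdata/www.dat.gz.txt') as f:\n    for line in f:\n        x, y = line.rstrip().split(' ')\n        G_www_directed.add_edge(x, y) # edge direction: x -> y\nprint(nx.info(G_www_directed))\n```",
"_____no_output_____"
]
],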
[
[
"# 描述网络\n### nx.karate_club_graph",
"_____no_output_____"
],
[
"我们从karate_club_graph开始,探索网络的基本性质。",
"_____no_output_____"
]
],
[
[
"G = nx.karate_club_graph()\n \nclubs = [G.node[i]['club'] for i in G.nodes()]\ncolors = []\nfor j in clubs:\n if j == 'Mr. Hi':\n colors.append('r')\n else:\n colors.append('g')\n \nnx.draw(G, with_labels = True, node_color = colors)",
"_____no_output_____"
],
[
"G.node[1] # 节点1的属性",
"_____no_output_____"
],
[
"G.edge.keys()[:3] # 前三条边的id",
"_____no_output_____"
],
[
"nx.info(G)",
"_____no_output_____"
],
[
"G.nodes()[:10]",
"_____no_output_____"
],
[
"G.edges()[:3]",
"_____no_output_____"
],
[
"G.neighbors(1)",
"_____no_output_____"
],
[
"nx.average_shortest_path_length(G) ",
"_____no_output_____"
]
],
[
[
"### 网络直径",
"_____no_output_____"
]
],
[
[
"nx.diameter(G)#返回图G的直径(最长最短路径的长度)",
"_____no_output_____"
]
],
[
[
"### 密度",
"_____no_output_____"
]
],
[
[
"nx.density(G)",
"_____no_output_____"
],
[
"nodeNum = len(G.nodes())\nedgeNum = len(G.edges())\n\n2.0*edgeNum/(nodeNum * (nodeNum - 1))",
"_____no_output_____"
]
],
[
[
"# 作业:\n- 计算www网络的网络密度",
"_____no_output_____"
],
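[
"A minimal sketch for this assignment, assuming a www graph is still in memory (here the directed version G_www_directed sketched earlier; nx.density works the same way on the undirected graph built above):\n\n```python\n# density = E / (N(N-1)) for a directed graph; NetworkX applies the right formula\nprint(nx.density(G_www_directed))\n```",
"_____no_output_____"
],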
[
"### 聚集系数",
"_____no_output_____"
]
],
[
[
"cc = nx.clustering(G)\ncc.items()[:5]",
"_____no_output_____"
],
[
"plt.hist(cc.values(), bins = 15)\nplt.xlabel('$Clustering \\, Coefficient, \\, C$', fontsize = 20)\nplt.ylabel('$Frequency, \\, F$', fontsize = 20)\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Spacing in Math Mode\n\n\nIn a math environment, LaTeX ignores the spaces you type and puts in the spacing that it thinks is best. LaTeX formats mathematics the way it's done in mathematics texts. If you want different spacing, LaTeX provides the following four commands for use in math mode:\n\n\\; - a thick space\n\n\\: - a medium space\n\n\\, - a thin space\n\n\\\\! - a negative thin space",
"_____no_output_____"
],
[
"### 匹配系数",
"_____no_output_____"
]
],
[
[
"# M. E. J. Newman, Mixing patterns in networks Physical Review E, 67 026126, 2003\nnx.degree_assortativity_coefficient(G) #计算一个图的度匹配性。",
"_____no_output_____"
],
[
"Ge=nx.Graph()\nGe.add_nodes_from([0,1],size=2)\nGe.add_nodes_from([2,3],size=3)\nGe.add_edges_from([(0,1),(2,3)])\nprint(nx.numeric_assortativity_coefficient(Ge,'size'))",
"1.0\n"
],
[
"# plot degree correlation \nfrom collections import defaultdict\nimport numpy as np\n\nl=defaultdict(list)\ng = nx.karate_club_graph()\n\nfor i in g.nodes():\n k = []\n for j in g.neighbors(i):\n k.append(g.degree(j))\n l[g.degree(i)].append(np.mean(k)) \n #l.append([g.degree(i),np.mean(k)])\n \nx = l.keys()\ny = [np.mean(i) for i in l.values()]\n\n#x, y = np.array(l).T\nplt.plot(x, y, 'r-o', label = '$Karate\\;Club$')\nplt.legend(loc=1,fontsize=10, numpoints=1)\nplt.xscale('log'); plt.yscale('log')\nplt.ylabel(r'$<knn(k)$> ', fontsize = 20)\nplt.xlabel('$k$', fontsize = 20)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Degree centrality measures.(度中心性)\n* degree_centrality(G) # Compute the degree centrality for nodes.\n* in_degree_centrality(G) # Compute the in-degree centrality for nodes.\n* out_degree_centrality(G) # Compute the out-degree centrality for nodes.\n* closeness_centrality(G[, v, weighted_edges]) # Compute closeness centrality for nodes.\n* betweenness_centrality(G[, normalized, ...]) # Betweenness centrality measures.(介数中心性)",
"_____no_output_____"
]
],
[
[
"dc = nx.degree_centrality(G)\ncloseness = nx.closeness_centrality(G)\nbetweenness= nx.betweenness_centrality(G)\n",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(15, 4),facecolor='white')\nax = plt.subplot(1, 3, 1)\nplt.hist(dc.values(), bins = 20)\nplt.xlabel('$Degree \\, Centrality$', fontsize = 20)\nplt.ylabel('$Frequency, \\, F$', fontsize = 20)\n\nax = plt.subplot(1, 3, 2)\nplt.hist(closeness.values(), bins = 20)\nplt.xlabel('$Closeness \\, Centrality$', fontsize = 20)\n\nax = plt.subplot(1, 3, 3)\nplt.hist(betweenness.values(), bins = 20)\nplt.xlabel('$Betweenness \\, Centrality$', fontsize = 20)\nplt.tight_layout()\nplt.show()\n",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(15, 8),facecolor='white')\n\nfor k in betweenness:\n plt.scatter(dc[k], closeness[k], s = betweenness[k]*1000)\n plt.text(dc[k], closeness[k]+0.02, str(k))\nplt.xlabel('$Degree \\, Centrality$', fontsize = 20)\nplt.ylabel('$Closeness \\, Centrality$', fontsize = 20)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# 度分布",
"_____no_output_____"
]
],
[
[
"from collections import defaultdict\nimport numpy as np\n\ndef plotDegreeDistribution(G):\n degs = defaultdict(int)\n for i in G.degree().values(): degs[i]+=1\n items = sorted ( degs.items () )\n x, y = np.array(items).T\n y_sum = np.sum(y)\n y = [float(i)/y_sum for i in y]\n plt.plot(x, y, 'b-o')\n plt.xscale('log')\n plt.yscale('log')\n plt.legend(['Degree'])\n plt.xlabel('$K$', fontsize = 20)\n plt.ylabel('$P_K$', fontsize = 20)\n plt.title('$Degree\\,Distribution$', fontsize = 20)\n plt.show() \n \nG = nx.karate_club_graph() \nplotDegreeDistribution(G)",
"_____no_output_____"
]
],
[
[
"### 网络科学理论简介\n***\n***\n# 网络科学:分析网络结构\n***\n***\n\n王成军 \n\[email protected]\n\n计算传播网 http://computational-communication.com",
"_____no_output_____"
],
[
"# 规则网络",
"_____no_output_____"
]
],
[
[
"import networkx as nx\nimport matplotlib.pyplot as plt\nRG = nx.random_graphs.random_regular_graph(3,200) #生成包含200个节点、每个节点有3个邻居的规则图RG\npos = nx.spectral_layout(RG) #定义一个布局,此处采用了spectral布局方式,后变还会介绍其它布局方式,注意图形上的区别\nnx.draw(RG,pos,with_labels=False,node_size = 30) #绘制规则图的图形,with_labels决定节点是非带标签(编号),node_size是节点的直径\nplt.show() #显示图形",
"_____no_output_____"
],
[
"plotDegreeDistribution(RG)",
"_____no_output_____"
]
],
[
[
"# ER随机网络",
"_____no_output_____"
]
],
[
[
"import networkx as nx\nimport matplotlib.pyplot as plt\nER = nx.random_graphs.erdos_renyi_graph(200,0.05) #生成包含20个节点、以概率0.2连接的随机图\npos = nx.shell_layout(ER) #定义一个布局,此处采用了shell布局方式\nnx.draw(ER,pos,with_labels=False,node_size = 30) \nplt.show()",
"_____no_output_____"
],
[
"plotDegreeDistribution(ER)",
"_____no_output_____"
]
],
[
[
"# 小世界网络",
"_____no_output_____"
]
],
[
[
"import networkx as nx\nimport matplotlib.pyplot as plt\nWS = nx.random_graphs.watts_strogatz_graph(200,4,0.3) #生成包含200个节点、每个节点4个近邻、随机化重连概率为0.3的小世界网络\npos = nx.circular_layout(WS) #定义一个布局,此处采用了circular布局方式\nnx.draw(WS,pos,with_labels=False,node_size = 30) #绘制图形\nplt.show()",
"_____no_output_____"
],
[
"plotDegreeDistribution(WS)",
"_____no_output_____"
],
[
"nx.diameter(WS)",
"_____no_output_____"
],
[
"cc = nx.clustering(WS)\nplt.hist(cc.values(), bins = 10)\nplt.xlabel('$Clustering \\, Coefficient, \\, C$', fontsize = 20)\nplt.ylabel('$Frequency, \\, F$', fontsize = 20)\nplt.show()",
"_____no_output_____"
],
[
"import numpy as np\nnp.mean(cc.values())",
"_____no_output_____"
]
],
[
[
"# BA网络",
"_____no_output_____"
]
],
[
[
"import networkx as nx\nimport matplotlib.pyplot as plt\nBA= nx.random_graphs.barabasi_albert_graph(200,2) #生成n=20、m=1的BA无标度网络\npos = nx.spring_layout(BA) #定义一个布局,此处采用了spring布局方式\nnx.draw(BA,pos,with_labels=False,node_size = 30) #绘制图形\nplt.show()",
"_____no_output_____"
],
[
"plotDegreeDistribution(BA)",
"_____no_output_____"
],
[
"BA= nx.random_graphs.barabasi_albert_graph(20000,2) #生成n=20、m=1的BA无标度网络\nplotDegreeDistribution(BA)",
"_____no_output_____"
],
[
"import networkx as nx\nimport matplotlib.pyplot as plt\nBA= nx.random_graphs.barabasi_albert_graph(500,1) #生成n=20、m=1的BA无标度网络\npos = nx.spring_layout(BA) #定义一个布局,此处采用了spring布局方式\nnx.draw(BA,pos,with_labels=False,node_size = 30) #绘制图形\nplt.show()",
"_____no_output_____"
],
[
"nx.degree_histogram(BA)[:3]",
"_____no_output_____"
],
[
"BA.degree().items()[:3]",
"_____no_output_____"
],
[
"plt.hist(BA.degree().values())\nplt.show()",
"_____no_output_____"
],
[
"from collections import defaultdict\nimport numpy as np\ndef plotDegreeDistributionLongTail(G):\n degs = defaultdict(int)\n for i in G.degree().values(): degs[i]+=1\n items = sorted ( degs.items () )\n x, y = np.array(items).T\n y_sum = np.sum(y)\n y = [float(i)/y_sum for i in y]\n plt.plot(x, y, 'b-o')\n plt.legend(['Degree'])\n plt.xlabel('$K$', fontsize = 20)\n plt.ylabel('$P_K$', fontsize = 20)\n plt.title('$Degree\\,Distribution$', fontsize = 20)\n plt.show() \n \nBA= nx.random_graphs.barabasi_albert_graph(5000,2) #生成n=20、m=1的BA无标度网络 \nplotDegreeDistributionLongTail(BA)",
"_____no_output_____"
],
[
"def plotDegreeDistribution(G):\n degs = defaultdict(int)\n for i in G.degree().values(): degs[i]+=1\n items = sorted ( degs.items () )\n x, y = np.array(items).T\n x, y = np.array(items).T\n y_sum = np.sum(y)\n plt.plot(x, y, 'b-o')\n plt.xscale('log')\n plt.yscale('log')\n plt.legend(['Degree'])\n plt.xlabel('$K$', fontsize = 20)\n plt.ylabel('$P_K$', fontsize = 20)\n plt.title('$Degree\\,Distribution$', fontsize = 20)\n plt.show() \n\nBA= nx.random_graphs.barabasi_albert_graph(50000,2) #生成n=20、m=1的BA无标度网络 \nplotDegreeDistribution(BA)",
"_____no_output_____"
]
],
[
[
"\n# 作业:\n\n- 阅读 Barabasi (1999) Internet Diameter of the world wide web.Nature.401\n- 绘制www网络的出度分布、入度分布\n- 使用BA模型生成节点数为N、幂指数为$\\gamma$的网络\n- 计算平均路径长度d与节点数量的关系",
"_____no_output_____"
],
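[
"A minimal sketch for the degree-distribution part of this assignment. It assumes a directed www graph (here called G_www_directed, as sketched earlier) and the NetworkX 1.x API used throughout this notebook, where in_degree() and out_degree() return dicts:\n\n```python\nfrom collections import Counter\nfor degree_dict, label in [(G_www_directed.in_degree(), 'in-degree'), (G_www_directed.out_degree(), 'out-degree')]:\n    counts = Counter(degree_dict.values())\n    k, n = zip(*sorted(counts.items()))\n    p = [float(i) / sum(n) for i in n]\n    plt.plot(k, p, 'o', label=label)\nplt.xscale('log'); plt.yscale('log')\nplt.xlabel('$k$'); plt.ylabel('$P_k$')\nplt.legend()\nplt.show()\n```",
"_____no_output_____"
],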
[
"<img src = './img/diameter.png' width = 10000>",
"_____no_output_____"
]
],
[
[
"Ns = [i*10 for i in [1, 10, 100, 1000]]\nds = []\nfor N in Ns:\n print N\n BA= nx.random_graphs.barabasi_albert_graph(N,2)\n d = nx.average_shortest_path_length(BA)\n ds.append(d)",
"10\n100\n1000\n10000\n"
],
[
"plt.plot(Ns, ds, 'r-o')\nplt.xlabel('$N$', fontsize = 20)\nplt.ylabel('$<d>$', fontsize = 20)\nplt.xscale('log')\nplt.show()",
"_____no_output_____"
]
],
[
[
"# 参考\n* https://networkx.readthedocs.org/en/stable/tutorial/tutorial.html\n* http://computational-communication.com/wiki/index.php?title=Networkx",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
e7f0bd6c0aaf1ce1eac23f66714fd1440588f956 | 106,228 | ipynb | Jupyter Notebook | ck+.ipynb | yulin6666/Facial-Expression-Recognition.Pytorch | 60dbdffb8aa77f6e9ee787f1c05f994ef7a3bc09 | [
"MIT"
] | null | null | null | ck+.ipynb | yulin6666/Facial-Expression-Recognition.Pytorch | 60dbdffb8aa77f6e9ee787f1c05f994ef7a3bc09 | [
"MIT"
] | null | null | null | ck+.ipynb | yulin6666/Facial-Expression-Recognition.Pytorch | 60dbdffb8aa77f6e9ee787f1c05f994ef7a3bc09 | [
"MIT"
] | null | null | null | 52.588119 | 269 | 0.415371 | [
[
[
"!python preprocess_CK+.py",
"(981, 48, 48)\n(981,)\nSave data finish!!!\n"
],
[
"!python mainpro_CK+.py",
"==> Preparing data..\n882 99\n882 99\n==> Building model..\n\nEpoch: 0\nlearning_rate: 0.01\n [=========================>....] | Loss: 1.925 | Acc: 29.000% (260/882) 7/7 \nmainpro_CK+.py:141: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.\n inputs, targets = Variable(inputs, volatile=True), Variable(targets)\n [============================>.] | Loss: 1.950 | Acc: 12.000% (12/99) 20/20 \nSaving..\nbest_Test_acc: 12.000\n\nEpoch: 1\nlearning_rate: 0.01\n [=========================>....] | Loss: 1.363 | Acc: 48.000% (426/882) 7/7 \n [============================>.] | Loss: 2.103 | Acc: 6.000% (6/99) 20/20 \n\nEpoch: 2\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.968 | Acc: 61.000% (546/882) 7/7 \n [============================>.] | Loss: 2.571 | Acc: 6.000% (6/99) 20/20 \n\nEpoch: 3\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.692 | Acc: 71.000% (632/882) 7/7 \n [============================>.] | Loss: 2.509 | Acc: 18.000% (18/99) 20/20 \nSaving..\nbest_Test_acc: 18.000\n\nEpoch: 4\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.535 | Acc: 80.000% (709/882) 7/7 \n [============================>.] | Loss: 1.492 | Acc: 51.000% (51/99) 20/20 \nSaving..\nbest_Test_acc: 51.000\n\nEpoch: 5\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.441 | Acc: 83.000% (739/882) 7/7 \n [============================>.] | Loss: 1.412 | Acc: 34.000% (34/99) 20/20 \n\nEpoch: 6\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.405 | Acc: 85.000% (753/882) 7/7 \n [============================>.] | Loss: 1.127 | Acc: 68.000% (68/99) 20/20 \nSaving..\nbest_Test_acc: 68.000\n\nEpoch: 7\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.366 | Acc: 86.000% (766/882) 7/7 \n [============================>.] | Loss: 0.789 | Acc: 70.000% (70/99) 20/20 \nSaving..\nbest_Test_acc: 70.000\n\nEpoch: 8\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.356 | Acc: 86.000% (767/882) 7/7 \n [============================>.] | Loss: 0.554 | Acc: 83.000% (83/99) 20/20 \nSaving..\nbest_Test_acc: 83.000\n\nEpoch: 9\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.252 | Acc: 91.000% (806/882) 7/7 \n [============================>.] | Loss: 0.734 | Acc: 79.000% (79/99) 20/20 \n\nEpoch: 10\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.247 | Acc: 91.000% (809/882) 7/7 \n [============================>.] | Loss: 0.375 | Acc: 80.000% (80/99) 20/20 \n\nEpoch: 11\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.183 | Acc: 93.000% (829/882) 7/7 \n [============================>.] | Loss: 0.599 | Acc: 78.000% (78/99) 20/20 \n\nEpoch: 12\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.138 | Acc: 96.000% (848/882) 7/7 \n [============================>.] | Loss: 0.459 | Acc: 78.000% (78/99) 20/20 \n\nEpoch: 13\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.137 | Acc: 95.000% (845/882) 7/7 \n [============================>.] | Loss: 0.562 | Acc: 85.000% (85/99) 20/20 \nSaving..\nbest_Test_acc: 85.000\n\nEpoch: 14\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.114 | Acc: 96.000% (847/882) 7/7 \n [============================>.] | Loss: 0.547 | Acc: 78.000% (78/99) 20/20 \n\nEpoch: 15\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.101 | Acc: 96.000% (851/882) 7/7 \n [============================>.] 
| Loss: 0.718 | Acc: 75.000% (75/99) 20/20 \n\nEpoch: 16\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.066 | Acc: 98.000% (867/882) 7/7 \n [============================>.] | Loss: 0.479 | Acc: 74.000% (74/99) 20/20 \n\nEpoch: 17\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.076 | Acc: 97.000% (858/882) 7/7 \n [============================>.] | Loss: 0.310 | Acc: 87.000% (87/99) 20/20 \nSaving..\nbest_Test_acc: 87.000\n\nEpoch: 18\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.058 | Acc: 98.000% (866/882) 7/7 \n [============================>.] | Loss: 0.785 | Acc: 76.000% (76/99) 20/20 \n\nEpoch: 19\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.082 | Acc: 97.000% (857/882) 7/7 \n [============================>.] | Loss: 0.257 | Acc: 91.000% (91/99) 20/20 \nSaving..\nbest_Test_acc: 91.000\n\nEpoch: 20\nlearning_rate: 0.01\n [=========================>....] | Loss: 0.079 | Acc: 97.000% (864/882) 7/7 \n [============================>.] | Loss: 1.023 | Acc: 65.000% (65/99) 20/20 \n\nEpoch: 21\nlearning_rate: 0.008\n [=========================>....] | Loss: 0.035 | Acc: 98.000% (872/882) 7/7 \n [============================>.] | Loss: 0.257 | Acc: 94.000% (94/99) 20/20 \nSaving..\nbest_Test_acc: 94.000\n\nEpoch: 22\nlearning_rate: 0.006400000000000001\n [=========================>....] | Loss: 0.055 | Acc: 98.000% (868/882) 7/7 \n [============================>.] | Loss: 0.161 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 23\nlearning_rate: 0.005120000000000001\n [=========================>....] | Loss: 0.042 | Acc: 98.000% (872/882) 7/7 \n [============================>.] | Loss: 0.207 | Acc: 94.000% (94/99) 20/20 \n\nEpoch: 24\nlearning_rate: 0.004096000000000001\n [=========================>....] | Loss: 0.024 | Acc: 99.000% (877/882) 7/7 \n [============================>.] | Loss: 0.166 | Acc: 90.000% (90/99) 20/20 \n\nEpoch: 25\nlearning_rate: 0.0032768000000000007\n [=========================>....] | Loss: 0.021 | Acc: 99.000% (876/882) 7/7 \n [============================>.] | Loss: 0.156 | Acc: 91.000% (91/99) 20/20 \n\nEpoch: 26\nlearning_rate: 0.002621440000000001\n [=========================>....] | Loss: 0.014 | Acc: 99.000% (879/882) 7/7 \n [============================>.] | Loss: 0.160 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 27\nlearning_rate: 0.002097152000000001\n [=========================>....] | Loss: 0.021 | Acc: 99.000% (876/882) 7/7 \n [============================>.] | Loss: 0.173 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 28\nlearning_rate: 0.001677721600000001\n [=========================>....] | Loss: 0.010 | Acc: 99.000% (879/882) 7/7 \n [============================>.] | Loss: 0.180 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 29\nlearning_rate: 0.0013421772800000006\n [=========================>....] | Loss: 0.012 | Acc: 99.000% (878/882) 7/7 \n [============================>.] | Loss: 0.180 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 30\nlearning_rate: 0.0010737418240000006\n [=========================>....] | Loss: 0.011 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.176 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 31\nlearning_rate: 0.0008589934592000006\n [=========================>....] | Loss: 0.013 | Acc: 99.000% (878/882) 7/7 \n [============================>.] | Loss: 0.170 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 32\nlearning_rate: 0.0006871947673600004\n [=========================>....] | Loss: 0.009 | Acc: 99.000% (880/882) 7/7 \n [============================>.] 
| Loss: 0.166 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 33\nlearning_rate: 0.0005497558138880004\n [=========================>....] | Loss: 0.008 | Acc: 99.000% (881/882) 7/7 \n [============================>.] | Loss: 0.168 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 34\nlearning_rate: 0.00043980465111040037\n [=========================>....] | Loss: 0.009 | Acc: 99.000% (881/882) 7/7 \n [============================>.] | Loss: 0.160 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 35\nlearning_rate: 0.0003518437208883203\n [=========================>....] | Loss: 0.009 | Acc: 99.000% (879/882) 7/7 \n [============================>.] | Loss: 0.159 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 36\nlearning_rate: 0.00028147497671065624\n [=========================>....] | Loss: 0.008 | Acc: 99.000% (881/882) 7/7 \n [============================>.] | Loss: 0.159 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 37\nlearning_rate: 0.00022517998136852504\n [=========================>....] | Loss: 0.009 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.157 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 38\nlearning_rate: 0.00018014398509482002\n [=========================>....] | Loss: 0.008 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.156 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 39\nlearning_rate: 0.00014411518807585602\n [=========================>....] | Loss: 0.011 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.156 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 40\nlearning_rate: 0.00011529215046068484\n [=========================>....] | Loss: 0.007 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.154 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 41\nlearning_rate: 9.223372036854788e-05\n [=========================>....] | Loss: 0.011 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.154 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 42\nlearning_rate: 7.37869762948383e-05\n [=========================>....] | Loss: 0.008 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.156 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 43\nlearning_rate: 5.902958103587064e-05\n [=========================>....] | Loss: 0.010 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.152 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 44\nlearning_rate: 4.722366482869652e-05\n [=========================>....] | Loss: 0.009 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.154 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 45\nlearning_rate: 3.777893186295722e-05\n [=========================>....] | Loss: 0.010 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.155 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 46\nlearning_rate: 3.0223145490365776e-05\n [=========================>....] | Loss: 0.008 | Acc: 100.000% (882/882) 7/7 \n [============================>.] | Loss: 0.156 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 47\nlearning_rate: 2.417851639229262e-05\n [=========================>....] | Loss: 0.006 | Acc: 100.000% (882/882) 7/7 \n [============================>.] | Loss: 0.154 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 48\nlearning_rate: 1.9342813113834096e-05\n [=========================>....] | Loss: 0.009 | Acc: 99.000% (879/882) 7/7 \n [============================>.] | Loss: 0.156 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 49\nlearning_rate: 1.547425049106728e-05\n [=========================>....] 
| Loss: 0.011 | Acc: 99.000% (878/882) 7/7 \n [============================>.] | Loss: 0.156 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 50\nlearning_rate: 1.2379400392853824e-05\n [=========================>....] | Loss: 0.009 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.156 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 51\nlearning_rate: 9.903520314283058e-06\n [=========================>....] | Loss: 0.009 | Acc: 99.000% (879/882) 7/7 \n [============================>.] | Loss: 0.157 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 52\nlearning_rate: 7.922816251426448e-06\n [=========================>....] | Loss: 0.011 | Acc: 99.000% (878/882) 7/7 \n [============================>.] | Loss: 0.158 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 53\nlearning_rate: 6.338253001141158e-06\n [=========================>....] | Loss: 0.008 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.155 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 54\nlearning_rate: 5.0706024009129275e-06\n [=========================>....] | Loss: 0.010 | Acc: 99.000% (881/882) 7/7 \n [============================>.] | Loss: 0.152 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 55\nlearning_rate: 4.056481920730342e-06\n [=========================>....] | Loss: 0.009 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.153 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 56\nlearning_rate: 3.2451855365842735e-06\n [=========================>....] | Loss: 0.008 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.153 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 57\nlearning_rate: 2.5961484292674196e-06\n [=========================>....] | Loss: 0.013 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.154 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 58\nlearning_rate: 2.0769187434139356e-06\n [=========================>....] | Loss: 0.008 | Acc: 99.000% (881/882) 7/7 \n [============================>.] | Loss: 0.155 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 59\nlearning_rate: 1.6615349947311485e-06\n [=========================>....] | Loss: 0.007 | Acc: 99.000% (881/882) 7/7 \n [============================>.] | Loss: 0.155 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 60\nlearning_rate: 1.3292279957849189e-06\n [=========================>....] | Loss: 0.008 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.152 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 61\nlearning_rate: 1.0633823966279351e-06\n [=========================>....] | Loss: 0.010 | Acc: 99.000% (879/882) 7/7 \n [============================>.] | Loss: 0.150 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 62\nlearning_rate: 8.507059173023481e-07\n [=========================>....] | Loss: 0.006 | Acc: 99.000% (881/882) 7/7 \n [============================>.] | Loss: 0.153 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 63\nlearning_rate: 6.805647338418786e-07\n [=========================>....] | Loss: 0.008 | Acc: 99.000% (881/882) 7/7 \n [============================>.] | Loss: 0.154 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 64\nlearning_rate: 5.444517870735029e-07\n [=========================>....] | Loss: 0.008 | Acc: 99.000% (881/882) 7/7 \n [============================>.] | Loss: 0.155 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 65\nlearning_rate: 4.3556142965880233e-07\n [=========================>....] | Loss: 0.009 | Acc: 99.000% (880/882) 7/7 \n [============================>.] 
| Loss: 0.155 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 66\nlearning_rate: 3.484491437270419e-07\n [=========================>....] | Loss: 0.011 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.154 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 67\nlearning_rate: 2.787593149816335e-07\n [=========================>....] | Loss: 0.008 | Acc: 99.000% (881/882) 7/7 \n [============================>.] | Loss: 0.153 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 68\nlearning_rate: 2.2300745198530684e-07\n [=========================>....] | Loss: 0.010 | Acc: 99.000% (878/882) 7/7 \n [============================>.] | Loss: 0.155 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 69\nlearning_rate: 1.784059615882455e-07\n [=========================>....] | Loss: 0.010 | Acc: 99.000% (877/882) 7/7 \n [============================>.] | Loss: 0.156 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 70\nlearning_rate: 1.4272476927059639e-07\n [=========================>....] | Loss: 0.010 | Acc: 99.000% (881/882) 7/7 \n [============================>.] | Loss: 0.155 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 71\nlearning_rate: 1.1417981541647711e-07\n [=========================>....] | Loss: 0.007 | Acc: 99.000% (881/882) 7/7 \n [============================>.] | Loss: 0.154 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 72\nlearning_rate: 9.13438523331817e-08\n [=========================>....] | Loss: 0.007 | Acc: 99.000% (880/882) 7/7 \n [============================>.] | Loss: 0.152 | Acc: 93.000% (93/99) 20/20 \n\nEpoch: 73\nlearning_rate: 7.307508186654536e-08\n"
],
[
"#验证模型正确性\n\"\"\"\nvisualize results for test image\n\"\"\"\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom PIL import Image\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport os\nfrom torch.autograd import Variable\n\nimport transforms as transforms\nfrom skimage import io\nfrom skimage.transform import resize\nfrom models import *\n\ncut_size = 44\n\ntransform_test = transforms.Compose([\n transforms.ToTensor()\n])\n\ndef rgb2gray(rgb):\n return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])\n\nimg = io.imread('images/anger_rgb.png')\n# img = raw_img[:, :, np.newaxis]\n# img = np.concatenate((img, img, img), axis=2)\n# img = Image.fromarray(img)\ninputs = transform_test(img)\n\nclass_names = ['Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral']\n\nnet = VGG('VGG19')\ncheckpoint = torch.load(os.path.join('CK+_VGG19/1/', 'Test_model.t7'))\nnet.load_state_dict(checkpoint['net'])\nnet.cuda()\nnet.eval()\n\nc, h, w = np.shape(inputs)\ninputs = inputs.view(-1, c, h, w)\ninputs = inputs.cuda()\ninputs = Variable(inputs, volatile=True)\noutputs = net(inputs)\nfor i in range(7):\n print('origin %10.3f' % outputs[0][i])\n\nscore = F.softmax(outputs,1)\nmax = score[0][0]\nmaxindex = 0\nfor i in range(7):\n print('%10.3f' % score[0][i])\n if(score[0][i] > max):\n max = score[0][i]\n maxindex = i\nprint(\"The Expression is %s\" %str(class_names[maxindex]))",
"origin 5.899\norigin 0.868\norigin -3.237\norigin -2.626\norigin 2.764\norigin -0.591\norigin 1.633\n 0.938\n 0.006\n 0.000\n 0.000\n 0.041\n 0.001\n 0.013\nThe Expression is Angry\n"
],
[
"#转成onnx模型\n!pip install onnx\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom PIL import Image\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport os\nfrom torch.autograd import Variable\n\nimport transforms as transforms\nfrom skimage import io\nfrom skimage.transform import resize\nfrom models import *\n\ntransform_test = transforms.Compose([\n transforms.ToTensor()\n])\n\nimg = io.imread('images/anger.png')\nimg = img[:, :, np.newaxis]\nimg = np.concatenate((img, img, img), axis=2)\nimg = Image.fromarray(img)\ninputs = transform_test(img)\n\nclass_names = ['Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral']\n\n#导入模型,用训练模式\nnet = VGG('VGG19')\ncheckpoint = torch.load(os.path.join('CK+_VGG19/1/', 'Test_model.t7'))\nnet.load_state_dict(checkpoint['net'])\nnet.cuda()\nnet.train(False)\n\n#模拟input\nc, h, w = np.shape(inputs)\ninputs = inputs.view(-1,c, h, w)\ninputs = inputs.cuda()\ninputs = Variable(inputs, volatile=True)\n#导出模型\ntorch_out = torch.onnx._export(net, # model being run\n inputs, # model input (or a tuple for multiple inputs)\n \"CK+_VGG19_privateTest.onnx\", # where to save the model\n verbose=True,\n input_names=['data'], \n output_names=['outTensor'], \n export_params=True, \n training=False)\n",
"Requirement already satisfied: onnx in /usr/local/lib/python3.6/dist-packages (1.5.0)\r\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from onnx) (1.17.0)\r\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from onnx) (1.12.0)\r\nRequirement already satisfied: typing>=3.6.4 in /usr/local/lib/python3.6/dist-packages (from onnx) (3.7.4)\r\nRequirement already satisfied: protobuf in /usr/local/lib/python3.6/dist-packages (from onnx) (3.9.1)\r\nRequirement already satisfied: typing-extensions>=3.6.2.1 in /usr/local/lib/python3.6/dist-packages (from onnx) (3.7.4.1)\r\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf->onnx) (41.0.1)\r\n"
],
[
"# 验证onnx模型\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom PIL import Image\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport os\nfrom torch.autograd import Variable\nimport onnx\n\nimport transforms as transforms\nfrom skimage import io\nfrom skimage.transform import resize\nfrom models import *\n\ntransform_test = transforms.Compose([\n transforms.ToTensor()\n])\n\nclass_names = ['Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral']\n\n\n# Load the ONNX model\nmodel = onnx.load(\"CK+_VGG19_privateTest.onnx\")\n\n# Check that the IR is well formed\nonnx.checker.check_model(model)\n\nonnx.helper.printable_graph(model.graph)\n\n\nimport caffe2.python.onnx.backend as backend\nimport numpy as np\n\nrep = backend.prepare(model) # or \"CPU\"\n# For the Caffe2 backend:\n# rep.predict_net is the Caffe2 protobuf for the network\n# rep.workspace is the Caffe2 workspace for the network\n# (see the class caffe2.python.onnx.backend.Workspace)\n\n#input\n\nimg = io.imread('images/anger_rgb.png')\n# img = img[:, :, np.newaxis]\n# img = np.concatenate((img, img, img), axis=2)\n# img = Image.fromarray(img)\ninputs = transform_test(img)\n\nc, h, w = np.shape(inputs)\ninputs = inputs.view(-1,c, h, w)\n\noutputs = rep.run(inputs.numpy().astype(np.float32))\n# To run networks with more than one input, pass a tuple\n# rather than a single numpy ndarray.\nprint(outputs[0])\n\ntorch_data = torch.from_numpy(outputs[0])\nscore = F.softmax(torch_data,1)\nmax = score[0][0]\nmaxindex = 0\nfor i in range(7):\n print('%10.3f' % score[0][i])\n if(score[0][i] > max):\n max = score[0][i]\n maxindex = i\n\nprint(\"The Expression is %s\" %str(class_names[maxindex]))",
"[[ 5.898687 0.8682228 -3.2367802 -2.6263356 2.7641838 -0.59135896\n 1.6326844 ]]\n 0.938\n 0.006\n 0.000\n 0.000\n 0.041\n 0.001\n 0.013\nThe Expression is Angry\n"
],
[
"# onnx模型简单化\n!pip install onnx-simplifier\n!python -m onnxsim \"CK+_VGG19_privateTest.onnx\" \"CK+_VGG19_privateTest_sim.onnx\"\n\n#验证模型\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom PIL import Image\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport os\nfrom torch.autograd import Variable\nimport onnx\n\nimport transforms as transforms\nfrom skimage import io\nfrom skimage.transform import resize\nfrom models import *\n\ntransform_test = transforms.Compose([\n transforms.ToTensor()\n])\n\nclass_names = ['Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral']\n\n\n# Load the ONNX model\nmodel = onnx.load(\"CK+_VGG19_privateTest_sim.onnx\")\n\n# Check that the IR is well formed\nonnx.checker.check_model(model)\n\nonnx.helper.printable_graph(model.graph)\n\n\nimport caffe2.python.onnx.backend as backend\nimport numpy as np\n\nrep = backend.prepare(model) # or \"CPU\"\n# For the Caffe2 backend:\n# rep.predict_net is the Caffe2 protobuf for the network\n# rep.workspace is the Caffe2 workspace for the network\n# (see the class caffe2.python.onnx.backend.Workspace)\n\n#input\n\nimg = io.imread('images/anger.png')\nimg = img[:, :, np.newaxis]\nimg = np.concatenate((img, img, img), axis=2)\nimg = Image.fromarray(img)\ninputs = transform_test(img)\n\nc, h, w = np.shape(inputs)\ninputs = inputs.view(-1,c, h, w)\n\noutputs = rep.run(inputs.numpy().astype(np.float32))\n# To run networks with more than one input, pass a tuple\n# rather than a single numpy ndarray.\nprint(outputs[0])\n\ntorch_data = torch.from_numpy(outputs[0])\nscore = F.softmax(torch_data,1)\nmax = score[0][0]\nmaxindex = 0\nfor i in range(7):\n print('%10.3f' % score[0][i])\n if(score[0][i] > max):\n max = score[0][i]\n maxindex = i\n\nprint(\"The Expression is %s\" %str(class_names[maxindex]))",
"Requirement already satisfied: onnx-simplifier in /usr/local/lib/python3.6/dist-packages (0.2.2)\nRequirement already satisfied: onnxruntime>=0.3.0 in /usr/local/lib/python3.6/dist-packages (from onnx-simplifier) (1.0.0)\nRequirement already satisfied: onnx in /usr/local/lib/python3.6/dist-packages (from onnx-simplifier) (1.5.0)\nRequirement already satisfied: protobuf>=3.7.0 in /usr/local/lib/python3.6/dist-packages (from onnx-simplifier) (3.9.1)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from onnx->onnx-simplifier) (1.12.0)\nRequirement already satisfied: typing-extensions>=3.6.2.1 in /usr/local/lib/python3.6/dist-packages (from onnx->onnx-simplifier) (3.7.4.1)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from onnx->onnx-simplifier) (1.17.0)\nRequirement already satisfied: typing>=3.6.4 in /usr/local/lib/python3.6/dist-packages (from onnx->onnx-simplifier) (3.7.4)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.7.0->onnx-simplifier) (41.0.1)\nSimplifying...\nOk!\n[[ 5.898687 0.8682228 -3.2367797 -2.6263351 2.7641833 -0.591359\n 1.6326842]]\n 0.938\n 0.006\n 0.000\n 0.000\n 0.041\n 0.001\n 0.013\nThe Expression is Angry\n"
],
[
"!pip install -U onnx-coreml\n#转成corml模型\nimport onnx;\nfrom onnx_coreml import convert\n\nonnx_model = onnx.load(\"CK+_VGG19_privateTest.onnx\")\ncml_model= convert(onnx_model,image_input_names='data',target_ios='12')\ncml_model.save(\"CK+_VGG19_privateTest_12.mlmodel\")",
"Requirement already up-to-date: onnx-coreml in /usr/local/lib/python3.6/dist-packages (1.0)\nRequirement already satisfied, skipping upgrade: click in /usr/local/lib/python3.6/dist-packages (from onnx-coreml) (7.0)\nRequirement already satisfied, skipping upgrade: onnx==1.5.0 in /usr/local/lib/python3.6/dist-packages (from onnx-coreml) (1.5.0)\nRequirement already satisfied, skipping upgrade: typing-extensions>=3.6.2.1 in /usr/local/lib/python3.6/dist-packages (from onnx-coreml) (3.7.4.1)\nRequirement already satisfied, skipping upgrade: numpy in /usr/local/lib/python3.6/dist-packages (from onnx-coreml) (1.17.0)\nRequirement already satisfied, skipping upgrade: typing>=3.6.4 in /usr/local/lib/python3.6/dist-packages (from onnx-coreml) (3.7.4)\nRequirement already satisfied, skipping upgrade: coremltools==3.0 in /usr/local/lib/python3.6/dist-packages (from onnx-coreml) (3.0)\nRequirement already satisfied, skipping upgrade: protobuf in /usr/local/lib/python3.6/dist-packages (from onnx==1.5.0->onnx-coreml) (3.9.1)\nRequirement already satisfied, skipping upgrade: six in /usr/local/lib/python3.6/dist-packages (from onnx==1.5.0->onnx-coreml) (1.12.0)\nRequirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf->onnx==1.5.0->onnx-coreml) (41.0.1)\n1/57: Converting Node Type Conv\n2/57: Converting Node Type BatchNormalization\n3/57: Converting Node Type Relu\n4/57: Converting Node Type Conv\n5/57: Converting Node Type BatchNormalization\n6/57: Converting Node Type Relu\n7/57: Converting Node Type MaxPool\n8/57: Converting Node Type Conv\n9/57: Converting Node Type BatchNormalization\n10/57: Converting Node Type Relu\n11/57: Converting Node Type Conv\n12/57: Converting Node Type BatchNormalization\n13/57: Converting Node Type Relu\n14/57: Converting Node Type MaxPool\n15/57: Converting Node Type Conv\n16/57: Converting Node Type BatchNormalization\n17/57: Converting Node Type Relu\n18/57: Converting Node Type Conv\n19/57: Converting Node Type BatchNormalization\n20/57: Converting Node Type Relu\n21/57: Converting Node Type Conv\n22/57: Converting Node Type BatchNormalization\n23/57: Converting Node Type Relu\n24/57: Converting Node Type Conv\n25/57: Converting Node Type BatchNormalization\n26/57: Converting Node Type Relu\n27/57: Converting Node Type MaxPool\n28/57: Converting Node Type Conv\n29/57: Converting Node Type BatchNormalization\n30/57: Converting Node Type Relu\n31/57: Converting Node Type Conv\n32/57: Converting Node Type BatchNormalization\n33/57: Converting Node Type Relu\n34/57: Converting Node Type Conv\n35/57: Converting Node Type BatchNormalization\n36/57: Converting Node Type Relu\n37/57: Converting Node Type Conv\n38/57: Converting Node Type BatchNormalization\n39/57: Converting Node Type Relu\n40/57: Converting Node Type MaxPool\n41/57: Converting Node Type Conv\n42/57: Converting Node Type BatchNormalization\n43/57: Converting Node Type Relu\n44/57: Converting Node Type Conv\n45/57: Converting Node Type BatchNormalization\n46/57: Converting Node Type Relu\n47/57: Converting Node Type Conv\n48/57: Converting Node Type BatchNormalization\n49/57: Converting Node Type Relu\n50/57: Converting Node Type Conv\n51/57: Converting Node Type BatchNormalization\n52/57: Converting Node Type Relu\n53/57: Converting Node Type MaxPool\n54/57: Converting Node Type Pad\n55/57: Converting Node Type AveragePool\n56/57: Converting Node Type Reshape\n57/57: Converting Node Type Gemm\nTranslation to CoreML spec completed. 
Now compiling the CoreML model.\nModel Compilation done.\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7f0be0018d6db9b970b36724ae08d5180662409 | 16,986 | ipynb | Jupyter Notebook | S02 - Data Wrangling/HCKT02 - Data Wrangling/instructor_solution/5.putting_all_together.ipynb | jtiagosg/batch3-students | 5eb94bee46625881e9470da2b137aaa0f6cf7912 | [
"MIT"
] | 12 | 2019-07-06T09:06:17.000Z | 2020-11-13T00:58:42.000Z | S02 - Data Wrangling/HCKT02 - Data Wrangling/instructor_solution/5.putting_all_together.ipynb | Daniel3424/batch3-students | 10c46963e51ce974837096ad06a8c134ed4bcd8a | [
"MIT"
] | 29 | 2019-07-01T14:19:49.000Z | 2021-03-24T13:29:50.000Z | S02 - Data Wrangling/HCKT02 - Data Wrangling/instructor_solution/5.putting_all_together.ipynb | Daniel3424/batch3-students | 10c46963e51ce974837096ad06a8c134ed4bcd8a | [
"MIT"
] | 36 | 2019-07-05T15:53:35.000Z | 2021-07-04T04:18:02.000Z | 29.185567 | 195 | 0.408807 | [
[
[
"Now we have to put together the different datasets.",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"files_df = pd.read_pickle(\"data/clean/files_df.pkl\")",
"_____no_output_____"
]
],
[
[
"We remove from the dataframe those rows whose origin is the website (these rows have `WEBSITE` in all the values) and those that come from the API (these rows have `API` in all the values).",
"_____no_output_____"
]
],
[
[
"files_df.shape",
"_____no_output_____"
],
[
"website_ids = files_df[files_df.tierafterorder.isin(['WEBSITE'])].index\napi_ids = files_df[files_df.tierafterorder.isin(['API'])].index",
"_____no_output_____"
],
[
"files_df = files_df[-files_df.tierafterorder.isin(['WEBSITE', 'API'])]",
"_____no_output_____"
],
[
"files_df.shape",
"_____no_output_____"
],
[
"files_df.tierafterorder.value_counts()",
"_____no_output_____"
],
[
"scraped_df = pd.read_pickle('data/clean/scraped.pkl')",
"_____no_output_____"
],
[
"scraped_df.shape",
"_____no_output_____"
],
[
"scraped_df = scraped_df.loc[website_ids]",
"_____no_output_____"
],
[
"scraped_df.shape",
"_____no_output_____"
],
[
"scraped_df.head()",
"_____no_output_____"
],
[
"targets_df = pd.read_pickle('data/clean/targets.pkl')\nstoreid_df = pd.read_pickle('data/clean/storeids.pkl')",
"_____no_output_____"
],
[
"api_df = pd.read_pickle('data/clean/api_df.pickle')",
"_____no_output_____"
]
],
[
[
"We concat the dataframes that have different ids.",
"_____no_output_____"
]
],
[
[
"train_df = pd.concat(\n [\n pd.concat([files_df, api_df], sort=True),\n scraped_df\n ],\n sort=True\n)",
"_____no_output_____"
],
[
"train_df.shape",
"_____no_output_____"
]
],
[
[
"Now we join the files that share an index.",
"_____no_output_____"
]
],
[
[
"train_df = train_df.drop(columns=['returned', 'storeid']).join(targets_df).join(storeid_df)",
"_____no_output_____"
],
[
"train_df.shape",
"_____no_output_____"
],
[
"train_df.to_pickle('data/clean/train_df_merged.pkl')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7f0d0c891ff05019267941170057cd2186e7e1d | 205,658 | ipynb | Jupyter Notebook | notebook/Recommendations_with_IBM.ipynb | dalpengholic/Udacity_Recommendations_with_IBM | 8c620b733bf91b7b97b607373d0e6ff86934d03d | [
"MIT"
] | null | null | null | notebook/Recommendations_with_IBM.ipynb | dalpengholic/Udacity_Recommendations_with_IBM | 8c620b733bf91b7b97b607373d0e6ff86934d03d | [
"MIT"
] | null | null | null | notebook/Recommendations_with_IBM.ipynb | dalpengholic/Udacity_Recommendations_with_IBM | 8c620b733bf91b7b97b607373d0e6ff86934d03d | [
"MIT"
] | null | null | null | 88.265236 | 52,588 | 0.779517 | [
[
[
"# Recommendations with IBM\n\nIn this notebook, you will be putting your recommendation skills to use on real data from the IBM Watson Studio platform. \n\n\nYou may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way assure that your code passes the project [RUBRIC](https://review.udacity.com/#!/rubrics/2322/view). **Please save regularly.**\n\nBy following the table of contents, you will build out a number of different methods for making recommendations that can be used for different situations. \n\n\n## Table of Contents\n\nI. [Exploratory Data Analysis](#Exploratory-Data-Analysis)<br>\nII. [Rank Based Recommendations](#Rank)<br>\nIII. [User-User Based Collaborative Filtering](#User-User)<br>\nIV. [Content Based Recommendations (EXTRA - NOT REQUIRED)](#Content-Recs)<br>\nV. [Matrix Factorization](#Matrix-Fact)<br>\nVI. [Extras & Concluding](#conclusions)\n\nAt the end of the notebook, you will find directions for how to submit your work. Let's get started by importing the necessary libraries and reading in the data.",
"_____no_output_____"
]
],
[
[
"%autosave 180\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport project_tests as t\nimport pickle\nimport seaborn as sns\n\nimport re\nimport nltk\nfrom nltk.corpus import stopwords\nfrom nltk.stem.wordnet import WordNetLemmatizer\nfrom nltk.tokenize import word_tokenize\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics.pairwise import cosine_similarity\nfrom sklearn.metrics.pairwise import linear_kernel\n\nnltk.download('punkt')\nnltk.download('stopwords')\nnltk.download('wordnet')\n\n%matplotlib inline\n\ndf = pd.read_csv('data/user-item-interactions.csv')\ndf_content = pd.read_csv('data/articles_community.csv')\ndel df['Unnamed: 0']\ndel df_content['Unnamed: 0']\n\n# Show df to get an idea of the data\ndf.head()",
"_____no_output_____"
],
[
"# Show df_content to get an idea of the data\ndf_content.head()",
"_____no_output_____"
]
],
[
[
"### <a class=\"anchor\" id=\"Exploratory-Data-Analysis\">Part I : Exploratory Data Analysis</a>\n\nUse the dictionary and cells below to provide some insight into the descriptive statistics of the data.\n\n`1.` What is the distribution of how many articles a user interacts with in the dataset? Provide a visual and descriptive statistics to assist with giving a look at the number of times each user interacts with an article. ",
"_____no_output_____"
]
],
[
[
"# make a groupby instance and count how many articles were read by each user\nemail_grouped_df = df.groupby('email')\nnum_article_email = email_grouped_df['article_id'].count()\nprint(\"Mean # article :\",num_article_email.mean())\nprint(\"Quantile 0.25 , 0.5, 0.75: \" , num_article_email.quantile(0.25), num_article_email.quantile(0.5), num_article_email.quantile(0.75))\n# Draw histogram and boxplot using seaborn\nf, axes = plt.subplots(1, 2, figsize=(16,5))\nf.tight_layout()\nsns.set(style=\"white\", palette=\"muted\", color_codes=True)\nsns.set_context(\"poster\")\nsns.set_style(\"darkgrid\")\nsns.distplot(num_article_email, rug=False, kde=False, norm_hist=False, color='g', ax=axes[0])\nsns.boxplot(num_article_email, ax=axes[1])",
"Mean # article : 8.930846930846931\nQuantile 0.25 , 0.5, 0.75: 1.0 3.0 9.0\n"
],
[
"# Fill in the median and maximum number of user_article interactios below\nmedian_val = num_article_email.median()\nmax_views_by_user = num_article_email.max()\nprint(\"50% of individuals interact with {} number of articles or fewer.\".format(median_val)) \nprint(\"The maximum number of user-article interactions by any 1 user is{}:\".format(max_views_by_user))\n# 50% of individuals interact with ____ number of articles or fewer.\n# The maximum number of user-article interactions by any 1 user is ______.",
"50% of individuals interact with 3.0 number of articles or fewer.\nThe maximum number of user-article interactions by any 1 user is364:\n"
]
],
[
[
"`2.` Explore and remove duplicate articles from the **df_content** dataframe. ",
"_____no_output_____"
]
],
[
[
"# Find and explore duplicate articles\ndf_content.head()\ncheck_dupl_df_1 = df_content[df_content.duplicated(['article_id'])]\ncheck_dupl_df_1",
"_____no_output_____"
],
[
"# Remove any rows that have the same article_id - only keep the first\ndf_content.drop_duplicates(subset='article_id', keep='first', inplace=True)\ncheck_dupl_df_1 = df_content[df_content.duplicated(['article_id'])]\ncheck_dupl_df_1",
"_____no_output_____"
]
],
[
[
"`3.` Use the cells below to find:\n\n**a.** The number of unique articles that have an interaction with a user. \n**b.** The number of unique articles in the dataset (whether they have any interactions or not).<br>\n**c.** The number of unique users in the dataset. (excluding null values) <br>\n**d.** The number of user-article interactions in the dataset.",
"_____no_output_____"
]
],
[
[
"unique_articles = len(df.article_id.unique()) # The number of unique articles that have at least one interaction\ntotal_articles = len(df_content.article_id.unique()) # The number of unique articles on the IBM platform\ndf_email_na_dropped = df.dropna(subset=['email'])\nunique_users = len(df_email_na_dropped.email.unique()) # The number of unique users\nuser_article_interactions = df.shape[0] # The number of user-article interactions\n",
"_____no_output_____"
]
],
[
[
"`4.` Use the cells below to find the most viewed **article_id**, as well as how often it was viewed. After talking to the company leaders, the `email_mapper` function was deemed a reasonable way to map users to ids. There were a small number of null values, and it was found that all of these null values likely belonged to a single user (which is how they are stored using the function below).",
"_____no_output_____"
]
],
[
[
"# df_content.head(3)\narticle_id_grouped_df = df.groupby('article_id')\nprint(article_id_grouped_df['email'].count().sort_values(ascending=False).index[0])\nprint(article_id_grouped_df['email'].count().sort_values(ascending=False).values[0])",
"1429.0\n937\n"
],
[
"most_viewed_article_id = '1429.0'# The most viewed article in the dataset as a string with one value following the decimal \nmax_views = 937 # The most viewed article in the dataset was viewed how many times?",
"_____no_output_____"
],
[
"## No need to change the code here - this will be helpful for later parts of the notebook\n# Run this cell to map the user email to a user_id column and remove the email column\n\ndef email_mapper():\n coded_dict = dict()\n cter = 1\n email_encoded = []\n \n for val in df['email']:\n if val not in coded_dict:\n coded_dict[val] = cter\n cter+=1\n \n email_encoded.append(coded_dict[val])\n return email_encoded\n\nemail_encoded = email_mapper()\ndel df['email']\ndf['user_id'] = email_encoded\n\n# show header\ndf.head()",
"_____no_output_____"
],
[
"## If you stored all your results in the variable names above, \n## you shouldn't need to change anything in this cell\n\nsol_1_dict = {\n '`50% of individuals have _____ or fewer interactions.`': median_val,\n '`The total number of user-article interactions in the dataset is ______.`': user_article_interactions,\n '`The maximum number of user-article interactions by any 1 user is ______.`': max_views_by_user,\n '`The most viewed article in the dataset was viewed _____ times.`': max_views,\n '`The article_id of the most viewed article is ______.`': most_viewed_article_id,\n '`The number of unique articles that have at least 1 rating ______.`': unique_articles,\n '`The number of unique users in the dataset is ______`': unique_users,\n '`The number of unique articles on the IBM platform`': total_articles\n}\n\n# Test your dictionary against the solution\nt.sol_1_test(sol_1_dict)",
"It looks like you have everything right here! Nice job!\n"
]
],
[
[
"### <a class=\"anchor\" id=\"Rank\">Part II: Rank-Based Recommendations</a>\n\nUnlike in the earlier lessons, we don't actually have ratings for whether a user liked an article or not. We only know that a user has interacted with an article. In these cases, the popularity of an article can really only be based on how often an article was interacted with.\n\n`1.` Fill in the function below to return the **n** top articles ordered with most interactions as the top. Test your function using the tests below.",
"_____no_output_____"
]
],
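A toy illustration of rank-based popularity on a made-up interaction log (not the project data): with no ratings available, the top-n articles are simply the n most frequently viewed titles.

```python
import pandas as pd

# Tiny made-up interaction log: each row is one user viewing one article
toy = pd.DataFrame({'user_id': [1, 1, 2, 3, 3, 4],
                    'title':   ['a', 'b', 'a', 'a', 'b', 'c']})

# "Popularity" here can only mean interaction count, so the top-n articles are
# the n most frequent titles in the log.
top_2 = toy['title'].value_counts().index[:2].tolist()
print(top_2)   # ['a', 'b'] -- 3 views and 2 views respectively
```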
[
[
"def get_top_articles(n, df=df):\n '''\n INPUT:\n n - (int) the number of top articles to return\n df - (pandas dataframe) df as defined at the top of the notebook \n \n OUTPUT:\n top_articles - (list) A list of the top 'n' article titles \n \n '''\n article_id_grouped_df = df.groupby(['title'])\n top_articles = article_id_grouped_df['user_id'].count().sort_values(ascending=False).iloc[:n].index.tolist()\n \n return top_articles # Return the top article titles from df (not df_content)\n\ndef get_top_article_ids(n, df=df):\n '''\n INPUT:\n n - (int) the number of top articles to return\n df - (pandas dataframe) df as defined at the top of the notebook \n \n OUTPUT:\n top_articles - (list) A list of the top 'n' article titles \n \n '''\n article_id_grouped_df = df.groupby(['article_id'])\n top_articles_ids = article_id_grouped_df['user_id'].count().sort_values(ascending=False).iloc[:n].index.tolist()\n\n return top_articles_ids # Return the top article ids",
"_____no_output_____"
],
[
"print(get_top_articles(10))\nprint(get_top_article_ids(10))",
"['use deep learning for image classification', 'insights from new york car accident reports', 'visualize car data with brunel', 'use xgboost, scikit-learn & ibm watson machine learning apis', 'predicting churn with the spss random tree algorithm', 'healthcare python streaming application demo', 'finding optimal locations of new store using decision optimization', 'apache spark lab, part 1: basic concepts', 'analyze energy consumption in buildings', 'gosales transactions for logistic regression model']\n[1429.0, 1330.0, 1431.0, 1427.0, 1364.0, 1314.0, 1293.0, 1170.0, 1162.0, 1304.0]\n"
],
[
"# Test your function by returning the top 5, 10, and 20 articles\ntop_5 = get_top_articles(5)\ntop_10 = get_top_articles(10)\ntop_20 = get_top_articles(20)\n\n# Test each of your three lists from above\nt.sol_2_test(get_top_articles)",
"Your top_5 looks like the solution list! Nice job.\nYour top_10 looks like the solution list! Nice job.\nYour top_20 looks like the solution list! Nice job.\n"
]
],
[
[
"### <a class=\"anchor\" id=\"User-User\">Part III: User-User Based Collaborative Filtering</a>\n\n\n`1.` Use the function below to reformat the **df** dataframe to be shaped with users as the rows and articles as the columns. \n\n* Each **user** should only appear in each **row** once.\n\n\n* Each **article** should only show up in one **column**. \n\n\n* **If a user has interacted with an article, then place a 1 where the user-row meets for that article-column**. It does not matter how many times a user has interacted with the article, all entries where a user has interacted with an article should be a 1. \n\n\n* **If a user has not interacted with an item, then place a zero where the user-row meets for that article-column**. \n\nUse the tests to make sure the basic structure of your matrix matches what is expected by the solution.",
"_____no_output_____"
]
],
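A toy sketch of the matrix described above, built from a made-up interaction log. `pd.crosstab` plus clipping is just one of several equivalent routes (the notebook's own implementation below uses `groupby`/`unstack`); it is shown here because it makes the "repeat views still count as a single 1" rule easy to see.

```python
import pandas as pd

# Made-up interactions; user 1 views article 10.0 twice on purpose
toy = pd.DataFrame({'user_id':    [1, 1, 1, 2, 3],
                    'article_id': [10.0, 10.0, 20.0, 20.0, 30.0]})

# crosstab counts interactions per (user, article); clipping at 1 gives the
# required binary matrix -- repeat views still produce a single 1.
user_item_toy = pd.crosstab(toy['user_id'], toy['article_id']).clip(upper=1)
print(user_item_toy)
```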
[
[
"# create the user-article matrix with 1's and 0's\n\ndef create_user_item_matrix(df):\n '''\n INPUT:\n df - pandas dataframe with article_id, title, user_id columns\n \n OUTPUT:\n user_item - user item matrix \n \n Description:\n Return a matrix with user ids as rows and article ids on the columns with 1 values where a user interacted with \n an article and a 0 otherwise\n '''\n # Fill in the function here\n user_item = df.groupby('user_id')['article_id'].value_counts().unstack()\n user_item[user_item.isna() == False] = 1\n \n return user_item # return the user_item matrix \n\nuser_item = create_user_item_matrix(df)",
"_____no_output_____"
],
[
"## Tests: You should just need to run this cell. Don't change the code.\nassert user_item.shape[0] == 5149, \"Oops! The number of users in the user-article matrix doesn't look right.\"\nassert user_item.shape[1] == 714, \"Oops! The number of articles in the user-article matrix doesn't look right.\"\nassert user_item.sum(axis=1)[1] == 36, \"Oops! The number of articles seen by user 1 doesn't look right.\"\nprint(\"You have passed our quick tests! Please proceed!\")",
"You have passed our quick tests! Please proceed!\n"
]
],
[
[
"`2.` Complete the function below which should take a user_id and provide an ordered list of the most similar users to that user (from most similar to least similar). The returned result should not contain the provided user_id, as we know that each user is similar to him/herself. Because the results for each user here are binary, it (perhaps) makes sense to compute similarity as the dot product of two users. \n\nUse the tests to test your function.",
"_____no_output_____"
]
],
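A small numeric illustration of the dot-product similarity mentioned above, using a made-up 3x4 binary matrix: for 0/1 rows, the dot product of two users simply counts the articles they share.

```python
import numpy as np

# Three users x four articles, binary (made-up values)
user_item_toy = np.array([[1, 1, 0, 1],
                          [1, 1, 0, 0],
                          [0, 0, 1, 1]])

# With 0/1 rows, the dot product of two users counts the articles they share,
# so it works as a simple similarity score.
sims = user_item_toy @ user_item_toy.T
print(sims)
# Row 0 is [3, 2, 1]: user 0 shares 2 articles with user 1 but only 1 with user 2,
# so user 1 is the closer neighbour (the diagonal, self-similarity, is ignored).
```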
[
[
"def find_similar_users(user_id, user_item=user_item):\n '''\n INPUT:\n user_id - (int) a user_id\n user_item - (pandas dataframe) matrix of users by articles: \n 1's when a user has interacted with an article, 0 otherwise\n \n OUTPUT:\n similar_users - (list) an ordered list where the closest users (largest dot product users)\n are listed first\n \n Description:\n Computes the similarity of every pair of users based on the dot product\n Returns an ordered\n \n '''\n # compute similarity of each user to the provided user\n user_item_tmp = user_item.copy()\n user_item_tmp[user_item_tmp.isna() == True] = 0 # 1. Make Nan to 0\n row = user_item_tmp.loc[user_id] # 2. Select a row\n result_dot = row@user_item_tmp.T # 3. Dot product of each of row of the matrix \n result_dot.drop(labels = [user_id], inplace=True) # remove the own user's id\n most_similar_users = result_dot.sort_values(ascending=False).index.tolist() # sort by similarity # create list of just the ids\n \n return most_similar_users # return a list of the users in order from most to least similar\n ",
"_____no_output_____"
],
[
"# Do a spot check of your function\nprint(\"The 10 most similar users to user 1 are: {}\".format(find_similar_users(1)[:10]))\nprint(\"The 5 most similar users to user 3933 are: {}\".format(find_similar_users(3933)[:5]))\nprint(\"The 3 most similar users to user 46 are: {}\".format(find_similar_users(46)[:3]))\n",
"The 10 most similar users to user 1 are: [3933, 23, 3782, 203, 4459, 3870, 131, 4201, 46, 3697]\nThe 5 most similar users to user 3933 are: [1, 3782, 23, 203, 4459]\nThe 3 most similar users to user 46 are: [4201, 3782, 23]\n"
]
],
[
[
"`3.` Now that you have a function that provides the most similar users to each user, you will want to use these users to find articles you can recommend. Complete the functions below to return the articles you would recommend to each user. ",
"_____no_output_____"
]
],
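The core step hinted at above, shown on made-up article-id sets: candidate recommendations are whatever a close neighbour has seen minus what the target user has already seen (the notebook's own functions below use exactly this set difference).

```python
# Made-up article ids, formatted as strings the way the notebook stores them
seen_by_user      = {'1429.0', '1314.0'}
seen_by_neighbour = {'1429.0', '1162.0', '1330.0'}

candidate_recs = seen_by_neighbour - seen_by_user
print(candidate_recs)   # {'1162.0', '1330.0'} -- only articles new to the user
```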
[
[
"def get_article_names(article_ids, df=df):\n '''\n INPUT:\n article_ids - (list) a list of article ids\n df - (pandas dataframe) df as defined at the top of the notebook\n \n OUTPUT:\n article_names - (list) a list of article names associated with the list of article ids \n (this is identified by the title column)\n '''\n # Your code here\n article_names = []\n article_ids = list(map(float, article_ids))\n for i in article_ids:\n try:\n title = df[df['article_id'] == i]['title'].unique()[0]\n except IndexError:\n title =\"None\"\n \n article_names.append(title)\n return article_names # Return the article names associated with list of article ids\n\n# try:\n# myVar\n# except IndexError:\n# myVar = \"None\"\n\ndef get_user_articles(user_id, user_item=user_item):\n '''\n INPUT:\n user_id - (int) a user id\n user_item - (pandas dataframe) matrix of users by articles: \n 1's when a user has interacted with an article, 0 otherwise\n \n OUTPUT:\n article_ids - (list) a list of the article ids seen by the user\n article_names - (list) a list of article names associated with the list of article ids \n (this is identified by the doc_full_name column in df_content)\n \n Description:\n Provides a list of the article_ids and article titles that have been seen by a user\n '''\n # Your code here\n article_ids = user_item.loc[user_id][user_item.loc[user_id] ==1].index.tolist()\n article_ids = list(map(str, article_ids))\n article_names = get_article_names(article_ids, df=df)\n \n return article_ids, article_names # return the ids and names\n\n\ndef user_user_recs(user_id, m=10):\n '''\n INPUT:\n user_id - (int) a user id\n m - (int) the number of recommendations you want for the user\n \n OUTPUT:\n recs - (list) a list of recommendations for the user\n \n Description:\n Loops through the users based on closeness to the input user_id\n For each user - finds articles the user hasn't seen before and provides them as recs\n Does this until m recommendations are found\n \n Notes:\n Users who are the same closeness are chosen arbitrarily as the 'next' user\n \n For the user where the number of recommended articles starts below m \n and ends exceeding m, the last items are chosen arbitrarily\n \n '''\n recs = []\n counter = 0\n # Get seen article ids and names from selected user id\n article_ids, article_names = get_user_articles(user_id)\n # Make set to find unseen articles\n seen_ids_set = set(article_ids)\n # Find five similar users of the selected user\n most_similar_users = find_similar_users(user_id)[0:5]\n \n # Make recommendation list\n for sim_user in most_similar_users:\n if counter < m: \n # Get seen article ids and names from similar users\n sim_article_ids, sim_article_names = get_user_articles(sim_user) \n # Make dict (key: article_ids, value:article_names)\n sim_user_dict = dict(zip(sim_article_ids, sim_article_names)) \n # Make set to find unseen articles\n sim_seen_ids_set = set(sim_article_ids)\n # Create set of unseen articles_ids\n unseen_ids_set = sim_seen_ids_set.difference(seen_ids_set)\n\n for i in unseen_ids_set: \n if counter < m: \n recs.append(i)\n counter += 1\n \n return recs # return your recommendations for this user_id ",
"_____no_output_____"
],
[
"get_article_names(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])",
"_____no_output_____"
],
[
"# Check Results\nget_article_names(user_user_recs(1, 10)) # Return 10 recommendations for user 1",
"_____no_output_____"
],
[
"# Test your functions here - No need to change this code - just run this cell\nassert set(get_article_names(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis']), \"Oops! Your the get_article_names function doesn't work quite how we expect.\"\nassert set(get_article_names(['1320.0', '232.0', '844.0'])) == set(['housing (2015): united states demographic measures','self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook']), \"Oops! Your the get_article_names function doesn't work quite how we expect.\"\nassert set(get_user_articles(20)[0]) == set(['1320.0', '232.0', '844.0'])\nassert set(get_user_articles(20)[1]) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook'])\nassert set(get_user_articles(2)[0]) == set(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])\nassert set(get_user_articles(2)[1]) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis'])\nprint(\"If this is all you see, you passed all of our tests! Nice job!\")",
"If this is all you see, you passed all of our tests! Nice job!\n"
]
],
[
[
"`4.` Now we are going to improve the consistency of the **user_user_recs** function from above. \n\n* Instead of arbitrarily choosing when we obtain users who are all the same closeness to a given user - choose the users that have the most total article interactions before choosing those with fewer article interactions.\n\n\n* Instead of arbitrarily choosing articles from the user where the number of recommended articles starts below m and ends exceeding m, choose articles with the articles with the most total interactions before choosing those with fewer total interactions. This ranking should be what would be obtained from the **top_articles** function you wrote earlier.",
"_____no_output_____"
]
],
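A small illustration of the tie-breaking described above, on a made-up neighbours table: sorting by similarity first and total interactions second puts the more active of two equally similar users ahead.

```python
import pandas as pd

# Made-up neighbour table: two users tie on similarity, so the one with more
# total interactions should be consulted first, as the exercise requires.
neighbours = pd.DataFrame({'neighbor_id':      [23, 3782, 203],
                           'similarity':       [35, 35, 30],
                           'num_interactions': [120, 364, 50]})

ordered = neighbours.sort_values(by=['similarity', 'num_interactions'],
                                 ascending=False)
print(ordered.neighbor_id.tolist())   # [3782, 23, 203]
```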
[
[
"def get_top_sorted_users(user_id, df=df, user_item=user_item):\n '''\n INPUT:\n user_id - (int)\n df - (pandas dataframe) df as defined at the top of the notebook \n user_item - (pandas dataframe) matrix of users by articles: \n 1's when a user has interacted with an article, 0 otherwise\n \n \n OUTPUT:\n neighbors_df - (pandas dataframe) a dataframe with:\n neighbor_id - is a neighbor user_id\n similarity - measure of the similarity of each user to the provided user_id\n num_interactions - the number of articles viewed by the user - if a u\n \n Other Details - sort the neighbors_df by the similarity and then by number of interactions where \n highest of each is higher in the dataframe\n \n '''\n # Make neighbor_id column\n df_user_id_grouped =df.groupby(\"user_id\")\n df_user_id_grouped['article_id'].count().sort_values(ascending=False)\n most_similar_users = find_similar_users(user_id)[0:10]\n neighbors_df = pd.DataFrame()\n neighbors_df['neighbor_id'] = most_similar_users\n \n # make similarity column\n user_item_tmp = user_item.copy()\n row = user_item_tmp.loc[user_id] # Select a row\n result_dot = row@user_item_tmp.T # Dot product of each of row of the matrix \n result_dot.drop(labels = [user_id], inplace=True) # remove the own user's id\n similarity = result_dot.sort_values(ascending=False).values.tolist()[0:10] \n neighbors_df['similarity'] = similarity\n \n # Make num_interactions column\n num_interactions = []\n for i in neighbors_df['neighbor_id']:\n counted_interaction = df_user_id_grouped['article_id'].count().loc[i]\n num_interactions.append(counted_interaction)\n neighbors_df['num_interactions'] = num_interactions\n neighbors_df = neighbors_df.sort_values(by=['similarity', 'num_interactions'], ascending=False)\n \n return neighbors_df # Return the dataframe specified in the doc_string\n\n\ndef user_user_recs_part2(user_id, m=10):\n '''\n INPUT:\n user_id - (int) a user id\n m - (int) the number of recommendations you want for the user\n \n OUTPUT:\n recs - (list) a list of recommendations for the user by article id\n rec_names - (list) a list of recommendations for the user by article title\n \n Description:\n Loops through the users based on closeness to the input user_id\n For each user - finds articles the user hasn't seen before and provides them as recs\n Does this until m recommendations are found\n \n Notes:\n * Choose the users that have the most total article interactions \n before choosing those with fewer article interactions.\n\n * Choose articles with the articles with the most total interactions \n before choosing those with fewer total interactions. 
\n \n '''\n recs = []\n rec_names =[]\n counter = 0\n # Get seen article ids and names from selected user id\n article_ids, article_names = get_user_articles(user_id)\n # Make set to find unseen articles\n seen_ids_set = set(article_ids)\n # Find five similar users of the selected user\n neighbors_df = get_top_sorted_users(user_id, df=df, user_item=user_item)\n similar_users_list = neighbors_df['neighbor_id'] # Get neighbor_df\n\n # Make recommendation list\n for sim_user in similar_users_list:\n if counter < m: \n # Get seen article ids and names from similar users\n sim_article_ids, sim_article_names = get_user_articles(sim_user) \n # Make dict (key: article_ids, value:article_names)\n sim_user_dict = dict(zip(sim_article_ids, sim_article_names)) \n # Make set to find unseen articles\n sim_seen_ids_set = set(sim_article_ids)\n # Create set of unseen articles_ids\n unseen_ids_set = sim_seen_ids_set.difference(seen_ids_set)\n# unseen_article_names_set = sim_seen_article_names_set.difference(seen_article_names_set)\n\n for i in unseen_ids_set: \n if counter < m: \n recs.append(i)\n rec_names.append(sim_user_dict[i])\n counter += 1\n \n \n return recs, rec_names",
"_____no_output_____"
],
[
"# Quick spot check - don't change this code - just use it to test your functions\nrec_ids, rec_names = user_user_recs_part2(20, 10)\nprint(\"The top 10 recommendations for user 20 are the following article ids:\")\nprint(rec_ids)\nprint()\nprint(\"The top 10 recommendations for user 20 are the following article names:\")\nprint(rec_names)",
"The top 10 recommendations for user 20 are the following article ids:\n['1162.0', '1351.0', '1164.0', '491.0', '1186.0', '14.0', '1429.0', '162.0', '939.0', '813.0']\n\nThe top 10 recommendations for user 20 are the following article names:\n['analyze energy consumption in buildings', 'model bike sharing data with spss', 'analyze open data sets with pandas dataframes', 'this week in data science (may 23, 2017)', 'connect to db2 warehouse on cloud and db2 using scala', 'got zip code data? prep it for analytics. – ibm watson data lab – medium', 'use deep learning for image classification', 'an introduction to stock market data analysis with r (part 1)', 'deep learning from scratch i: computational graphs', 'generative adversarial networks (gans)']\n"
]
],
[
[
"`5.` Use your functions from above to correctly fill in the solutions to the dictionary below. Then test your dictionary against the solution. Provide the code you need to answer each following the comments below.",
"_____no_output_____"
]
],
[
[
"### Tests with a dictionary of results\nneighbor_df_1 = get_top_sorted_users(1, df=df, user_item=user_item)\nneighbor_df_131 = get_top_sorted_users(131, df=df, user_item=user_item)\n\nuser1_most_sim = neighbor_df_1.neighbor_id[0].item()# Find the user that is most similar to user 1 \nuser131_10th_sim = neighbor_df_131.neighbor_id[9].item() # Find the 10th most similar user to user 131",
"_____no_output_____"
],
[
"## Dictionary Test Here\nsol_5_dict = {\n 'The user that is most similar to user 1.': user1_most_sim, \n 'The user that is the 10th most similar to user 131': user131_10th_sim,\n}\n\nt.sol_5_test(sol_5_dict)",
"This all looks good! Nice job!\n"
]
],
[
[
"`6.` If we were given a new user, which of the above functions would you be able to use to make recommendations? Explain. Can you think of a better way we might make recommendations? Use the cell below to explain a better method for new users.",
"_____no_output_____"
],
[
"Rank based recommendation is suitable for a new user because it only depends on the popularity of an article. As no interaction with articles has been made, user-user based collaborative filtering is not possible for a new user. If we know the preference of a new user, for example, a few key words are given by a new user, we could use the information to make a recommendation. In sum, content based and rank based recommendations are better methods for new users.",
"_____no_output_____"
],
[
"`7.` Using your existing functions, provide the top 10 recommended articles you would provide for the a new user below. You can test your function against our thoughts to make sure we are all on the same page with how we might make a recommendation.",
"_____no_output_____"
]
],
[
[
"new_user = '0.0'\n# top_10 = get_top_articles(10)\ntop_10 = get_top_article_ids(10, df=df)\n# What would your recommendations be for this new user '0.0'? As a new user, they have no observed articles.\n# Provide a list of the top 10 article ids you would give to \ntop_10 = list(map(str, top_10))\nnew_user_recs = top_10# Your recommendations here",
"_____no_output_____"
],
[
"assert set(new_user_recs) == set(['1314.0','1429.0','1293.0','1427.0','1162.0','1364.0','1304.0','1170.0','1431.0','1330.0']), \"Oops! It makes sense that in this case we would want to recommend the most popular articles, because we don't know anything about these users.\"\n\nprint(\"That's right! Nice job!\")",
"That's right! Nice job!\n"
]
],
[
[
"### <a class=\"anchor\" id=\"Content-Recs\">Part IV: Content Based Recommendations (EXTRA - NOT REQUIRED)</a>\n\nAnother method we might use to make recommendations is to perform a ranking of the highest ranked articles associated with some term. You might consider content to be the **doc_body**, **doc_description**, or **doc_full_name**. There isn't one way to create a content based recommendation, especially considering that each of these columns hold content related information. \n\n`1.` Use the function body below to create a content based recommender. Since there isn't one right answer for this recommendation tactic, no test functions are provided. Feel free to change the function inputs if you decide you want to try a method that requires more input values. The input values are currently set with one idea in mind that you may use to make content based recommendations. One additional idea is that you might want to choose the most popular recommendations that meet your 'content criteria', but again, there is a lot of flexibility in how you might make these recommendations.\n\n### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.",
"_____no_output_____"
]
],
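A minimal, self-contained sketch of the TF-IDF plus cosine-similarity idea suggested above, on three invented descriptions (not rows of `df_content`); with scikit-learn's default L2 normalisation, `linear_kernel` on the TF-IDF rows equals cosine similarity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

# Three made-up document descriptions standing in for df_content rows
docs = ["deep learning for image classification",
        "image classification with convolutional networks",
        "sql basics for data warehousing"]

# TF-IDF turns each description into a weighted word vector; linear_kernel on
# L2-normalised TF-IDF rows is the cosine similarity between documents.
tfidf = TfidfVectorizer(stop_words='english')
X = tfidf.fit_transform(docs)
sims = linear_kernel(X, X)

print(sims.round(2))
# Row 0 is far more similar to row 1 (shared image/classification terms)
# than to row 2, so row 1 would be recommended to readers of row 0.
```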
[
[
"def make_content_recs(article_id, df_content, df, m=10):\n '''\n INPUT:\n article_id = (int) a article id in df_content\n m - (int) the number of recommendations you want for the user\n df_content - (pandas dataframe) df_content as defined at the top of the notebook \n df - (pandas dataframe) df as defined at the top of the notebook \n \n OUTPUT:\n recs - (list) a list of recommendations for the user by article id\n rec_names - (list) a list of recommendations for the user by article title\n '''\n corpus = df_content['doc_description']\n df_content['doc_description'].fillna(df_content['doc_full_name'], inplace=True)\n\n stop_words = stopwords.words(\"english\")\n lemmatizer = WordNetLemmatizer()\n # Text Processing, Feature Extraction\n def tokenize(text):\n '''\n Function splits text into separate words and gets a word lowercased and removes whitespaces at the ends of a word. \n The funtions also cleans irrelevant stopwords.\n Input:\n 1. text: text message\n Output:\n 1. Clean_tokens : list of tokenized clean words\n '''\n # Get rid of other sepcial characters \n text = re.sub(r\"[^a-zA-Z0-9]\", \" \", text)\n # Tokenize\n tokens = word_tokenize(text)\n # Lemmatize\n lemmatizer = WordNetLemmatizer()\n clean_tokens = []\n for tok in tokens:\n clean_tok = lemmatizer.lemmatize(tok, pos='v').lower().strip()\n clean_tokens.append(clean_tok)\n\n # Remove stop words \n stopwords = nltk.corpus.stopwords.words('english')\n clean_tokens = [token for token in clean_tokens if token not in stopwords]\n\n return clean_tokens\n \n vect = TfidfVectorizer(tokenizer=tokenize)\n # get counts of each token (word) in text data\n X = vect.fit_transform(corpus)\n X = X.toarray()\n cosine_similarity = linear_kernel(X, X)\n df_similarity = pd.DataFrame(cosine_similarity[article_id], columns=['similarity'])\n df_similarity_modified = df_similarity.drop(article_id)\n recs = df_similarity_modified.similarity.sort_values(ascending=False).index[0:10].tolist()\n rec_names = []\n\n for i in recs:\n name = df_content[df_content['article_id'] == i]['doc_full_name'].values[0]\n rec_names.append(name)\n \n return recs, rec_names\n ",
"_____no_output_____"
],
[
"recs, rec_names = make_content_recs(0, df_content, df, m=10)\nprint(recs)\nprint(\"**\"*30)\nprint(rec_names)",
"[730, 194, 53, 470, 1005, 980, 423, 266, 681, 670]\n************************************************************\n['Developing for the IBM Streaming Analytics service', 'Data science for real-time streaming analytics', 'Introducing Streams Designer', 'What’s new in the Streaming Analytics service on Bluemix', 'Real-time Sentiment Analysis of Twitter Hashtags with Spark', 'logshare', 'Web application state, à la Dogfight (1983) – IBM Watson Data Lab', 'Developing IBM Streams applications with the Python API (Version 1.6)', 'Real-Time Sentiment Analysis of Twitter Hashtags with Spark (+ PixieDust)', 'Calculate moving averages on real time data with Streams Designer']\n"
]
],
[
[
"`2.` Now that you have put together your content-based recommendation system, use the cell below to write a summary explaining how your content based recommender works. Do you see any possible improvements that could be made to your function? Is there anything novel about your content based recommender?\n\n### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.",
"_____no_output_____"
],
[
"Before making my content-based recommendation system, I read an article [here](https://towardsdatascience.com/introduction-to-two-approaches-of-content-based-recommendation-system-fc797460c18c) to grap idea how to build it. After reading it, I selected TF-IDF (Term Frequency - Inverse Document Frequency) method to make keywords to be TF-IDF Vectors. Then I used cosine similarity to find similar articles to a given article. This system could perform well to a user who has only one interaction with an ariticle. \n\nIn my opinion, this system needs to be improved in three cases. The first case is for a brand new user having no interaction. In this case, the system could rely on `rank based recommendation system` using `get_top_articles(n, df=df)` and `get_top_article_ids(n, df=df)`. The second case is a user having interaction with an article and the information of the article in df_content. For this, `doc_description` column could be used for find similar articles to the one. the information in the column was cleaned and vectorized by TfidfVectorizer. The matrix after vectorization consisted of rows and columns that rows saved the vectorized information of the articles and columns showed tokenized word like 'data', 'science', 'IBM'. The matrix was used to get the most top 10 similar articles. Top 10 articles will be given by the order of the articles in `rank based recommendation system`. This second case was executed by `make_content_recs(article_id, df_content, df, m=10)`.\n\nThe last case is the opposite of the previous one that a user having one interaction with an aritlce but no information in df_content. To get a solution for this, not a `doc_descrpition` but `title`column in df was used to do content based recommendation. The title of a given article in df was vectorized and this information was used to slice the matrix I got before in case two. Finally, the sum of each row was calculated and the top 10 articles were given in order of the high score. The last case was performed by ` make_content_recs_2(article_id, df_content, df, m=10)`",
"_____no_output_____"
],
[
"`3.` Use your content-recommendation system to make recommendations for the below scenarios based on the comments. Again no tests are provided here, because there isn't one right answer that could be used to find these content based recommendations.\n\n### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.",
"_____no_output_____"
]
],
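A toy sketch of the title-token scoring described above (the third case), using an invented corpus and an invented title: keep only the TF-IDF columns for the title's tokens and rank documents by the row sums of those columns.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Made-up corpus standing in for df_content descriptions
docs = ["use xgboost and watson machine learning apis",
        "healthcare python streaming application demo",
        "machine learning pipelines with apache spark"]

tfidf = TfidfVectorizer(stop_words='english')
X = tfidf.fit_transform(docs).toarray()

# Title tokens of an article that only exists in df (invented example)
title_tokens = ['machine', 'learning', 'apis']

# Keep only the TF-IDF columns for the title's tokens and score each document
# by the sum of those columns -- the ranking used for the third case above.
cols = [tfidf.vocabulary_[w] for w in title_tokens if w in tfidf.vocabulary_]
scores = X[:, cols].sum(axis=1)
print(scores.argsort()[::-1])   # documents ordered from best to worst match
```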
[
[
"def make_content_recs_2(article_id, df_content, df, m=10):\n '''\n INPUT:\n article_id = (int) a article id in df_content\n m - (int) the number of recommendations you want for the user\n df_content - (pandas dataframe) df_content as defined at the top of the notebook \n df - (pandas dataframe) df as defined at the top of the notebook \n \n OUTPUT:\n recs - (list) a list of recommendations for the user by article id\n rec_names - (list) a list of recommendations for the user by article title\n '''\n if article_id in df_content.article_id:\n recs, rec_names =make_content_recs(article_id, df_content, df, m=10)\n \n else : \n corpus = df_content['doc_description']\n df_content['doc_description'].fillna(df_content['doc_full_name'], inplace=True)\n\n stop_words = stopwords.words(\"english\")\n lemmatizer = WordNetLemmatizer()\n # Text Processing, Feature Extraction\n def tokenize(text):\n '''\n Function splits text into separate words and gets a word lowercased and removes whitespaces at the ends of a word. \n The funtions also cleans irrelevant stopwords.\n Input:\n 1. text: text message\n Output:\n 1. Clean_tokens : list of tokenized clean words\n '''\n # Get rid of other sepcial characters \n text = re.sub(r\"[^a-zA-Z0-9]\", \" \", text)\n # Tokenize\n tokens = word_tokenize(text)\n # Lemmatize\n lemmatizer = WordNetLemmatizer()\n clean_tokens = []\n for tok in tokens:\n clean_tok = lemmatizer.lemmatize(tok, pos='v').lower().strip()\n clean_tokens.append(clean_tok)\n\n # Remove stop words \n stopwords = nltk.corpus.stopwords.words('english')\n clean_tokens = [token for token in clean_tokens if token not in stopwords]\n\n return clean_tokens\n\n vect = TfidfVectorizer(tokenizer=tokenize)\n # get counts of each token (word) in text data\n X = vect.fit_transform(corpus)\n X = X.toarray()\n tfidf_feature_name = vect.get_feature_names()\n\n # Get title of the document of interest\n booktitle = df[df['article_id'] == article_id]['title'].values[0]\n # Tokenize the title\n booktitle_tokenized = tokenize(booktitle)\n\n X_slice_list = []\n for i in booktitle_tokenized:\n if i in tfidf_feature_name:\n X_slice_list.append(tfidf_feature_name.index(i))\n\n X_slice_list.sort()\n X_sliced = X[:,X_slice_list]\n check_df = pd.DataFrame(X_sliced, columns=X_slice_list)\n check_df['sum'] = check_df.sum(axis=1)\n recs = check_df.sort_values(\"sum\", ascending=False)[0:10].index.tolist()\n rec_names = []\n for i in recs:\n name = df_content[df_content['article_id'] == i]['doc_full_name'].values[0]\n rec_names.append(name)\n \n return recs, rec_names",
"_____no_output_____"
],
[
"recs, rec_names = make_content_recs_2(1427, df_content, df, m=10)\nprint(recs, rec_names )\ndf.groupby('user_id')['title'].count()",
"[384, 805, 48, 662, 809, 161, 893, 686, 723, 655] ['Continuous Learning on Watson', 'Machine Learning for everyone', 'Data Science Experience Documentation', 'Build Deep Learning Architectures With Neural Network Modeler', 'Use the Machine Learning Library', 'Use the Machine Learning Library in Spark', 'Use the Machine Learning Library in IBM Analytics for Apache Spark', 'Score a Predictive Model Built with IBM SPSS Modeler, WML & DSX', '10 Essential Algorithms For Machine Learning Engineers', 'Create a project for Watson Machine Learning in DSX']\n"
],
[
"def final_recs(user_id=None, article_id=None, df_content=df_content, df=df, m=10):\n # No arguments of user_id and article_id / New user case\n if (user_id==None and article_id==None) or (user_id not in df['user_id'].unique() and article_id==None) : \n recs = get_top_article_ids(m, df=df)\n rec_names = get_top_articles(m, df=df)\n \n # Existed user\n elif user_id in df['user_id'].unique() and article_id==None:\n recs ,rec_names = user_user_recs_part2(user_id, m=10)\n \n # One article given\n elif article_id != None and user_id == None:\n recs, rec_names = make_content_recs_2(article_id, df_content, df, m=10)\n elif user_id != None and article_id !=None:\n print(\"input only user_id or article_id\")\n recs = []\n rec_names =[]\n \n return recs, rec_names",
"_____no_output_____"
],
[
"# make recommendations for a brand new user\nrecs, rec_names = final_recs(user_id=10, article_id=10, df_content=df_content, df=df, m=10)\nprint(recs, rec_names)\n\n# make a recommendations for a user who only has interacted with article id '1427.0'\nrecs, rec_names = final_recs(article_id=1427, df_content=df_content, df=df, m=10)\nprint(recs, rec_names)\n\nrecs, rec_names = final_recs(article_id=1314.0, df_content=df_content, df=df, m=10)\nprint(recs, rec_names)\n\n# normal cases\nrecs, rec_names = final_recs(user_id=23, df_content=df_content, df=df, m=10)\nprint(recs, rec_names)\nrecs, rec_names = final_recs(user_id=21, df_content=df_content, df=df, m=10)\nprint(recs, rec_names)\nrecs, rec_names = final_recs(df_content=df_content, df=df, m=10)\nprint(recs, rec_names)\n",
"input only user_id or article_id\n[] []\n[384, 805, 48, 662, 809, 161, 893, 686, 723, 655] ['Continuous Learning on Watson', 'Machine Learning for everyone', 'Data Science Experience Documentation', 'Build Deep Learning Architectures With Neural Network Modeler', 'Use the Machine Learning Library', 'Use the Machine Learning Library in Spark', 'Use the Machine Learning Library in IBM Analytics for Apache Spark', 'Score a Predictive Model Built with IBM SPSS Modeler, WML & DSX', '10 Essential Algorithms For Machine Learning Engineers', 'Create a project for Watson Machine Learning in DSX']\n[730, 470, 266, 0, 670, 931, 774, 194, 53, 651] ['Developing for the IBM Streaming Analytics service', 'What’s new in the Streaming Analytics service on Bluemix', 'Developing IBM Streams applications with the Python API (Version 1.6)', 'Detect Malfunctioning IoT Sensors with Streaming Analytics', 'Calculate moving averages on real time data with Streams Designer', 'Short-Notice Serverless Conference', 'Authenticating Node-RED using JSONWebToken, part 2', 'Data science for real-time streaming analytics', 'Introducing Streams Designer', 'Analyzing streaming Data from Kafka Topics']\n['225.0', '205.0', '173.0', '522.0', '766.0', '684.0', '491.0', '1186.0', '1116.0', '57.0'] ['a visual explanation of the back propagation algorithm for neural networks', \"a beginner's guide to variational methods\", '10 must attend data science, ml and ai conferences in 2018', 'share the (pixiedust) magic – ibm watson data lab – medium', 'making data science a team sport', 'flexdashboard: interactive dashboards for r', 'this week in data science (may 23, 2017)', 'connect to db2 warehouse on cloud and db2 using scala', 'airbnb data for analytics: new york city reviews', 'transfer learning for flight delay prediction via variational autoencoders']\n['973.0', '252.0', '821.0', '1291.0', '348.0', '729.0', '939.0', '1159.0', '316.0', '1304.0'] ['recent trends in recommender systems', 'web picks (week of 4 september 2017)', 'using rstudio in ibm data science experience', 'fertility rate by country in total births per woman', 'this week in data science (april 25, 2017)', 'pixiedust 1.0 is here! – ibm watson data lab', 'deep learning from scratch i: computational graphs', 'analyze facebook data using ibm watson and watson studio', 'leverage python, scikit, and text classification for behavioral profiling', 'gosales transactions for logistic regression model']\n[1429.0, 1330.0, 1431.0, 1427.0, 1364.0, 1314.0, 1293.0, 1170.0, 1162.0, 1304.0] ['use deep learning for image classification', 'insights from new york car accident reports', 'visualize car data with brunel', 'use xgboost, scikit-learn & ibm watson machine learning apis', 'predicting churn with the spss random tree algorithm', 'healthcare python streaming application demo', 'finding optimal locations of new store using decision optimization', 'apache spark lab, part 1: basic concepts', 'analyze energy consumption in buildings', 'gosales transactions for logistic regression model']\n"
]
],
[
[
"### <a class=\"anchor\" id=\"Matrix-Fact\">Part V: Matrix Factorization</a>\n\nIn this part of the notebook, you will build use matrix factorization to make article recommendations to the users on the IBM Watson Studio platform.\n\n`1.` You should have already created a **user_item** matrix above in **question 1** of **Part III** above. This first question here will just require that you run the cells to get things set up for the rest of **Part V** of the notebook. ",
"_____no_output_____"
]
],
[
[
"# Load the matrix here\nuser_item_matrix = pd.read_pickle('user_item_matrix.p')",
"_____no_output_____"
],
[
"# quick look at the matrix\nuser_item_matrix.head()\nuser_item_matrix.shape",
"_____no_output_____"
],
[
"user_item_matrix.to_numpy()",
"_____no_output_____"
]
],
[
[
"`2.` In this situation, you can use Singular Value Decomposition from [numpy](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.svd.html) on the user-item matrix. Use the cell to perform SVD, and explain why this is different than in the lesson.",
"_____no_output_____"
]
],
[
[
"# Perform SVD on the User-Item Matrix Here\nu, s, vt = np.linalg.svd(user_item_matrix)# use the built in to get the three matrices\ns.shape, u.shape, vt.shape",
"_____no_output_____"
]
],
[
[
"Here, the user-item matrix passed in linalg.svd has no missing values. All elements in the matrix are 0 or 1. In the previous lesson, there were a lot of null cells in the matrix. It was not able to be passed in.",
"_____no_output_____"
],
[
"`3.` Now for the tricky part, how do we choose the number of latent features to use? Running the below cell, you can see that as the number of latent features increases, we obtain a lower error rate on making predictions for the 1 and 0 values in the user-item matrix. Run the cell below to get an idea of how the accuracy improves as we increase the number of latent features.",
"_____no_output_____"
]
],
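A small sketch of plain NumPy SVD on a tiny, fully observed 0/1 matrix (values made up), showing the factor shapes, a rank-k reconstruction like the loop below, and why missing values are a problem for this approach.

```python
import numpy as np

# A tiny 0/1 "user-item" matrix with no missing entries (made up)
M = np.array([[1., 1., 0., 1.],
              [1., 0., 0., 1.],
              [0., 1., 1., 0.]])

u, s, vt = np.linalg.svd(M)
print(u.shape, s.shape, vt.shape)      # (3, 3) (3,) (4, 4) with full matrices

# Rank-k reconstruction with k latent features, as in the accuracy loop below
k = 2
M_hat = u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]
print(np.round(M_hat, 2))

# Plain SVD has no notion of "unknown" entries: with NaNs in the matrix the
# decomposition either fails or produces NaN factors, so it only works here
# because every cell is an observed 0 or 1.
try:
    u2, s2, vt2 = np.linalg.svd(np.array([[1., np.nan], [0., 1.]]))
    print("SVD ran, but the factors contain NaN:", np.isnan(u2).any())
except np.linalg.LinAlgError as err:
    print("SVD with missing values fails:", err)
```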
[
[
"num_latent_feats = np.arange(10,700+10,20)\nsum_errs = []\n\nfor k in num_latent_feats:\n # restructure with k latent features\n s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :]\n \n # take dot product\n user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new))\n \n # compute error for each prediction to actual value\n diffs = np.subtract(user_item_matrix, user_item_est)\n \n # total errors and keep track of them\n err = np.sum(np.sum(np.abs(diffs)))\n sum_errs.append(err)\n \nplt.figure(figsize=(16,5))\nplt.plot(num_latent_feats, 1 - np.array(sum_errs)/df.shape[0]);\nplt.xlabel('Number of Latent Features');\nplt.ylabel('Accuracy');\nplt.title('Accuracy vs. Number of Latent Features');",
"_____no_output_____"
]
],
[
[
"`4.` From the above, we can't really be sure how many features to use, because simply having a better way to predict the 1's and 0's of the matrix doesn't exactly give us an indication of if we are able to make good recommendations. Instead, we might split our dataset into a training and test set of data, as shown in the cell below. \n\nUse the code from question 3 to understand the impact on accuracy of the training and test sets of data with different numbers of latent features. Using the split below: \n\n* How many users can we make predictions for in the test set? \n* How many users are we not able to make predictions for because of the cold start problem?\n* How many articles can we make predictions for in the test set? \n* How many articles are we not able to make predictions for because of the cold start problem?",
"_____no_output_____"
]
],
[
[
"df_train = df.head(40000)\ndf_test = df.tail(5993)\n\ndef create_test_and_train_user_item(df_train, df_test):\n '''\n INPUT:\n df_train - training dataframe\n df_test - test dataframe\n \n OUTPUT:\n user_item_train - a user-item matrix of the training dataframe \n (unique users for each row and unique articles for each column)\n user_item_test - a user-item matrix of the testing dataframe \n (unique users for each row and unique articles for each column)\n test_idx - all of the test user ids\n test_arts - all of the test article ids\n \n '''\n user_item_train = create_user_item_matrix(df_train)\n user_item_test = create_user_item_matrix(df_test)\n # nan to zero \n user_item_train[np.isnan(user_item_train)] = 0\n user_item_test[np.isnan(user_item_test)] = 0\n test_idx = user_item_test.index.values\n test_arts = user_item_test.columns.values\n \n return user_item_train, user_item_test, test_idx, test_arts\n\nuser_item_train, user_item_test, test_idx, test_arts = create_test_and_train_user_item(df_train, df_test)",
"_____no_output_____"
],
[
"df_train = df.head(40000)\ndf_test = df.tail(5993)\nuser_item_train = create_user_item_matrix(df_train)\nuser_item_test = create_user_item_matrix(df_test)\nuser_item_train[np.isnan(user_item_train)] = 0\nuser_item_test[np.isnan(user_item_test)] = 0\nuser_item_test.index.values\nuser_item_test.columns.values\nuser_item_train.shape",
"_____no_output_____"
],
[
"# 'How many users can we make predictions for in the test set?': \ntrain_idx = user_item_train.index.values\ntest_idx = user_item_test.index.values\nanswer1 = len(np.intersect1d(test_idx,train_idx))\n# 'How many users in the test set are we not able to make predictions for because of the cold start problem?': \nanswer2 = len(test_idx) - answer1\n# 'How many articles can we make predictions for in the test set?'\ntrain_arts = user_item_train.columns.values\ntest_arts = user_item_test.columns.values\nanswer3 = len(np.intersect1d(test_arts,train_arts))\n# 'How many articles in the test set are we not able to make predictions for because of the cold start problem?\nanswer4 = len(test_arts) - answer3 \nprint(answer1, answer2, answer3, answer4)",
"20 662 574 0\n"
],
[
"# Replace the values in the dictionary below\na = 662 \nb = 574 \nc = 20 \nd = 0 \nsol_4_dict = {\n 'How many users can we make predictions for in the test set?': c, \n 'How many users in the test set are we not able to make predictions for because of the cold start problem?': a, \n 'How many movies can we make predictions for in the test set?': b,\n 'How many movies in the test set are we not able to make predictions for because of the cold start problem?': d\n}\n\nt.sol_4_test(sol_4_dict)",
"Awesome job! That's right! All of the test movies are in the training data, but there are only 20 test users that were also in the training set. All of the other users that are in the test set we have no data on. Therefore, we cannot make predictions for these users using SVD.\n"
]
],
[
[
"`5.` Now use the **user_item_train** dataset from above to find U, S, and V transpose using SVD. Then find the subset of rows in the **user_item_test** dataset that you can predict using this matrix decomposition with different numbers of latent features to see how many features makes sense to keep based on the accuracy on the test data. This will require combining what was done in questions `2` - `4`.\n\nUse the cells below to explore how well SVD works towards making predictions for recommendations on the test data. ",
"_____no_output_____"
]
],
[
[
"# fit SVD on the user_item_train matrix\nu_train, s_train, vt_train = np.linalg.svd(user_item_train) # fit svd similar to above then use the cells below",
"_____no_output_____"
],
[
"# Use these cells to see how well you can use the training \n# decomposition to predict on test data\nboth_rows = user_item_train.index.isin(test_idx)\nrows_mask = np.intersect1d(user_item_train.index,test_idx)\nboth_cols = user_item_train.columns.isin(test_arts)\ncols_mask = np.intersect1d(user_item_train.columns,test_arts)\nu_test = u_train[both_rows,:]\nvt_test = vt_train[:, both_cols]\nuser_item_test = user_item_test.loc[rows_mask, cols_mask]",
"_____no_output_____"
],
[
"user_item_test.shape\nuser_item_train.shape",
"_____no_output_____"
],
[
"num_latent_feats = np.arange(10,700+10,20)\nsum_errs_train = []\nsum_errs_test = []\n\nfor k in num_latent_feats:\n # restructure with k latent features\n s_train_new, u_train_new, vt_train_new = np.diag(s_train[:k]), u_train[:, :k], vt_train[:k, :]\n s_test_new, u_test_new, vt_test_new = np.diag(s_train[:k]), u_test[:, :k], vt_test[:k, :]\n \n # take dot product\n user_item_est_train = np.around(np.dot(np.dot(u_train_new, s_train_new), vt_train_new))\n user_item_est_test = np.around(np.dot(np.dot(u_test_new, s_test_new), vt_test_new))\n \n # compute error for each prediction to actual value\n diffs_train = np.subtract(user_item_train, user_item_est_train)\n diffs_test = np.subtract(user_item_test, user_item_est_test)\n \n # total errors and keep track of them\n err_train = np.sum(np.sum(np.abs(diffs_train)))\n sum_errs_train.append(err_train)\n err_test = np.sum(np.sum(np.abs(diffs_test)))\n sum_errs_test.append(err_test)\n \nplt.figure(figsize=(16,5))\nplt.plot(num_latent_feats, 1 - np.array(sum_errs_train)/(user_item_train.shape[0]*user_item_train.shape[1]), label=\"Train\");\nplt.plot(num_latent_feats, 1 - np.array(sum_errs_test)/(user_item_test.shape[0]*user_item_test.shape[1]), label=\"Test\");\nplt.xlabel('Number of Latent Features');\nplt.ylabel('Accuracy');\nplt.title('Accuracy vs. Number of Latent Features');\nplt.legend();",
"_____no_output_____"
]
],
[
[
"`6.` Use the cell below to comment on the results you found in the previous question. Given the circumstances of your results, discuss what you might do to determine if the recommendations you make with any of the above recommendation systems are an improvement to how users currently find articles? ",
"_____no_output_____"
],
[
"1. Brief summary of plot\nAs the number of latent features increases, the accuracy of train set increases but that of test set decreases in the above plot. I think the result relies on the characteristic of the matrix used in SVD that the matrix has many 0's in every rows. So the accuracy of predicting 0s was an easy job by, however the predicting 1s (meaning that the user read the article) was not succesful. It is possible to check the cell below. The cell showed the sum of absolute values of elements in each row in the diffs dataframe made by substracting predicted user_item_test matrix from real user_item_test. If SVD had predicted correctly, the sum of difference would have been small.\n\n2. Suggestions for the recommendation systems to be improved\nIn order to validate recommendation systems, there could be two ways suggested in the lesson of Matrix Factorization for Recommendations. One is to check validation metrics like sales, higher engagement. or click rates after deploying new recommendation systems. It is called online testing, and it could be performed by A/B testing. To do that the entire users of IBM Watson Studio platform has to be divided into two groups statiscally correctly. A group has to be exposed to old recommendation systems. B group has to use new recommendation system. The access of articles is only allowed logged in users in IBM cloud, so it would be useful to set click rates or liked rates on the recommended articles as the metrics for this experiment. \n\nThe other one is offline testing like I did with SVD before. As shown above, the model was not good to predict because of imbalanced dataset having lots of 0s and few 1s. However, like shown in previous lessons, applying FunkSVD could be another solution to validate the recommendation systems. To do this, the rating system has to be in IBM Watson Studio platform.",
"_____no_output_____"
]
],
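A small numeric illustration of the point above about imbalanced 0/1 data, on made-up arrays rather than the project's matrices: overall accuracy can look high while most of the actual 1s are missed, so checking precision and recall on the 1s is more informative.

```python
import numpy as np

# Made-up flattened "actual" and "predicted" interaction values: mostly 0s,
# so raw accuracy looks fine even though the 1s are predicted poorly.
actual    = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
predicted = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1])

accuracy = (actual == predicted).mean()
recall_on_ones = predicted[actual == 1].mean()       # share of real 1s recovered
precision_on_ones = actual[predicted == 1].mean()    # share of predicted 1s that are real

print(accuracy, recall_on_ones, precision_on_ones)   # 0.9  0.5  1.0
```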
[
[
" print(\"# of row / sum of readings / sum of difference\")\nfor i in range(20):\n print(i, \" \",user_item_test.iloc[i].abs().sum(), \" \",diffs_test.iloc[i].abs().sum())",
"# of row / sum of readings / sum of difference\n0 2.0 12.0\n1 7.0 31.0\n2 5.0 16.0\n3 5.0 6.0\n4 1.0 5.0\n5 32.0 48.0\n6 3.0 36.0\n7 55.0 65.0\n8 1.0 2.0\n9 26.0 26.0\n10 8.0 24.0\n11 1.0 2.0\n12 1.0 2.0\n13 8.0 10.0\n14 10.0 9.0\n15 2.0 22.0\n16 16.0 20.0\n17 5.0 19.0\n18 26.0 37.0\n19 4.0 16.0\n"
]
],
[
[
"<a id='conclusions'></a>\n### Extras\nUsing your workbook, you could now save your recommendations for each user, develop a class to make new predictions and update your results, and make a flask app to deploy your results. These tasks are beyond what is required for this project. However, from what you learned in the lessons, you certainly capable of taking these tasks on to improve upon your work here!\n\n\n## Conclusion\n\n> Congratulations! You have reached the end of the Recommendations with IBM project! \n\n> **Tip**: Once you are satisfied with your work here, check over your report to make sure that it is satisfies all the areas of the [rubric](https://review.udacity.com/#!/rubrics/2322/view). You should also probably remove all of the \"Tips\" like this one so that the presentation is as polished as possible.\n\n\n## Directions to Submit\n\n> Before you submit your project, you need to create a .html or .pdf version of this notebook in the workspace here. To do that, run the code cell below. If it worked correctly, you should get a return code of 0, and you should see the generated .html file in the workspace directory (click on the orange Jupyter icon in the upper left).\n\n> Alternatively, you can download this report as .html via the **File** > **Download as** submenu, and then manually upload it into the workspace directory by clicking on the orange Jupyter icon in the upper left, then using the Upload button.\n\n> Once you've done this, you can submit your project by clicking on the \"Submit Project\" button in the lower right here. This will create and submit a zip file with this .ipynb doc and the .html or .pdf version you created. Congratulations! ",
"_____no_output_____"
]
],
[
[
"from subprocess import call\ncall(['python', '-m', 'nbconvert', 'Recommendations_with_IBM.ipynb'])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7f0d2af8898e4bab867a1ac3b323678a133104b | 56,922 | ipynb | Jupyter Notebook | machine_learning/hello.ipynb | visualizit/vpoc | a55da9db1942d3c477afc57a236e2bd4f5ba801f | [
"Apache-2.0"
] | 164 | 2020-06-01T22:40:24.000Z | 2022-03-29T03:53:54.000Z | machine_learning/hello.ipynb | linfinity000/vpoc | f1a0b253d64c01a6a2c2cb38128a9a94752bdd11 | [
"Apache-2.0"
] | 5 | 2020-07-15T15:09:03.000Z | 2022-01-04T16:44:10.000Z | machine_learning/hello.ipynb | linfinity000/vpoc | f1a0b253d64c01a6a2c2cb38128a9a94752bdd11 | [
"Apache-2.0"
] | 98 | 2020-06-08T16:27:04.000Z | 2022-03-16T11:01:19.000Z | 406.585714 | 53,908 | 0.938143 | [
[
[
"import seaborn as sns\nimport matplotlib.pyplot as plt\nsns.set()",
"_____no_output_____"
],
[
"tips = sns.load_dataset(\"tips\")\n\nprint(tips)",
" total_bill tip sex smoker day time size\n0 16.99 1.01 Female No Sun Dinner 2\n1 10.34 1.66 Male No Sun Dinner 3\n2 21.01 3.50 Male No Sun Dinner 3\n3 23.68 3.31 Male No Sun Dinner 2\n4 24.59 3.61 Female No Sun Dinner 4\n.. ... ... ... ... ... ... ...\n239 29.03 5.92 Male No Sat Dinner 3\n240 27.18 2.00 Female Yes Sat Dinner 2\n241 22.67 2.00 Male Yes Sat Dinner 2\n242 17.82 1.75 Male No Sat Dinner 2\n243 18.78 3.00 Female No Thur Dinner 2\n\n[244 rows x 7 columns]\n"
],
[
"sns.relplot(x=\"total_bill\", y=\"tip\", col=\"time\",\n hue=\"smoker\", style=\"smoker\", size=\"size\",\n data=tips)",
"_____no_output_____"
],
[
"plt.show()\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
e7f0dc40c69cfd6fd81c00ae8b8e60092a9a89ff | 4,545 | ipynb | Jupyter Notebook | examples/supercollider-objects/bus-examples.ipynb | interactive-sonification/sc3nb | c6081ae01f4e72094b6cb6dbd9667278c8d21069 | [
"MIT"
] | 7 | 2021-08-02T12:57:13.000Z | 2022-02-16T08:54:23.000Z | examples/supercollider-objects/bus-examples.ipynb | thomas-hermann/sc3nb | 7d7fbd9178fe804c5c8ddd0ddd4075579221b7c4 | [
"MIT"
] | 3 | 2019-08-09T17:56:18.000Z | 2020-10-24T13:05:47.000Z | examples/supercollider-objects/bus-examples.ipynb | thomas-hermann/sc3nb | 7d7fbd9178fe804c5c8ddd0ddd4075579221b7c4 | [
"MIT"
] | 6 | 2019-04-18T17:25:42.000Z | 2020-04-28T09:43:33.000Z | 16.407942 | 102 | 0.466227 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7f0e8f34aacd2ac243dcb08a3a9a4591d4d834c | 70,847 | ipynb | Jupyter Notebook | Section 16/AdvanceReg/Teclov_generalised_regression.ipynb | ashokjohn/ML_RealWorld | 8508c8cd6a9fd0467ee68954850179ab2506bc04 | [
"MIT"
] | 24 | 2020-01-05T22:22:02.000Z | 2022-03-07T23:41:14.000Z | Section 16/AdvanceReg/Teclov_generalised_regression.ipynb | ashokjohn/ML_RealWorld | 8508c8cd6a9fd0467ee68954850179ab2506bc04 | [
"MIT"
] | 1 | 2020-04-22T01:53:19.000Z | 2020-04-22T01:53:19.000Z | Section 16/AdvanceReg/Teclov_generalised_regression.ipynb | ashokjohn/ML_RealWorld | 8508c8cd6a9fd0467ee68954850179ab2506bc04 | [
"MIT"
] | 26 | 2019-07-28T13:00:21.000Z | 2022-01-27T23:49:19.000Z | 217.322086 | 21,694 | 0.903228 | [
[
[
"# Generalised Regression\n\nIn this notebook, we will build a generalised regression model on the **electricity consumption** dataset. The dataset contains two variables - year and electricity consumption.",
"_____no_output_____"
]
],
[
[
"#importing libraries\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.pipeline import Pipeline\nfrom sklearn import metrics",
"_____no_output_____"
],
[
"#fetching data\nelec_cons = pd.read_csv(\"total-electricity-consumption-us.csv\", sep = ',', header= 0 )\nelec_cons.head()",
"_____no_output_____"
],
[
"# number of observations: 51\nelec_cons.shape",
"_____no_output_____"
],
[
"# checking NA\n# there are no missing values in the dataset\nelec_cons.isnull().values.any()",
"_____no_output_____"
],
[
"size = len(elec_cons.index)\nindex = range(0, size, 5)\n\ntrain = elec_cons[~elec_cons.index.isin(index)]\ntest = elec_cons[elec_cons.index.isin(index)]\n",
"_____no_output_____"
],
[
"print(len(train))\nprint(len(test))",
"40\n11\n"
],
[
"# converting X to a two dimensional array, as required by the learning algorithm\nX_train = train.Year.reshape(-1,1) #Making X two dimensional\ny_train = train.Consumption\n\nX_test = test.Year.reshape(-1,1) #Making X two dimensional\ny_test = test.Consumption",
"_____no_output_____"
],
[
"# Doing a polynomial regression: Comparing linear, quadratic and cubic fits\n# Pipeline helps you associate two models or objects to be built sequentially with each other, \n# in this case, the objects are PolynomialFeatures() and LinearRegression()\n\nr2_train = []\nr2_test = []\ndegrees = [1, 2, 3]\n\nfor degree in degrees:\n pipeline = Pipeline([('poly_features', PolynomialFeatures(degree=degree)),\n ('model', LinearRegression())])\n pipeline.fit(X_train, y_train)\n y_pred = pipeline.predict(X_test)\n r2_test.append(metrics.r2_score(y_test, y_pred))\n \n # training performance\n y_pred_train = pipeline.predict(X_train)\n r2_train.append(metrics.r2_score(y_train, y_pred_train))\n \n# plot predictions and actual values against year\n fig, ax = plt.subplots()\n ax.set_xlabel(\"Year\") \n ax.set_ylabel(\"Power consumption\")\n ax.set_title(\"Degree= \" + str(degree))\n \n # train data in blue\n ax.scatter(X_train, y_train)\n ax.plot(X_train, y_pred_train)\n \n # test data\n ax.scatter(X_train, y_train)\n ax.plot(X_test, y_pred)\n \n plt.show()",
"_____no_output_____"
],
[
"# respective test r-squared scores of predictions\nprint(degrees)\nprint(r2_train)\nprint(r2_test)",
"[1, 2, 3]\n[0.84237474021761372, 0.99088967445535958, 0.9979789881969624]\n[0.81651704638268097, 0.98760805026754717, 0.99848974839924587]\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7f0ef3f70404a0f06735fcb477a2909dd663d73 | 11,147 | ipynb | Jupyter Notebook | matrix_two/day5.ipynb | jedrzejd/dw_matrix_car | 955310d167203e8fb46cc5dd023709b2886e8d0b | [
"MIT"
] | null | null | null | matrix_two/day5.ipynb | jedrzejd/dw_matrix_car | 955310d167203e8fb46cc5dd023709b2886e8d0b | [
"MIT"
] | null | null | null | matrix_two/day5.ipynb | jedrzejd/dw_matrix_car | 955310d167203e8fb46cc5dd023709b2886e8d0b | [
"MIT"
] | null | null | null | 11,147 | 11,147 | 0.705302 | [
[
[
"## Import library",
"_____no_output_____"
]
],
[
[
"# !pip install --upgrade tables\n# !pip install eli5\n# !pip install xgboost\n# !pip install hyperopt",
"_____no_output_____"
],
[
"import pandas as pd\nimport numpy as np\nimport xgboost as xgb\n\nfrom sklearn.metrics import mean_absolute_error as mae\nfrom sklearn.model_selection import cross_val_score\n\nfrom hyperopt import hp, fmin, tpe, STATUS_OK\n\nimport eli5\nfrom eli5.sklearn import PermutationImportance",
"_____no_output_____"
],
[
"cd \"drive/My Drive/Colab Notebooks/dw_matrix_car\"",
"/content/drive/My Drive/Colab Notebooks/dw_matrix_car\n"
],
[
"df = pd.read_hdf('data/car.h5')\ndf.shape",
"_____no_output_____"
]
],
[
[
"## Feature Engineering",
"_____no_output_____"
]
],
[
[
"SUFFIX_CAT = '__cat'\n\nfor feat in df.columns:\n if isinstance(df[feat][0], list):\n continue\n factorized_values = df[feat].factorize()[0]\n\n if SUFFIX_CAT in feat:\n continue\n df[feat + SUFFIX_CAT] = factorized_values\n",
"_____no_output_____"
],
[
"df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x) == 'None' else int(str(x).split('cm')[0].replace(' ', '')) )\ndf['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) == 'None' else int(x.split(' ')[0]) )\ndf['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x))\n",
"_____no_output_____"
],
[
"def run_model(model, feats):\n X = df[feats].values\n y = df['price_value'].values\n\n scores = cross_val_score(model, X, y, cv = 3, scoring = 'neg_mean_absolute_error')\n return np.mean(scores), np.std(scores)",
"_____no_output_____"
],
[
"feats = ['param_napęd__cat', 'param_rok-produkcji', 'param_stan__cat', 'param_skrzynia-biegów__cat', 'param_faktura-vat__cat', 'param_moc', 'param_marka-pojazdu__cat', 'feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa', 'seller_name__cat', 'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat', 'param_wersja__cat', 'param_kod-silnika__cat', 'feature_system-start-stop__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_czujniki-parkowania-przednie__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_regulowane-zawieszenie__cat']\nxgb_parms = {\n 'max_depth': 5,\n 'n_estimators': 50,\n 'learning_rate': 0.1,\n 'seed': 0\n}\n\nrun_model(xgb.XGBRegressor( **xgb_parms), feats)\n",
"[14:20:56] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.\n[14:21:00] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.\n[14:21:04] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.\n"
],
[
"def obj_func(params):\n mean_mae, score_std = run_model(xgb.XGBRegressor(**params), feats)\n \n print(\"Training with params:\")\n print(params, np.abs(mean_mae))\n \n return {'loss': np.abs(mean_mae), 'status': STATUS_OK}\n\n# space\n\nxgb_reg_params = {\n 'learning_rate': hp.choice('learning_rate', np.arange(0.05, 0.31, 0.05,dtype=float)),\n 'max_depth': hp.choice('max_depth', np.arange(5, 16, 1, dtype = int)),\n 'subsample': hp.quniform('subsample', 0.5, 1, 0.05),\n 'colsample_bytree': hp.quniform('colsample_bytree', 0.5, 1, 0.05),\n 'objective': 'reg:squarederror',\n 'n_estimators': 100,\n 'seed': 0\n}\n# run\n\nbest = fmin(obj_func, xgb_reg_params, algo = tpe.suggest, max_evals = 5)",
"Training with params:\n{'colsample_bytree': 0.6000000000000001, 'learning_rate': 0.15000000000000002, 'max_depth': 13, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.7000000000000001}\n8021.26782298684\nTraining with params:\n{'colsample_bytree': 0.8500000000000001, 'learning_rate': 0.2, 'max_depth': 12, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.7000000000000001}\n7670.749769854843\nTraining with params:\n{'colsample_bytree': 0.7000000000000001, 'learning_rate': 0.1, 'max_depth': 9, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.55}\n8072.788374825047\nTraining with params:\n{'colsample_bytree': 0.9500000000000001, 'learning_rate': 0.2, 'max_depth': 14, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9}\n7661.6167853284\nTraining with params:\n{'colsample_bytree': 0.8, 'learning_rate': 0.1, 'max_depth': 13, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.7000000000000001}\n7484.451006042277\n100%|██████████| 5/5 [06:06<00:00, 73.38s/it, best loss: 7484.451006042277]\n"
]
],
[
[
"## Best Config XGBoost",
"_____no_output_____"
]
],
[
[
"feats = ['param_napęd__cat', 'param_rok-produkcji', 'param_stan__cat', 'param_skrzynia-biegów__cat', 'param_faktura-vat__cat', 'param_moc', 'param_marka-pojazdu__cat', 'feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa', 'seller_name__cat', 'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat', 'param_wersja__cat', 'param_kod-silnika__cat', 'feature_system-start-stop__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_czujniki-parkowania-przednie__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_regulowane-zawieszenie__cat']\nxgb_best_params = {\n 'learning_rate': 0.1,\n 'max_depth': 13,\n 'subsample': 0.7,\n 'colsample_bytree': 0.8,\n 'objective': 'reg:squarederror',\n 'n_estimators': 100,\n 'seed': 0\n}\nrun_model(xgb.XGBRegressor( **xgb_best_params), feats)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7f0efc5f79093661cec4f4519cb83ef4a48141b | 6,981 | ipynb | Jupyter Notebook | examples/.ipynb_checkpoints/Madgwick_debugging-checkpoint.ipynb | jdgalviss/imusensor | 9bd23120e749b54c5daf80e2f6cf736b61101322 | [
"MIT"
] | 54 | 2020-06-27T21:37:46.000Z | 2022-03-29T05:59:53.000Z | examples/.ipynb_checkpoints/Madgwick_debugging-checkpoint.ipynb | jdgalviss/imusensor | 9bd23120e749b54c5daf80e2f6cf736b61101322 | [
"MIT"
] | 16 | 2020-07-01T06:41:29.000Z | 2021-10-30T20:46:20.000Z | examples/.ipynb_checkpoints/Madgwick_debugging-checkpoint.ipynb | jdgalviss/imusensor | 9bd23120e749b54c5daf80e2f6cf736b61101322 | [
"MIT"
] | 20 | 2020-07-24T01:08:23.000Z | 2022-03-16T11:21:22.000Z | 24.932143 | 386 | 0.437473 | [
[
[
"import numpy as np",
"_____no_output_____"
],
[
"def normalizeq( q):\n\n qLength = np.sqrt(np.sum(np.square(q)))\n q = q/qLength\n return q\ndef quaternionMul(q1, q2):\n mat1 = np.array([[0,1,0,0],[-1,0,0,0],[0,0,0,1],[0,0,-1,0]])\n mat2 = np.array([[0,0,1,0],[0,0,0,-1],[-1,0,0,0],[0,1,0,0]])\n mat3 = np.array([[0,0,0,1],[0,0,1,0],[0,-1,0,0],[-1,0,0,0]])\n\n k1 = np.matmul(q1,mat1)[np.newaxis,:].T\n k2 = np.matmul(q1,mat2)[np.newaxis,:].T\n k3 = np.matmul(q1,mat3)[np.newaxis,:].T\n k0 = q1[np.newaxis,:].T\n\n mat = np.concatenate((k0,k1,k2,k3), axis = 1)\n\n finalq = np.matmul(mat,q2)\n\n return finalq",
"_____no_output_____"
],
[
"q = np.array([0.5,1.5,2.5,3.5])\na = np.array([0.0,0.9,0.4,10])\nm = np.array([0.0,4.5,3.6,2.7])\n\nq= normalizeq(q)\na = normalizeq(a)\nm = normalizeq(m)\n",
"_____no_output_____"
],
[
"jacob = np.array([[-2.0*q[2], 2.0*q[3], -2.0*q[0], 2.0*q[1]],\\\n\t\t\t\t\t\t[2.0*q[1], 2.0*q[0], 2.0*q[3], 2.0*q[2]],\\\n\t\t\t\t\t\t[0.0, -4.0*q[1], -4.0*q[2], 0.0]])\n\nfunc = np.array([[2.0*(q[1]*q[3] - q[0]*q[2]) - a[1]],\\\n\t\t\t\t\t\t[2.0*(q[0]*q[1] - q[2]*q[3]) - a[2]],\\\n\t\t\t\t\t\t[2.0*(0.5 - q[1]*q[1] - q[2]*q[2]) - a[3]]])",
"_____no_output_____"
],
[
"ax = a[1]\nay = a[2]\naz = a[3]\n\nq1 = q[0]\nq2 = q[1]\nq3 = q[2]\nq4 = q[3]\n\nmx = m[1]\nmy = m[2]\nmz = m[3]\n\n_2q1 = 2.0 * q1;\n_2q2 = 2.0 * q2;\n_2q3 = 2.0 * q3;\n_2q4 = 2.0 * q4;\n_2q1q3 = 2.0 * q1 * q3;\n_2q3q4 = 2.0 * q3 * q4;\nq1q1 = q1 * q1;\nq1q2 = q1 * q2;\nq1q3 = q1 * q3;\nq1q4 = q1 * q4;\nq2q2 = q2 * q2;\nq2q3 = q2 * q3;\nq2q4 = q2 * q4;\nq3q3 = q3 * q3;\nq3q4 = q3 * q4;\nq4q4 = q4 * q4;",
"_____no_output_____"
],
[
"s1 = -_2q3 * (2.0 * q2q4 - _2q1q3 - ax) + _2q2 * (2.0 * q1q2 + _2q3q4 - ay) #- _2bz * q3 * (_2bx * (0.5 - q3q3 - q4q4) + _2bz * (q2q4 - q1q3) - mx) + (-_2bx * q4 + _2bz * q2) * (_2bx * (q2q3 - q1q4) + _2bz * (q1q2 + q3q4) - my) + _2bx * q3 * (_2bx * (q1q3 + q2q4) + _2bz * (0.5 - q2q2 - q3q3) - mz);\ns2 = _2q4 * (2.0 * q2q4 - _2q1q3 - ax) + _2q1 * (2.0 * q1q2 + _2q3q4 - ay) - 4.0 * q2 * (1.0 - 2.0 * q2q2 - 2.0 * q3q3 - az) #+ _2bz * q4 * (_2bx * (0.5 - q3q3 - q4q4) + _2bz * (q2q4 - q1q3) - mx) + (_2bx * q3 + _2bz * q1) * (_2bx * (q2q3 - q1q4) + _2bz * (q1q2 + q3q4) - my) + (_2bx * q4 - _4bz * q2) * (_2bx * (q1q3 + q2q4) + _2bz * (0.5 - q2q2 - q3q3) - mz);\ns3 = -_2q1 * (2.0 * q2q4 - _2q1q3 - ax) + _2q4 * (2.0 * q1q2 + _2q3q4 - ay) - 4.0 * q3 * (1.0 - 2.0 * q2q2 - 2.0 * q3q3 - az) #+ (-_4bx * q3 - _2bz * q1) * (_2bx * (0.5 - q3q3 - q4q4) + _2bz * (q2q4 - q1q3) - mx) + (_2bx * q2 + _2bz * q4) * (_2bx * (q2q3 - q1q4) + _2bz * (q1q2 + q3q4) - my) + (_2bx * q1 - _4bz * q3) * (_2bx * (q1q3 + q2q4) + _2bz * (0.5 - q2q2 - q3q3) - mz);\ns4 = _2q2 * (2.0 * q2q4 - _2q1q3 - ax) + _2q3 * (2.0 * q1q2 + _2q3q4 - ay)\ns = np.array([s1,s2,s3,s4])",
"_____no_output_____"
],
[
"deltaF1 = np.squeeze(np.matmul((jacob.T), func))\nprint (\"DelatF difference-> deltaF:{0} and deltaF1:{1} \".format(s, deltaF1))",
"DelatF difference-> deltaF:[0.24831774 1.68745876 3.01367284 1.13449947] and deltaF1:[-0.84277171 1.32376227 0.46779745 -0.68398294] \n"
],
[
"jacob.T",
"_____no_output_____"
],
[
"func",
"_____no_output_____"
],
[
"(2.0 * q2q4 - _2q1q3 - ax)",
"_____no_output_____"
],
[
"(2.0 * q1q2 + _2q3q4 - ay)",
"_____no_output_____"
],
[
"(1.0 - 2.0 * q2q2 - 2.0 * q3q3 - az)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7f0f2c67d702e0ab144369068e450d76313e4c1 | 89,375 | ipynb | Jupyter Notebook | notebooks/dataset_statistics.ipynb | medhini/habitat-api | f0d4cdaacb12be43e58bf0b87f43074240faf99b | [
"MIT"
] | null | null | null | notebooks/dataset_statistics.ipynb | medhini/habitat-api | f0d4cdaacb12be43e58bf0b87f43074240faf99b | [
"MIT"
] | null | null | null | notebooks/dataset_statistics.ipynb | medhini/habitat-api | f0d4cdaacb12be43e58bf0b87f43074240faf99b | [
"MIT"
] | null | null | null | 135.416667 | 10,856 | 0.850573 | [
[
[
"# Obtaining Statistics of the RoomNav dataset",
"_____no_output_____"
],
[
"1. Average Geodesic Distances\n2. Histogram of distances vs episodes\n3. Average of top-down maps\n4. Lenght of oracle",
"_____no_output_____"
]
],
[
[
"import habitat\n\nimport numpy as np\nimport random\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nsplits = ['train']\n\ndata_path = '../data/datasets/roomnav/mp3d/v1/{split}/{split}.json.gz'\n\nfor split in splits:\n \n avg_gd = 0\n avg_ed = 0\n min_gd = 10000000000\n max_gd = 0\n min_ed = 10000000000\n max_ed = 0\n \n gd_dists = []\n ed_dists = []\n gd2ed = []\n\n config = habitat.get_config(config_paths='../configs/tasks/pointnav_roomnav_mp3d.yaml')\n config.defrost()\n config.DATASET.DATA_PATH = data_path.format(split=split)\n config.DATASET.SCENES_DIR = '../data/scene_datasets/'\n config.freeze()\n \n env = habitat.Env(config=config)\n \n print('EPISODE COUNT:', len(env.episodes))\n\n for i in range(len(env.episodes)):\n observations = env.reset()\n \n sp = env.current_episode.start_position\n tp = env.current_episode.goals[0].position\n \n gd = env.sim.geodesic_distance(sp, tp)\n ed = np.power(np.power(np.array(sp) - np.array(tp), 2).sum(0), 0.5)\n \n gd2ed.append(gd/ed)\n gd_dists.append(gd)\n ed_dists.append(ed)\n\n env.close()\n \n ed_dists = np.asarray(ed_dists)\n gd_dists = np.asarray(gd_dists)\n gd2ed = np.asarray(gd2ed)\n \n print('SPLIT: ', split)\n print('Average Euclidean Distance: ', np.mean(ed_dists))\n print('Max Euclidean Distance: ', np.max(ed_dists))\n print('Min Euclidean Distance: ', np.min(ed_dists))\n \n print('Average Geodesic Distance: ', np.mean(gd_dists))\n print('Max Geodesic Distance: ', np.max(gd_dists))\n print('Min Geodesic Distance: ', np.min(gd_dists))\n \n plt.hist(gd_dists.astype(int), bins=int(np.max(gd_dists)))\n plt.title(\"Geodesic Distance\")\n plt.ylabel('Episodes')\n plt.show()\n\n plt.hist(ed_dists.astype(int), bins=int(np.max(ed_dists)))\n plt.title(\"Euclidean Distance\")\n plt.ylabel('Episodes')\n plt.show()\n \n plt.hist(np.around(gd2ed, decimals=4), bins=100)\n plt.title(\"Geodesic to Euclidean Ratio\")\n plt.ylabel('Episodes')\n plt.show()\n ",
"2019-08-07 18:11:59,640 initializing sim Sim-v0\nI0807 18:11:59.651225 26550 simulator.py:78] Loaded navmesh /private/home/medhini/navigation-analysis-habitat/habitat-api/data/scene_datasets/mp3d/17DRP5sb8fy/17DRP5sb8fy.navmesh\n2019-08-07 18:12:02,564 initializing task Nav-v0\n"
],
[
"import habitat\n\nimport numpy as np\nimport random\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nsplits = ['test']\n\ndata_path = '../data/datasets/roomnav/mp3d/v1/{split}/{split}.json.gz'\n\nfor split in splits:\n \n avg_gd = 0\n avg_ed = 0\n min_gd = 10000000000\n max_gd = 0\n min_ed = 10000000000\n max_ed = 0\n \n gd_dists = []\n ed_dists = []\n gd2ed = []\n\n config = habitat.get_config(config_paths='../configs/tasks/pointnav_roomnav_mp3d.yaml')\n config.defrost()\n config.DATASET.DATA_PATH = data_path.format(split=split)\n config.DATASET.SCENES_DIR = '../data/scene_datasets/'\n config.freeze()\n \n env = habitat.Env(config=config)\n \n print(len(env.episodes))\n\n for i in range(len(env.episodes)):\n observations = env.reset()\n \n sp = env.current_episode.start_position\n tp = env.current_episode.goals[0].position\n \n gd = env.sim.geodesic_distance(sp, tp)\n ed = np.power(np.power(np.array(sp) - np.array(tp), 2).sum(0), 0.5)\n \n gd2ed.append(gd/ed)\n gd_dists.append(gd)\n ed_dists.append(ed)\n\n env.close()\n \n ed_dists = np.asarray(ed_dists)\n gd_dists = np.asarray(gd_dists)\n gd2ed = np.asarray(gd2ed)\n \n print('SPLIT: ', split)\n print('Average Euclidean Distance: ', np.mean(ed_dists))\n print('Max Euclidean Distance: ', np.max(ed_dists))\n print('Min Euclidean Distance: ', np.min(ed_dists))\n \n print('Average Geodesic Distance: ', np.mean(gd_dists))\n print('Max Geodesic Distance: ', np.max(gd_dists))\n print('Min Geodesic Distance: ', np.min(gd_dists))\n \n plt.hist(gd_dists.astype(int), bins=int(np.max(gd_dists)))\n plt.title(\"Geodesic Distance\")\n plt.ylabel('Episodes')\n plt.show()\n\n plt.hist(ed_dists.astype(int), bins=int(np.max(ed_dists)))\n plt.title(\"Euclidean Distance\")\n plt.ylabel('Episodes')\n plt.show()\n \n plt.hist(np.around(gd2ed, decimals=4), bins=100)\n plt.title(\"Geodesic to Euclidean Ratio\")\n plt.ylabel('Episodes')\n plt.show()\n ",
"2019-08-01 01:00:56,338 initializing sim Sim-v0\nI0801 01:00:56.341329 595 simulator.py:78] Loaded navmesh /private/home/medhini/navigation-analysis-habitat/habitat-api/data/scene_datasets/mp3d/2t7WUuJeko7/2t7WUuJeko7.navmesh\n2019-08-01 01:00:58,313 initializing task Nav-v0\nI0801 01:00:58.432654 595 simulator.py:78] Loaded navmesh /private/home/medhini/navigation-analysis-habitat/habitat-api/data/scene_datasets/mp3d/5ZKStnWn8Zo/5ZKStnWn8Zo.navmesh\n"
],
[
"'''Oracle Path Lengths'''\n\nimport habitat_sim\nimport json\nimport gzip\nfrom pydash import py_\nimport numpy as np\nimport tqdm\nimport glob\n\nsplits = ['train', 'test', 'val']\n\ndata_path = '../data/datasets/roomnav/mp3d/v1/{split}/{split}_all.json.gz'\n\nfor split in splits:\n with gzip.open(data_path.format(split=split), \"rt\") as f:\n episodes = json.load(f)[\"episodes\"]\n\n act_path_lens = []\n for scene_id, eps in tqdm.tqdm(py_.group_by(episodes, \"scene_id\").items()):\n agent_cfg = habitat_sim.AgentConfiguration()\n sim_cfg = habitat_sim.SimulatorConfiguration()\n sim_cfg.scene.id = scene_id\n sim = habitat_sim.Simulator(\n habitat_sim.Configuration(sim_cfg, [agent_cfg])\n )\n\n for ep in tqdm.tqdm(eps, leave=False):\n state = sim.get_agent(0).state\n state.position = ep[\"start_position\"]\n state.rotation = ep[\"start_rotation\"]\n state.sensor_states = dict()\n\n sim.get_agent(0).state = state\n\n act_path_lens.append(\n len(\n sim.make_greedy_follower().find_path(\n ep[\"goals\"][0][\"position\"]\n )\n )\n )\n\n\n act_path_lens = np.array(act_path_lens)\n \n print('SPLIT: ', split)\n print(\"Min=\", np.min(act_path_lens))\n print(\"Mean=\", np.mean(act_path_lens))\n print(\"Median=\", np.median(act_path_lens))\n print(\"Max=\", np.max(act_path_lens))",
" 0%| | 0/32 [00:00<?, ?it/s]\n 0%| | 0/187500 [00:00<?, ?it/s]\u001b[A\n \u001b[A\n"
],
[
"import os\nimport shutil\n\nimport cv2\nimport numpy as np\n\nimport habitat\nfrom habitat.tasks.nav.shortest_path_follower import ShortestPathFollower\nfrom habitat.utils.visualizations import maps\n\nclass SimpleRLEnv(habitat.RLEnv):\n def get_reward_range(self):\n return [-1, 1]\n\n def get_reward(self, observations):\n return 0\n\n def get_done(self, observations):\n return self.habitat_env.episode_over\n\n def get_info(self, observations):\n return self.habitat_env.get_metrics()\n\ndef get_original_map():\n top_down_map = maps.get_topdown_map(\n self._sim,\n self._map_resolution,\n self._num_samples,\n self._config.DRAW_BORDER,\n )\n\n range_x = np.where(np.any(top_down_map, axis=1))[0]\n range_y = np.where(np.any(top_down_map, axis=0))[0]\n\n self._ind_x_min = range_x[0]\n self._ind_x_max = range_x[-1]\n self._ind_y_min = range_y[0]\n self._ind_y_max = range_y[-1]\n return top_down_map\n\ndef draw_source_and_target(top_down_map, episode):\n # mark source point\n s_x, s_y = maps.to_grid(\n episode.start_position[0],\n episode.start_position[2],\n self._coordinate_min,\n self._coordinate_max,\n self._map_resolution,\n )\n point_padding = 2 * int(\n np.ceil(self._map_resolution[0] / MAP_THICKNESS_SCALAR)\n )\n top_down_map[\n s_x - point_padding : s_x + point_padding + 1,\n s_y - point_padding : s_y + point_padding + 1,\n ] = maps.MAP_SOURCE_POINT_INDICATOR\n\n # mark target point\n t_x, t_y = maps.to_grid(\n episode.goals[0].position[0],\n episode.goals[0].position[2],\n self._coordinate_min,\n self._coordinate_max,\n self._map_resolution,\n )\n top_down_map[\n t_x - point_padding : t_x + point_padding + 1,\n t_y - point_padding : t_y + point_padding + 1,\n ] = maps.MAP_TARGET_POINT_INDICATOR\n \n return top_down_map\n \n\ndef draw_top_down_map(info, heading, output_size):\n top_down_map = maps.colorize_topdown_map(info[\"top_down_map\"][\"map\"])\n original_map_size = top_down_map.shape[:2]\n map_scale = np.array(\n (1, original_map_size[1] * 1.0 / original_map_size[0])\n )\n new_map_size = np.round(output_size * map_scale).astype(np.int32)\n # OpenCV expects w, h but map size is in h, w\n top_down_map = cv2.resize(top_down_map, (new_map_size[1], new_map_size[0]))\n\n map_agent_pos = info[\"top_down_map\"][\"agent_map_coord\"]\n map_agent_pos = np.round(\n map_agent_pos * new_map_size / original_map_size\n ).astype(np.int32)\n top_down_map = maps.draw_agent(\n top_down_map,\n map_agent_pos,\n heading - np.pi / 2,\n agent_radius_px=top_down_map.shape[0] / 40,\n )\n return top_down_map\n\ndef shortest_path_example():\n \n splits = ['train', 'test', 'val']\n\n data_path = '../data/datasets/roomnav/mp3d/v1/{split}/{split}.json.gz'\n\n for split in splits:\n \n config = habitat.get_config(config_paths=\"configs/tasks/roomnav_mp3d.yaml\")\n config.defrost()\n config.DATASET.DATA_PATH = data_path.format(split=split)\n config.DATASET.SCENES_DIR = '../data/scene_datasets/'\n config.TASK.MEASUREMENTS.append(\"TOP_DOWN_MAP\")\n config.TASK.SENSORS.append(\"HEADING_SENSOR\")\n config.freeze()\n \n outfile = 'AverageTopDown-{split}'.format(split=split)\n\n env = SimpleRLEnv(config=config)\n goal_radius = env.episodes[0].goals[0].radius\n if goal_radius is None:\n goal_radius = config.SIMULATOR.FORWARD_STEP_SIZE\n # follower = ShortestPathFollower(env.habitat_env.sim, goal_radius, False)\n # follower.mode = mode\n\n print(\"Environment creation successful\")\n for episode in range(len(env.episodes)):\n observations = env.reset()\n # dirname = os.path.join(\n # IMAGE_DIR, \"shortest_path_example\", 
mode, \"%02d\" % episode\n # )\n # if os.path.exists(dirname):\n # shutil.rmtree(dirname)\n # os.makedirs(dirname)\n \n top_down_map = env.get_info(observations)\n \n print(top_down_map)\n# top_down_map = draw_source_and_target(draw_source_and_target, env.episode)\n \n",
"_____no_output_____"
],
[
"plt.hist(gd2ed, bins=600)\nplt.title(\"Geodesic to Euclidean Ratio\")\nplt.ylabel('Episodes')\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7f0f6265d344772ccc13d75f7e2f1ec02d27268 | 28,066 | ipynb | Jupyter Notebook | SBert.ipynb | slawekslex/shopee | 843a912ef067abac19c0273aa8473e01f4184197 | [
"MIT"
] | 1 | 2021-05-28T03:04:47.000Z | 2021-05-28T03:04:47.000Z | SBert.ipynb | slawekslex/shopee | 843a912ef067abac19c0273aa8473e01f4184197 | [
"MIT"
] | null | null | null | SBert.ipynb | slawekslex/shopee | 843a912ef067abac19c0273aa8473e01f4184197 | [
"MIT"
] | 1 | 2021-06-05T16:22:04.000Z | 2021-06-05T16:22:04.000Z | 45.784666 | 11,788 | 0.694078 | [
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"from fastai.vision.all import *\nimport sklearn.metrics as skm\nfrom tqdm.notebook import tqdm\nimport sklearn.feature_extraction.text\nfrom transformers import (BertTokenizer, BertModel,AutoModel,AutoTokenizer,\n DistilBertTokenizer, DistilBertModel,AutoModelForSeq2SeqLM,AutoConfig)\n\nfrom shopee_utils import *\nfrom train_utils import *\nimport codecs",
"_____no_output_____"
],
[
"def string_escape(s, encoding='utf-8'):\n return (\n s.encode('latin1') # To bytes, required by 'unicode-escape'\n .decode('unicode-escape') # Perform the actual octal-escaping decode\n .encode('latin1') # 1:1 mapping back to bytes\n .decode(encoding)\n ) # Decode original encoding",
"_____no_output_____"
],
[
"import debugpy\n#debugpy.listen(5678)",
"_____no_output_____"
],
[
"BERT_MODEL_CLASS = AutoModel\nBERT_TOKENIZER_CLASS = BertTokenizer\nclass CONF(ConfigClass):\n bert_path ='sentence-transformers/paraphrase-xlm-r-multilingual-v1'\n arcface_m = .5\n arcface_s = 30\n lr = 1e-2\n lr_mult = 100\n train_epochs = 8\n train_freeze_epochs = 1\n droput_p = .25\n embs_dim = 768\n tokens_max_length = 80\n batch_size = 64\n\n \n def toDict(self):\n return {k:self.__getattribute__(k) for k in dir(self) if k[:2]!='__' and not inspect.isroutine(self.__getattribute__(k))}\n \n def __repr__(self):\n return str(self.toDict())\n \n def foo():\n return CONF()\nconfig = CONF()",
"_____no_output_____"
],
[
"# PATH = Path('../input/shopee-product-matching')\n# model_file = '../input/resnet-model/bert814.pth'\n# if not PATH.is_dir():\n# PATH = Path('/home/slex/data/shopee')\n# model_file ='models/bert814.pth'\n#BERT_PATH = './bert_indonesian' #.823\n#BERT_PATH='bert-base-multilingual-cased'\n#BERT_PATH='bert-base-cased'\n#BERT_PATH='cahya/distilbert-base-indonesian'\n#BERT_PATH='indobenchmark/indobert-base-p1' #.815\n#BERT_PATH='indobenchmark/indobert-base-p2' #.817\n#BERT_PATH='indobenchmark/indobert-large-p1' #.829\n#BERT_PATH='indobenchmark/indobert-large-p2' #.828\n#BERT_PATH='indobenchmark/indobert-lite-large-p2'#.768\n\n\n# BertModelClass = AutoModel\n# BertTokenizerClass = BertTokenizer",
"_____no_output_____"
],
[
"train_df = pd.read_csv(PATH/'train_split.csv', encoding='utf_8')\ntrain_df['is_valid'] = train_df.split==0",
"_____no_output_____"
],
[
"class ArcFaceClassifier(nn.Module):\n def __init__(self, in_features, output_classes):\n super().__init__()\n self.initial_layers=nn.Sequential(\n nn.BatchNorm1d(in_features),\n nn.Dropout(config.droput_p))\n self.W = nn.Parameter(torch.Tensor(in_features, output_classes))\n nn.init.kaiming_uniform_(self.W)\n def forward(self, x):\n x = self.initial_layers(x)\n x_norm = F.normalize(x)\n W_norm = F.normalize(self.W, dim=0)\n return x_norm @ W_norm\n \n",
"_____no_output_____"
],
[
"def mean_pooling(model_output, attention_mask):\n token_embeddings = model_output[0] #First element of model_output contains all token embeddings\n input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()\n sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)\n sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)\n return sum_embeddings / sum_mask\n",
"_____no_output_____"
],
[
"class BertArcFace(nn.Module):\n def __init__(self, bert_model):\n super().__init__()\n self.bert_model = bert_model\n self.classifier = ArcFaceClassifier(config.embs_dim, dls.c)\n self.outputEmbs = False\n def forward(self, x):\n output = self.bert_model(*x)\n #last_hidden =output.last_hidden_state[:,0,:]\n #embeddings=last_hidden\n embeddings = mean_pooling(output, x[1])\n if self.outputEmbs:\n return embeddings\n return self.classifier(embeddings)\n",
"_____no_output_____"
],
[
"class TitleTransform(Transform):\n def __init__(self):\n super().__init__()\n self.tokenizer = AutoTokenizer.from_pretrained(config.bert_path)\n \n \n def encodes(self, row):\n text = row.title\n #text=(codecs.decode(text, 'unicode_escape'))\n text = string_escape(text)\n encodings = self.tokenizer(text, padding = 'max_length', max_length=config.tokens_max_length,\n truncation=True,return_tensors='pt')\n keys =['input_ids', 'attention_mask']#, 'token_type_ids'] \n return tuple(encodings[key].squeeze() for key in keys)",
"_____no_output_____"
],
[
"tfm = TitleTransform()\n\ndata_block = DataBlock(\n blocks = (TransformBlock(type_tfms=tfm), \n CategoryBlock(vocab=train_df.label_group.to_list())),\n splitter=ColSplitter(),\n get_y=ColReader('label_group'),\n )\ndls = data_block.dataloaders(train_df, bs=config.batch_size)\n",
"_____no_output_____"
],
[
"def new_model():\n bert_model = BERT_MODEL_CLASS.from_pretrained(config.bert_path)\n return BertArcFace(bert_model)",
"_____no_output_____"
],
[
"def split_2way(model):\n return L(params(model.bert_model),\n params(model.classifier))",
"_____no_output_____"
],
[
"f1_tracker = TrackerCallback(monitor='F1 embeddings', comp=np.greater)",
"_____no_output_____"
],
[
"opt_func=RMSProp",
"_____no_output_____"
],
[
"\nloss_func=functools.partial(arcface_loss, m=config.arcface_m, s=config.arcface_s)\n#opt_func=functools.partial(Adam, mom=.9, sqr_mom=.99, eps=1e-5, wd=0.01)\nlearn = Learner(dls,new_model(), opt_func=opt_func, splitter=split_2way, loss_func=loss_func, \n cbs = [F1FromEmbs, f1_tracker],metrics=FakeMetric())",
"_____no_output_____"
],
[
"learn.fine_tune(config.train_epochs, config.lr, freeze_epochs=config.train_freeze_epochs, lr_mult=config.lr_mult)",
"_____no_output_____"
],
[
"best_f1=f1_tracker.best\nprint(best_f1)",
"0.8046468428191103\n"
],
[
"learn.load('bert823val')",
"_____no_output_____"
],
[
"learn.save('bertlarge829val')",
"_____no_output_____"
]
],
[
[
"## Validataion set",
"_____no_output_____"
]
],
[
[
"embs_model = learn.model.eval()\nembs_model.outputEmbs = True",
"_____no_output_____"
],
[
"valid_embs, _ = embs_from_model(embs_model, dls.valid)",
"_____no_output_____"
],
[
"dists, inds = get_nearest(valid_embs, do_chunk(valid_embs))",
"_____no_output_____"
],
[
"valid_df=train_df[train_df.is_valid==True].copy().reset_index()\nvalid_df = add_target_groups(valid_df)",
"_____no_output_____"
],
[
"pairs = sorted_pairs(dists, inds)[:len(valid_df)*10]",
"_____no_output_____"
],
[
"_=build_from_pairs(pairs, valid_df.target.to_list())",
"0.830 at 6.4878082275390625 pairs or 0.568 threshold\n"
],
[
"torch.save(learn.model.bert_model.state_dict(), 'models/bert_large_state.pth')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7f101144e1fd930d2efce68eabbe2386b98f5c3 | 1,887 | ipynb | Jupyter Notebook | examples/0-gauss_to_regulus.ipynb | yarden-livnat/ipyregulus | 971ab02cd3676b9ea8c712fd3940d42d974c445d | [
"BSD-3-Clause"
] | 1 | 2018-09-06T17:07:41.000Z | 2018-09-06T17:07:41.000Z | examples/0-gauss_to_regulus.ipynb | yarden-livnat/ipyregulus | 971ab02cd3676b9ea8c712fd3940d42d974c445d | [
"BSD-3-Clause"
] | 3 | 2021-03-10T09:24:25.000Z | 2022-01-22T10:49:25.000Z | examples/0-gauss_to_regulus.ipynb | yarden-livnat/ipyregulus | 971ab02cd3676b9ea8c712fd3940d42d974c445d | [
"BSD-3-Clause"
] | 2 | 2018-08-30T19:11:05.000Z | 2020-01-07T16:29:01.000Z | 20.51087 | 233 | 0.54372 | [
[
[
"### Create a regulus file from a csv \nknn is the size of the neighborhood. The default is 100, which is usually sufficient",
"_____no_output_____"
]
],
[
[
"import regulus",
"_____no_output_____"
],
[
"gauss4 = regulus.from_csv('gauss4', knn=8)",
"Base: 120 partitions 2000 points\n>> HasTree validate tree {'trait': <regulus.tree.traittypes.TreeType object at 0x124ed53d0>, 'value': <regulus.topo.regulus.RegulusTree object at 0x125b5f450>, 'owner': <regulus.topo.regulus.Regulus object at 0x125fcbf50>}\n<< HasTree validate tree\n"
],
[
"regulus.save(gauss4, filename='gauss4')",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7f10f26d6303ccd7f486669f68bfa3bb3fd7e8e | 24,023 | ipynb | Jupyter Notebook | Big-Data-Clusters/CU3/Public/content/monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb | gantz-at-incomm/tigertoolbox | 9ea80d39a3c5e0c77553fc851c5ee787fbf9291d | [
"MIT"
] | 541 | 2019-05-07T11:41:25.000Z | 2022-03-29T17:33:19.000Z | Big-Data-Clusters/CU3/Public/content/monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb | gantz-at-incomm/tigertoolbox | 9ea80d39a3c5e0c77553fc851c5ee787fbf9291d | [
"MIT"
] | 89 | 2019-05-09T14:23:52.000Z | 2022-01-13T20:21:04.000Z | Big-Data-Clusters/CU3/Public/content/monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb | gantz-at-incomm/tigertoolbox | 9ea80d39a3c5e0c77553fc851c5ee787fbf9291d | [
"MIT"
] | 338 | 2019-05-08T05:45:16.000Z | 2022-03-28T15:35:03.000Z | 57.471292 | 408 | 0.409233 | [
[
[
"TSG081 - Get namespaces (Kubernetes)\n====================================\n\nDescription\n-----------\n\nGet the kubernetes namespaces\n\nSteps\n-----\n\n### Common functions\n\nDefine helper functions used in this notebook.",
"_____no_output_____"
]
],
[
[
"# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows\nimport sys\nimport os\nimport re\nimport json\nimport platform\nimport shlex\nimport shutil\nimport datetime\n\nfrom subprocess import Popen, PIPE\nfrom IPython.display import Markdown\n\nretry_hints = {} # Output in stderr known to be transient, therefore automatically retry\nerror_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help\ninstall_hint = {} # The SOP to help install the executable if it cannot be found\n\nfirst_run = True\nrules = None\ndebug_logging = False\n\ndef run(cmd, return_output=False, no_output=False, retry_count=0):\n \"\"\"Run shell command, stream stdout, print stderr and optionally return output\n\n NOTES:\n\n 1. Commands that need this kind of ' quoting on Windows e.g.:\n\n kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}\n\n Need to actually pass in as '\"':\n\n kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='\"'data-pool'\"')].metadata.name}\n\n The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:\n \n `iter(p.stdout.readline, b'')`\n\n The shlex.split call does the right thing for each platform, just use the '\"' pattern for a '\n \"\"\"\n MAX_RETRIES = 5\n output = \"\"\n retry = False\n\n global first_run\n global rules\n\n if first_run:\n first_run = False\n rules = load_rules()\n\n # When running `azdata sql query` on Windows, replace any \\n in \"\"\" strings, with \" \", otherwise we see:\n #\n # ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')\n #\n if platform.system() == \"Windows\" and cmd.startswith(\"azdata sql query\"):\n cmd = cmd.replace(\"\\n\", \" \")\n\n # shlex.split is required on bash and for Windows paths with spaces\n #\n cmd_actual = shlex.split(cmd)\n\n # Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries\n #\n user_provided_exe_name = cmd_actual[0].lower()\n\n # When running python, use the python in the ADS sandbox ({sys.executable})\n #\n if cmd.startswith(\"python \"):\n cmd_actual[0] = cmd_actual[0].replace(\"python\", sys.executable)\n\n # On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail\n # with:\n #\n # UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)\n #\n # Setting it to a default value of \"en_US.UTF-8\" enables pip install to complete\n #\n if platform.system() == \"Darwin\" and \"LC_ALL\" not in os.environ:\n os.environ[\"LC_ALL\"] = \"en_US.UTF-8\"\n\n # When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`\n #\n if cmd.startswith(\"kubectl \") and \"AZDATA_OPENSHIFT\" in os.environ:\n cmd_actual[0] = cmd_actual[0].replace(\"kubectl\", \"oc\")\n\n # To aid supportabilty, determine which binary file will actually be executed on the machine\n #\n which_binary = None\n\n # Special case for CURL on Windows. The version of CURL in Windows System32 does not work to\n # get JWT tokens, it returns \"(56) Failure when receiving data from the peer\". If another instance\n # of CURL exists on the machine use that one. 
(Unfortunately the curl.exe in System32 is almost\n # always the first curl.exe in the path, and it can't be uninstalled from System32, so here we\n # look for the 2nd installation of CURL in the path)\n if platform.system() == \"Windows\" and cmd.startswith(\"curl \"):\n path = os.getenv('PATH')\n for p in path.split(os.path.pathsep):\n p = os.path.join(p, \"curl.exe\")\n if os.path.exists(p) and os.access(p, os.X_OK):\n if p.lower().find(\"system32\") == -1:\n cmd_actual[0] = p\n which_binary = p\n break\n\n # Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this\n # seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound) \n #\n # NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.\n #\n if which_binary == None:\n which_binary = shutil.which(cmd_actual[0])\n\n if which_binary == None:\n if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:\n display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))\n\n raise FileNotFoundError(f\"Executable '{cmd_actual[0]}' not found in path (where/which)\")\n else: \n cmd_actual[0] = which_binary\n\n start_time = datetime.datetime.now().replace(microsecond=0)\n\n print(f\"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)\")\n print(f\" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})\")\n print(f\" cwd: {os.getcwd()}\")\n\n # Command-line tools such as CURL and AZDATA HDFS commands output\n # scrolling progress bars, which causes Jupyter to hang forever, to\n # workaround this, use no_output=True\n #\n\n # Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait\n #\n wait = True \n\n try:\n if no_output:\n p = Popen(cmd_actual)\n else:\n p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)\n with p.stdout:\n for line in iter(p.stdout.readline, b''):\n line = line.decode()\n if return_output:\n output = output + line\n else:\n if cmd.startswith(\"azdata notebook run\"): # Hyperlink the .ipynb file\n regex = re.compile(' \"(.*)\"\\: \"(.*)\"') \n match = regex.match(line)\n if match:\n if match.group(1).find(\"HTML\") != -1:\n display(Markdown(f' - \"{match.group(1)}\": \"{match.group(2)}\"'))\n else:\n display(Markdown(f' - \"{match.group(1)}\": \"[{match.group(2)}]({match.group(2)})\"'))\n\n wait = False\n break # otherwise infinite hang, have not worked out why yet.\n else:\n print(line, end='')\n if rules is not None:\n apply_expert_rules(line)\n\n if wait:\n p.wait()\n except FileNotFoundError as e:\n if install_hint is not None:\n display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))\n\n raise FileNotFoundError(f\"Executable '{cmd_actual[0]}' not found in path (where/which)\") from e\n\n exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()\n\n if not no_output:\n for line in iter(p.stderr.readline, b''):\n try:\n line_decoded = line.decode()\n except UnicodeDecodeError:\n # NOTE: Sometimes we get characters back that cannot be decoded(), e.g.\n #\n # \\xa0\n #\n # For example see this in the response from `az group create`:\n #\n # ERROR: Get Token request returned http error: 400 and server \n # response: {\"error\":\"invalid_grant\",# \"error_description\":\"AADSTS700082: \n # The refresh token has 
expired due to inactivity.\\xa0The token was \n # issued on 2018-10-25T23:35:11.9832872Z\n #\n # which generates the exception:\n #\n # UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte\n #\n print(\"WARNING: Unable to decode stderr line, printing raw bytes:\")\n print(line)\n line_decoded = \"\"\n pass\n else:\n\n # azdata emits a single empty line to stderr when doing an hdfs cp, don't\n # print this empty \"ERR:\" as it confuses.\n #\n if line_decoded == \"\":\n continue\n \n print(f\"STDERR: {line_decoded}\", end='')\n\n if line_decoded.startswith(\"An exception has occurred\") or line_decoded.startswith(\"ERROR: An error occurred while executing the following cell\"):\n exit_code_workaround = 1\n\n # inject HINTs to next TSG/SOP based on output in stderr\n #\n if user_provided_exe_name in error_hints:\n for error_hint in error_hints[user_provided_exe_name]:\n if line_decoded.find(error_hint[0]) != -1:\n display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))\n\n # apply expert rules (to run follow-on notebooks), based on output\n #\n if rules is not None:\n apply_expert_rules(line_decoded)\n\n # Verify if a transient error, if so automatically retry (recursive)\n #\n if user_provided_exe_name in retry_hints:\n for retry_hint in retry_hints[user_provided_exe_name]:\n if line_decoded.find(retry_hint) != -1:\n if retry_count < MAX_RETRIES:\n print(f\"RETRY: {retry_count} (due to: {retry_hint})\")\n retry_count = retry_count + 1\n output = run(cmd, return_output=return_output, retry_count=retry_count)\n\n if return_output:\n return output\n else:\n return\n\n elapsed = datetime.datetime.now().replace(microsecond=0) - start_time\n\n # WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so\n # don't wait here, if success known above\n #\n if wait: \n if p.returncode != 0:\n raise SystemExit(f'Shell command:\\n\\n\\t{cmd} ({elapsed}s elapsed)\\n\\nreturned non-zero exit code: {str(p.returncode)}.\\n')\n else:\n if exit_code_workaround !=0 :\n raise SystemExit(f'Shell command:\\n\\n\\t{cmd} ({elapsed}s elapsed)\\n\\nreturned non-zero exit code: {str(exit_code_workaround)}.\\n')\n\n print(f'\\nSUCCESS: {elapsed}s elapsed.\\n')\n\n if return_output:\n return output\n\ndef load_json(filename):\n \"\"\"Load a json file from disk and return the contents\"\"\"\n\n with open(filename, encoding=\"utf8\") as json_file:\n return json.load(json_file)\n\ndef load_rules():\n \"\"\"Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable\"\"\"\n\n try:\n\n # Load this notebook as json to get access to the expert rules in the notebook metadata.\n #\n j = load_json(\"tsg081-get-kubernetes-namespaces.ipynb\")\n\n except:\n pass # If the user has renamed the book, we can't load ourself. NOTE: Is there a way in Jupyter, to know your own filename?\n\n else:\n if \"metadata\" in j and \\\n \"azdata\" in j[\"metadata\"] and \\\n \"expert\" in j[\"metadata\"][\"azdata\"] and \\\n \"rules\" in j[\"metadata\"][\"azdata\"][\"expert\"]:\n\n rules = j[\"metadata\"][\"azdata\"][\"expert\"][\"rules\"]\n\n rules.sort() # Sort rules, so they run in priority order (the [0] element). 
Lowest value first.\n\n # print (f\"EXPERT: There are {len(rules)} rules to evaluate.\")\n\n return rules\n\ndef apply_expert_rules(line):\n \"\"\"Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so\n inject a 'HINT' to the follow-on SOP/TSG to run\"\"\"\n\n global rules\n\n for rule in rules:\n\n # rules that have 9 elements are the injected (output) rules (the ones we want). Rules\n # with only 8 elements are the source (input) rules, which are not expanded (i.e. TSG029,\n # not ../repair/tsg029-nb-name.ipynb)\n if len(rule) == 9:\n notebook = rule[1]\n cell_type = rule[2]\n output_type = rule[3] # i.e. stream or error\n output_type_name = rule[4] # i.e. ename or name \n output_type_value = rule[5] # i.e. SystemExit or stdout\n details_name = rule[6] # i.e. evalue or text \n expression = rule[7].replace(\"\\\\*\", \"*\") # Something escaped *, and put a \\ in front of it!\n\n if debug_logging:\n print(f\"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.\")\n\n if re.match(expression, line, re.DOTALL):\n\n if debug_logging:\n print(\"EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{4}'\".format(output_type_name, output_type_value, expression, notebook))\n\n match_found = True\n\n display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))\n\n\n\nprint('Common functions defined successfully.')\n\n# Hints for binary (transient fault) retry, (known) error and install guide\n#\nretry_hints = {'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond']}\nerror_hints = {'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb']]}\ninstall_hint = {'kubectl': ['SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb']}",
"_____no_output_____"
]
],
[
[
"### Show the Kubernetes namespaces",
"_____no_output_____"
]
],
[
[
"run('kubectl get namespace')",
"_____no_output_____"
]
],
[
[
"### Show the Kubernetes namespaces with labels\n\nKubernetes namespaces containing a SQL Server Big Data Cluster have the\nlabel ‘MSSQL\\_CLUSTER’",
"_____no_output_____"
]
],
[
[
"run('kubectl get namespaces -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,LABELS:.metadata.labels')",
"_____no_output_____"
],
[
"print('Notebook execution complete.')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7f112538f6a3253c9fe89b508f62c7e7684fa46 | 32,267 | ipynb | Jupyter Notebook | lang-r-base.ipynb | duchesnay/dspyr | d1ef435cae014c1bca9561616fc2d67ab1eecc75 | [
"BSD-3-Clause"
] | null | null | null | lang-r-base.ipynb | duchesnay/dspyr | d1ef435cae014c1bca9561616fc2d67ab1eecc75 | [
"BSD-3-Clause"
] | null | null | null | lang-r-base.ipynb | duchesnay/dspyr | d1ef435cae014c1bca9561616fc2d67ab1eecc75 | [
"BSD-3-Clause"
] | null | null | null | 35.11099 | 610 | 0.616326 | [
[
[
"# Introduction to R\n\nThis introduction to the R language aims at understanding how to represent and manipulate data objects as commonly found in *data science*, to provide basic summary statistic and to build relevant graphical representation of the data. \n\n**Important notice:** Only base commands are discussed here, not the [tidyverse](https://www.tidyverse.org). A separate cheatsheet is available for the `ggplot2` package (TODO).\n\n## Installing R and RStudio\n\nThe R statistical package can be installed from [CRAN](https://cran.r-project.org). Be sure to also download [RStudio](https://www.rstudio.com) as it provided a full-featured user interface to interact with R. To use Jupyter notebook, you will also need the [IR kernel](https://irkernel.github.io).\n\n## Useful additional packages\n\nThis tutorial mainly relies on core facilities that come along so called R [base packages](https://stackoverflow.com/a/9705725). However, it is possible to install additional packages and, in particular, the [ggplot2](https://ggplot2.tidyverse.org) package, as shown below:\n\n install.packages(\"ggplot2\")\n\n## Setup\n\nThe following setup will be used for graphical displays:",
"_____no_output_____"
]
],
[
[
"library(ggplot2)\ntheme_set(theme_minimal())",
"_____no_output_____"
]
],
[
[
"Note that you need to load the `ggplot2` package only once, at the start of your R session.",
"_____no_output_____"
],
[
"## Getting started\n\n### Variables\n\nThere are fundamentally two kind of data structures in statistics-oriented programming languages: numbers and strings. Numbers can be integers or real numbers and they are used to represent values observed for a continuous or discrete statistical variable, while strings are everything else that cannot be represented as numbers or list of numbers, e.g. address of a building, answer to an open-ended question in a survey, etc.\n\nHere is how we can create a simple variable, say `x`, to store a list of 5 numerical values:",
"_____no_output_____"
]
],
[
[
"x <- c(1, 3, 2, 5, 4)",
"_____no_output_____"
]
],
[
[
"Note that the symbol `<-` stands for the recommended assignment operator, yet it is possible to use `=` to assign some quantity to a given variable, which appears on the left hand side of the above expression. Also, the series of values is reported between round brackets, and each values is separated by a comma. From now on, we will talk interchangeably of values or of observations as if we were talking of a measure collected on a statistical unit.\n\nSome properties of this newly created variable can be queried online, e.g. how many elements does `x` has or how those elements are represneted in R:",
"_____no_output_____"
]
],
[
[
"length(x)\ntypeof(x)",
"_____no_output_____"
]
],
[
[
"It should be noted that `x` contains values stored as real numbers (`double`) while they may just be stored as integers. It is however possible to ask R to use truly integer values:",
"_____no_output_____"
]
],
[
[
"x <- c(1L, 3L, 2L, 5L, 4L)\ntypeof(x)",
"_____no_output_____"
]
],
[
[
"The distinction between 32 bits integers and reals will not be that important in common analysis tasks, but it is important to keep in mind that it is sometimes useful to check whether data are represented as expected, especially in the case of categorical variables, also called 'factor' in R parlance (more on this latter).\n\nThe list of numbers we stored in `x` is called a *vector*, and it is one of the building block of common R data structures. Oftentimes, we will need richer data structures, especially two-dimensional objects, like *matrix* or *data frame*, or higher-dimensional objects such as *array* or *list*.\n\n",
"_____no_output_____"
],
[
"### Vectors\n\nThe command `c` ('concatenate') we used to create our list of integers will be very useful when it comes to pass multiple options to a command. It can be nested into another call to `c` like in the following exemple:",
"_____no_output_____"
]
],
[
[
"x <- c(c(1, 2, 3), c(4, 5, 6), 7, 8)",
"_____no_output_____"
]
],
[
[
"In passing, note that since we use the same name for our newly created variable, `x`, the old content referenced in `x` (1, 3, 2, 5, 4) is definitively lost. Once you have a vector of values, you can access each item by providing the (one-based) index of the item(s), e.g.:",
"_____no_output_____"
]
],
[
[
"x[1]\nx[3]\nx[c(1,3)]\nx[1:3]",
"_____no_output_____"
]
],
[
[
"A convenient shorhand notation for regular sequence of integers is `start:end`, where `start` is the starting value and `end` is the last value (both included). Hence, `c(1,2,3,4)` is the same as `1:4`. This is useful when one wants to preview the first 3 or 5 values in a vector, for example. A more general function to work with regular sequence of numbers is `seq`. Here is an example of use:",
"_____no_output_____"
]
],
[
[
"seq(1, 10)\nseq(1, 10, by = 2)\nseq(0, 10, length = 5)",
"_____no_output_____"
]
],
[
[
"Updating content of a vector can be done directly by assigning a new value to one of the item:",
"_____no_output_____"
]
],
[
[
"x[3] <- NA",
"_____no_output_____"
]
],
[
[
"In the above statement, the third item has been assigned a missing value, which is coded as `NA` ('not available') in R. Again, there is no way to go back to the previous state of the variable, so be careful when updating the content of a variable.\n\nThe presence of missing data is important to check before engaging into any serious statistical stuff. The `is.na` function can be used to check for the presence of any missing value in a variable, while `which` will return the index that matches a `TRUE` result, if any:",
"_____no_output_____"
]
],
[
[
"is.na(x)\nwhich(is.na(x))",
"_____no_output_____"
]
],
[
[
"Notice that many functions like `is.na`, or `which`, act in a vectorized way, meaning that you don't have to iterate manually over each item in the vector. Moreover, function calls can be nested one into the other. In the latter R expression, `which` is actually processing the values returned by the call to `is.na`.",
"_____no_output_____"
],
[
"### Vectors and random sampling\n\nThe `sample` function allows to randomly shuffle an existing vector or to generate a sequence of random numbers. Whenever we rely on the random number generator (RNG), it is recommend to set the seed of the RNG in order to ensure that those pseudo-random sequence could be reproduced later. Here is an illustration:",
"_____no_output_____"
]
],
[
[
"s <- c(1, 4, 2, 3, 8)\nsample(s)",
"_____no_output_____"
],
[
"sample(1:10, size = 5)\nsample(0:1, size = 10, replace = TRUE)",
"_____no_output_____"
]
],
[
[
"In summary, `sample(1:n, size = n)` returns a permutation of the `n` elements, while `sample(1:n, size = n, replace = TRUE)` provides a bootstrap sample of the original data.",
"_____no_output_____"
],
[
"### Sorting\n\nSorting a list of values or finding the index or rank of any value in a vector are common tasks in statistical programming. It is different from computing ranks of observed values, which is handled by the `rank` function. The two main instructions to sort a list of values and to get the index of the sorted item are `sort` and `order`, respectively:",
"_____no_output_____"
]
],
[
[
"z <- c(1, 6, 7, 2, 8, 3, 9, 4, 5)\nsort(z)\norder(z)",
"_____no_output_____"
]
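,
[
"# Additional example: `rank`, mentioned above, returns the rank of each value\n# in place (here computed on the same vector z), rather than reordering it\nrank(z)",
"_____no_output_____"
]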
],
[
[
"### Data frames\n\nData frames are one of the core data structures to store and represent statistical data. Many routine functions that are used to load data stored in flat files or databases or to preprocess data stored in memory rely on data frames. Likewise, graphical commands such as those found in the `ggplot2` package generally assumes a data frame as input. The same applies to functions used in statistical modeling (`lm`, `glm`, etc.).\n\nIn a data frame, observations are arranged in rows and variables are arranged in columns. Each variable can be viewed as a single vector, but those variables are all recorded into a common data structure, each with an unique name. Moreover, each column, or variable, can be of a different type--numeric, factor, character or boolean, which makes data frame slightly different from 'matrix' object where only values of the same type can be stored.\n\nHere is an example of a built-in data frame, readily available by using the command `data`:",
"_____no_output_____"
]
],
[
[
"data(ToothGrowth)\nhead(ToothGrowth)",
"_____no_output_____"
],
[
"str(ToothGrowth)",
"_____no_output_____"
]
],
[
[
"While `head` allows to preview the first 6 lines of a data frame, `str` provides a concise overview of what's available in the data frame, namely: the name of each variable (column), its mode of representation, et the first 10 observations (values).\n\nThe dimensions (number of lines and columns) of a data frame can be verified using `dim` (a shortcut for the combination of `nrows` and `ncols`): ",
"_____no_output_____"
]
],
[
[
"dim(ToothGrowth)",
"_____no_output_____"
]
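,
[
"# Additional example: the row and column counts can also be obtained separately\n# with `nrow` and `ncol`, mentioned above\nnrow(ToothGrowth)\nncol(ToothGrowth)",
"_____no_output_____"
]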
],
[
[
"To access any given cell in this data frame, we will use the indexing trick we used in the case of vectors, but this time we have to indicate the line number as well as the column number, or name: Hence, `ToothGrowth[i,j]` means the value located at line `i` and column `j`, while `ToothGrowth[c(a,b),j]` would mean values at line `a` and `b` for the same column `j`.\n\n\n\nHere is how we can retrieve the second observation in the first column:",
"_____no_output_____"
]
],
[
[
"ToothGrowth[2,1]",
"_____no_output_____"
]
],
[
[
"Since the columns of a data frame have names, it is equivalent to use `ToothGrowth[2,1]` and `ToothGrowth[2,\"len\"]`. In the latter case, variable names must be quoted. Column names can be displayed using `colnames` or `names` (in the special case of data frames), while row names are available *via* `rownames`. Row names can be used as unique identifier for statistical units, but best practice is usually to store unique IDs as characters or factor levels in a dedicated column in the data frame.\n\nSince we know that we can use `c` to create a list of numbers, we can use `c` to create a list of line numbers to look for. Imagine you want to access the content of a given column (`len`, which is the first column, numbered 1), for lines 2 and 4: (`c(2, 4)`):\n\n\n\nHere is how we would do in R:",
"_____no_output_____"
]
],
[
[
"ToothGrowth[c(2,4),1]",
"_____no_output_____"
]
],
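,
[
"# Additional example: equivalent access by column name, and the list of\n# available column names mentioned above\nToothGrowth[2, \"len\"]\nnames(ToothGrowth)",
"_____no_output_____"
]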
[
[
"This amounts to 'indexed selection', meaning that we need to provide the row (or column) numbers, while most of the time we are interested in criterion-based indexation, that is: \"which observation fullfills a given criterion.\" We generally call this a 'filter'. Since most R operations are vectorized, this happens to be really easy. For instance, to display observations on `supp` that satisfy the condition `len > 6`, we would use: ",
"_____no_output_____"
]
],
[
[
"head(ToothGrowth$supp[ToothGrowth$len > 6])",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"Likewise, it is possible to combine different filters using logical operators: `&` stands for 'and' (logical conjunction) and `|` stands for 'or' (logical disjonction); logical equality is denoted as `==` while its negation reads `!=`. Here is an example where we want to select observations that satisfy a given condition on both the `dose` (dose = 0.5) and `len` (len > 10) variables:\n\n\n\nIn R, we would write:",
"_____no_output_____"
]
],
[
[
"ToothGrowth[,ToothGrowth$len > 10 & ToothGrowth$dose < 1]",
"_____no_output_____"
]
],
[
[
"You will soon realize that for complex queries this notation become quite cumbersome: all variable must be prefixed by the name of the data frame, which can result in a very long statement. While it is recommended practice for programming or when developing dedicated package, it is easier to rely on `subset` in an interactive session. The `subset` command asks for three arguments, namely the name of the data frame we are working on, the rows we want to select (or filter), and the columns we want to return. The result of a call to `subset` is always a data frame.\n\n\n\nHere is an example of use:",
"_____no_output_____"
]
],
[
[
"subset(ToothGrowth, len > 10 & dose < 1)",
"_____no_output_____"
]
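,
[
"# Additional example: the third argument of `subset` (select) restricts the\n# columns that are returned, as described above\nsubset(ToothGrowth, len > 10 & dose < 1, select = c(len, supp))",
"_____no_output_____"
]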
],
[
[
"",
"_____no_output_____"
],
[
"It is also possible to use the technique discussed in the case of vectors to sort a data frame in ascending or descending order according to one or more variables. Here is an example using the `len` variable:",
"_____no_output_____"
]
],
[
[
"head(ToothGrowth)\nhead(ToothGrowth[order(ToothGrowth$len),])",
"_____no_output_____"
]
],
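[
[
"Since `order` also handles descending order and several sorting keys, here is a short sketch of both variants: sorting on `len` in decreasing order, then sorting on `dose` and, within each dose, on decreasing `len`:",
"_____no_output_____"
]
],
[
[
"head(ToothGrowth[order(ToothGrowth$len, decreasing = TRUE),])\nhead(ToothGrowth[order(ToothGrowth$dose, -ToothGrowth$len),])",
"_____no_output_____"
]
],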
[
[
"The `which` function can also be used to retrieve a specific observation in a data frame, like in the following instruction:",
"_____no_output_____"
]
],
[
[
"which(ToothGrowth$len < 8)",
"_____no_output_____"
]
],
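[
[
"Because `which` returns row positions rather than the observations themselves, those positions can in turn be used as a row index; a minimal sketch:",
"_____no_output_____"
]
],
[
[
"idx <- which(ToothGrowth$len < 8)\nToothGrowth[idx,]",
"_____no_output_____"
]
],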
[
[
"## Statistical summaries\n\nAs explained above, the `str` function is useful to check a given data structure, and individual properties of a data frame can be queried using dedicated functions, e.g. `nrow` or `ncol`. Now, to compute statistical quantities on a variable, we can use dedicated functions like `mean` (arithmetical mean), `sd` (standard deviation; see also `var`), `IQR` (interquartile range), `range` (range of values; see also `min` and `max`), or the `summary` function, which computes a five-point summary in the case of a numerical variable or a table of counts for categorical outcomes.\n\n### Univariate case\n\nHere are some applications in the case we are interested in summarizing one variable at a time:",
"_____no_output_____"
]
],
[
[
"mean(ToothGrowth$len)\nrange(ToothGrowth$len)\nc(min(ToothGrowth$len), max(ToothGrowth$len))\ntable(ToothGrowth$dose)",
"_____no_output_____"
],
[
"summary(ToothGrowth)",
"_____no_output_____"
]
],
[
[
"Of course, the above functions can be applied to a subset of the original data set:",
"_____no_output_____"
]
],
[
[
"mean(ToothGrowth$len[ToothGrowth$dose == 1])\ntable(ToothGrowth$dose[ToothGrowth$len < 20])",
"_____no_output_____"
]
],
[
[
"### Bivariate case\n\nIf we want to summarize a numerical variable according the values that a factor variable takes, we can use `tapply` or `aggregate`. The latter expects a 'formula' describing the relation between the variables we are interested in: the response variable or outcome appears on the left-hand side (LHS), while the factors or descriptors are listed in the right-hand side (RHS). The last argument to `aggregate` is the function we want to apply to each chunk of observations defined by the RHS. Here is an example of use:",
"_____no_output_____"
]
],
[
[
"aggregate(len ~ dose, data = ToothGrowth, mean)\naggregate(len ~ supp + dose, data = ToothGrowth, mean)",
"_____no_output_____"
]
],
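[
[
"For comparison, the `tapply` function mentioned above returns the same group means in a slightly different form (a named vector, or a matrix when two factors are supplied):",
"_____no_output_____"
]
],
[
[
"tapply(ToothGrowth$len, ToothGrowth$dose, mean)\ntapply(ToothGrowth$len, list(ToothGrowth$supp, ToothGrowth$dose), mean)",
"_____no_output_____"
]
],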
[
[
"Note that only one function can be applied to the 'formula'. Even if it possible to write a custom function that computes the mean and standard deviation of a variable, both results will be returned as single column in the data frame returned by `aggregate`. There do exist other ways to perform such computation, though (see, e.g., the `plyr`, `dplyr` or `Hmisc` packages, to name a few), if the results are to be kept in separate variables for later. This, however, does not preclude from using `aggregate` for printing multivariate results in the console:",
"_____no_output_____"
]
],
[
[
"aggregate(len ~ dose, data = ToothGrowth, summary)\nf <- function(x) c(mean = mean(x), sd = sd(x))\naggregate(len ~ dose, data = ToothGrowth, f)",
"_____no_output_____"
]
],
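[
[
"For completeness, here is a minimal sketch of the `dplyr` approach alluded to above, which keeps the mean and the standard deviation in separate columns (this assumes the `dplyr` package is installed):",
"_____no_output_____"
]
],
[
[
"library(dplyr)\nToothGrowth %>%\n  group_by(dose) %>%\n  summarise(mean_len = mean(len), sd_len = sd(len))",
"_____no_output_____"
]
],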
[
[
"The `table` functions also works with two (or even three) variables:",
"_____no_output_____"
]
],
[
[
"table(ToothGrowth$dose, ToothGrowth$supp)",
"_____no_output_____"
]
],
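[
[
"With a third variable, `table` returns one two-way table per level of that variable; for instance, crossing `dose` and `supp` within the groups defined by `len > 20`:",
"_____no_output_____"
]
],
[
[
"table(ToothGrowth$dose, ToothGrowth$supp, ToothGrowth$len > 20)",
"_____no_output_____"
]
],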
[
[
"If formulas are to be preferred, the `xtabs` function provides a convenient replacement for `table`:",
"_____no_output_____"
]
],
[
[
"xtabs(~ dose + supp, data = ToothGrowth)",
"_____no_output_____"
]
],
[
[
"In either case, frequencies can be computed from the table of counts using `prop.table`, using the desired margin (row=1, column=2) in the bivariate case:",
"_____no_output_____"
]
],
[
[
"prop.table(table(ToothGrowth$dose))\nprop.table(table(ToothGrowth$dose, ToothGrowth$supp), margins = 1)",
"_____no_output_____"
]
],
[
[
"## Practical use case: The ESS survey\n\nThe `data` directory includes three [RDS](https://www.rdocumentation.org/packages/base/versions/3.5.3/topics/readRDS) files related to the [European Social Survey](https://www.europeansocialsurvey.org) (ESS). This survey first ran in 2002 (round 1), and it is actually renewed every two years. The codebook can be downloaded, along [other data sheets](http://www.europeansocialsurvey.org/data/download.html), on the main website.\n\nThere are two files related to data collected in France (round 1 or rounds 1-5, `ess-*-fr.rds`) and one file for all participating countries (`ess-one-round.rds`).\n\n### French data\n\nAssuming the `data` directory is available in the current working directory, here is how we can load French data for round 1:",
"_____no_output_____"
]
],
[
[
"d <- readRDS(\"data/ess-one-round-fr.rds\")\nhead(d[1:10])",
"_____no_output_____"
],
[
"table(d$yrbrn)",
"_____no_output_____"
],
[
"summary(d$agea)",
"_____no_output_____"
]
],
[
[
"Let us focus on the following list of variables, readily available in the file `ess-one-round-29vars-fr.rds`:\n\n- `tvtot`: TV watching, total time on average weekday\n- `rdtot`: Radio listening, total time on average weekday\n- `nwsptot`: Newspaper reading, total time on average weekday\n- `polintr`: How interested in politics\n- `trstlgl`: Trust in the legal system\n- `trstplc`: Trust in the police\n- `trstplt`: Trust in politicians\n- `vote`: Voted last national election\n- `happy`: How happy are you\n- `sclmeet`: How often socially meet with friends, relatives or colleagues\n- `inmdisc`: Anyone to discuss intimate and personal matters with\n- `sclact`: Take part in social activities compared to others of same age\n- `health`: Subjective general health\n- `ctzcntr`: Citizen of country\n- `brncntr`: Born in country\n- `facntr`: Father born in country\n- `mocntr`: Mother born in country\n- `hhmmb`: Number of people living regularly as member of household\n- `gndr`: Gender\n- `yrbrn`: Year of birth\n- `agea`: Age of respondent, calculated\n- `edulvla`: Highest level of education\n- `eduyrs`: Years of full-time education completed\n- `pdjobyr`: Year last in paid job\n- `wrkctr`: Employment contract unlimited or limited duration\n- `wkhct`: Total contracted hours per week in main job overtime excluded\n- `marital`: Legal marital status\n- `martlfr`: Legal marital status, France\n- `lvghw`: Currently living with husband/wife\n\n### Recoded French data\n\nNote that variables in the file `ess-one-round-29vars-fr.rds` have been recoded and categorical variables now have proper labels. See the script file `scripts/ess-one-round-29vars-fr.r` to see what has been done to the base file.",
"_____no_output_____"
]
],
[
[
"d <- readRDS(\"data/ess-one-round-29vars-fr.rds\")",
"_____no_output_____"
]
],
[
[
"First, let us look at the distribution of the `gndr` variable, using a simple bar diagram:",
"_____no_output_____"
]
],
[
[
"summary(d$gndr)",
"_____no_output_____"
],
[
"p <- ggplot(data = d, aes(x = gndr)) +\n geom_bar() +\n labs(x = \"Sex of respondant\", y = \"Counts\")\np",
"_____no_output_____"
]
],
[
[
"Now, let's look at the distribution of age. The `ggplot2` package offer a `geom_density` function but it is also possible to draw a line using the precomputed empirical density function, or to let `ggplot2` compute the density function itself using the `stat=` option. Here is how it looks:",
"_____no_output_____"
]
],
[
[
"summary(d$agea)",
"_____no_output_____"
],
[
"p <- ggplot(data = d, aes(x = agea)) +\n geom_line(stat = \"density\", bw = 2) +\n labs(x = \"Age of respondant\")\np",
"_____no_output_____"
]
],
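[
[
"For comparison, the `geom_density` geom mentioned above gives a very similar picture; this short sketch assumes `ggplot2` is loaded and reuses the data frame `d` read above:",
"_____no_output_____"
]
],
[
[
"p <- ggplot(data = d, aes(x = agea)) +\n geom_density(bw = 2) +\n labs(x = \"Age of respondent\")\np",
"_____no_output_____"
]
],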
[
[
"The distribution of age can also be represented as an histogram, and `ggplot2` makes it quite easy to split the display depending on the sex of the respondants, which is called a 'facet' in `ggplot2` parlance:",
"_____no_output_____"
]
],
[
[
"p <- ggplot(data = d, aes(x = agea)) +\n geom_histogram(binwidth = 5) +\n facet_grid(~ gndr) +\n labs(x = \"Age of respondant\")\np",
"_____no_output_____"
]
],
[
[
"Finally, a boxplot might also be an option, especially when we want to compare the distribution of a numerical variable across levels of a categorical variable. The `coord_flip` instruction is used to swap the X and Y axes, but keep in mind that `x=` and `y=` labels still refer to the `x=` and `y=` variable defined in the `aes()`:",
"_____no_output_____"
]
],
[
[
"p <- ggplot(data = d, aes(x = gndr, y = agea)) +\n geom_boxplot() +\n coord_flip() +\n labs(x = NULL, y = \"Age of respondants\")\np",
"_____no_output_____"
]
],
[
[
"**Sidenote:** In the above instructions, we used the following convention to build a `ggplot2` object:\n\n- we assign to a variable, say `p`, the call to `ggplot2` plus any further instructions ('geom', 'scale', 'coord_', etc.) using the `+` operator;\n- we use only one `aes()` structure, when calling `ggplot`, so that it makes it clear what are the data and what variables are used;\n- we display the graph at the end, by simpling calling our variable `p`.\n\nYet, it is possible to proceed in many different ways, depending on your taste and needs. The following instructions are all valid expressions and will yield the same result:\n\n ggplot(data = d, aes(x = gndr, y = agea, color = vote)) + geom_boxplot()\n\n p <- ggplot(data = d, aes(x = gndr, y = agea, color = vote))\n p <- p + geom_boxplot() + labs(x = \"Gender\")\n p\n\n p <- ggplot(data = d, aes(x = gndr, y = agea))\n p + geom_boxplot(aes(color = vote)) + labs(x = \"Gender\")\n\nMoreover, it is also possible to use the quick one-liner version of `ggplot`, namely `qplot`:\n\n qplot(x = gndr, y = agea, data = d, color = vote, geom = \"boxplot\") + labs(x = \"Gender\")\n\nor even:\n\n qplot(x = gndr, y = agea, data = d, color = vote, geom = \"boxplot\", xlab = \"Gender\")\n \nFurther details are available in the handout \"lang-r-ggplot\".",
"_____no_output_____"
],
[
"### Data from other countries\n\nData from all other participating countries can be loaded in the same manner:",
"_____no_output_____"
]
],
[
[
"db <- readRDS(\"data/ess-one-round.rds\")\ncat(\"No. observations =\", nrow(db))\ntable(db$cntry)",
"_____no_output_____"
]
],
[
[
"Since French data are (deliberately) missing from this dataset, we can append them to the above data frame as follows: ",
"_____no_output_____"
]
],
[
[
"db <- rbind.data.frame(db, d)\ncat(\"No. observations =\", nrow(db))",
"_____no_output_____"
],
[
"db$cntry <- factor(db$cntry)\ntable(db$cntry)",
"_____no_output_____"
]
],
[
[
"Remember that is also possible to use `summary()` with a factor variable to display a table of counts.",
"_____no_output_____"
],
[
"In this particular case, we are just appending a data frame to another data frame already loaded in memory. This assumes that both share the name columns, of course. Sometimes, another common operation might be performed, an 'inner join' between two data tables. For example, imagine that part of the information is spread out in one data frame, and the rest of the data sits in another data frame. If the two data frames have a common unique ID, it is then easy to merge both data frames using the `merge` command. Here is a simplified example using the above data, that we will split in two beforehand:",
"_____no_output_____"
]
],
[
[
"db$id <- 1:nrow(db)\ndb1 <- db[,c(1:10,ncol(db))]\ndb2 <- db[,c(11:(ncol(db)-1),ncol(db))]\nall <- merge(db1, db2, by = \"id\")",
"_____no_output_____"
]
],
[
[
"In case the unique ID is spelled differently in the two data frames, it is possible to replace `by=` with `by.x=` and `by.y=`. Notice that we created a column to store the unique ID (as an integer), since it would be more difficult to use `rownames` as the key.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7f112b1831261f9811823a1672cd609943c2263 | 30,369 | ipynb | Jupyter Notebook | cifar100/alexnet/trainings/cifar100_alexnet_relu.ipynb | dee-ex/EE3063-SEM202-PROJECT | 439a7edf5acade1581d7d08856532e8919bc1604 | [
"MIT"
] | null | null | null | cifar100/alexnet/trainings/cifar100_alexnet_relu.ipynb | dee-ex/EE3063-SEM202-PROJECT | 439a7edf5acade1581d7d08856532e8919bc1604 | [
"MIT"
] | null | null | null | cifar100/alexnet/trainings/cifar100_alexnet_relu.ipynb | dee-ex/EE3063-SEM202-PROJECT | 439a7edf5acade1581d7d08856532e8919bc1604 | [
"MIT"
] | null | null | null | 82.97541 | 8,136 | 0.663506 | [
[
[
"import functools\nimport numpy as np\nimport tensorflow as tf\nfrom keras.datasets import cifar100\nfrom keras import Sequential\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom sklearn.model_selection import train_test_split\nfrom keras.optimizers import SGD\nfrom keras.layers import Dense, Activation, Dropout, Flatten, Conv2D, MaxPooling2D, LeakyReLU\nfrom keras.layers.normalization import BatchNormalization\nfrom tensorflow.keras.utils import to_categorical\nfrom keras.utils.generic_utils import get_custom_objects",
"_____no_output_____"
],
[
"(x_train, y_train), (x_test, y_test) = cifar100.load_data()\nx_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=.2)",
"_____no_output_____"
],
[
"y_train = to_categorical(y_train)\ny_val = to_categorical(y_val)\ny_test = to_categorical(y_test)",
"_____no_output_____"
],
[
"x_train.shape, y_train.shape, x_val.shape, y_val.shape, x_test.shape, y_test.shape",
"_____no_output_____"
],
[
"train_generator = ImageDataGenerator(rotation_range=2, horizontal_flip=True)\nval_generator = ImageDataGenerator(rotation_range=2, horizontal_flip=True)\ntest_generator = ImageDataGenerator(rotation_range=2, horizontal_flip=True)\n\ntrain_generator.fit(x_train)\nval_generator.fit(x_val)\ntest_generator.fit(x_test)",
"_____no_output_____"
],
[
"def alexnet(activation):\n AlexNet = Sequential()\n\n AlexNet.add(Conv2D(filters=96, input_shape=(32, 32, 3), kernel_size=(11, 11), strides=(4, 4), padding='same'))\n AlexNet.add(BatchNormalization())\n AlexNet.add(Activation(activation))\n AlexNet.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'))\n\n AlexNet.add(Conv2D(filters=256, kernel_size=(5, 5), strides=(1, 1), padding='same'))\n AlexNet.add(BatchNormalization())\n AlexNet.add(Activation(activation))\n AlexNet.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'))\n\n AlexNet.add(Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='same'))\n AlexNet.add(BatchNormalization())\n AlexNet.add(Activation(activation))\n\n AlexNet.add(Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='same'))\n AlexNet.add(BatchNormalization())\n AlexNet.add(Activation(activation))\n\n AlexNet.add(Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1), padding='same'))\n AlexNet.add(BatchNormalization())\n AlexNet.add(Activation(activation))\n AlexNet.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'))\n\n AlexNet.add(Flatten())\n AlexNet.add(Dense(4096, input_shape=(32, 32, 3)))\n AlexNet.add(BatchNormalization())\n AlexNet.add(Activation(activation))\n AlexNet.add(Dropout(0.5))\n\n AlexNet.add(Dense(4096))\n AlexNet.add(BatchNormalization())\n AlexNet.add(Activation(activation))\n AlexNet.add(Dropout(0.5))\n\n AlexNet.add(Dense(1000))\n AlexNet.add(BatchNormalization())\n AlexNet.add(Activation(activation))\n AlexNet.add(Dropout(0.5))\n\n AlexNet.add(Dense(100))\n AlexNet.add(BatchNormalization())\n AlexNet.add(Activation('softmax'))\n\n return AlexNet",
"_____no_output_____"
],
[
"def gelu(x):\n return 0.5 * x * (1 + tf.tanh(tf.sqrt(2 / np.pi) * (x + 0.044715 * tf.pow(x, 3))))\nget_custom_objects().update({'gelu': Activation(gelu)})\n\ndef swish(x):\n return x * tf.sigmoid(x)\nget_custom_objects().update({'swish': Activation(swish)})\n\nget_custom_objects().update({'leaky-relu': Activation(LeakyReLU(alpha=0.2))})\n\n# act_func = ['tanh', 'relu', 'leaky-relu', 'elu', 'selu', 'gelu', 'swish']\n\nmodel = alexnet('relu')\n\nbatch_size = 128\nepochs = 50\n\ntop3_acc = functools.partial(tf.keras.metrics.top_k_categorical_accuracy, k=3)\n\ntop3_acc.__name__ = 'top3_acc'\n\nmodel.compile(optimizer=SGD(), loss='categorical_crossentropy', metrics=['accuracy', top3_acc, 'top_k_categorical_accuracy'])\n\nhistory = model.fit(train_generator.flow(x_train, y_train, batch_size=batch_size), epochs=epochs,\n validation_data=val_generator.flow(x_val, y_val, batch_size=batch_size), verbose=1)",
"Epoch 1/50\n313/313 [==============================] - 45s 79ms/step - loss: 4.8244 - accuracy: 0.0181 - top3_acc: 0.0518 - top_k_categorical_accuracy: 0.0807 - val_loss: 4.2340 - val_accuracy: 0.0685 - val_top3_acc: 0.1666 - val_top_k_categorical_accuracy: 0.2346\nEpoch 2/50\n313/313 [==============================] - 25s 80ms/step - loss: 4.2572 - accuracy: 0.0518 - top3_acc: 0.1325 - top_k_categorical_accuracy: 0.1968 - val_loss: 3.9779 - val_accuracy: 0.1073 - val_top3_acc: 0.2380 - val_top_k_categorical_accuracy: 0.3211\nEpoch 3/50\n313/313 [==============================] - 24s 77ms/step - loss: 4.0490 - accuracy: 0.0752 - top3_acc: 0.1842 - top_k_categorical_accuracy: 0.2633 - val_loss: 3.8471 - val_accuracy: 0.1303 - val_top3_acc: 0.2690 - val_top_k_categorical_accuracy: 0.3574\nEpoch 4/50\n313/313 [==============================] - 23s 74ms/step - loss: 3.8954 - accuracy: 0.1032 - top3_acc: 0.2300 - top_k_categorical_accuracy: 0.3153 - val_loss: 3.6996 - val_accuracy: 0.1548 - val_top3_acc: 0.3110 - val_top_k_categorical_accuracy: 0.4047\nEpoch 5/50\n313/313 [==============================] - 26s 82ms/step - loss: 3.7857 - accuracy: 0.1211 - top3_acc: 0.2593 - top_k_categorical_accuracy: 0.3540 - val_loss: 3.6563 - val_accuracy: 0.1623 - val_top3_acc: 0.3196 - val_top_k_categorical_accuracy: 0.4174\nEpoch 6/50\n313/313 [==============================] - 23s 75ms/step - loss: 3.6881 - accuracy: 0.1378 - top3_acc: 0.2904 - top_k_categorical_accuracy: 0.3874 - val_loss: 3.5717 - val_accuracy: 0.1842 - val_top3_acc: 0.3471 - val_top_k_categorical_accuracy: 0.4425\nEpoch 7/50\n313/313 [==============================] - 24s 75ms/step - loss: 3.5964 - accuracy: 0.1578 - top3_acc: 0.3173 - top_k_categorical_accuracy: 0.4190 - val_loss: 3.4509 - val_accuracy: 0.1984 - val_top3_acc: 0.3715 - val_top_k_categorical_accuracy: 0.4733\nEpoch 8/50\n313/313 [==============================] - 25s 80ms/step - loss: 3.5294 - accuracy: 0.1725 - top3_acc: 0.3365 - top_k_categorical_accuracy: 0.4382 - val_loss: 3.5227 - val_accuracy: 0.1778 - val_top3_acc: 0.3445 - val_top_k_categorical_accuracy: 0.4413\nEpoch 9/50\n313/313 [==============================] - 23s 74ms/step - loss: 3.4535 - accuracy: 0.1870 - top3_acc: 0.3579 - top_k_categorical_accuracy: 0.4583 - val_loss: 3.4340 - val_accuracy: 0.1992 - val_top3_acc: 0.3667 - val_top_k_categorical_accuracy: 0.4650\nEpoch 10/50\n313/313 [==============================] - 25s 79ms/step - loss: 3.3896 - accuracy: 0.1996 - top3_acc: 0.3752 - top_k_categorical_accuracy: 0.4776 - val_loss: 3.3266 - val_accuracy: 0.2165 - val_top3_acc: 0.3950 - val_top_k_categorical_accuracy: 0.4926\nEpoch 11/50\n313/313 [==============================] - 24s 78ms/step - loss: 3.3271 - accuracy: 0.2087 - top3_acc: 0.3933 - top_k_categorical_accuracy: 0.4971 - val_loss: 3.2500 - val_accuracy: 0.2299 - val_top3_acc: 0.4111 - val_top_k_categorical_accuracy: 0.5106\nEpoch 12/50\n313/313 [==============================] - 23s 74ms/step - loss: 3.2578 - accuracy: 0.2237 - top3_acc: 0.4113 - top_k_categorical_accuracy: 0.5152 - val_loss: 3.2836 - val_accuracy: 0.2260 - val_top3_acc: 0.4050 - val_top_k_categorical_accuracy: 0.4993\nEpoch 13/50\n313/313 [==============================] - 25s 81ms/step - loss: 3.2100 - accuracy: 0.2312 - top3_acc: 0.4231 - top_k_categorical_accuracy: 0.5297 - val_loss: 3.1649 - val_accuracy: 0.2438 - val_top3_acc: 0.4268 - val_top_k_categorical_accuracy: 0.5250\nEpoch 14/50\n313/313 [==============================] - 25s 80ms/step - loss: 3.1392 - 
accuracy: 0.2476 - top3_acc: 0.4408 - top_k_categorical_accuracy: 0.5464 - val_loss: 3.2513 - val_accuracy: 0.2321 - val_top3_acc: 0.4049 - val_top_k_categorical_accuracy: 0.5096\nEpoch 15/50\n313/313 [==============================] - 25s 79ms/step - loss: 3.0613 - accuracy: 0.2624 - top3_acc: 0.4612 - top_k_categorical_accuracy: 0.5709 - val_loss: 3.2036 - val_accuracy: 0.2405 - val_top3_acc: 0.4116 - val_top_k_categorical_accuracy: 0.5111\nEpoch 16/50\n313/313 [==============================] - 24s 77ms/step - loss: 3.0122 - accuracy: 0.2702 - top3_acc: 0.4753 - top_k_categorical_accuracy: 0.5820 - val_loss: 3.1043 - val_accuracy: 0.2518 - val_top3_acc: 0.4333 - val_top_k_categorical_accuracy: 0.5405\nEpoch 17/50\n313/313 [==============================] - 26s 82ms/step - loss: 2.9523 - accuracy: 0.2838 - top3_acc: 0.4932 - top_k_categorical_accuracy: 0.5981 - val_loss: 3.0158 - val_accuracy: 0.2686 - val_top3_acc: 0.4563 - val_top_k_categorical_accuracy: 0.5661\nEpoch 18/50\n313/313 [==============================] - 24s 77ms/step - loss: 2.9048 - accuracy: 0.2924 - top3_acc: 0.5020 - top_k_categorical_accuracy: 0.6100 - val_loss: 3.0383 - val_accuracy: 0.2655 - val_top3_acc: 0.4549 - val_top_k_categorical_accuracy: 0.5531\nEpoch 19/50\n313/313 [==============================] - 23s 74ms/step - loss: 2.8536 - accuracy: 0.3020 - top3_acc: 0.5146 - top_k_categorical_accuracy: 0.6183 - val_loss: 2.9865 - val_accuracy: 0.2752 - val_top3_acc: 0.4587 - val_top_k_categorical_accuracy: 0.5700\nEpoch 20/50\n313/313 [==============================] - 26s 82ms/step - loss: 2.7985 - accuracy: 0.3169 - top3_acc: 0.5281 - top_k_categorical_accuracy: 0.6303 - val_loss: 2.9383 - val_accuracy: 0.2714 - val_top3_acc: 0.4654 - val_top_k_categorical_accuracy: 0.5697\nEpoch 21/50\n313/313 [==============================] - 23s 74ms/step - loss: 2.7423 - accuracy: 0.3237 - top3_acc: 0.5403 - top_k_categorical_accuracy: 0.6462 - val_loss: 3.0482 - val_accuracy: 0.2566 - val_top3_acc: 0.4453 - val_top_k_categorical_accuracy: 0.5479\nEpoch 22/50\n313/313 [==============================] - 24s 76ms/step - loss: 2.6992 - accuracy: 0.3322 - top3_acc: 0.5558 - top_k_categorical_accuracy: 0.6540 - val_loss: 3.0172 - val_accuracy: 0.2724 - val_top3_acc: 0.4534 - val_top_k_categorical_accuracy: 0.5510\nEpoch 23/50\n313/313 [==============================] - 25s 80ms/step - loss: 2.6454 - accuracy: 0.3416 - top3_acc: 0.5644 - top_k_categorical_accuracy: 0.6655 - val_loss: 2.7591 - val_accuracy: 0.3232 - val_top3_acc: 0.5174 - val_top_k_categorical_accuracy: 0.6171\nEpoch 24/50\n313/313 [==============================] - 23s 74ms/step - loss: 2.6078 - accuracy: 0.3506 - top3_acc: 0.5716 - top_k_categorical_accuracy: 0.6772 - val_loss: 3.0216 - val_accuracy: 0.2680 - val_top3_acc: 0.4548 - val_top_k_categorical_accuracy: 0.5512\nEpoch 25/50\n313/313 [==============================] - 25s 79ms/step - loss: 2.5501 - accuracy: 0.3648 - top3_acc: 0.5887 - top_k_categorical_accuracy: 0.6894 - val_loss: 2.8618 - val_accuracy: 0.2908 - val_top3_acc: 0.4928 - val_top_k_categorical_accuracy: 0.5948\nEpoch 26/50\n313/313 [==============================] - 25s 80ms/step - loss: 2.4975 - accuracy: 0.3727 - top3_acc: 0.6031 - top_k_categorical_accuracy: 0.7052 - val_loss: 2.8987 - val_accuracy: 0.2951 - val_top3_acc: 0.4838 - val_top_k_categorical_accuracy: 0.5814\nEpoch 27/50\n313/313 [==============================] - 25s 81ms/step - loss: 2.4520 - accuracy: 0.3858 - top3_acc: 0.6116 - top_k_categorical_accuracy: 0.7093 - 
val_loss: 2.9376 - val_accuracy: 0.2787 - val_top3_acc: 0.4743 - val_top_k_categorical_accuracy: 0.5731\nEpoch 28/50\n313/313 [==============================] - 23s 74ms/step - loss: 2.4012 - accuracy: 0.3965 - top3_acc: 0.6227 - top_k_categorical_accuracy: 0.7193 - val_loss: 2.7637 - val_accuracy: 0.3197 - val_top3_acc: 0.5138 - val_top_k_categorical_accuracy: 0.6107\nEpoch 29/50\n313/313 [==============================] - 25s 81ms/step - loss: 2.3758 - accuracy: 0.4011 - top3_acc: 0.6293 - top_k_categorical_accuracy: 0.7250 - val_loss: 2.9542 - val_accuracy: 0.2831 - val_top3_acc: 0.4742 - val_top_k_categorical_accuracy: 0.5697\nEpoch 30/50\n313/313 [==============================] - 23s 74ms/step - loss: 2.3205 - accuracy: 0.4126 - top3_acc: 0.6409 - top_k_categorical_accuracy: 0.7376 - val_loss: 2.7069 - val_accuracy: 0.3347 - val_top3_acc: 0.5268 - val_top_k_categorical_accuracy: 0.6226\nEpoch 31/50\n313/313 [==============================] - 25s 79ms/step - loss: 2.2804 - accuracy: 0.4239 - top3_acc: 0.6515 - top_k_categorical_accuracy: 0.7436 - val_loss: 2.7589 - val_accuracy: 0.3237 - val_top3_acc: 0.5158 - val_top_k_categorical_accuracy: 0.6129\nEpoch 32/50\n313/313 [==============================] - 24s 78ms/step - loss: 2.2198 - accuracy: 0.4380 - top3_acc: 0.6627 - top_k_categorical_accuracy: 0.7528 - val_loss: 2.7511 - val_accuracy: 0.3267 - val_top3_acc: 0.5190 - val_top_k_categorical_accuracy: 0.6168\nEpoch 33/50\n313/313 [==============================] - 23s 74ms/step - loss: 2.1814 - accuracy: 0.4450 - top3_acc: 0.6729 - top_k_categorical_accuracy: 0.7636 - val_loss: 2.7389 - val_accuracy: 0.3234 - val_top3_acc: 0.5173 - val_top_k_categorical_accuracy: 0.6203\nEpoch 34/50\n313/313 [==============================] - 25s 81ms/step - loss: 2.1368 - accuracy: 0.4550 - top3_acc: 0.6803 - top_k_categorical_accuracy: 0.7720 - val_loss: 2.8515 - val_accuracy: 0.3045 - val_top3_acc: 0.5011 - val_top_k_categorical_accuracy: 0.6026\nEpoch 35/50\n313/313 [==============================] - 24s 76ms/step - loss: 2.0978 - accuracy: 0.4655 - top3_acc: 0.6898 - top_k_categorical_accuracy: 0.7800 - val_loss: 2.6472 - val_accuracy: 0.3414 - val_top3_acc: 0.5413 - val_top_k_categorical_accuracy: 0.6386\nEpoch 36/50\n313/313 [==============================] - 23s 74ms/step - loss: 2.0416 - accuracy: 0.4790 - top3_acc: 0.7060 - top_k_categorical_accuracy: 0.7904 - val_loss: 2.8315 - val_accuracy: 0.3164 - val_top3_acc: 0.5058 - val_top_k_categorical_accuracy: 0.6037\nEpoch 37/50\n313/313 [==============================] - 26s 83ms/step - loss: 2.0067 - accuracy: 0.4835 - top3_acc: 0.7112 - top_k_categorical_accuracy: 0.7977 - val_loss: 2.9511 - val_accuracy: 0.2954 - val_top3_acc: 0.4858 - val_top_k_categorical_accuracy: 0.5914\nEpoch 38/50\n313/313 [==============================] - 23s 75ms/step - loss: 1.9677 - accuracy: 0.4911 - top3_acc: 0.7214 - top_k_categorical_accuracy: 0.8061 - val_loss: 2.8696 - val_accuracy: 0.3135 - val_top3_acc: 0.5007 - val_top_k_categorical_accuracy: 0.5937\nEpoch 39/50\n313/313 [==============================] - 24s 78ms/step - loss: 1.9224 - accuracy: 0.5051 - top3_acc: 0.7282 - top_k_categorical_accuracy: 0.8110 - val_loss: 3.0140 - val_accuracy: 0.2865 - val_top3_acc: 0.4777 - val_top_k_categorical_accuracy: 0.5737\nEpoch 40/50\n313/313 [==============================] - 25s 80ms/step - loss: 1.8753 - accuracy: 0.5150 - top3_acc: 0.7379 - top_k_categorical_accuracy: 0.8208 - val_loss: 2.7592 - val_accuracy: 0.3307 - val_top3_acc: 0.5231 - 
val_top_k_categorical_accuracy: 0.6177\nEpoch 41/50\n313/313 [==============================] - 26s 84ms/step - loss: 1.8372 - accuracy: 0.5165 - top3_acc: 0.7477 - top_k_categorical_accuracy: 0.8274 - val_loss: 2.7805 - val_accuracy: 0.3353 - val_top3_acc: 0.5279 - val_top_k_categorical_accuracy: 0.6197\nEpoch 42/50\n313/313 [==============================] - 24s 75ms/step - loss: 1.7810 - accuracy: 0.5379 - top3_acc: 0.7589 - top_k_categorical_accuracy: 0.8382 - val_loss: 2.8015 - val_accuracy: 0.3253 - val_top3_acc: 0.5230 - val_top_k_categorical_accuracy: 0.6156\nEpoch 43/50\n313/313 [==============================] - 24s 77ms/step - loss: 1.7531 - accuracy: 0.5414 - top3_acc: 0.7667 - top_k_categorical_accuracy: 0.8459 - val_loss: 2.9798 - val_accuracy: 0.3002 - val_top3_acc: 0.4897 - val_top_k_categorical_accuracy: 0.5821\nEpoch 44/50\n313/313 [==============================] - 25s 81ms/step - loss: 1.7141 - accuracy: 0.5468 - top3_acc: 0.7744 - top_k_categorical_accuracy: 0.8511 - val_loss: 2.9180 - val_accuracy: 0.3185 - val_top3_acc: 0.5057 - val_top_k_categorical_accuracy: 0.6017\nEpoch 45/50\n313/313 [==============================] - 23s 74ms/step - loss: 1.6751 - accuracy: 0.5621 - top3_acc: 0.7844 - top_k_categorical_accuracy: 0.8562 - val_loss: 2.9020 - val_accuracy: 0.3189 - val_top3_acc: 0.5070 - val_top_k_categorical_accuracy: 0.5980\nEpoch 46/50\n313/313 [==============================] - 25s 81ms/step - loss: 1.6142 - accuracy: 0.5755 - top3_acc: 0.7943 - top_k_categorical_accuracy: 0.8662 - val_loss: 2.7434 - val_accuracy: 0.3400 - val_top3_acc: 0.5336 - val_top_k_categorical_accuracy: 0.6320\nEpoch 47/50\n313/313 [==============================] - 24s 77ms/step - loss: 1.5759 - accuracy: 0.5865 - top3_acc: 0.8044 - top_k_categorical_accuracy: 0.8744 - val_loss: 2.8074 - val_accuracy: 0.3340 - val_top3_acc: 0.5302 - val_top_k_categorical_accuracy: 0.6323\nEpoch 48/50\n313/313 [==============================] - 23s 74ms/step - loss: 1.5196 - accuracy: 0.5972 - top3_acc: 0.8169 - top_k_categorical_accuracy: 0.8813 - val_loss: 2.9358 - val_accuracy: 0.3236 - val_top3_acc: 0.5158 - val_top_k_categorical_accuracy: 0.6118\nEpoch 49/50\n313/313 [==============================] - 26s 83ms/step - loss: 1.4975 - accuracy: 0.6066 - top3_acc: 0.8184 - top_k_categorical_accuracy: 0.8832 - val_loss: 2.9354 - val_accuracy: 0.3189 - val_top3_acc: 0.5158 - val_top_k_categorical_accuracy: 0.6092\nEpoch 50/50\n313/313 [==============================] - 24s 75ms/step - loss: 1.4258 - accuracy: 0.6223 - top3_acc: 0.8364 - top_k_categorical_accuracy: 0.8985 - val_loss: 2.9551 - val_accuracy: 0.3199 - val_top3_acc: 0.5095 - val_top_k_categorical_accuracy: 0.6004\n"
],
[
"print(history.history)\n# y_pred = np.argmax(model.predict(x_test), axis=1)\n# y_true = np.argmax(y_test,axis=1)\n\n# print(y_pred.shape)\n# print(y_true.shape)\n\n# print(np.sum(y_pred == y_true) / y_pred.shape[0])\n\nprint(model.evaluate(x_test, y_test))",
"{'loss': [4.637686729431152, 4.19889497756958, 4.000064849853516, 3.866473436355591, 3.764915943145752, 3.6666934490203857, 3.5829319953918457, 3.5164573192596436, 3.4361367225646973, 3.3743674755096436, 3.308612823486328, 3.2555992603302, 3.1886487007141113, 3.1250598430633545, 3.0680079460144043, 3.0102760791778564, 2.955095052719116, 2.89695143699646, 2.8450210094451904, 2.7892537117004395, 2.74092698097229, 2.690120220184326, 2.645820379257202, 2.602909803390503, 2.5529327392578125, 2.510305166244507, 2.457820415496826, 2.412214517593384, 2.3754165172576904, 2.325422525405884, 2.276616334915161, 2.2363250255584717, 2.18762469291687, 2.151061534881592, 2.1057353019714355, 2.061483144760132, 2.0241153240203857, 1.9866318702697754, 1.9312798976898193, 1.8967260122299194, 1.8507988452911377, 1.805557370185852, 1.7684029340744019, 1.7239269018173218, 1.6843966245651245, 1.6447759866714478, 1.595732569694519, 1.5516139268875122, 1.5186333656311035, 1.4669721126556396], 'accuracy': [0.027124999091029167, 0.058674998581409454, 0.0840499997138977, 0.1071000024676323, 0.1251000016927719, 0.14380000531673431, 0.15880000591278076, 0.1725749969482422, 0.18957500159740448, 0.2019750028848648, 0.2120250016450882, 0.22282500565052032, 0.23512500524520874, 0.250900000333786, 0.2599000036716461, 0.272724986076355, 0.283050000667572, 0.2922999858856201, 0.3048250079154968, 0.319225013256073, 0.3250750005245209, 0.3370499908924103, 0.3419249951839447, 0.3527750074863434, 0.3635999858379364, 0.37095001339912415, 0.3849000036716461, 0.3942500054836273, 0.4000000059604645, 0.411175012588501, 0.423550009727478, 0.4315750002861023, 0.44132500886917114, 0.4491249918937683, 0.4614250063896179, 0.4725250005722046, 0.475600004196167, 0.48442500829696655, 0.5004000067710876, 0.5081750154495239, 0.5143499970436096, 0.5260999798774719, 0.5349500179290771, 0.5449000000953674, 0.5573999881744385, 0.5656499862670898, 0.5780500173568726, 0.5876500010490417, 0.599049985408783, 0.6090499758720398], 'top3_acc': [0.074024997651577, 0.14637500047683716, 0.19807499647140503, 0.23557500541210175, 0.2677749991416931, 0.2960749864578247, 0.3197000026702881, 0.33867499232292175, 0.3628750145435333, 0.38042500615119934, 0.3989500105381012, 0.4124250113964081, 0.4281750023365021, 0.44495001435279846, 0.4628250002861023, 0.47402501106262207, 0.49079999327659607, 0.5045499801635742, 0.5161749720573425, 0.53125, 0.5422000288963318, 0.5558500289916992, 0.5641250014305115, 0.5738499760627747, 0.5870000123977661, 0.596750020980835, 0.609375, 0.6184250116348267, 0.6266250014305115, 0.6406000256538391, 0.6516749858856201, 0.6591500043869019, 0.6709499955177307, 0.6788750290870667, 0.6875249743461609, 0.7001000046730042, 0.7063249945640564, 0.7162500023841858, 0.7252749800682068, 0.7328500151634216, 0.7452250123023987, 0.7531499862670898, 0.7602999806404114, 0.7724499702453613, 0.7822750210762024, 0.7860249876976013, 0.7987750172615051, 0.8089749813079834, 0.8130249977111816, 0.826324999332428], 'top_k_categorical_accuracy': [0.11367499828338623, 0.21412500739097595, 0.28049999475479126, 0.3241249918937683, 0.36207500100135803, 0.39274999499320984, 0.42182499170303345, 0.44029998779296875, 0.4636499881744385, 0.4820750057697296, 0.5023999810218811, 0.5169000029563904, 0.5353749990463257, 0.5515750050544739, 0.5687749981880188, 0.5795249938964844, 0.5974000096321106, 0.6108499765396118, 0.6214249730110168, 0.6334249973297119, 0.6455749869346619, 0.6563500165939331, 0.6656500101089478, 0.6775000095367432, 0.6868749856948853, 
0.697825014591217, 0.7080749869346619, 0.7151250243186951, 0.7236999869346619, 0.7368500232696533, 0.7436749935150146, 0.7510499954223633, 0.7626000046730042, 0.7698500156402588, 0.7773500084877014, 0.7858499884605408, 0.7945250272750854, 0.8014249801635742, 0.8085749745368958, 0.8168500065803528, 0.8256750106811523, 0.8344500064849854, 0.8397250175476074, 0.8485749959945679, 0.8555499911308289, 0.8600000143051147, 0.8690500259399414, 0.8752250075340271, 0.8792750239372253, 0.8898249864578247], 'val_loss': [4.234034538269043, 3.9778621196746826, 3.847080945968628, 3.69958233833313, 3.6563034057617188, 3.571653127670288, 3.45085072517395, 3.522679328918457, 3.434016466140747, 3.3265581130981445, 3.249983072280884, 3.2835800647735596, 3.164933204650879, 3.2512600421905518, 3.2036032676696777, 3.1043267250061035, 3.015775680541992, 3.038287401199341, 2.9864799976348877, 2.9382858276367188, 3.0482192039489746, 3.0172340869903564, 2.7591354846954346, 3.021636486053467, 2.8618431091308594, 2.898709297180176, 2.9375603199005127, 2.7637124061584473, 2.954200267791748, 2.7068729400634766, 2.7589271068573, 2.751072883605957, 2.7388877868652344, 2.8514785766601562, 2.6471827030181885, 2.8314521312713623, 2.9510750770568848, 2.8696069717407227, 3.0139729976654053, 2.7592389583587646, 2.7805070877075195, 2.801464080810547, 2.979811191558838, 2.9179508686065674, 2.901992082595825, 2.7434399127960205, 2.8073999881744385, 2.935804605484009, 2.935406446456909, 2.9550540447235107], 'val_accuracy': [0.06849999725818634, 0.10729999840259552, 0.13030000030994415, 0.15479999780654907, 0.1623000055551529, 0.1842000037431717, 0.19840000569820404, 0.1777999997138977, 0.19920000433921814, 0.21649999916553497, 0.22990000247955322, 0.22599999606609344, 0.24379999935626984, 0.2320999950170517, 0.24050000309944153, 0.251800000667572, 0.2685999870300293, 0.265500009059906, 0.2752000093460083, 0.27140000462532043, 0.2565999925136566, 0.27239999175071716, 0.323199987411499, 0.2680000066757202, 0.290800005197525, 0.29510000348091125, 0.27869999408721924, 0.3197000026702881, 0.2831000089645386, 0.33469998836517334, 0.3237000107765198, 0.32670000195503235, 0.32339999079704285, 0.304500013589859, 0.34139999747276306, 0.3163999915122986, 0.2953999936580658, 0.31349998712539673, 0.2865000069141388, 0.33070001006126404, 0.3352999985218048, 0.325300008058548, 0.3001999855041504, 0.31850001215934753, 0.3188999891281128, 0.3400000035762787, 0.33399999141693115, 0.32359999418258667, 0.3188999891281128, 0.3199000060558319], 'val_top3_acc': [0.16660000383853912, 0.23800000548362732, 0.26899999380111694, 0.3109999895095825, 0.319599986076355, 0.34709998965263367, 0.3714999854564667, 0.34450000524520874, 0.3666999936103821, 0.39500001072883606, 0.41110000014305115, 0.4050000011920929, 0.426800012588501, 0.4049000144004822, 0.4115999937057495, 0.4332999885082245, 0.4562999904155731, 0.45489999651908875, 0.4587000012397766, 0.46540001034736633, 0.44530001282691956, 0.45339998602867126, 0.5174000263214111, 0.454800009727478, 0.4927999973297119, 0.4837999939918518, 0.47429999709129333, 0.5138000249862671, 0.4742000102996826, 0.5267999768257141, 0.5157999992370605, 0.5189999938011169, 0.517300009727478, 0.5011000037193298, 0.5412999987602234, 0.5058000087738037, 0.48579999804496765, 0.5006999969482422, 0.47769999504089355, 0.5231000185012817, 0.527899980545044, 0.5230000019073486, 0.48969998955726624, 0.5056999921798706, 0.5070000290870667, 0.5335999727249146, 0.5302000045776367, 0.5157999992370605, 0.5157999992370605, 0.5095000267028809], 
'val_top_k_categorical_accuracy': [0.2345999926328659, 0.32109999656677246, 0.35740000009536743, 0.40470001101493835, 0.4174000024795532, 0.4424999952316284, 0.4733000099658966, 0.44130000472068787, 0.4650000035762787, 0.4925999939441681, 0.5105999708175659, 0.4993000030517578, 0.5249999761581421, 0.5095999836921692, 0.5110999941825867, 0.5404999852180481, 0.566100001335144, 0.5530999898910522, 0.5699999928474426, 0.5697000026702881, 0.5479000210762024, 0.5509999990463257, 0.6171000003814697, 0.5511999726295471, 0.5947999954223633, 0.5813999772071838, 0.5730999708175659, 0.6107000112533569, 0.5697000026702881, 0.6226000189781189, 0.6129000186920166, 0.6168000102043152, 0.6202999949455261, 0.6025999784469604, 0.6385999917984009, 0.6036999821662903, 0.5914000272750854, 0.5936999917030334, 0.5737000107765198, 0.6176999807357788, 0.619700014591217, 0.6155999898910522, 0.582099974155426, 0.6017000079154968, 0.5979999899864197, 0.6320000290870667, 0.6323000192642212, 0.6118000149726868, 0.6092000007629395, 0.6003999710083008]}\n313/313 [==============================] - 3s 6ms/step - loss: 2.9399 - accuracy: 0.3268 - top3_acc: 0.5149 - top_k_categorical_accuracy: 0.6068\n[2.939875841140747, 0.32679998874664307, 0.5149000287055969, 0.6068000197410583]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7f1172d4d7ba51d40bc657a40caf4ae98f4e63f | 176,953 | ipynb | Jupyter Notebook | egs/singing_synthesis/s3/run.ipynb | YongliangHe/SingingVoiceSynthesis | 2121f690e56154eeeb998786558bd4eba2183132 | [
"Apache-2.0"
] | 15 | 2018-10-29T19:46:46.000Z | 2021-03-24T08:35:17.000Z | egs/singing_synthesis/s3/run.ipynb | audier/SingingVoiceSynthesis | 2121f690e56154eeeb998786558bd4eba2183132 | [
"Apache-2.0"
] | 3 | 2018-12-11T13:50:42.000Z | 2019-03-05T04:35:55.000Z | egs/singing_synthesis/s3/run.ipynb | audier/SingingVoiceSynthesis | 2121f690e56154eeeb998786558bd4eba2183132 | [
"Apache-2.0"
] | 1 | 2019-05-22T15:21:04.000Z | 2019-05-22T15:21:04.000Z | 52.758795 | 3,004 | 0.618729 | [
[
[
"# IMPORT",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nimport logging\nimport subprocess\nimport numpy as np\nfrom shutil import copy\nsys.path.insert(0, '/home/yongliang/third_party/merlin/src')\nfrom io_funcs.binary_io import BinaryIOCollection\n%load_ext autoreload\n%autoreload 2",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
]
],
[
[
"# UTILITY",
"_____no_output_____"
]
],
[
[
"def get_file_list_of_dir(dir_path):\n res = [os.path.join(dir_path, f) for f in os.listdir(dir_path)]\n res.sort()\n return res\n\ndef gen_file_list(dir_path, file_id_list, ext):\n return [os.path.join(dir_path, f + '.' + ext) for f in file_id_list]\n\ndef get_file_id_list(file_list):\n return [os.path.splitext(os.path.basename(f))[0] for f in file_list]\n\n\nio_funcs = BinaryIOCollection()",
"_____no_output_____"
]
],
[
[
"# CONFIGURATION\n",
"_____no_output_____"
]
],
[
[
"merlin_dir = '/home/yongliang/third_party/merlin'\nsilence_pattern = ['*-pau+*', '*-sil+*']\ncurr_dir = os.getcwd()\n# hardcoded\nnit_dir = os.path.join(curr_dir, 'nit')\nwav_dir = os.path.join(nit_dir, 'wav2')\nexp_dir = os.path.join(curr_dir, 'exp')\nif not os.path.exists(exp_dir):\n os.makedirs(exp_dir)\nlab_dir = os.path.join(exp_dir, 'lab')\norig_lab_file_list = get_file_list_of_dir(lab_dir)\nfile_id_list = get_file_id_list(orig_lab_file_list)\n\nSPTK = {'VOPR': '/home/yongliang/third_party/merlin/tools/bin/SPTK-3.9/vopr', \n 'MGC2SP': '/home/yongliang/third_party/merlin/tools/bin/SPTK-3.9/mgc2sp', \n 'C2ACR': '/home/yongliang/third_party/merlin/tools/bin/SPTK-3.9/c2acr', \n 'FREQT': '/home/yongliang/third_party/merlin/tools/bin/SPTK-3.9/freqt', \n 'MC2B': '/home/yongliang/third_party/merlin/tools/bin/SPTK-3.9/mc2b', \n 'MLPG': '/home/yongliang/third_party/merlin/tools/bin/SPTK-3.9/mlpg', \n 'B2MC': '/home/yongliang/third_party/merlin/tools/bin/SPTK-3.9/b2mc', \n 'VSUM': '/home/yongliang/third_party/merlin/tools/bin/SPTK-3.9/vsum', \n 'MERGE': '/home/yongliang/third_party/merlin/tools/bin/SPTK-3.9/merge', \n 'SOPR': '/home/yongliang/third_party/merlin/tools/bin/SPTK-3.9/sopr', \n 'BCP': '/home/yongliang/third_party/merlin/tools/bin/SPTK-3.9/bcp', \n 'VSTAT': '/home/yongliang/third_party/merlin/tools/bin/SPTK-3.9/vstat', \n 'X2X': '/home/yongliang/third_party/merlin/tools/bin/SPTK-3.9/x2x'}\nWORLD = {'SYNTHESIS': '/home/yongliang/third_party/merlin/tools/bin/WORLD/synth', \n 'ANALYSIS': '/home/yongliang/third_party/merlin/tools/bin/WORLD/analysis'}\n",
"_____no_output_____"
]
],
[
[
"# Prepare label files",
"_____no_output_____"
]
],
[
[
"hmm_dir = os.path.join(exp_dir, 'hmm')\nfull_dir = os.path.join(nit_dir, 'full')\nmono_dir = os.path.join(nit_dir, 'mono')\nphones = os.path.join(nit_dir, 'monophone')",
"_____no_output_____"
],
[
"from src.forced_alignment import ForcedAlignment\n\naligner = ForcedAlignment(hmm_dir, wav_dir, full_dir, mono_dir, phones, lab_dir)\naligner.prepare_training()\naligner.train_hmm(7, 32)\naligner.align()",
"---make file_id_list.scp: /home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/hmm/file_id_list.scp\n---make copy.scp: /home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/hmm/config/copy.scp\n---mfcc extraction at: /home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/hmm/mfc\n------make copy.cfg: /home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/hmm/config/cfg\n------extracting mfcc features...\n"
]
],
[
[
"# Feature extraction",
"_____no_output_____"
]
],
[
[
"feat_dir = os.path.join(exp_dir, 'feat')\nlf0_dir = os.path.join(feat_dir, 'lf0')\nbap_dir = os.path.join(feat_dir, 'bap')\nmgc_dir = os.path.join(feat_dir, 'mgc')\nsample_rate = 16000",
"_____no_output_____"
],
[
"from src.feature_extraction import FeatureExtractor\nfeature_extractor = FeatureExtractor(wav_dir, sample_rate, feat_dir)\nfeature_extractor.extract_feat()",
"\nnitech_jp_song070_f001_003\nRunning REAPER f0 extraction...\n"
]
],
[
[
"# Duration model",
"_____no_output_____"
],
[
"## Model configuration",
"_____no_output_____"
]
],
[
[
"# duration model related\n\n# hardcoded\ndur_lab_dim = 368\ndur_cmp_dim = 5\ndur_train_file_number = 27\ndur_valid_file_number = 1\ndur_test_file_number = 1\n\ndur_mdl_dir = os.path.join(exp_dir, 'duration_model')\nif not os.path.exists(dur_mdl_dir):\n os.makedirs(dur_mdl_dir)\n\ndur_tmp_dir = os.path.join(dur_mdl_dir, 'tmp')\nif not os.path.exists(dur_tmp_dir):\n os.makedirs(dur_tmp_dir)\n \ndur_inter_dir = os.path.join(dur_mdl_dir, 'inter')\nif not os.path.exists(dur_inter_dir):\n os.makedirs(dur_inter_dir)\n\ndur_lab_dir = os.path.join(dur_inter_dir, 'lab_' + str(dur_lab_dim))\nif not os.path.exists(dur_lab_dir):\n os.makedirs(dur_lab_dir)\n \ndur_lab_no_silence_dir = os.path.join(dur_inter_dir, 'lab_no_silence_' + str(dur_lab_dim))\nif not os.path.exists(dur_lab_no_silence_dir):\n os.makedirs(dur_lab_no_silence_dir)\n \ndur_lab_no_silence_norm_dir = os.path.join(dur_inter_dir, 'lab_no_silence_norm_' + str(dur_lab_dim))\nif not os.path.exists(dur_lab_no_silence_norm_dir):\n os.makedirs(dur_lab_no_silence_norm_dir)\n\ndur_dur_dir = os.path.join(dur_inter_dir, 'dur')\nif not os.path.exists(dur_dur_dir):\n os.makedirs(dur_dur_dir)\n \ndur_cmp_dir = os.path.join(dur_inter_dir, 'cmp_' + str(dur_cmp_dim))\nif not os.path.exists(dur_cmp_dir):\n os.makedirs(dur_cmp_dir)\n \ndur_cmp_no_silence_dir = os.path.join(dur_inter_dir, 'cmp_no_silence_' + str(dur_cmp_dim))\nif not os.path.exists(dur_cmp_no_silence_dir):\n os.makedirs(dur_cmp_no_silence_dir)\n \ndur_cmp_no_silence_norm_dir = os.path.join(dur_inter_dir, 'cmp_no_silence_norm_' + str(dur_cmp_dim))\nif not os.path.exists(dur_cmp_no_silence_norm_dir):\n os.makedirs(dur_cmp_no_silence_norm_dir)\n \ndur_variance_dir = os.path.join(dur_inter_dir, 'variance')\nif not os.path.exists(dur_variance_dir):\n os.makedirs(dur_variance_dir)\n \ndur_nn_mdl_dir = os.path.join(dur_mdl_dir, 'mdl')\nif not os.path.exists(dur_nn_mdl_dir):\n os.makedirs(dur_nn_mdl_dir)\n \n \n\ndur_lab_norm_file = os.path.join(dur_inter_dir, 'lab_norm_' + str(dur_lab_dim) + '.dat')\ndur_cmp_norm_file = os.path.join(dur_inter_dir, 'cmp_norm_' + str(dur_cmp_dim) + '.dat')\n\ndur_dur_var_file = os.path.join(dur_variance_dir, 'dur')\n\n \ndur_lab_file_list = gen_file_list(dur_lab_dir, file_id_list, 'labbin')\ndur_lab_no_silence_file_list = gen_file_list(dur_lab_no_silence_dir, file_id_list, 'labbin')\ndur_lab_no_silence_norm_file_list = gen_file_list(dur_lab_no_silence_norm_dir, file_id_list, 'labbin')\ndur_dur_file_list = gen_file_list(dur_dur_dir, file_id_list, 'dur')\ndur_cmp_file_list = gen_file_list(dur_cmp_dir, file_id_list, 'cmp')\ndur_cmp_no_silence_file_list = gen_file_list(dur_cmp_no_silence_dir, file_id_list, 'cmp')\ndur_cmp_no_silence_norm_file_list = gen_file_list(dur_cmp_no_silence_norm_dir, file_id_list, 'cmp')",
"_____no_output_____"
]
],
[
[
"## Feature extraction from label files",
"_____no_output_____"
]
],
[
[
"ques_dir = os.path.join(curr_dir, 'ques')\nquestion = os.path.join(ques_dir, 'general')",
"_____no_output_____"
],
[
"from frontend.label_normalisation import HTSLabelNormalisation\ndur_lab_normaliser = HTSLabelNormalisation(question, add_frame_features=False, subphone_feats='none')\ndur_lab_normaliser.perform_normalisation(orig_lab_file_list, dur_lab_file_list)",
"_____no_output_____"
]
],
[
[
"## Remove silence phone",
"_____no_output_____"
]
],
[
[
"from frontend.silence_remover import SilenceRemover\ndur_silence_remover = SilenceRemover(n_cmp=dur_lab_dim, silence_pattern=silence_pattern, remove_frame_features=False, subphone_feats='none')\ndur_silence_remover.remove_silence(dur_lab_file_list, orig_lab_file_list, dur_lab_no_silence_file_list)",
"_____no_output_____"
],
[
"_, num_frame = io_funcs.load_binary_file_frame(dur_lab_file_list[2], 368)\n_, num_frame_nn = io_funcs.load_binary_file_frame(dur_lab_no_silence_file_list[2], 368)\nprint(num_frame)\nprint(num_frame_nn)",
"131\n124\n"
],
[
"tmp, _ = io_funcs.load_binary_file_frame(dur_lab_file_list[2], 368)\nprint(tmp)",
"[[ 0. 0. 0. ... -1. -1. -1.]\n [ 0. 0. 0. ... -1. -1. -1.]\n [ 0. 0. 0. ... 192. 0. 100.]\n ...\n [ 0. 0. 0. ... 72. 57. 43.]\n [ 0. 0. 0. ... -1. -1. -1.]\n [ 0. 0. 0. ... -1. -1. -1.]]\n"
]
],
[
[
"## Input feature normalization",
"_____no_output_____"
]
],
[
[
"from frontend.min_max_norm import MinMaxNormalisation\ndur_min_max_normaliser = MinMaxNormalisation(feature_dimension=dur_lab_dim, min_value=0.01, max_value=0.99)\ndur_min_max_normaliser.find_min_max_values(dur_lab_no_silence_file_list[0: dur_train_file_number])\ndur_min_max_normaliser.normalise_data(dur_lab_no_silence_file_list, dur_lab_no_silence_norm_file_list)\ndur_label_min_vector = dur_min_max_normaliser.min_vector\ndur_label_max_vector = dur_min_max_normaliser.max_vector\ndur_label_norm_info = np.concatenate((dur_label_min_vector, dur_label_max_vector), axis=0)\ndur_label_norm_info = np.array(dur_label_norm_info, 'float32')\nfid = open(dur_lab_norm_file, 'wb')\ndur_label_norm_info.tofile(fid)\nfid.close()",
"_____no_output_____"
],
[
"dur_label_norm_info.shape",
"_____no_output_____"
]
],
[
[
"## Compute duration from label files",
"_____no_output_____"
]
],
[
[
"dur_lab_normaliser.prepare_dur_data(orig_lab_file_list, dur_dur_file_list, feature_type='numerical')",
"_____no_output_____"
],
[
"feat, num_frame = io_funcs.load_binary_file_frame(dur_dur_file_list[0], 5)\nprint(feat.shape)\nprint(num_frame)\nprint(feat[0:10, :])",
"(151, 5)\n151\n[[ 1. 161. 34. 18. 22.]\n [ 2. 2. 2. 1. 1.]\n [ 2. 1. 66. 7. 3.]\n [ 5. 10. 1. 1. 15.]\n [ 2. 5. 2. 1. 1.]\n [ 4. 1. 32. 4. 5.]\n [ 6. 1. 27. 20. 1.]\n [ 1. 6. 2. 1. 2.]\n [ 1. 54. 1. 1. 2.]\n [ 10. 9. 4. 2. 4.]]\n"
],
[
"dur_dur_file_list[2]",
"_____no_output_____"
]
],
[
[
"## Make output features for duration model",
"_____no_output_____"
]
],
[
[
"delta_win = [-0.5, 0.0, 0.5]\nacc_win = [1.0, -2.0, 1.0]\n\"\"\"\n\"in\" & \"out\" just mean before & after feature composition \nlike if we compute dynamic features, dimensions of out will be 3 times of in\nnot really mean in & out of the network \n\"\"\" \ndur_in_dimension_dict = {'dur': 5} \ndur_out_dimension_dict = {'dur': 5}\ndur_in_file_list_dict = {'dur': dur_dur_file_list}",
"_____no_output_____"
],
[
"from frontend.acoustic_composition import AcousticComposition\ndur_acoustic_worker = AcousticComposition(delta_win = delta_win, acc_win = acc_win)\ndur_acoustic_worker.prepare_nn_data(dur_in_file_list_dict, dur_cmp_file_list, dur_in_dimension_dict, dur_out_dimension_dict)",
"_____no_output_____"
],
[
"feat, num_frame = io_funcs.load_binary_file_frame(dur_cmp_file_list[2], 5)\nprint(feat.shape)\nprint(num_frame)\nprint(feat[0:10, :])",
"(131, 5)\n131\n[[ 13. 72. 166. 2. 54.]\n [ 9. 47. 1. 54. 32.]\n [ 1. 18. 4. 4. 4.]\n [ 3. 100. 1. 2. 6.]\n [ 4. 5. 3. 3. 3.]\n [ 1. 32. 1. 1. 3.]\n [ 5. 3. 2. 2. 3.]\n [ 2. 26. 1. 5. 8.]\n [ 2. 3. 3. 7. 6.]\n [ 6. 77. 17. 5. 5.]]\n"
]
],
[
[
"## Remove silence phone",
"_____no_output_____"
]
],
[
[
"dur_silence_remover = SilenceRemover(n_cmp = dur_cmp_dim, silence_pattern = silence_pattern, remove_frame_features = False, subphone_feats = 'none')\ndur_silence_remover.remove_silence(dur_cmp_file_list, orig_lab_file_list, dur_cmp_no_silence_file_list) ",
"_____no_output_____"
],
[
"_, num_frame = io_funcs.load_binary_file_frame(dur_cmp_file_list[2], 5)\n_, num_frame_nn = io_funcs.load_binary_file_frame(dur_cmp_no_silence_file_list[2], 5)\nprint(num_frame)\nprint(num_frame_nn)",
"131\n124\n"
]
],
[
[
"## Output feature (duration) normalization",
"_____no_output_____"
]
],
[
[
"from frontend.mean_variance_norm import MeanVarianceNorm\ndur_mvn_normaliser = MeanVarianceNorm(feature_dimension=dur_cmp_dim)\ndur_global_mean_vector = dur_mvn_normaliser.compute_mean(dur_cmp_no_silence_file_list[0: dur_train_file_number], 0, dur_cmp_dim)\ndur_global_std_vector = dur_mvn_normaliser.compute_std(dur_cmp_no_silence_file_list[0: dur_train_file_number], dur_global_mean_vector, 0, dur_cmp_dim)\ndur_mvn_normaliser.feature_normalisation(dur_cmp_no_silence_file_list, dur_cmp_no_silence_norm_file_list)\ndur_cmp_norm_info = np.concatenate((dur_global_mean_vector, dur_global_std_vector), axis=0)\ndur_cmp_norm_info = np.array(dur_cmp_norm_info, 'float32')\nfid = open(dur_cmp_norm_file, 'wb')\ndur_cmp_norm_info.tofile(fid)\nfid.close()",
"_____no_output_____"
],
[
"tmp1, num1 = io_funcs.load_binary_file_frame(dur_cmp_no_silence_file_list[0], 5)\ntmp2, num2 = io_funcs.load_binary_file_frame(dur_cmp_no_silence_norm_file_list[0], 5)\nprint(num1 == num2)\nprint(tmp2)",
"True\n[[-0.7233047 -0.43827328 -0.4121524 -0.33808738 -0.66881937]\n [-0.7233047 -0.46908876 1.6973542 -0.10570705 -0.3786838 ]\n [-0.09544833 -0.19174956 -0.44511345 -0.33808738 1.3621298 ]\n [-0.7233047 -0.3458269 -0.4121524 -0.33808738 -0.66881937]\n [-0.30473378 -0.46908876 0.5766788 -0.22189721 -0.0885482 ]\n [ 0.11383712 -0.46908876 0.4118736 0.3977837 -0.66881937]\n [-0.9325901 -0.3150114 -0.4121524 -0.33808738 -0.5237516 ]\n [-0.9325901 1.164131 -0.44511345 -0.33808738 -0.5237516 ]\n [ 0.95097893 -0.22256503 -0.34623033 -0.29935732 -0.233616 ]\n [-0.9325901 -0.46908876 -0.4121524 -0.29935732 -0.0885482 ]\n [ 0.11383712 -0.37664235 -0.44511345 -0.33808738 -0.5237516 ]\n [-0.30473378 0.39374432 -0.44511345 -0.33808738 -0.233616 ]\n [ 0.95097893 -0.16093409 -0.34623033 -0.26062727 -0.3786838 ]\n [-0.7233047 -0.46908876 0.4118736 -0.33808738 0.3466552 ]\n [ 0.11383712 -0.37664235 -0.37919137 -0.29935732 -0.3786838 ]\n [-0.51401925 -0.46908876 1.4336659 -0.29935732 0.0565196 ]\n [ 0.11383712 -0.37664235 -0.4121524 -0.29935732 -0.233616 ]\n [-0.09544833 -0.46908876 -0.44511345 -0.33808738 0.3466552 ]\n [-0.51401925 -0.3458269 -0.2473472 -0.29935732 -0.233616 ]\n [-0.7233047 0.45537525 -0.4121524 -0.33808738 -0.0885482 ]\n [-0.09544833 -0.22256503 -0.4121524 -0.33808738 -0.5237516 ]\n [ 0.11383712 0.45537525 -0.44511345 -0.33808738 -0.0885482 ]\n [-0.30473378 -0.37664235 -0.44511345 -0.29935732 -0.66881937]\n [-0.9325901 -0.43827328 3.14764 -0.33808738 0.3466552 ]\n [-0.7233047 0.6094526 0.44483465 -0.29935732 0.3466552 ]\n [-0.51401925 -0.37664235 -0.37919137 -0.26062727 -0.3786838 ]\n [-0.51401925 -0.13011862 -0.34623033 -0.10570705 -0.66881937]\n [ 0.32312256 -0.46908876 0.80740607 -0.26062727 -0.233616 ]\n [-0.30473378 -0.3458269 -0.4121524 -0.22189721 -0.66881937]\n [-0.09544833 -0.46908876 -0.44511345 1.3660351 -0.5237516 ]\n [-0.09544833 -0.46908876 1.56551 -0.22189721 0.0565196 ]\n [-0.30473378 -0.40745783 -0.28030825 -0.26062727 -0.233616 ]\n [-0.09544833 -0.46908876 -0.28030825 -0.26062727 -0.5237516 ]\n [-0.30473378 -0.3150114 -0.4121524 -0.29935732 -0.3786838 ]\n [-0.7233047 -0.46908876 0.44483465 -0.18316716 -0.66881937]\n [ 0.95097893 -0.40745783 -0.34623033 -0.22189721 -0.0885482 ]\n [-0.9325901 -0.43827328 0.4118736 -0.26062727 0.0565196 ]\n [-0.09544833 -0.40745783 -0.44511345 -0.29935732 -0.5237516 ]\n [ 0.532408 0.85597634 -0.21438617 -0.33808738 0.6367908 ]\n [-0.51401925 -0.40745783 -0.37919137 -0.29935732 -0.3786838 ]\n [ 0.11383712 -0.40745783 -0.44511345 -0.33808738 0.3466552 ]\n [ 0.32312256 -0.43827328 -0.4121524 -0.33808738 1.217062 ]\n [-0.9325901 -0.46908876 0.44483465 -0.26062727 0.3466552 ]\n [ 1.7881207 0.3321134 -0.44511345 -0.06697699 -0.233616 ]\n [-0.7233047 -0.37664235 -0.4121524 -0.29935732 -0.5237516 ]\n [-0.7233047 0.20885152 1.4666269 -0.33808738 4.698689 ]\n [-0.09544833 -0.37664235 -0.18142512 -0.29935732 -0.0885482 ]\n [-0.51401925 -0.46908876 0.84036714 -0.33808738 -0.233616 ]\n [ 0.11383712 -0.43827328 -0.34623033 -0.26062727 0.0565196 ]\n [-0.9325901 -0.46908876 -0.44511345 0.78508425 1.0719942 ]\n [-0.7233047 -0.40745783 -0.4121524 -0.26062727 -0.233616 ]\n [-0.9325901 -0.46908876 -0.44511345 0.66889405 0.0565196 ]\n [ 0.32312256 -0.25338048 -0.4121524 -0.29935732 -0.233616 ]\n [-0.7233047 -0.46908876 0.84036714 -0.33808738 -0.66881937]\n [ 0.95097893 -0.3458269 -0.44511345 -0.1444371 -0.233616 ]\n [ 1.9974061 -0.03767221 -0.44511345 -0.22189721 -0.233616 ]\n [-0.51401925 -0.3458269 -0.4121524 -0.33808738 -0.66881937]\n [-0.51401925 
0.6094526 -0.44511345 -0.33808738 -0.233616 ]\n [-0.09544833 -0.43827328 -0.44511345 -0.29935732 0.0565196 ]\n [ 0.95097893 -0.46908876 0.04930216 -0.18316716 -0.0885482 ]\n [-0.09544833 -0.22256503 -0.4121524 -0.29935732 -0.233616 ]\n [-0.30473378 -0.16093409 0.34595153 -0.29935732 0.6367908 ]\n [ 2.834548 0.05477419 -0.44511345 -0.33808738 0.2015874 ]\n [ 0.11383712 -0.3150114 -0.37919137 -0.22189721 -0.233616 ]\n [ 0.532408 0.30129793 -0.44511345 -0.33808738 -0.5237516 ]\n [-0.09544833 -0.25338048 -0.4121524 -0.26062727 -0.3786838 ]\n [-0.9325901 -0.46908876 1.3677437 -0.29935732 -0.0885482 ]\n [ 0.95097893 -0.40745783 -0.4121524 -0.29935732 -0.66881937]\n [-0.7233047 -0.46908876 -0.44511345 -0.33808738 0.6367908 ]\n [-0.09544833 -0.25338048 -0.2473472 -0.29935732 -0.3786838 ]\n [-0.30473378 1.9345177 -0.44511345 -0.33808738 2.3776042 ]\n [-0.7233047 -0.19174956 -0.44511345 -0.29935732 -0.5237516 ]\n [ 3.2531188 -0.46908876 1.0710944 -0.22189721 -0.3786838 ]\n [-0.51401925 0.36292887 -0.44511345 -0.29935732 -0.0885482 ]\n [-0.9325901 -0.43827328 -0.44511345 -0.29935732 -0.66881937]\n [-0.7233047 -0.46908876 0.6426009 -0.29935732 0.0565196 ]\n [ 2.6252625 0.5170062 -0.44511345 -0.1444371 -0.66881937]\n [-0.9325901 -0.09930315 -0.4121524 -0.33808738 -0.5237516 ]\n [-0.09544833 0.701899 -0.44511345 -0.33808738 -0.233616 ]\n [ 0.7416935 -0.06848768 -0.34623033 -0.26062727 -0.0885482 ]\n [-0.9325901 -0.46908876 -0.44511345 -0.29935732 -0.0885482 ]\n [-0.51401925 -0.3458269 -0.4121524 -0.33808738 -0.5237516 ]\n [-0.7233047 0.20885152 -0.44511345 -0.26062727 -0.233616 ]\n [ 0.532408 -0.22256503 -0.28030825 -0.29935732 -0.233616 ]\n [-0.9325901 -0.46908876 0.44483465 -0.33808738 -0.0885482 ]\n [ 0.11383712 -0.37664235 -0.34623033 -0.26062727 -0.233616 ]\n [ 0.7416935 0.02395872 0.6096398 -0.29935732 0.3466552 ]\n [-0.30473378 -0.40745783 -0.4121524 -0.26062727 -0.3786838 ]\n [-0.09544833 -0.46908876 -0.44511345 -0.33808738 0.3466552 ]\n [-0.09544833 -0.37664235 -0.21438617 -0.26062727 -0.233616 ]\n [-0.51401925 0.30129793 -0.44511345 -0.29935732 -0.3786838 ]\n [-0.09544833 -0.22256503 -0.44511345 -0.33808738 -0.3786838 ]\n [-0.7233047 0.701899 -0.44511345 -0.29935732 -0.5237516 ]\n [-0.09544833 -0.3150114 -0.4121524 -0.33808738 -0.66881937]\n [-0.9325901 -0.46908876 2.7850685 -0.33808738 2.6677399 ]\n [-0.7233047 1.0716846 0.04930216 -0.29935732 -0.3786838 ]\n [-0.51401925 -0.3458269 -0.37919137 -0.26062727 -0.3786838 ]\n [ 0.532408 -0.3458269 -0.3132693 0.04921318 -0.66881937]\n [-0.51401925 -0.46908876 0.80740607 -0.29935732 -0.233616 ]\n [-0.30473378 -0.3458269 -0.4121524 -0.18316716 -0.66881937]\n [ 0.11383712 -0.46908876 -0.44511345 0.90127444 -0.3786838 ]\n [-0.30473378 -0.25338048 -0.44511345 -0.29935732 -0.5237516 ]\n [-0.51401925 -0.46908876 1.4666269 -0.22189721 -0.233616 ]\n [ 0.11383712 -0.46908876 -0.4121524 -0.29935732 0.0565196 ]\n [-0.7233047 -0.43827328 -0.37919137 -0.22189721 -0.66881937]\n [-0.09544833 -0.28419596 -0.4121524 -0.29935732 -0.3786838 ]\n [-0.7233047 -0.46908876 0.708523 -0.22189721 -0.66881937]\n [ 0.32312256 -0.37664235 -0.34623033 -0.22189721 0.491723 ]\n [-0.9325901 -0.40745783 0.28002945 -0.22189721 0.0565196 ]\n [ 0.11383712 -0.40745783 -0.44511345 -0.33808738 -0.5237516 ]\n [ 0.532408 0.94842273 -0.44511345 -0.33808738 0.6367908 ]\n [-0.30473378 -0.40745783 -0.34623033 -0.26062727 0.0565196 ]\n [-0.51401925 -0.3150114 -0.44511345 -0.33808738 0.2015874 ]\n [ 0.32312256 -0.40745783 -0.37919137 -0.33808738 0.6367908 ]\n [-0.9325901 -0.46908876 0.4777957 
-0.33808738 0.6367908 ]\n [ 1.7881207 0.23966698 -0.44511345 -0.02824693 -0.233616 ]\n [-0.09544833 -0.37664235 -0.44511345 -0.26062727 -0.3786838 ]\n [-0.7233047 0.36292887 1.3018217 -0.33808738 1.9424009 ]\n [ 0.11383712 -0.3458269 -0.2473472 -0.29935732 -0.0885482 ]\n [-0.9325901 -0.46908876 0.87332815 -0.33808738 -0.5237516 ]\n [-0.09544833 -0.28419596 -0.4121524 -0.18316716 -0.5237516 ]\n [-0.51401925 -0.46908876 -0.44511345 0.94000447 0.491723 ]\n [-0.30473378 -0.46908876 -0.4121524 -0.22189721 -0.233616 ]\n [-0.51401925 -0.46908876 -0.44511345 0.3977837 0.3466552 ]\n [ 0.32312256 -0.28419596 -0.34623033 -0.33808738 -0.233616 ]\n [-0.30473378 -0.46908876 0.84036714 -0.33808738 -0.66881937]\n [ 1.7881207 -0.46908876 -0.44511345 -0.1444371 0.0565196 ]\n [ 0.532408 -0.40745783 0.21410736 -0.22189721 -0.5237516 ]\n [-0.09544833 -0.37664235 -0.4121524 -0.33808738 -0.66881937]\n [-0.9325901 0.6402681 -0.44511345 -0.33808738 -0.0885482 ]\n [-0.30473378 -0.46908876 -0.4121524 -0.29935732 0.3466552 ]\n [-0.51401925 -0.46908876 0.14818528 -0.26062727 -0.0885482 ]\n [-0.09544833 -0.22256503 -0.37919137 -0.33808738 -0.0885482 ]\n [-0.09544833 0.4245598 -0.37919137 -0.33808738 -0.0885482 ]\n [-0.51401925 -0.19174956 -0.4121524 -0.26062727 -0.66881937]\n [-0.7233047 0.4245598 -0.44511345 -0.33808738 0.2015874 ]\n [-0.09544833 -0.37664235 -0.4121524 -0.29935732 -0.3786838 ]\n [ 1.3695499 -0.22256503 0.01634112 -0.33808738 -0.3786838 ]\n [-0.09544833 -0.28419596 -0.37919137 -0.26062727 -0.233616 ]\n [-0.7233047 -0.3458269 1.2358996 -0.29935732 0.0565196 ]\n [ 0.532408 -0.3458269 -0.44511345 -0.29935732 -0.66881937]\n [ 0.7416935 -0.46908876 -0.44511345 -0.33808738 -0.0885482 ]\n [ 0.11383712 -0.3150114 -0.21438617 -0.29935732 -0.5237516 ]\n [ 0.11383712 2.088595 -0.44511345 -0.33808738 6.294435 ]]\n"
],
[
"dur_cmp_norm_info.shape",
"_____no_output_____"
],
[
"dur_variance_file_dict = {'dur': dur_dur_var_file}",
"_____no_output_____"
],
[
"feat_ind = 0\nfor feat in list(dur_out_dimension_dict.keys()):\n feat_std_vector = np.array(dur_global_std_vector[:, feat_ind: feat_ind + dur_out_dimension_dict[feat]], 'float32')\n fid = open(dur_variance_file_dict[feat], 'w')\n feat_var_vector = feat_std_vector**2\n feat_var_vector.tofile(fid)\n fid.close()\n feat_ind += dur_out_dimension_dict[feat]",
"_____no_output_____"
],
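[
"# Hedged check (added): read one variance file back from disk to confirm it stores std**2 per stream.\n# 'dur' is the only stream in dur_out_dimension_dict, so the file should hold 5 float32 values.\nwith open(dur_variance_file_dict['dur'], 'rb') as fid_check:\n    dur_var_check = np.fromfile(fid_check, dtype=np.float32)\nprint(dur_var_check)  # expected to equal dur_global_std_vector ** 2",
"_____no_output_____"
],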
[
"print(dur_global_std_vector)\nprint(feat_var_vector)",
"[[ 4.77816306 32.451236 30.33884862 25.81974051 6.89332861]]\n[[ 22.830841 1053.0828 920.4457 666.659 47.51798 ]]\n"
]
],
[
[
"## Model training",
"_____no_output_____"
]
],
[
[
"import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.utils.data as data\nfrom torch.autograd import Variable\nimport math\n\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"class DurationDataset(data.Dataset):\n def __init__(self, lab_file_list, cmp_file_list, lab_dim=368, cmp_dim=5):\n assert(len(lab_file_list) == len(cmp_file_list))\n for i in range(len(lab_file_list)):\n lab_basename = os.path.splitext(os.path.basename(lab_file_list[i]))[0]\n cmp_basename = os.path.splitext(os.path.basename(cmp_file_list[i]))[0]\n# print(lab_basename)\n# print(cmp_basename)\n# print('*' * 20)\n assert lab_basename == cmp_basename\n self.lab_file_list = lab_file_list\n self.cmp_file_list = cmp_file_list\n self.lab_dim = lab_dim\n self.cmp_dim = cmp_dim\n self.io_funcs = BinaryIOCollection()\n \n def __len__(self):\n return len(self.lab_file_list)\n \n def __getitem__(self, ind):\n X = torch.from_numpy(self.io_funcs.load_binary_file_frame(self.lab_file_list[ind], self.lab_dim)[0])\n Y = torch.from_numpy(self.io_funcs.load_binary_file_frame(self.cmp_file_list[ind], self.cmp_dim)[0])\n \n return X, Y",
"_____no_output_____"
],
[
"def collate_fn(batch):\n def func(p):\n return p[0].size(0)\n \n batch_size = len(batch)\n \n max_seq_len = max(batch, key=func)[0].size(0)\n min_seq_len = min(batch, key=func)[0].size(0)\n \n lab_dim = batch[0][0].size(1)\n cmp_dim = batch[0][1].size(1)\n \n\n \n \n if max_seq_len <= 2 * min_seq_len:\n sample_len = int(min_seq_len/2)\n else:\n sample_len = min_seq_len\n \n \n total_sample_num = 0\n for i in range(batch_size):\n lab = batch[i][0]\n cmp = batch[i][1]\n num_seq = math.ceil(lab.size(0)/sample_len)\n total_sample_num += num_seq\n\n\n \n labs = torch.zeros(total_sample_num, sample_len, lab_dim)\n cmps = torch.zeros(total_sample_num, sample_len, cmp_dim)\n \n curr_sample_ind = 0\n for i in range(batch_size):\n ind_in_file = 0\n for j in range(math.floor(batch[i][0].size(0)/sample_len)):\n labs[curr_sample_ind].copy_(batch[i][0][ind_in_file * sample_len: (ind_in_file+1) * sample_len][:])\n cmps[curr_sample_ind].copy_(batch[i][1][ind_in_file * sample_len: (ind_in_file+1) * sample_len][:])\n ind_in_file += 1\n curr_sample_ind += 1\n if batch[i][0].size(0) % sample_len != 0:\n labs[curr_sample_ind].copy_(batch[i][0][-sample_len:][:])\n cmps[curr_sample_ind].copy_(batch[i][1][-sample_len:][:])\n curr_sample_ind += 1\n \n assert(curr_sample_ind == total_sample_num)\n \n# print(\"lab dimension: \" + str(lab_dim))\n# print(\"cmp dimension: \" + str(cmp_dim))\n# seq_len_list = [i[0].size(0) for i in batch]\n# print(\"sequence length for each file: \" + str(seq_len_list))\n# print('max sequence length of original file: ', str(max_seq_len))\n# print('min sequence length of original file: ', str(min_seq_len))\n# print(\"sample length: \", str(sample_len))\n# print('total_sample_num: ' + str(total_sample_num))\n \n# torch.set_printoptions(profile=\"full\")\n# print(batch[0][1])\n# print(cmps)\n# torch.set_printoptions(profile=\"default\")\n \n return labs, cmps, sample_len",
"_____no_output_____"
],
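[
"# Hedged sketch (added, not part of the original run): a toy check of collate_fn's chunking.\n# Two fake (lab, cmp) pairs of different lengths are cut into fixed-length samples of sample_len\n# frames each; the tensor sizes below are arbitrary illustrative choices.\nfake_batch = [(torch.zeros(12, 4), torch.zeros(12, 2)),\n              (torch.zeros(7, 4), torch.zeros(7, 2))]\nlabs_demo, cmps_demo, sample_len_demo = collate_fn(fake_batch)\nprint(labs_demo.size(), cmps_demo.size(), sample_len_demo)  # expect (7, 3, 4), (7, 3, 2), 3",
"_____no_output_____"
],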
[
"class DurationModel(nn.Module):\n def __init__(self, dur_lab_dim, dur_cmp_dim):\n super(DurationModel, self).__init__()\n \n self.fc1 = nn.Linear(dur_lab_dim, 512)\n self.fc2 = nn.Linear(512, 512)\n self.fc3 = nn.Linear(512, 512)\n self.fc4 = nn.Linear(512, 512)\n# self.fc4 = nn.Linear(512, dur_cmp_dim)\n self.fc5 = nn.Linear(512, dur_cmp_dim)\n \n def forward(self, dur_lab):\n res = F.relu(self.fc1(dur_lab))\n res = F.relu(self.fc2(res))\n res = F.relu(self.fc3(res))\n# res = self.fc4(res)\n res = F.relu(self.fc4(res))\n res = self.fc5(res)\n return res",
"_____no_output_____"
],
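[
"# Hedged shape check (added): run a throwaway DurationModel on random input just to show the\n# mapping from (batch, frames, dur_lab_dim) linguistic features to (batch, frames, dur_cmp_dim)\n# duration features; dur_lab_dim / dur_cmp_dim are the values configured earlier (368 and 5).\nshape_check_model = DurationModel(dur_lab_dim, dur_cmp_dim)\nprint(shape_check_model(torch.randn(2, 7, dur_lab_dim)).size())  # expect torch.Size([2, 7, dur_cmp_dim])",
"_____no_output_____"
],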
[
"def train(model, train_loader, valid_loader, epoch):\n model.train()\n for batch_ind, batch in enumerate(train_loader):\n lab, cmp = Variable(batch[0]), Variable(batch[1])\n loss = criterion(model(lab), cmp)\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n if batch_ind % log_interval == 0:\n val_loss = evaluate(model, valid_loader)\n train_loss = loss.item()\n epoch_progress = 100. * batch_ind / len(train_loader)\n print('Train Epoch: {}({:.0f}%)\\tTrain Loss: {:.6f}\\tVal Loss: {:.6f}'.format(\n epoch, epoch_progress, train_loss, val_loss))\n \n \ndef evaluate(model, valid_loader):\n model.eval()\n total_loss = 0\n n_examples = 0\n for batch_ind, batch in enumerate(valid_loader):\n lab, cmp, sample_len = Variable(batch[0], volatile=True), Variable(batch[1]), batch[2]\n output = model(lab)\n loss = criterion(output, cmp, size_average=False).item()\n# print('loss without average: ' + str(loss))\n total_loss += loss\n n = len(lab)\n# print('num samples this batch: ' + str(n))\n# print(cmp.size())\n n_examples += n\n \n total_loss /= (n_examples * sample_len * cmp.size()[-1])\n return total_loss",
"_____no_output_____"
],
[
"# TODO to implement cross-validation, print something here to see whether could do normalisation in Pytorch\nbatch_size = int(dur_train_file_number)\n# batch_size = 1\n# dur_train_set = DurationDataset([dur_lab_no_silence_norm_file_list[0]] * dur_train_file_number, \n# [dur_cmp_no_silence_norm_file_list[0]] * dur_train_file_number)\n# dur_valid_set = DurationDataset(dur_lab_no_silence_norm_file_list[dur_train_file_number: dur_train_file_number + dur_valid_file_number],\n# dur_cmp_no_silence_norm_file_list[dur_train_file_number: dur_train_file_number + dur_valid_file_number])\ndur_train_set = DurationDataset(dur_lab_no_silence_norm_file_list[:dur_train_file_number], dur_cmp_no_silence_norm_file_list[:dur_train_file_number])\ndur_valid_set = DurationDataset(dur_lab_no_silence_norm_file_list[:dur_valid_file_number], dur_cmp_no_silence_norm_file_list[:dur_valid_file_number])\ndur_train_loader = data.DataLoader(dur_train_set, shuffle=False, batch_size=batch_size, collate_fn=collate_fn)\ndur_valid_loader = data.DataLoader(dur_valid_set, shuffle=False, batch_size=dur_valid_file_number, collate_fn=collate_fn)\n\ntmp = next(iter(dur_train_loader))\nlab_, cmp_, _ = tmp\nprint(lab_.size())\nprint(cmp_.size())\nprint(len(lab_))\nprint(len(Variable(lab_)))\nprint(len(dur_valid_set))",
"_____no_output_____"
],
[
"lr = 0.001\nlog_interval = 1\nepochs = 100\nduration_model = nn.Sequential(\n nn.Linear(dur_lab_dim, 512),\n nn.Dropout(0.5),\n nn.ReLU(),\n nn.Linear(512, 512),\n nn.Dropout(0.5),\n nn.ReLU(),\n nn.Linear(512, 512),\n nn.Dropout(0.5),\n nn.ReLU(),\n nn.Linear(512, 512),\n nn.Dropout(0.5),\n nn.ReLU(),\n nn.Linear(512, dur_cmp_dim)\n)\nprint(duration_model)\noptimizer = torch.optim.Adam(duration_model.parameters(), lr=lr)\ncriterion = F.mse_loss\nscheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[90], gamma=0.01)\nfor epoch in range(1, epochs+1):\n scheduler.step()\n train(duration_model, dur_train_loader, dur_valid_loader, epoch)",
"Sequential(\n (0): Linear(in_features=368, out_features=512, bias=True)\n (1): Dropout(p=0.5)\n (2): ReLU()\n (3): Linear(in_features=512, out_features=512, bias=True)\n (4): Dropout(p=0.5)\n (5): ReLU()\n (6): Linear(in_features=512, out_features=512, bias=True)\n (7): Dropout(p=0.5)\n (8): ReLU()\n (9): Linear(in_features=512, out_features=512, bias=True)\n (10): Dropout(p=0.5)\n (11): ReLU()\n (12): Linear(in_features=512, out_features=5, bias=True)\n)\n"
],
[
"lr = 0.25\nlog_interval = 1\nepochs = 200\nduration_model = DurationModel(dur_lab_dim, dur_cmp_dim)\nprint(duration_model)\noptimizer = torch.optim.SGD(duration_model.parameters(), lr=lr)\ncriterion = F.mse_loss\nfor epoch in range(1, epochs+1):\n train(duration_model, dur_train_loader, dur_valid_loader, epoch)",
"DurationModel(\n (fc1): Linear(in_features=368, out_features=512, bias=True)\n (fc2): Linear(in_features=512, out_features=512, bias=True)\n (fc3): Linear(in_features=512, out_features=512, bias=True)\n (fc4): Linear(in_features=512, out_features=512, bias=True)\n (fc5): Linear(in_features=512, out_features=5, bias=True)\n)\n"
],
[
"dur_nn_mdl_file = os.path.join(dur_nn_mdl_dir, 'dur_nn_mdl.pt')\ntorch.save(duration_model.state_dict(), dur_nn_mdl_file)",
"_____no_output_____"
]
],
[
[
"## Test",
"_____no_output_____"
]
],
[
[
"print('files for test: ')\nprint(dur_lab_no_silence_norm_file_list[-1])\nprint(dur_cmp_no_silence_norm_file_list[-1])\nprint(dur_cmp_no_silence_file_list[-1])\nprint('*' * 20)\n\n\n\n\ninput_lab, num_input_frame = io_funcs.load_binary_file_frame(dur_lab_no_silence_norm_file_list[-1], 368)\ntarget_cmp, num_target_frame = io_funcs.load_binary_file_frame(dur_cmp_no_silence_norm_file_list[-1], 5)\nassert(num_input_frame == num_target_frame)\nprint('target output cmp: ')\nprint(target_cmp[:10, :])\n\ninput_lab = torch.from_numpy(input_lab)\ninput_lab = input_lab[None, :, :]\noutput_cmp = duration_model(input_lab)\nassert(output_cmp.size()[1] == num_target_frame)\nprint('output cmp: ')\nprint(output_cmp[0].detach().numpy()[:10, :])\n\ntarget_dur, num_dur_frame = io_funcs.load_binary_file_frame(dur_cmp_no_silence_file_list[-1], 5)\nassert(num_dur_frame == num_target_frame)\nprint('target dur: ')\nprint(target_dur[:10, :])\n\nfid = open(dur_cmp_norm_file, 'rb')\ndur_cmp_norm_info = np.fromfile(fid, dtype=np.float32)\nfid.close()\ndur_cmp_norm_info = dur_cmp_norm_info.reshape(2, -1)\ndur_cmp_mean = dur_cmp_norm_info[0, ]\ndur_cmp_std = dur_cmp_norm_info[1, ]\n\ntest_tmp_file = os.path.join(dur_tmp_dir, 'dur_output.cmp')\noutput_cmp = output_cmp.detach().numpy()[0]\nio_funcs.array_to_binary_file(output_cmp, test_tmp_file)\n\nprint('mean: ', dur_cmp_mean)\nprint('std: ', dur_cmp_std)\n\ntest_dur_denormaliser = MeanVarianceNorm(feature_dimension=dur_cmp_dim)\ntest_dur_denormaliser.feature_denormalisation([test_tmp_file], [test_tmp_file], dur_cmp_mean, dur_cmp_std)\npred_dur = io_funcs.load_binary_file_frame(test_tmp_file, 5)\nprint('predicted dur: ')\nprint(type(pred_dur))\nprint(pred_dur[0][:10, :])",
"files for test: \n"
]
],
[
[
"# Acoustic model",
"_____no_output_____"
],
[
"## Model configuration",
"_____no_output_____"
]
],
[
[
"# acoustic model related\n\n# hardcoded\nacou_lab_dim = 377\nacou_cmp_dim = 187\nacou_train_file_number = 27\nacou_valid_file_number = 1\nacou_test_file_number = 1\n\nacou_mdl_dir = os.path.join(exp_dir, 'acoustic_model')\nif not os.path.exists(acou_mdl_dir):\n os.makedirs(acou_mdl_dir)\n \nacou_inter_dir = os.path.join(acou_mdl_dir, 'inter')\nif not os.path.exists(acou_inter_dir):\n os.makedirs(acou_inter_dir)\n\nacou_lab_dir = os.path.join(acou_inter_dir, 'lab_' + str(acou_lab_dim))\nif not os.path.exists(acou_lab_dir):\n os.makedirs(acou_lab_dir)\n \nacou_lab_no_silence_dir = os.path.join(acou_inter_dir, 'lab_no_silence_' + str(acou_lab_dim))\nif not os.path.exists(acou_lab_no_silence_dir):\n os.makedirs(acou_lab_no_silence_dir)\n \nacou_lab_no_silence_norm_dir = os.path.join(acou_inter_dir, 'lab_no_silence_norm_' + str(acou_lab_dim))\nif not os.path.exists(acou_lab_no_silence_norm_dir):\n os.makedirs(acou_lab_no_silence_norm_dir)\n\n'''\nacou_dur_dir = os.path.join(acou_inter_dir, 'dur')\nif not os.path.exists(acou_dur_dir):\n os.makedirs(acou_dur_dir)\n'''\n\n \nacou_cmp_dir = os.path.join(acou_inter_dir, 'cmp_' + str(acou_cmp_dim))\nif not os.path.exists(acou_cmp_dir):\n os.makedirs(acou_cmp_dir)\n \nacou_cmp_no_silence_dir = os.path.join(acou_inter_dir, 'cmp_no_silence_' + str(acou_cmp_dim))\nif not os.path.exists(acou_cmp_no_silence_dir):\n os.makedirs(acou_cmp_no_silence_dir)\n \nacou_cmp_no_silence_norm_dir = os.path.join(acou_inter_dir, 'cmp_no_silence_norm_' + str(acou_cmp_dim))\nif not os.path.exists(acou_cmp_no_silence_norm_dir):\n os.makedirs(acou_cmp_no_silence_norm_dir)\n \nacou_variance_dir = os.path.join(acou_inter_dir, 'variance')\nif not os.path.exists(acou_variance_dir):\n os.makedirs(acou_variance_dir)\n \nacou_nn_mdl_dir = os.path.join(acou_mdl_dir, 'mdl')\nif not os.path.exists(acou_nn_mdl_dir):\n os.makedirs(acou_nn_mdl_dir)\n \n \n\nacou_lab_norm_file = os.path.join(acou_inter_dir, 'lab_norm_' + str(acou_lab_dim) + '.dat')\nacou_cmp_norm_file = os.path.join(acou_inter_dir, 'cmp_norm_' + str(acou_cmp_dim) + '.dat')\n\nacou_vuv_var_file = os.path.join(acou_variance_dir, 'vuv')\nacou_mgc_var_file = os.path.join(acou_variance_dir, 'mgc')\nacou_lf0_var_file = os.path.join(acou_variance_dir, 'lf0')\nacou_bap_var_file = os.path.join(acou_variance_dir, 'bap')\n \nacou_lab_file_list = gen_file_list(acou_lab_dir, file_id_list, 'labbin')\nacou_lab_no_silence_file_list = gen_file_list(acou_lab_no_silence_dir, file_id_list, 'labbin')\nacou_lab_no_silence_norm_file_list = gen_file_list(acou_lab_no_silence_norm_dir, file_id_list, 'labbin')\n# dur_dur_file_list = gen_file_list(dur_dur_dir, file_id_list, 'dur')\nacou_cmp_file_list = gen_file_list(acou_cmp_dir, file_id_list, 'cmp')\nacou_cmp_no_silence_file_list = gen_file_list(acou_cmp_no_silence_dir, file_id_list, 'cmp')\nacou_cmp_no_silence_norm_file_list = gen_file_list(acou_cmp_no_silence_norm_dir, file_id_list, 'cmp')\n\n\nacou_lf0_file_list = gen_file_list(lf0_dir, file_id_list, 'lf0')\nacou_mgc_file_list = gen_file_list(mgc_dir, file_id_list, 'mgc')\nacou_bap_file_list = gen_file_list(bap_dir, file_id_list, 'bap')",
"_____no_output_____"
]
],
[
[
"## Feature extraction from label files",
"_____no_output_____"
]
],
[
[
"acou_lab_normaliser = HTSLabelNormalisation(question, add_frame_features=True, subphone_feats='full')\nacou_lab_normaliser.perform_normalisation(orig_lab_file_list, acou_lab_file_list)",
"_____no_output_____"
]
],
[
[
"## Remove silence phone",
"_____no_output_____"
]
],
[
[
"acou_silence_remover = SilenceRemover(n_cmp=acou_lab_dim, silence_pattern=silence_pattern, remove_frame_features=True, subphone_feats='full')\nacou_silence_remover.remove_silence(acou_lab_file_list, orig_lab_file_list, acou_lab_no_silence_file_list)",
"_____no_output_____"
],
[
"_, num_frame = io_funcs.load_binary_file_frame(acou_lab_file_list[2], 377)\n_, num_frame_nn = io_funcs.load_binary_file_frame(acou_lab_no_silence_file_list[2], 377)\nprint(num_frame)\nprint(num_frame_nn)",
"8635\n7201\n"
],
[
"tmp, _ = io_funcs.load_binary_file_frame(acou_lab_file_list[2], 377)\nprint(tmp)",
"[[0. 0. 0. ... 0.04234528 1. 0.00325733]\n [0. 0. 0. ... 0.04234528 0.99674267 0.00651466]\n [0. 0. 0. ... 0.04234528 0.99348533 0.00977199]\n ...\n [0. 0. 0. ... 0.2925373 0.00895522 0.9940299 ]\n [0. 0. 0. ... 0.2925373 0.00597015 0.99701494]\n [0. 0. 0. ... 0.2925373 0.00298507 1. ]]\n"
]
],
[
[
"## Input feature normalization",
"_____no_output_____"
]
],
[
[
"acou_min_max_normaliser = MinMaxNormalisation(feature_dimension=acou_lab_dim, min_value=0.01, max_value=0.99)\nacou_min_max_normaliser.find_min_max_values(acou_lab_no_silence_file_list[0: acou_train_file_number])\nacou_min_max_normaliser.normalise_data(acou_lab_no_silence_file_list, acou_lab_no_silence_norm_file_list)\nacou_label_min_vector = acou_min_max_normaliser.min_vector\nacou_label_max_vector = acou_min_max_normaliser.max_vector\nacou_label_norm_info = np.concatenate((acou_label_min_vector, acou_label_max_vector), axis=0)\nacou_label_norm_info = np.array(acou_label_norm_info, 'float32')\nfid = open(acou_lab_norm_file, 'wb')\nacou_label_norm_info.tofile(fid)\nfid.close()",
"_____no_output_____"
],
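[
"# Hedged note (added): MinMaxNormalisation maps every input-label dimension into [0.01, 0.99]\n# using the training-set minimum and maximum, i.e. approximately\n#     y = (x - min) / (max - min) * (0.99 - 0.01) + 0.01\n# A tiny illustration with made-up numbers:\nx_demo, min_demo, max_demo = 5.0, 0.0, 10.0\nprint((x_demo - min_demo) / (max_demo - min_demo) * (0.99 - 0.01) + 0.01)  # -> 0.5",
"_____no_output_____"
],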
[
"acou_label_norm_info.shape",
"_____no_output_____"
]
],
[
[
"## Make output features for acoustic model",
"_____no_output_____"
]
],
[
[
"\"\"\"\n\"in\" & \"out\" just mean before & after feature composition \nlike if we compute dynamic features, dimensions of out will be 3 times of in\nnot really mean in & out of the network \n\"\"\" \nacou_in_dimension_dict = {'bap': 1, 'mgc': 60, 'lf0': 1} \nacou_out_dimension_dict = {'bap': 3, 'vuv': 1, 'mgc': 180, 'lf0': 3}\n# acou_in_dir_dict = {'bap': bap_dir, 'mgc': mgc_dir, 'lf0': lf0_dir}\nacou_in_file_list_dict = {'bap': acou_bap_file_list, 'mgc': acou_mgc_file_list, 'lf0': acou_lf0_file_list}",
"_____no_output_____"
],
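[
"# Hedged illustration (added): how composition makes the 'out' dimension 3x the 'in' dimension\n# for one stream. The window values are assumptions -- Merlin conventionally uses [-0.5, 0.0, 0.5]\n# for delta and [1.0, -2.0, 1.0] for delta-delta; the real ones are whatever delta_win / acc_win hold.\nstatic_demo = np.array([[0.0], [1.0], [2.0], [3.0]])  # 4 frames x 1-dim static feature (e.g. lf0)\ndelta_demo = np.zeros_like(static_demo)\nacc_demo = np.zeros_like(static_demo)\nfor t in range(1, len(static_demo) - 1):\n    delta_demo[t] = -0.5 * static_demo[t - 1] + 0.5 * static_demo[t + 1]\n    acc_demo[t] = static_demo[t - 1] - 2.0 * static_demo[t] + static_demo[t + 1]\ncomposed_demo = np.concatenate([static_demo, delta_demo, acc_demo], axis=1)\nprint(composed_demo.shape)  # (4, 3): one static column plus its delta and delta-delta columns",
"_____no_output_____"
],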
[
"acou_acoustic_worker = AcousticComposition(delta_win = delta_win, acc_win = acc_win)\nacou_acoustic_worker.prepare_nn_data(acou_in_file_list_dict, acou_cmp_file_list, acou_in_dimension_dict, acou_out_dimension_dict)",
"_____no_output_____"
],
[
"feat, num_frame = io_funcs.load_binary_file_frame(dur_cmp_file_list[2], 5)\nprint(feat.shape)\nprint(feat[0:10, :])\nfeat, num_frame = io_funcs.load_binary_file_frame(acou_cmp_file_list[2], 187)\nprint(feat.shape)\nprint(feat[0:10, :])",
"(131, 5)\n[[ 13. 72. 166. 2. 54.]\n [ 9. 47. 1. 54. 32.]\n [ 1. 18. 4. 4. 4.]\n [ 3. 100. 1. 2. 6.]\n [ 4. 5. 3. 3. 3.]\n [ 1. 32. 1. 1. 3.]\n [ 5. 3. 2. 2. 3.]\n [ 2. 26. 1. 5. 8.]\n [ 2. 3. 3. 7. 6.]\n [ 6. 77. 17. 5. 5.]]\n(8640, 187)\n[[ 0. 0. 0. ... 0.0016107 0.01590419\n 0. ]\n [ 0. 0. 0. ... -0.03265311 0.00975457\n 0. ]\n [ 0. 0. 0. ... 0.00190853 0.00436544\n 0. ]\n ...\n [ 0. 0. 0. ... -0.00899457 -0.00381122\n 0. ]\n [ 0. 0. 0. ... 0.02834933 -0.01433814\n 0. ]\n [ 0. 0. 0. ... 0.02132037 -0.0286779\n 0. ]]\n"
]
],
[
[
"## Remove silence phone",
"_____no_output_____"
]
],
[
[
"acou_silence_remover = SilenceRemover(n_cmp = acou_cmp_dim, silence_pattern = silence_pattern, remove_frame_features = True, subphone_feats = 'full')\nacou_silence_remover.remove_silence(acou_cmp_file_list, orig_lab_file_list, acou_cmp_no_silence_file_list) ",
"_____no_output_____"
],
[
"_, num_frame = io_funcs.load_binary_file_frame(acou_cmp_file_list[2], 187)\n_, num_frame_nn = io_funcs.load_binary_file_frame(acou_cmp_no_silence_file_list[2], 187)\nprint(num_frame)\nprint(num_frame_nn)",
"8640\n7201\n"
]
],
[
[
"## Output feature (dim 187) normalization",
"_____no_output_____"
]
],
[
[
"acou_mvn_normaliser = MeanVarianceNorm(feature_dimension=acou_cmp_dim)\nacou_global_mean_vector = acou_mvn_normaliser.compute_mean(acou_cmp_no_silence_file_list[0: acou_train_file_number], 0, acou_cmp_dim)\nacou_global_std_vector = acou_mvn_normaliser.compute_std(acou_cmp_no_silence_file_list[0: acou_train_file_number], acou_global_mean_vector, 0, acou_cmp_dim)\nacou_mvn_normaliser.feature_normalisation(acou_cmp_no_silence_file_list, acou_cmp_no_silence_norm_file_list)\nacou_cmp_norm_info = np.concatenate((acou_global_mean_vector, acou_global_std_vector), axis=0)\nacou_cmp_norm_info = np.array(acou_cmp_norm_info, 'float32')\nfid = open(acou_cmp_norm_file, 'wb')\nacou_cmp_norm_info.tofile(fid)\nfid.close()",
"_____no_output_____"
],
[
"tmp1, num1 = io_funcs.load_binary_file_frame(acou_cmp_no_silence_file_list[0], 187)\ntmp2, num2 = io_funcs.load_binary_file_frame(acou_cmp_no_silence_norm_file_list[0], 187)\nprint(num1 == num2)\nprint(tmp2[:, :4])",
"True\n[[ 1.4198911 -0.69523036 -0.5961235 0.45662737]\n [ 1.1533878 -0.47438782 0.7849358 0.45662737]\n [ 1.2379152 -0.63909817 -0.92615116 0.45662737]\n ...\n [ 1.288423 -0.9105048 -0.6175857 -1.4407294 ]\n [ 0.9759231 -0.0580794 1.3470236 -1.5170972 ]\n [ 1.265789 -0.49311343 -1.7196321 -1.5170972 ]]\n"
],
[
"print(acou_cmp_norm_info.shape)\nacou_global_std_vector",
"(2, 187)\n"
],
[
"acou_variance_file_dict = {'vuv': acou_vuv_var_file,\n 'mgc': acou_mgc_var_file,\n 'lf0': acou_lf0_var_file,\n 'bap': acou_bap_var_file}",
"_____no_output_____"
],
[
"feat_ind = 0\nfor feat in list(acou_out_dimension_dict.keys()):\n feat_std_vector = np.array(acou_global_std_vector[:, feat_ind: feat_ind + acou_out_dimension_dict[feat]], 'float32')\n fid = open(acou_variance_file_dict[feat], 'w')\n feat_var_vector = feat_std_vector**2\n feat_var_vector.tofile(fid)\n fid.close()\n feat_ind += acou_out_dimension_dict[feat]",
"_____no_output_____"
]
],
[
[
"## Model training",
"_____no_output_____"
]
],
[
[
"# TODO to implement cross-validation, print something here to see whether could do normalisation in Pytorch\nbatch_size = int(acou_train_file_number)\n# batch_size = 1\nprint('batch_size: ' + str(batch_size))\nacou_train_set = DurationDataset(acou_lab_no_silence_norm_file_list[:10], \n acou_cmp_no_silence_norm_file_list[:10], lab_dim=377, cmp_dim=187)\nacou_valid_set = DurationDataset(acou_lab_no_silence_norm_file_list[acou_train_file_number: acou_train_file_number + acou_valid_file_number],\n acou_cmp_no_silence_norm_file_list[acou_train_file_number: acou_train_file_number + acou_valid_file_number], lab_dim=377, cmp_dim=187)\nacou_train_loader = data.DataLoader(acou_train_set, shuffle=True, batch_size=batch_size, collate_fn=collate_fn)\nacou_valid_loader = data.DataLoader(acou_valid_set, shuffle=True, batch_size=acou_valid_file_number, collate_fn=collate_fn)\n\ntmp = next(iter(dur_train_loader))\nlab_, cmp_, _ = tmp\nprint(lab_.size())\nprint(cmp_.size())\nprint(len(Variable(lab_)))\nprint(len(dur_valid_set))\n\ntmp = next(iter(acou_train_loader))\nlab_, cmp_, sp_len = tmp\nprint(lab_.size())\nprint(cmp_.size())\nprint(len(Variable(lab_)))\nprint(len(acou_valid_set))\nprint(sp_len)",
"batch_size: 27\n"
],
[
"lr = 0.015\nlog_interval = 1\nepochs = 50\nacoustic_model = nn.Sequential(\n nn.Linear(acou_lab_dim, 300),\n nn.ReLU(),\n nn.Linear(300, acou_cmp_dim)\n)\n# acoustic_model = nn.Sequential(\n# nn.Linear(acou_lab_dim, 512),\n# nn.Dropout(0.5),\n# nn.ReLU(),\n# nn.Linear(512, 512),\n# nn.Dropout(0.5),\n# nn.ReLU(),\n# nn.Linear(512, 512),\n# nn.Dropout(0.5),\n# nn.ReLU(),\n# nn.Linear(512, acou_cmp_dim)\n# )\noptimizer = torch.optim.Adam(acoustic_model.parameters(), lr=lr)\ncriterion = F.mse_loss\nfor epoch in range(1, epochs+1):\n train(acoustic_model, acou_train_loader, acou_valid_loader, epoch)",
"/home/yongliang/venv3/lib/python3.5/site-packages/ipykernel_launcher.py:22: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.\n"
],
[
"lr = 0.1\nlog_interval = 1\nepochs = 30\nacoustic_model = DurationModel(acou_lab_dim, acou_cmp_dim)\nprint(acoustic_model)\noptimizer = torch.optim.SGD(acoustic_model.parameters(), lr=lr)\ncriterion = F.mse_loss\nfor epoch in range(1, epochs+1):\n train(acoustic_model, acou_train_loader, acou_valid_loader, epoch)",
"DurationModel(\n (fc1): Linear(in_features=377, out_features=512, bias=True)\n (fc2): Linear(in_features=512, out_features=512, bias=True)\n (fc3): Linear(in_features=512, out_features=512, bias=True)\n (fc4): Linear(in_features=512, out_features=512, bias=True)\n (fc5): Linear(in_features=512, out_features=187, bias=True)\n)\n"
],
[
"acou_nn_mdl_file = os.path.join(acou_nn_mdl_dir, 'acou_nn_mdl.pt')\ntorch.save(acoustic_model.state_dict(), acou_nn_mdl_file)",
"_____no_output_____"
]
],
[
[
"# Synthesis",
"_____no_output_____"
]
],
[
[
"synth_id_list = file_id_list[-dur_test_file_number:]\ninput_lab_file_list = orig_lab_file_list[-dur_test_file_number:]\n\nsynth_dir = os.path.join(exp_dir, 'synth')\nif not os.path.exists(synth_dir):\n os.makedirs(synth_dir)\n \nwav_dir = os.path.join(synth_dir, 'wav')\nif not os.path.exists(wav_dir):\n os.makedirs(wav_dir)\n \nsynth_inter_dir = os.path.join(synth_dir, 'inter')\nif not os.path.exists(synth_inter_dir):\n os.makedirs(synth_inter_dir)\n \nsynth_dur_lab_norm_dir = os.path.join(synth_inter_dir, 'dur_lab_norm')\nif not os.path.exists(synth_dur_lab_norm_dir):\n os.makedirs(synth_dur_lab_norm_dir)\n \nsynth_dur_cmp_pred_dir = os.path.join(synth_inter_dir, 'dur_cmp_pred')\nif not os.path.exists(synth_dur_cmp_pred_dir):\n os.makedirs(synth_dur_cmp_pred_dir)\n \nsynth_dur_lab_file_list = gen_file_list(dur_lab_dir, synth_id_list, 'labbin')\nsynth_dur_lab_norm_file_list = gen_file_list(synth_dur_lab_norm_dir, synth_id_list, 'labbin')\nsynth_dur_cmp_pred_file_list = gen_file_list(synth_dur_cmp_pred_dir, synth_id_list, 'cmp')\n\n\n\n\norig_lab_file_list = get_file_list_of_dir(lab_dir)\nfile_id_list = get_file_id_list(orig_lab_file_list)\ndur_lab_file_list = gen_file_list(dur_lab_dir, file_id_list, 'labbin')\ndur_lab_no_silence_file_list = gen_file_list(dur_lab_no_silence_dir, file_id_list, 'labbin')\ndur_lab_no_silence_norm_file_list = gen_file_list(dur_lab_no_silence_norm_dir, file_id_list, 'labbin')\ndur_dur_file_list = gen_file_list(dur_dur_dir, file_id_list, 'dur')\ndur_cmp_file_list = gen_file_list(dur_cmp_dir, file_id_list, 'cmp')\ndur_cmp_no_silence_file_list = gen_file_list(dur_cmp_no_silence_dir, file_id_list, 'cmp')\ndur_cmp_no_silence_norm_file_list = gen_file_list(dur_cmp_no_silence_norm_dir, file_id_list, 'cmp')\n\nsynth_acou_lab_norm_dir = os.path.join(synth_inter_dir, 'acou_lab_norm')\nif not os.path.exists(synth_acou_lab_norm_dir):\n os.makedirs(synth_acou_lab_norm_dir)\n \nsynth_acou_cmp_pred_dir = os.path.join(synth_inter_dir, 'acou_cmp_pred')\nif not os.path.exists(synth_acou_cmp_pred_dir):\n os.makedirs(synth_acou_cmp_pred_dir)\n \nsynth_acou_lab_file_list = gen_file_list(synth_acou_lab_norm_dir, synth_id_list, 'labbin')\nsynth_acou_lab_norm_file_list = gen_file_list(synth_acou_lab_norm_dir, synth_id_list, 'labbin')\nsynth_acou_cmp_pred_file_list = gen_file_list(synth_acou_cmp_pred_dir, synth_id_list, 'cmp')\n",
"_____no_output_____"
]
],
[
[
"## Normalize label files for duration model (silence not removed)",
"_____no_output_____"
]
],
[
[
"synth_dur_lab_normaliser = MinMaxNormalisation(feature_dimension = dur_lab_dim, min_value = 0.01, max_value = 0.99)\nsynth_dur_lab_normaliser.load_min_max_values(dur_lab_norm_file)\nsynth_dur_lab_normaliser.normalise_data(synth_dur_lab_file_list, synth_dur_lab_norm_file_list)",
"_____no_output_____"
],
[
"tmp1, num1 = io_funcs.load_binary_file_frame(synth_dur_lab_norm_file_list[0], 368)\ntmp2, num2 = io_funcs.load_binary_file_frame(synth_dur_lab_file_list[0], 368)\nprint(synth_dur_lab_norm_file_list[0])\nprint('num1: ', str(num1))\nprint('num2: ', str(num2))\n# print(tmp1[0: 10, :])\n# print(synth_dur_lab_norm_file_list[0])",
"/home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/synth/inter/dur_lab_norm/nitech_jp_song070_f001_070.labbin\nnum1: 55\nnum2: 55\n"
]
],
[
[
"## Predict durations",
"_____no_output_____"
]
],
[
[
"synth_duration_model = DurationModel(dur_lab_dim, dur_cmp_dim)\nsynth_duration_model.load_state_dict(torch.load(dur_nn_mdl_file))\nsynth_duration_model.eval()",
"_____no_output_____"
],
[
"lab, num_frame = io_funcs.load_binary_file_frame(synth_dur_lab_norm_file_list[0], 368)\nlab = torch.from_numpy(lab)\nlab = lab[None, :, :]\ndur_cmp_pred = synth_duration_model(lab)\ndur_cmp_pred = dur_cmp_pred.detach().numpy()[0]\ndur_cmp_pred",
"_____no_output_____"
]
],
[
[
"## Denormalization",
"_____no_output_____"
]
],
[
[
"fid = open(dur_cmp_norm_file, 'rb')\ndur_cmp_norm_info = np.fromfile(fid, dtype=np.float32)\nfid.close()\ndur_cmp_norm_info = dur_cmp_norm_info.reshape(2, -1)\ndur_cmp_mean = dur_cmp_norm_info[0, ]\ndur_cmp_std = dur_cmp_norm_info[1, ]\n\nprint(synth_dur_cmp_pred_file_list[0])\n\nio_funcs.array_to_binary_file(dur_cmp_pred, synth_dur_cmp_pred_file_list[0])\n\nprint(dur_cmp_mean)\nprint(dur_cmp_std)",
"/home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/synth/inter/dur_cmp_pred/nitech_jp_song070_f001_070.cmp\n[ 5.4560676 16.22251 14.50423 9.729328 5.6103916]\n[ 4.778163 32.451237 30.338848 25.81974 6.8933287]\n"
],
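[
"# Hedged aside (added): feature_denormalisation below just inverts the mean/variance normalisation,\n# i.e. y = y_norm * std + mean per dimension. Preview of the first value it will produce:\nprint(dur_cmp_pred[0, 0] * dur_cmp_std[0] + dur_cmp_mean[0])",
"_____no_output_____"
],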
[
"synth_dur_denormaliser = MeanVarianceNorm(feature_dimension=dur_cmp_dim)\nsynth_dur_denormaliser.feature_denormalisation(synth_dur_cmp_pred_file_list, synth_dur_cmp_pred_file_list, dur_cmp_mean, dur_cmp_std)\ndur_cmp_pred, _ = io_funcs.load_binary_file_frame(synth_dur_cmp_pred_file_list[0], 5)\nprint(dur_cmp_pred[:10, :])",
"[[ 5.551061 15.752278 14.200991 10.206416 5.2901797]\n [ 5.5533843 15.87799 14.0966015 10.179635 5.295764 ]\n [ 5.550715 15.884184 14.2144 10.24537 5.2878685]\n [ 5.5525837 15.890511 14.132585 10.232053 5.256593 ]\n [ 5.543806 15.852673 14.130717 10.249583 5.27523 ]\n [ 5.546036 15.958624 14.043604 10.331711 5.2914195]\n [ 5.55692 16.027588 14.147421 10.241272 5.3083267]\n [ 5.5535684 15.856915 14.245941 10.315426 5.292195 ]\n [ 5.5677176 15.877155 14.226478 10.230915 5.242901 ]\n [ 5.556892 15.876377 14.209519 10.219336 5.262225 ]]\n"
]
],
[
[
"## Change original label files with newly predicted durations",
"_____no_output_____"
]
],
[
[
"from frontend.parameter_generation import ParameterGeneration\nfrom frontend.label_modifier import HTSLabelModification\nsynth_dur_extention_dict = {'dur': '.dur'}\nsynth_dur_out_dimension_dict = {'dur': 5}\nsynth_dur_cmp_dim = 5\n\n\nsynth_dur_list = [os.path.splitext(synth_dur_cmp_pred_file_list[0])[0] + synth_dur_extention_dict['dur']]\nsynth_lab_list = [os.path.splitext(synth_dur_cmp_pred_file_list[0])[0] + '.lab']\nprint(synth_dur_list)\nprint(synth_lab_list)\n\nsynth_decomposer = ParameterGeneration(['mgc', 'bap', 'lf0'])\nsynth_decomposer.duration_decomposition(synth_dur_cmp_pred_file_list, synth_dur_cmp_dim, synth_dur_out_dimension_dict, synth_dur_extention_dict)\nsynth_label_modifier = HTSLabelModification(silence_pattern = silence_pattern)\nsynth_label_modifier.modify_duration_labels(input_lab_file_list, synth_dur_list, synth_lab_list)",
"['/home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/synth/inter/dur_cmp_pred/nitech_jp_song070_f001_070.dur']\n['/home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/synth/inter/dur_cmp_pred/nitech_jp_song070_f001_070.lab']\n"
]
],
[
[
"## Normalize label files for acoustic model (silence not removed)",
"_____no_output_____"
]
],
[
[
"synth_acou_lab_normaliser = HTSLabelNormalisation(question, add_frame_features=True, subphone_feats='full')\nsynth_acou_lab_normaliser.perform_normalisation(synth_lab_list, synth_acou_lab_file_list)\nsynth_acou_lab_normaliser = MinMaxNormalisation(feature_dimension = acou_lab_dim, min_value = 0.01, max_value = 0.99)\nsynth_acou_lab_normaliser.load_min_max_values(acou_lab_norm_file)\nsynth_acou_lab_normaliser.normalise_data(synth_acou_lab_file_list, synth_acou_lab_norm_file_list)",
"_____no_output_____"
]
],
[
[
"## Predict acoustic features",
"_____no_output_____"
]
],
[
[
"synth_acoustic_model = DurationModel(acou_lab_dim, acou_cmp_dim)\nsynth_acoustic_model.load_state_dict(torch.load(acou_nn_mdl_file))\nsynth_acoustic_model.eval()",
"_____no_output_____"
],
[
"lab, num_frame = io_funcs.load_binary_file_frame(synth_acou_lab_norm_file_list[0], 377)\nlab = torch.from_numpy(lab)\nlab = lab[None, :, :]\nacou_cmp_pred = synth_acoustic_model(lab)\nacou_cmp_pred = acou_cmp_pred.detach().numpy()[0]\nacou_cmp_pred.shape",
"_____no_output_____"
]
],
[
[
"## Denormalization",
"_____no_output_____"
]
],
[
[
"fid = open(acou_cmp_norm_file, 'rb')\nacou_cmp_norm_info = np.fromfile(fid, dtype=np.float32)\nfid.close()\nacou_cmp_norm_info = acou_cmp_norm_info.reshape(2, -1)\nacou_cmp_mean = acou_cmp_norm_info[0, ]\nacou_cmp_std = acou_cmp_norm_info[1, ]\n\nprint(synth_acou_cmp_pred_file_list[0])\n\n\nio_funcs.array_to_binary_file(acou_cmp_pred, synth_acou_cmp_pred_file_list[0])\n\nprint(acou_cmp_mean)\nprint(acou_cmp_std)",
"/home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/synth/inter/acou_cmp_pred/nitech_jp_song070_f001_070.cmp\n[-1.33646011e+01 -1.90148572e-03 4.75082546e-04 5.96542645e+00 -3.50371033e-06 -1.21180547e-05 5.07352877e+00 1.88283157e+00 -2.73135006e-01 1.37185544e-01 -4.23395693e-01 2.41781697e-01 -2.04522818e-01 -4.18005884e-02 3.96365196e-01 -1.26078025e-01 3.74176562e-01 -1.93812922e-01 -3.76398936e-02 -1.38404528e-02 -5.09452028e-03 1.28273636e-01 -1.30870998e-01 8.73090476e-02 -2.49601901e-03 -8.83054454e-03 -3.25787514e-02 -2.69498564e-02 2.26364378e-02 -5.02169877e-02 4.38574776e-02 -4.56125587e-02 2.92196702e-02 -3.07271220e-02 2.09769118e-03 2.67485492e-02 -4.82758544e-02 5.80238625e-02 -5.66445962e-02 4.08596694e-02 -1.98573247e-02 4.57508303e-03 -4.61094460e-04 -1.20418239e-03 1.06062898e-02 -2.40169801e-02 2.86857504e-02 -2.29858626e-02 1.30661651e-02 -4.51678643e-03 -4.03065840e-03 1.32622905e-02 -2.11629681e-02 2.48682052e-02 -1.93230826e-02 8.35328922e-03 2.99246400e-03 -5.26264030e-03 2.37871916e-03 4.83263703e-03 -7.48290401e-03 8.38070177e-03 -5.56744961e-03 5.58886351e-03 -5.19738067e-03 5.24853682e-03 9.39571648e-04 8.44951021e-04 1.61141670e-05 -2.41569942e-04 -2.26512842e-04 -2.17061228e-04 -2.52631726e-04 -1.18810829e-04 4.92765830e-05 -9.69187167e-05 3.09795032e-05 -1.54342255e-04 -3.72998124e-06 3.95451389e-06 -2.83918398e-05 -7.97869961e-06 -2.04145599e-05 2.54602801e-05 2.34525887e-05 -9.76238789e-06 3.25788242e-05 1.53196343e-05 2.32092279e-05 1.94366548e-05 1.79972994e-05 3.23178028e-06 3.07410482e-05 -1.21219982e-05 -6.16994294e-06 1.00928828e-05 -2.89393870e-06 6.83943654e-06 -6.11672067e-06 5.28538931e-06 -6.64820755e-06 2.78131415e-06 2.76579794e-06 -2.92234540e-06 2.87460489e-06 -6.22570406e-06 8.65893526e-06 -4.74401804e-06 3.75802983e-07 1.40423765e-06 -4.98241434e-07 1.17370178e-06 -4.67765130e-06 3.11781787e-06 5.20443962e-07 -6.52864082e-06 3.79428457e-06 -2.07191465e-06 -2.35974994e-06 6.47399645e-08 -1.54142924e-06 1.00479696e-07 -1.84753651e-06 8.33545002e-08 5.58115858e-07 -1.35463813e-06 -3.68580106e-04 -2.90173281e-04 5.20442882e-06 6.19480634e-05 4.53602770e-05 -5.81086988e-06 2.64733972e-05 3.59655169e-05 -4.42786804e-05 3.35612935e-06 -7.97545908e-06 4.47786770e-05 -1.33591839e-05 6.83167491e-06 2.36693922e-05 2.41776479e-06 -1.25154361e-06 3.35961818e-06 9.14512384e-06 -6.72206443e-06 -3.15925490e-06 -7.43154123e-06 -5.66276776e-06 1.75299715e-06 -3.28285296e-06 3.49531706e-06 -1.22088277e-05 3.02858348e-06 2.65761514e-06 -6.95959034e-06 3.91838284e-06 -8.41016913e-07 2.09679388e-06 -3.56328155e-06 1.58765488e-06 -1.38214284e-06 3.38836139e-06 -7.08939012e-07 -3.44994055e-06 2.52165660e-06 3.31361832e-07 8.61366729e-08 -9.85291535e-07 -1.81131497e-07 3.09651000e-06 -4.15464456e-06 1.16001195e-06 1.19295214e-06 -1.89193440e-06 -1.11387317e-06 3.74759247e-06 -4.09929044e-06 2.32599882e-06 6.09038864e-08 -8.15032138e-07 -4.41149872e-08 1.26465568e-06 -7.06078254e-07 1.45156251e-08 4.09283984e-07 8.69738579e-01]\n[9.412413 1.8012996 4.208716 0.22775437 0.02732092 0.06407295 1.5248871 0.86403745 0.74400973 0.59257406 0.4841286 0.3923912 0.30851033 0.28519675 0.36933756 0.41165707 0.32822326 0.28492895 0.29147884 0.25433478 0.20159926 0.19080894 0.15436257 0.19601114 0.19408262 0.2105544 0.14320897 0.11563111 0.09560336 0.10722646 0.11261903 0.15451424 0.13682862 0.14987114 0.14291312 0.10829363 0.10096107 0.0889494 0.07560817 0.07302475 0.08292949 0.09943618 0.1028903 0.09976558 0.10206005 0.10056029 0.08708021 0.07575486 0.06951946 
0.06276173 0.05988566 0.06239593 0.06657607 0.06998015 0.07346546 0.07865112 0.08020094 0.07525863 0.06977282 0.06375562 0.05756272 0.05088478 0.04439725 0.04177266 0.04052918 0.04215691 0.23098677 0.14314069 0.10022262 0.07618567 0.07058521 0.06608377 0.06079415 0.05678634 0.06002548 0.07100924 0.0704289 0.05589593 0.05586179 0.05853933 0.04358428 0.0472016 0.03954647 0.04424097 0.04026958 0.04325016 0.03644021 0.03594404 0.03191633 0.03178075 0.03469878 0.03385955 0.03373558 0.03623361 0.0301153 0.02994606 0.02777007 0.0260447 0.02510259 0.02616375 0.02751757 0.02628847 0.02835548 0.02947895 0.0264891 0.0262272 0.02554259 0.02352404 0.02176763 0.0209435 0.02154545 0.02271469 0.02250564 0.02155565 0.02238389 0.02391988 0.02306413 0.02131459 0.02075475 0.02011121 0.01887289 0.01774064 0.01678979 0.01628225 0.01628126 0.01684994 0.28003797 0.23980382 0.1709194 0.14593449 0.12961878 0.126744 0.11563729 0.11031958 0.11288798 0.13689955 0.14123376 0.11372157 0.11102027 0.12029053 0.08935279 0.09838665 0.08014144 0.09045165 0.07751207 0.08665168 0.07314873 0.07684392 0.0686731 0.06653555 0.07056335 0.066424 0.06465922 0.07008776 0.05870239 0.06021639 0.0580417 0.05520671 0.05401498 0.05576374 0.05579863 0.05131929 0.05433712 0.05449566 0.04867448 0.05036331 0.05040244 0.04769661 0.04553442 0.04481848 0.04598135 0.0468513 0.04461323 0.04208625 0.04389333 0.04569512 0.04256402 0.03959534 0.03996718 0.03983006 0.03839369 0.03716945 0.03606939 0.03533025 0.03562297 0.0364862 0.33659083]\n"
],
[
"synth_acou_denormaliser = MeanVarianceNorm(feature_dimension=acou_cmp_dim)\nsynth_acou_denormaliser.feature_denormalisation(synth_acou_cmp_pred_file_list, synth_acou_cmp_pred_file_list, acou_cmp_mean, acou_cmp_std)\ndur_cmp_pred, _ = io_funcs.load_binary_file_frame(synth_acou_cmp_pred_file_list[0], 187)\nprint(dur_cmp_pred[:10, :10])",
"[[-1.38575516e+01 1.01585373e-01 -5.46068065e-02 5.96189833e+00 6.89454260e-04 -2.46225647e-03 5.09212208e+00 1.87044120e+00 -2.88518071e-01 1.22004353e-01]\n [-1.38563013e+01 1.00108132e-01 -5.43414243e-02 5.96184349e+00 6.82886865e-04 -2.47346167e-03 5.09386969e+00 1.86894822e+00 -2.89718211e-01 1.22529343e-01]\n [-1.38563023e+01 1.00061432e-01 -5.43866716e-02 5.96184492e+00 6.82633079e-04 -2.47423304e-03 5.09383392e+00 1.86895490e+00 -2.89702773e-01 1.22511789e-01]\n [-1.38563204e+01 1.00020953e-01 -5.44308163e-02 5.96184683e+00 6.82366954e-04 -2.47490872e-03 5.09380102e+00 1.86896157e+00 -2.89689153e-01 1.22494146e-01]\n [-1.38563423e+01 9.99839976e-02 -5.44703752e-02 5.96184874e+00 6.82100654e-04 -2.47563026e-03 5.09376812e+00 1.86896980e+00 -2.89676309e-01 1.22475892e-01]\n [-1.38563643e+01 9.99474004e-02 -5.45096397e-02 5.96185064e+00 6.81819394e-04 -2.47638649e-03 5.09373426e+00 1.86897981e+00 -2.89664090e-01 1.22457244e-01]\n [-1.38563890e+01 9.99107435e-02 -5.45403101e-02 5.96185207e+00 6.81546400e-04 -2.47718301e-03 5.09370041e+00 1.86898839e+00 -2.89651722e-01 1.22438923e-01]\n [-1.38564091e+01 9.98726338e-02 -5.45758195e-02 5.96185303e+00 6.81370148e-04 -2.47796741e-03 5.09366655e+00 1.86899626e+00 -2.89639503e-01 1.22421421e-01]\n [-1.38563900e+01 9.98384506e-02 -5.46084717e-02 5.96185398e+00 6.81302103e-04 -2.47862795e-03 5.09363413e+00 1.86900294e+00 -2.89627761e-01 1.22403778e-01]\n [-1.38563643e+01 9.98053476e-02 -5.46406657e-02 5.96185493e+00 6.81248843e-04 -2.47926894e-03 5.09360266e+00 1.86900938e+00 -2.89616227e-01 1.22386232e-01]]\n"
],
[
"synth_acou_extention_dict = {'lf0': '.lf0', 'mgc': '.mgc', 'bap': '.bap'}\nsynth_acou_out_dimension_dict = {'lf0': 3, 'mgc': 180, 'bap': 3, 'vuv': 1}\nsynth_acou_cmp_dim = 187\n\n\n# synth_dur_list = [os.path.splitext(synth_dur_cmp_pred_file_list[0])[0] + synth_dur_extention_dict['dur']]\n# synth_lab_list = [os.path.splitext(synth_dur_cmp_pred_file_list[0])[0] + '.lab']\n# print(synth_dur_list)\n# print(synth_lab_list)\n\nsynth_decomposer = ParameterGeneration(['mgc', 'bap', 'lf0'])\n\nsynth_decomposer.acoustic_decomposition(synth_acou_cmp_pred_file_list, synth_acou_cmp_dim, synth_acou_out_dimension_dict, synth_acou_extention_dict, acou_variance_file_dict,True)\n\n",
"_____no_output_____"
],
[
"## copy features to wav\nfor file in synth_acou_cmp_pred_file_list:\n base = os.path.splitext(file)[0]\n for ext in (['.mgc', '.bap', '.lf0']):\n feat_file = base + ext\n copy(feat_file, wav_dir)",
"_____no_output_____"
]
],
[
[
"## Synthesize wav",
"_____no_output_____"
]
],
[
[
"def run_process(args,log=True):\n logger = logging.getLogger(\"subprocess\")\n\n # a convenience function instead of calling subprocess directly\n # this is so that we can do some logging and catch exceptions\n\n # we don't always want debug logging, even when logging level is DEBUG\n # especially if calling a lot of external functions\n # so we can disable it by force, where necessary\n if log:\n logger.debug('%s' % args)\n\n try:\n # the following is only available in later versions of Python\n # rval = subprocess.check_output(args)\n\n # bufsize=-1 enables buffering and may improve performance compared to the unbuffered case\n p = subprocess.Popen(args, bufsize=-1, shell=True,\n stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,\n close_fds=True, env=os.environ)\n # better to use communicate() than read() and write() - this avoids deadlocks\n (stdoutdata, stderrdata) = p.communicate()\n\n if p.returncode != 0:\n # for critical things, we always log, even if log==False\n logger.critical('exit status %d' % p.returncode )\n logger.critical(' for command: %s' % args )\n logger.critical(' stderr: %s' % stderrdata )\n logger.critical(' stdout: %s' % stdoutdata )\n raise OSError\n\n return (stdoutdata, stderrdata)\n\n except subprocess.CalledProcessError as e:\n # not sure under what circumstances this exception would be raised in Python 2.6\n logger.critical('exit status %d' % e.returncode )\n logger.critical(' for command: %s' % args )\n # not sure if there is an 'output' attribute under 2.6 ? still need to test this...\n logger.critical(' output: %s' % e.output )\n raise\n\n except ValueError:\n logger.critical('ValueError for %s' % args )\n raise\n\n except OSError:\n logger.critical('OSError for %s' % args )\n raise\n\n except KeyboardInterrupt:\n logger.critical('KeyboardInterrupt during %s' % args )\n try:\n # try to kill the subprocess, if it exists\n p.kill()\n except UnboundLocalError:\n # this means that p was undefined at the moment of the keyboard interrupt\n # (and we do nothing)\n pass\n\n raise KeyboardInterrupt\n",
"_____no_output_____"
],
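[
"# Hedged usage example (added): run_process wraps subprocess.Popen with shell=True and returns the\n# (stdout, stderr) byte strings; a harmless command to show the calling convention (this assumes the\n# subprocess / logging imports made earlier in the notebook).\ndemo_out, demo_err = run_process('echo run_process works', log=False)\nprint(demo_out)",
"_____no_output_____"
],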
[
"import pickle\nwith open('/home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/synth/test/cfg.pkl', 'rb') as f:\n cfg = pickle.load(f)",
"_____no_output_____"
],
[
"def wavgen_straight_type_vocoder(gen_dir, file_id_list, logger):\n '''\n Waveform generation with STRAIGHT or WORLD vocoders.\n (whose acoustic parameters are: mgc, bap, and lf0)\n '''\n\n pf_coef = 1.4\n fw_coef = 0.58\n co_coef = 511\n fl_coef = 1024\n mgc_dim = 60\n fw_alpha = 0.58\n sr = 16000\n fl = 1024\n\n\n\n counter=1\n max_counter = len(file_id_list)\n\n\n for filename in file_id_list:\n\n logger.info('creating waveform for %4d of %4d: %s' % (counter,max_counter,filename) )\n counter=counter+1\n base = filename\n files = {'sp' : base + '.sp',\n 'mgc' : base + '.mgc',\n 'f0' : base + '.f0',\n 'lf0' : base + '.lf0',\n 'ap' : base + '.ap',\n 'bap' : base + '.bap',\n 'wav' : base + '.wav'}\n\n mgc_file_name = files['mgc']\n bap_file_name = files['bap']\n\n cur_dir = os.getcwd()\n os.chdir(gen_dir)\n\n\n mgc_file_name = files['mgc']+'_p_mgc'\n post_filter(files['mgc'], mgc_file_name, mgc_dim, pf_coef, fw_coef, co_coef, fl_coef, gen_dir, SPTK)\n\n\n\n\n\n run_process('{sopr} -magic -1.0E+10 -EXP -MAGIC 0.0 {lf0} | {x2x} +fd > {f0}'.format(sopr=SPTK['SOPR'], lf0=files['lf0'], x2x=SPTK['X2X'], f0=files['f0']))\n\n run_process('{sopr} -c 0 {bap} | {x2x} +fd > {ap}'.format(sopr=SPTK['SOPR'],bap=files['bap'],x2x=SPTK['X2X'],ap=files['ap']))\n\n\n run_process('{mgc2sp} -a {alpha} -g 0 -m {order} -l {fl} -o 2 {mgc} | {sopr} -d 32768.0 -P | {x2x} +fd > {sp}'\n .format(mgc2sp=SPTK['MGC2SP'], alpha=fw_alpha, order=mgc_dim-1, fl=fl, mgc=mgc_file_name, sopr=SPTK['SOPR'], x2x=SPTK['X2X'], sp=files['sp']))\n\n run_process('{synworld} {fl} {sr} {f0} {sp} {ap} {wav}'\n .format(synworld=WORLD['SYNTHESIS'], fl=fl, sr=sr, f0=files['f0'], sp=files['sp'], ap=files['ap'], wav=files['wav']))\n\n# run_process('rm -f {ap} {sp} {f0}'.format(ap=files['ap'],sp=files['sp'],f0=files['f0']))\n\n os.chdir(cur_dir)",
"_____no_output_____"
],
[
"def post_filter(mgc_file_in, mgc_file_out, mgc_dim, pf_coef, fw_coef, co_coef, fl_coef, gen_dir, SPTK):\n\n\n line = \"echo 1 1 \"\n for i in range(2, mgc_dim):\n line = line + str(pf_coef) + \" \"\n\n run_process('{line} | {x2x} +af > {weight}'\n .format(line=line, x2x=SPTK['X2X'], weight=os.path.join(gen_dir, 'weight')))\n\n run_process('{freqt} -m {order} -a {fw} -M {co} -A 0 < {mgc} | {c2acr} -m {co} -M 0 -l {fl} > {base_r0}'\n .format(freqt=SPTK['FREQT'], order=mgc_dim-1, fw=fw_coef, co=co_coef, mgc=mgc_file_in, c2acr=SPTK['C2ACR'], fl=fl_coef, base_r0=mgc_file_in+'_r0'))\n\n run_process('{vopr} -m -n {order} < {mgc} {weight} | {freqt} -m {order} -a {fw} -M {co} -A 0 | {c2acr} -m {co} -M 0 -l {fl} > {base_p_r0}'\n .format(vopr=SPTK['VOPR'], order=mgc_dim-1, mgc=mgc_file_in, weight=os.path.join(gen_dir, 'weight'),\n freqt=SPTK['FREQT'], fw=fw_coef, co=co_coef,\n c2acr=SPTK['C2ACR'], fl=fl_coef, base_p_r0=mgc_file_in+'_p_r0'))\n\n run_process('{vopr} -m -n {order} < {mgc} {weight} | {mc2b} -m {order} -a {fw} | {bcp} -n {order} -s 0 -e 0 > {base_b0}'\n .format(vopr=SPTK['VOPR'], order=mgc_dim-1, mgc=mgc_file_in, weight=os.path.join(gen_dir, 'weight'),\n mc2b=SPTK['MC2B'], fw=fw_coef,\n bcp=SPTK['BCP'], base_b0=mgc_file_in+'_b0'))\n\n run_process('{vopr} -d < {base_r0} {base_p_r0} | {sopr} -LN -d 2 | {vopr} -a {base_b0} > {base_p_b0}'\n .format(vopr=SPTK['VOPR'], base_r0=mgc_file_in+'_r0', base_p_r0=mgc_file_in+'_p_r0',\n sopr=SPTK['SOPR'],\n base_b0=mgc_file_in+'_b0', base_p_b0=mgc_file_in+'_p_b0'))\n\n run_process('{vopr} -m -n {order} < {mgc} {weight} | {mc2b} -m {order} -a {fw} | {bcp} -n {order} -s 1 -e {order} | {merge} -n {order2} -s 0 -N 0 {base_p_b0} | {b2mc} -m {order} -a {fw} > {base_p_mgc}'\n .format(vopr=SPTK['VOPR'], order=mgc_dim-1, mgc=mgc_file_in, weight=os.path.join(gen_dir, 'weight'),\n mc2b=SPTK['MC2B'], fw=fw_coef,\n bcp=SPTK['BCP'],\n merge=SPTK['MERGE'], order2=mgc_dim-2, base_p_b0=mgc_file_in+'_p_b0',\n b2mc=SPTK['B2MC'], base_p_mgc=mgc_file_out))\n\n return",
"_____no_output_____"
],
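[
"# Hedged sketch (added): wavgen_straight_type_vocoder and post_filter expect two dictionaries,\n# SPTK and WORLD, mapping tool names to vocoder binaries from the Merlin tools build. The paths\n# below are placeholders only (the real values are defined elsewhere in the notebook), so this\n# cell uses separate *_EXAMPLE names instead of overwriting anything.\nSPTK_EXAMPLE = {\n    'SOPR': '/path/to/sptk/sopr',\n    'X2X': '/path/to/sptk/x2x',\n    'MGC2SP': '/path/to/sptk/mgc2sp',\n    'FREQT': '/path/to/sptk/freqt',\n    'C2ACR': '/path/to/sptk/c2acr',\n    'VOPR': '/path/to/sptk/vopr',\n    'MC2B': '/path/to/sptk/mc2b',\n    'BCP': '/path/to/sptk/bcp',\n    'MERGE': '/path/to/sptk/merge',\n    'B2MC': '/path/to/sptk/b2mc'\n}\nWORLD_EXAMPLE = {'SYNTHESIS': '/path/to/world/synth'}",
"_____no_output_____"
],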
[
"## copy features to wav\nfor file in synth_acou_cmp_pred_file_list:\n base = os.path.splitext(file)[0]\n for ext in (['.mgc', '.bap', '.lf0']):\n feat_file = base + ext\n copy(feat_file, wav_dir)",
"_____no_output_____"
],
[
"logger = logging.getLogger(\"wav_generation\")\nwavgen_straight_type_vocoder(wav_dir, synth_id_list, logger)\nprint(wav_dir)\nprint(synth_id_list)",
"/home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/synth/wav\n['nitech_jp_song070_f001_070']\n"
]
],
[
[
"# TEST",
"_____no_output_____"
]
],
[
[
"my_cmp_norm_info_file = '/home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/acoustic_model/inter/cmp_norm_187.dat'\nml_cmp_norm_info_file = '/home/yongliang/third_party/merlin/egs/singing_synthesis/s1/experiments/acoustic_model/inter_module/norm_info__mgc_lf0_vuv_bap_187_MVN.dat'\n\nfid = open(my_cmp_norm_info_file, 'rb')\nmy_cmp_norm_info = np.fromfile(fid, dtype=np.float32)\nfid.close()\nmy_cmp_norm_info = my_cmp_norm_info.reshape(2, -1)\nmy_cmp_mean = my_cmp_norm_info[0, ]\nmy_cmp_std = my_cmp_norm_info[1, ]\n\nfid = open(ml_cmp_norm_info_file, 'rb')\nml_cmp_norm_info = np.fromfile(fid, dtype=np.float32)\nfid.close()\nml_cmp_norm_info = ml_cmp_norm_info.reshape(2, -1)\nml_cmp_mean = ml_cmp_norm_info[0, ]\nml_cmp_std = ml_cmp_norm_info[1, ]\n\nprint(my_cmp_mean.all() == ml_cmp_mean.all())\nprint(my_cmp_std.all() == ml_cmp_std.all())",
"True\nTrue\n"
],
[
"my_cmp_no_silence_norm_file = acou_cmp_no_silence_norm_file_list[0]\nml_cmp_no_silence_norm_file = '/home/yongliang/third_party/merlin/egs/singing_synthesis/s1/experiments/acoustic_model/inter_module/nn_norm_mgc_lf0_vuv_bap_187/nitech_jp_song070_f001_003.cmp'\nmy_cmp_no_silence_norm, my_n_frame = io_funcs.load_binary_file_frame(my_cmp_no_silence_norm_file, 187)\nml_cmp_no_silence_norm, ml_n_frame = io_funcs.load_binary_file_frame(ml_cmp_no_silence_norm_file, 187)\nprint(my_n_frame == ml_n_frame)\nprint(my_cmp_no_silence_norm.all() == ml_cmp_no_silence_norm.all())\n\nmy_lab_no_silence_norm_file = acou_lab_no_silence_norm_file_list[0]\nml_lab_no_silence_norm_file = '/home/yongliang/third_party/merlin/egs/singing_synthesis/s1/experiments/acoustic_model/inter_module/nn_no_silence_lab_norm_377/nitech_jp_song070_f001_003.lab'\nmy_lab_no_silence_norm, my_n_frame = io_funcs.load_binary_file_frame(my_lab_no_silence_norm_file, 377)\nml_lab_no_silence_norm, ml_n_frame = io_funcs.load_binary_file_frame(ml_lab_no_silence_norm_file, 377)\nprint(my_n_frame == ml_n_frame)\nprint(my_lab_no_silence_norm.all() == ml_lab_no_silence_norm.all())",
"True\nTrue\nTrue\nTrue\n"
],
[
"test_cmp_no_silence_norm_file = '/home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/acoustic_model/inter/cmp_no_silence_norm_187/nitech_jp_song070_f001_003.cmp'\ntest_cmp_file_list = [test_cmp_no_silence_norm_file]\ntest_cmp_no_silence_pred_file = '/home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/synth/test/nitech_jp_song070_f001_003.cmp'\ntest_denorm_file_list = [test_cmp_no_silence_pred_file]\n\nsynth_acou_denormaliser = MeanVarianceNorm(feature_dimension=acou_cmp_dim)\nsynth_acou_denormaliser.feature_denormalisation(test_cmp_file_list, test_denorm_file_list, acou_cmp_mean, acou_cmp_std)\n\nsynth_decomposer.acoustic_decomposition(test_denorm_file_list, synth_acou_cmp_dim, synth_acou_out_dimension_dict, synth_acou_extention_dict, acou_variance_file_dict,True)\nwavgen_straight_type_vocoder('/home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/synth/test/', ['nitech_jp_song070_f001_003'], logger)",
"_____no_output_____"
],
[
"f0_file = os.path.join(wav_dir, 'nitech_jp_song070_f001_070.f0')\nmgc_file = '/home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/synth/inter/acou_cmp_pred/nitech_jp_song070_f001_070.mgc'\nf0, n_frame = io_funcs.load_binary_file_frame(f0_file, 1)\nmgc, n_frame2 = io_funcs.load_binary_file_frame(mgc_file, 60)\nprint(n_frame)\nprint(n_frame2)",
"_____no_output_____"
]
],
[
[
"# PITCH MODEL",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7f11c0d2ce883a13be2c3b85e1fe218de537171 | 20,988 | ipynb | Jupyter Notebook | 03_Grouping/Regiment/Exercises_solutions.ipynb | fung991159/pandas_exercise | ec0c6788dbb8d28900163769aeee58b2403bdafb | [
"BSD-3-Clause"
] | 3 | 2020-06-16T04:22:49.000Z | 2020-10-28T01:18:10.000Z | 03_Grouping/Regiment/Exercises_solutions.ipynb | liuhui998/pandas_exercises | 8124aa87652e8ad64512fb281871b5041178b3bd | [
"BSD-3-Clause"
] | null | null | null | 03_Grouping/Regiment/Exercises_solutions.ipynb | liuhui998/pandas_exercises | 8124aa87652e8ad64512fb281871b5041178b3bd | [
"BSD-3-Clause"
] | null | null | null | 28.247645 | 179 | 0.356585 | [
[
[
"# Regiment",
"_____no_output_____"
],
[
"### Introduction:\n\nSpecial thanks to: http://chrisalbon.com/ for sharing the dataset and materials.\n\n### Step 1. Import the necessary libraries",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"### Step 2. Create the DataFrame with the following values:",
"_____no_output_____"
]
],
[
[
"raw_data = {'regiment': ['Nighthawks', 'Nighthawks', 'Nighthawks', 'Nighthawks', 'Dragoons', 'Dragoons', 'Dragoons', 'Dragoons', 'Scouts', 'Scouts', 'Scouts', 'Scouts'], \n 'company': ['1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd','1st', '1st', '2nd', '2nd'], \n 'name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze', 'Jacon', 'Ryaner', 'Sone', 'Sloan', 'Piger', 'Riani', 'Ali'], \n 'preTestScore': [4, 24, 31, 2, 3, 4, 24, 31, 2, 3, 2, 3],\n 'postTestScore': [25, 94, 57, 62, 70, 25, 94, 57, 62, 70, 62, 70]}",
"_____no_output_____"
]
],
[
[
"### Step 3. Assign it to a variable called regiment.\n#### Don't forget to name each column",
"_____no_output_____"
]
],
[
[
"regiment = pd.DataFrame(raw_data, columns = raw_data.keys())\nregiment",
"_____no_output_____"
]
],
[
[
"### Step 4. What is the mean preTestScore from the regiment Nighthawks? ",
"_____no_output_____"
]
],
[
[
"regiment[regiment['regiment'] == 'Nighthawks'].groupby('regiment').mean()",
"_____no_output_____"
]
],
[
[
"### Step 5. Present general statistics by company",
"_____no_output_____"
]
],
[
[
"regiment.groupby('company').describe()",
"_____no_output_____"
]
],
[
[
"### Step 6. What is the mean each company's preTestScore?",
"_____no_output_____"
]
],
[
[
"regiment.groupby('company').preTestScore.mean()",
"_____no_output_____"
]
],
[
[
"### Step 7. Present the mean preTestScores grouped by regiment and company",
"_____no_output_____"
]
],
[
[
"regiment.groupby(['regiment', 'company']).preTestScore.mean()",
"_____no_output_____"
]
],
[
[
"### Step 8. Present the mean preTestScores grouped by regiment and company without heirarchical indexing",
"_____no_output_____"
]
],
[
[
"regiment.groupby(['regiment', 'company']).preTestScore.mean().unstack()",
"_____no_output_____"
]
],
[
[
"### Step 9. Group the entire dataframe by regiment and company",
"_____no_output_____"
]
],
[
[
"regiment.groupby(['regiment', 'company']).mean()",
"_____no_output_____"
]
],
[
[
"### Step 10. What is the number of observations in each regiment and company",
"_____no_output_____"
]
],
[
[
"regiment.groupby(['company', 'regiment']).size()",
"_____no_output_____"
]
],
[
[
"### Step 11. Iterate over a group and print the name and the whole data from the regiment",
"_____no_output_____"
]
],
[
[
"# Group the dataframe by regiment, and for each regiment,\nfor name, group in regiment.groupby('regiment'):\n # print the name of the regiment\n print(name)\n # print the data of that regiment\n print(group)",
"Dragoons\n regiment company name preTestScore postTestScore\n4 Dragoons 1st Cooze 3 70\n5 Dragoons 1st Jacon 4 25\n6 Dragoons 2nd Ryaner 24 94\n7 Dragoons 2nd Sone 31 57\nNighthawks\n regiment company name preTestScore postTestScore\n0 Nighthawks 1st Miller 4 25\n1 Nighthawks 1st Jacobson 24 94\n2 Nighthawks 2nd Ali 31 57\n3 Nighthawks 2nd Milner 2 62\nScouts\n regiment company name preTestScore postTestScore\n8 Scouts 1st Sloan 2 62\n9 Scouts 1st Piger 3 70\n10 Scouts 2nd Riani 2 62\n11 Scouts 2nd Ali 3 70\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |