hexsha stringlengths 40 40 | size int64 6 14.9M | ext stringclasses 1 value | lang stringclasses 1 value | max_stars_repo_path stringlengths 6 260 | max_stars_repo_name stringlengths 6 119 | max_stars_repo_head_hexsha stringlengths 40 41 | max_stars_repo_licenses list | max_stars_count int64 1 191k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 6 260 | max_issues_repo_name stringlengths 6 119 | max_issues_repo_head_hexsha stringlengths 40 41 | max_issues_repo_licenses list | max_issues_count int64 1 67k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 6 260 | max_forks_repo_name stringlengths 6 119 | max_forks_repo_head_hexsha stringlengths 40 41 | max_forks_repo_licenses list | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | avg_line_length float64 2 1.04M | max_line_length int64 2 11.2M | alphanum_fraction float64 0 1 | cells list | cell_types list | cell_type_groups list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ec9f68a8c54817142a89a045de9205aef7b8f806 | 92,455 | ipynb | Jupyter Notebook | TP 1/Untitled.ipynb | PierreOreistein/MVA-Foundations_DL | 0b3bbdcfffe6a521b99c46284798063739385df2 | [
"MIT"
] | null | null | null | TP 1/Untitled.ipynb | PierreOreistein/MVA-Foundations_DL | 0b3bbdcfffe6a521b99c46284798063739385df2 | [
"MIT"
] | null | null | null | TP 1/Untitled.ipynb | PierreOreistein/MVA-Foundations_DL | 0b3bbdcfffe6a521b99c46284798063739385df2 | [
"MIT"
] | null | null | null | 372.802419 | 86,324 | 0.937754 | [
[
[
"# 0 - Information",
"_____no_output_____"
],
[
"# 1 - Packages",
"_____no_output_____"
]
],
[
[
"# Import personnal functions\nfrom functions import *\n\n# Keras Functions\nfrom keras.datasets import mnist\n\n\nimport matplotlib as mpl\nmpl.use('TKAgg')\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"# 2 - Code",
"_____no_output_____"
]
],
[
[
"# the data, shuffled and split between train and test sets\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\n\n# Reshape the images as vectors\nX_train = X_train.reshape(60000, 784)\nX_test = X_test.reshape(10000, 784)\n\n# Convert values as float an rescale\nX_train = X_train.astype('float32')\nX_test = X_test.astype('float32')\nX_train /= 255\nX_test /= 255\n\n# Shapes of the data\nprint(X_train.shape[0], 'train samples')\nprint(X_test.shape[0], 'test samples')",
"60000 train samples\n10000 test samples\n"
],
[
"plt.figure(figsize=(7.195, 3.841), dpi=100)\nfor i in range(200):\n plt.subplot(10, 20, i+1)\n plt.imshow(X_train[i, :].reshape([28, 28]), cmap='gray')\n plt.axis('off')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Exercice 1",
"_____no_output_____"
],
[
"Softmax ne posséde aucun paramètre.\nDès lors notre prédiction dépend seulement de W et b. Or W est de dimension (784, 10) et b et de dimension (1, 10). Ainsi, on en déduit que le nombre de paramètres du modéle est 784 x 10 + 10, ie: 7850",
"_____no_output_____"
],
[
"Pour la convexité, ce n'est pas clair, pour ce qui est de la convergence, le choix d'un pas de grdient selon les hypothèses de l'algorithme de Newton-Raphson, nous assure la convergence. Vérifier les conditions de Newtion-Raphson pour la convergence.",
"_____no_output_____"
],
[
"Ici on a rajouté une couche caché mais comme avant les fonctions d'activations n'ont pas de paramètres. Ainsi, le nombre de paramètres correspond à lasomme des paramètres de chaque couche. Ainsi sur la couche caché, on a 784 x L + L paramètres (coefficients de $W^h$) et ceux de $b^h$. Dans la deuxième couche on a L x K + K paramètres (coefficients de la matrice $W^y$ et de $b^y$). Ainsi, on a donc: 784 x L + L + L x 10 + 10, ie si L = 100: 79510",
"_____no_output_____"
],
[
"Les W sont initialisé selon la méthode: \"glorot_uniform\" et les biais par \"zeros\" où \"glorot_unifrom\" est défini là: https://keras.io/initializers/#glorot_uniform. Cela permet d'éviter un vanishing gradient dans le cas où les données sont centrées et réduite.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
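A quick check of the parameter counts worked out in the exercise answers above (784 x 10 + 10 = 7,850 for the softmax model, and 784 x L + L + L x 10 + 10 = 79,510 for L = 100). This is a minimal sketch assuming TensorFlow/Keras; the layer sizes come from those answers, everything else (activations, variable names) is illustrative only.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# Softmax regression: W is (784, 10), b is (10,)  ->  784*10 + 10 = 7,850 parameters
logreg = Sequential([Dense(10, activation="softmax", input_shape=(784,))])
assert logreg.count_params() == 784 * 10 + 10            # 7,850

# One hidden layer of L = 100 units: 784*100 + 100 + 100*10 + 10 = 79,510 parameters
mlp = Sequential([
    Dense(100, activation="sigmoid", input_shape=(784,)),
    Dense(10, activation="softmax"),
])
assert mlp.count_params() == 784 * 100 + 100 + 100 * 10 + 10   # 79,510
```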
ec9f6a1cf3cad5e68130cd5d343f2ac533baa8e0 | 6,726 | ipynb | Jupyter Notebook | tutorials/Mnist_global_workflow/dataset_creation.ipynb | DebiAI/py-debiai | fd784fd1ca7a59c38714275b6fad53ba9f09eaa7 | [
"Apache-2.0"
] | 1 | 2022-03-01T13:17:16.000Z | 2022-03-01T13:17:16.000Z | tutorials/Mnist_global_workflow/dataset_creation.ipynb | DebiAI/py-debiai | fd784fd1ca7a59c38714275b6fad53ba9f09eaa7 | [
"Apache-2.0"
] | null | null | null | tutorials/Mnist_global_workflow/dataset_creation.ipynb | DebiAI/py-debiai | fd784fd1ca7a59c38714275b6fad53ba9f09eaa7 | [
"Apache-2.0"
] | null | null | null | 28.379747 | 661 | 0.529884 | [
[
[
"# Dataset creation for the tutorial",
"_____no_output_____"
]
],
[
[
"# System modules\nimport os\nimport pathlib\n\n# Tensorflow modules\nimport tensorflow as tf\nimport tensorflow_datasets as tfds\n\n# Math modules\nimport numpy as np\nimport scipy\n\n# Image modules\nimport PIL\nimport PIL.Image\nfrom skimage.transform import resize",
"_____no_output_____"
]
],
[
[
"### Load MNIST dataset",
"_____no_output_____"
],
[
"#### Here is a link to download MNIST_M dataset\n[MNIST_M Dataset](https://drive.google.com/file/d/0B9Z4d7lAwbnTNDdNeFlERWRGNVk/view)",
"_____no_output_____"
]
],
[
[
"(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()",
"_____no_output_____"
]
],
[
[
"### Formate MNIST images to MNIST_M images format \n#### In order to create an unique model for multiple dataset, we need to rescale the images to the same dimensions (28 \\* 28 \\* 1 -> 32 \\* 32 \\* 3)",
"_____no_output_____"
]
],
[
[
"def reformate_list(values):\n \"\"\" Reformat a list of images to 32*32*3 \"\"\"\n res = []\n for value in values:\n im = []\n \n # Transform 1 channel into 3 channels\n for i in range(len(value[0])):\n row = []\n for j in range(len(value[1])):\n pix = value[i][j]\n row.append([pix, pix, pix])\n im.append(row)\n \n # Resize to 32*32\n im = np.asarray(im)\n res.append(resize(im, (32,32)) * 255)\n return res",
"_____no_output_____"
]
],
[
[
"##### Reformating all dataset takes time so use those functions only once !",
"_____no_output_____"
]
],
[
[
"resized_xtrain = reformate_list(x_train)\nresized_xtest = reformate_list(x_test)",
"CPU times: user 11min 35s, sys: 6min 26s, total: 18min 2s\nWall time: 6min 4s\n"
]
],
[
[
"### Save new dataset for next usage\n#### To avoid losing our data and make the dataset creation easier on tensorflow, we will save them in directories with a specific architecture. \n##### Be sure to change the paths according to your own case.",
"_____no_output_____"
]
],
[
[
"# Global path variables (change them if you need to)\norigin = os.getcwd()\nhome_dir = origin + \"/data/MNIST_reformat/\"\n\ndef create_dataset_arch():\n \"\"\" Create one directory for each category of the dataset (10 here)\"\"\"\n try: \n os.mkdir(home_dir)\n except OSError:\n print(\"Creation dir failed\")\n else:\n print(\"Successfully created\")\n\n\n for i in range(10):\n try:\n os.mkdir(home_dir + str(i))\n except OSError:\n print(\"Creation dir \" + str(i) + \" failed\")",
"_____no_output_____"
],
[
"def fill_dir(samples, labels, id=0):\n \"\"\" Fill the directories with the dataset depending of their labels \"\"\"\n length = len(samples)\n \n for i in range(length):\n arr = np.asarray(samples[i]).astype(np.uint8)\n img = PIL.Image.fromarray(arr)\n img.save(home_dir + str(labels[i]) + \"/\" + str(id) + \".png\")\n id += 1\n \n print(\"Successfully saved !\")",
"_____no_output_____"
]
],
[
[
"##### Filling directories is a long process, be sure to use this cell only once !",
"_____no_output_____"
]
],
[
[
"fill_dir(resized_xtrain, y_train)\nfill_dir(resized_xtest, y_test, 60000)",
"Successfully saved !\nSuccessfully saved !\nCPU times: user 1min 50s, sys: 1min 3s, total: 2min 54s\nWall time: 13min 2s\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
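The `reformate_list` helper in the record above converts each 28x28 grayscale digit to a 32x32x3 image with a per-pixel Python loop, which is why that cell takes several minutes. Below is a hedged sketch of a vectorized alternative; it assumes the same inputs (an `(N, 28, 28)` uint8 array) and the same NumPy / scikit-image imports already used in that notebook, and is not part of the original code.

```python
import numpy as np
from skimage.transform import resize

def reformat_batch(images):
    """images: (N, 28, 28) uint8 array -> (N, 32, 32, 3) uint8 array."""
    rgb = np.repeat(images[..., np.newaxis], 3, axis=-1)      # copy the grey channel 3 times
    resized = [resize(im, (32, 32)) * 255 for im in rgb]      # resize keeps the channel axis
    return np.stack(resized).astype(np.uint8)

# resized_xtrain = reformat_batch(x_train)   # same role as reformate_list(x_train)
```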
ec9f71eb5d95bccb89c4f58d0cb6efed39070eae | 170,422 | ipynb | Jupyter Notebook | Controls_test.ipynb | mbrijbhushan/mechatronics | 578b4b48b4c3c17b8a4fb6d77bfb3b23c966d277 | [
"MIT"
] | null | null | null | Controls_test.ipynb | mbrijbhushan/mechatronics | 578b4b48b4c3c17b8a4fb6d77bfb3b23c966d277 | [
"MIT"
] | null | null | null | Controls_test.ipynb | mbrijbhushan/mechatronics | 578b4b48b4c3c17b8a4fb6d77bfb3b23c966d277 | [
"MIT"
] | null | null | null | 521.168196 | 55,688 | 0.948903 | [
[
[
"from control import * #Load Python Control Systems Library\nimport matplotlib.pyplot as plt #Plotting library\nimport numpy as np #Numerical library",
"_____no_output_____"
],
[
"s=tf([1,0],[1]) #Define 's' variable\ns",
"_____no_output_____"
],
[
"sys1=10000*((s+20)*(s+10))/((s+100)*(s*(s**2+20*s+400))) #Define transfer function\nsys1",
"_____no_output_____"
],
[
"pzmap(sys1);",
"_____no_output_____"
],
[
"#Root locus\nroot_locus(sys1);",
"_____no_output_____"
],
[
"bode_plot(sys1); #Plot the frequency response bode plot",
"_____no_output_____"
],
[
"#Closed loop\ncl_sys1 = feedback(sys1,1)\ncl_sys1",
"_____no_output_____"
],
[
"#Plot the step response\nT1 = np.arange(0,0.5,0.001)\nT, yout = step_response(cl_sys1,T1)\nplt.plot(T,yout)\nplt.title('Step Response')\nplt.xlabel('Time')\nplt.ylabel('Position');",
"_____no_output_____"
],
[
"#Nyquist Plot\nnyquist_plot(sys1);",
"_____no_output_____"
],
[
"# Gain margin and phase margin\nmargin(sys1)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
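The last cell of the control-systems record above calls `margin(sys1)` without naming its outputs. Below is a small usage sketch of how the returned values are typically unpacked; the exact `(gm, pm, wcg, wcp)` ordering is an assumption to verify against the installed python-control version's documentation.

```python
import numpy as np
from control import tf, margin

s = tf([1, 0], [1])
sys1 = 10000 * ((s + 20) * (s + 10)) / ((s + 100) * (s * (s**2 + 20 * s + 400)))

# Assumed ordering: gain margin, phase margin, and their crossover frequencies.
gm, pm, wcg, wcp = margin(sys1)
print(f"gain margin  = {20 * np.log10(gm):.1f} dB at {wcg:.2f} rad/s")
print(f"phase margin = {pm:.1f} deg at {wcp:.2f} rad/s")
```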
ec9f82106f388917bb269c625023857aafad31f6 | 3,379 | ipynb | Jupyter Notebook | egg.ipynb | brotwasme/refnx2019 | 8b62f7668d196c0ec443b47ea669573417a682e6 | [
"BSD-3-Clause"
] | null | null | null | egg.ipynb | brotwasme/refnx2019 | 8b62f7668d196c0ec443b47ea669573417a682e6 | [
"BSD-3-Clause"
] | null | null | null | egg.ipynb | brotwasme/refnx2019 | 8b62f7668d196c0ec443b47ea669573417a682e6 | [
"BSD-3-Clause"
] | null | null | null | 36.728261 | 1,167 | 0.575022 | [
[
[
"pwd",
"_____no_output_____"
],
[
"import refnx as rx\n\n#from refnx.refnx.dataset import Data1D\n# dataset = data # ...\nx,y,yer = 1,2,3\ndata = rx.dataset.Data1D(data=(x, y, yerr))\n# \n\n# simple setup, no tilt, equation or checking\n# air = SLD(value=0+0j, name='air')\n# polymer = SLD(1,'polymer')\n# water = SLD(2,'water')\n# structure = air(thick=0,rough=0) | polymer(200,4) | water(0,3)\n# #air-polymer roughness of 4, polymer size of 200\n# #Erf() <-error function\n# structure[1].interfaces = Erf() # air-polymer interface\n# structure[2].interfaces = Erf()\n\n# model = ReflectModel(structure, bkg=0, dq=0)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
ec9f8703bdc1f19d618fd9087b97a3720795cc81 | 21,585 | ipynb | Jupyter Notebook | assignment.ipynb | ESrinivas84/python-letsupgrade | 7c2dd5f91353a75c3773a206f3954b08e88c7e69 | [
"Apache-2.0"
] | null | null | null | assignment.ipynb | ESrinivas84/python-letsupgrade | 7c2dd5f91353a75c3773a206f3954b08e88c7e69 | [
"Apache-2.0"
] | null | null | null | assignment.ipynb | ESrinivas84/python-letsupgrade | 7c2dd5f91353a75c3773a206f3954b08e88c7e69 | [
"Apache-2.0"
] | null | null | null | 22.460978 | 239 | 0.379754 | [
[
[
"<a href=\"https://colab.research.google.com/github/ESrinivas84/python-letsupgrade/blob/master/assignment.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"lst=[\"hi\",1,2,3,\"hello\",[10,20,30]]\nlst.append(\"smile\")\nlst\n",
"_____no_output_____"
]
],
[
[
"#List\n",
"_____no_output_____"
]
],
[
[
"lst.count(1)\n",
"_____no_output_____"
],
[
"lst.pop()",
"_____no_output_____"
],
[
"lst.insert(1,\"hello\")\nlst\n",
"_____no_output_____"
],
[
"lst.clear()\nlst",
"_____no_output_____"
]
],
[
[
"#Dictionaries",
"_____no_output_____"
]
],
[
[
"dit={\"name\":\"ram\",\"age\":20,\"school\":\"hps\"}\ndit",
"_____no_output_____"
],
[
"x=dit.get(\"name\")\nx",
"_____no_output_____"
],
[
"dit[\"age\"]=21\ndit",
"_____no_output_____"
],
[
"dit[\"gender\"]=\"male\"\ndit",
"_____no_output_____"
],
[
"len(dit)",
"_____no_output_____"
],
[
"dit.pop(\"age\")\ndit",
"_____no_output_____"
]
],
[
[
"#Tuples",
"_____no_output_____"
]
],
[
[
"tuple=(\"apple\",\"banana\",\"grapes\",\"mango\")\ntuple",
"_____no_output_____"
],
[
"tuple[0]\n",
"_____no_output_____"
],
[
"for x in tuple:\n print(x)",
"apple\nbanana\ngrapes\nmango\n"
],
[
"tuple[0][3]",
"_____no_output_____"
],
[
"tuple[-2]",
"_____no_output_____"
],
[
"tuple[1:3]",
"_____no_output_____"
],
[
"len(tuple)",
"_____no_output_____"
]
],
[
[
"#sets",
"_____no_output_____"
]
],
[
[
"set={\"apple\",\"mango\",\"grapes\",\"orange\"}\nset",
"_____no_output_____"
],
[
"len(set)",
"_____no_output_____"
],
[
"set.add(\"watermelon\")\nset",
"_____no_output_____"
],
[
"set.discard(\"grapes\")\nset",
"_____no_output_____"
],
[
"for x in set:\n print(x)",
"mango\nwatermelon\napple\norange\n"
],
[
"set.clear()\nset",
"_____no_output_____"
]
],
[
[
"#String",
"_____no_output_____"
]
],
[
[
"str=(\"name\",\"age\",\"gender\")\nstr",
"_____no_output_____"
],
[
"print(str[1])\n",
"age\n"
],
[
"print(str[0:2])\n",
"('name', 'age')\n"
],
[
"print(str[-2])",
"age\n"
],
[
"str.index(\"age\")\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ec9f9978c853a972bcc34accfc91a621e57a0516 | 18,629 | ipynb | Jupyter Notebook | notebooks/building_production_ml_systems/labs/3_kubeflow_pipelines.ipynb | Jonathanpro/asl-ml-immersion | c461aa215339a6816810dfef5a92a6e375f9bc66 | [
"Apache-2.0"
] | 11 | 2021-09-08T05:39:02.000Z | 2022-03-25T14:35:22.000Z | notebooks/building_production_ml_systems/labs/3_kubeflow_pipelines.ipynb | Jonathanpro/asl-ml-immersion | c461aa215339a6816810dfef5a92a6e375f9bc66 | [
"Apache-2.0"
] | 118 | 2021-08-28T03:09:44.000Z | 2022-03-31T00:38:44.000Z | notebooks/building_production_ml_systems/labs/3_kubeflow_pipelines.ipynb | Jonathanpro/asl-ml-immersion | c461aa215339a6816810dfef5a92a6e375f9bc66 | [
"Apache-2.0"
] | 110 | 2021-09-02T15:01:35.000Z | 2022-03-31T12:32:48.000Z | 29.522979 | 404 | 0.593108 | [
[
[
"# Kubeflow pipelines\n\n**Learning Objectives:**\n 1. Learn how to deploy a Kubeflow cluster on GCP\n 1. Learn how to create a experiment in Kubeflow\n 1. Learn how to package you code into a Kubeflow pipeline\n 1. Learn how to run a Kubeflow pipeline in a repeatable and traceable way\n\n\n## Introduction\n\nIn this notebook, we will first setup a Kubeflow cluster on GCP.\nThen, we will create a Kubeflow experiment and a Kubflow pipeline from our taxifare machine learning code. At last, we will run the pipeline on the Kubeflow cluster, providing us with a reproducible and traceable way to execute machine learning code.",
"_____no_output_____"
]
],
[
[
"!pip3 install --user kfp --upgrade",
"_____no_output_____"
]
],
[
[
"### Restart the kernel\n\nAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages.",
"_____no_output_____"
],
[
"### Import libraries and define constants",
"_____no_output_____"
]
],
[
[
"from os import path\n\nimport kfp\nimport kfp.compiler as compiler\nimport kfp.components as comp\nimport kfp.dsl as dsl\nimport kfp.gcp as gcp\nimport kfp.notebook",
"_____no_output_____"
]
],
[
[
"## Setup a Kubeflow cluster on GCP",
"_____no_output_____"
],
[
"**TODO 1**",
"_____no_output_____"
],
[
"To deploy a [Kubeflow](https://www.kubeflow.org/) cluster\nin your GCP project, use the [AI Platform pipelines](https://console.cloud.google.com/ai-platform/pipelines):\n\n1. Go to [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines) in the GCP Console.\n1. Create a new instance\n2. Hit \"Configure\"\n3. Check the box \"Allow access to the following Cloud APIs\"\n1. Hit \"Create Cluster\"\n4. Hit \"Deploy\"\n\nWhen the cluster is ready, go back to the AI Platform pipelines page and click on \"SETTINGS\" entry for your cluster.\nThis will bring up a pop up with code snippets on how to access the cluster \nprogrammatically. \n\nCopy the \"host\" entry and set the \"HOST\" variable below with that.\n",
"_____no_output_____"
]
],
[
[
"HOST = \"\" # TODO: fill in the HOST information for the cluster",
"_____no_output_____"
]
],
[
[
"### Authenticate your KFP cluster with a Kubernetes secret\n\nIf you run pipelines that requires calling any GCP services, you need to set the application default credential to a pipeline step by mounting the proper GCP service account token as a Kubernetes secret.\n\nFirst point your kubectl current context to your cluster. Go back to your [Kubeflow cluster dashboard](https://console.cloud.google.com/ai-platform/pipelines/clusters) or navigate to `Navigation menu > AI Platform > Pipelines` and look to see the cluster name, zone and namespace for the pipeline you deployed above. It's likely called `cluster-1` if this is the first AI Pipelines you've created. ",
"_____no_output_____"
]
],
[
[
"# Change below if necessary\nPROJECT = !gcloud config get-value project # noqa: E999\nPROJECT = PROJECT[0]\nBUCKET = PROJECT # change if needed\nCLUSTER = \"cluster-1\" # change if needed\nZONE = \"us-central1-a\" # change if needed\nNAMESPACE = \"default\" # change if needed\n\n%env PROJECT=$PROJECT\n%env CLUSTER=$CLUSTER\n%env ZONE=$ZONE\n%env NAMESPACE=$NAMESPACE",
"_____no_output_____"
],
[
"# Configure kubectl to connect with the cluster\n!gcloud container clusters get-credentials \"$CLUSTER\" --zone \"$ZONE\" --project \"$PROJECT\"",
"_____no_output_____"
]
],
[
[
"We'll create a service account called `kfpdemo` with the necessary IAM permissions for our cluster secret. We'll give this service account permissions for any GCP services it might need. This `taxifare` pipeline needs access to Cloud Storage, so we'll give it the `storage.admin` role and `ml.admin`. Open a Cloud Shell and copy/paste this code in the terminal there.\n\n```bash\nPROJECT=$(gcloud config get-value project)\n\n# Create service account\ngcloud iam service-accounts create kfpdemo \\\n --display-name kfpdemo --project $PROJECT\n\n# Grant permissions to the service account by binding roles\ngcloud projects add-iam-policy-binding $PROJECT \\\n --member=serviceAccount:kfpdemo@$PROJECT.iam.gserviceaccount.com \\\n --role=roles/storage.admin\n \ngcloud projects add-iam-policy-binding $PROJECT \\\n --member=serviceAccount:kfpdemo@$PROJECT.iam.gserviceaccount.com \\\n --role=roles/ml.admin \n```",
"_____no_output_____"
],
[
"Then, we'll create and download a key for this service account and store the service account credential as a Kubernetes secret called `user-gcp-sa` in the cluster.",
"_____no_output_____"
]
],
[
[
"%%bash\ngcloud iam service-accounts keys create application_default_credentials.json \\\n --iam-account kfpdemo@$PROJECT.iam.gserviceaccount.com",
"_____no_output_____"
],
[
"# Check that the key was downloaded correctly.\n!ls application_default_credentials.json",
"_____no_output_____"
],
[
"# Create a k8s secret. If already exists, override.\n!kubectl create secret generic user-gcp-sa \\\n --from-file=user-gcp-sa.json=application_default_credentials.json \\\n -n $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -",
"_____no_output_____"
]
],
[
[
"## Create an experiment",
"_____no_output_____"
],
[
"**TODO 2**",
"_____no_output_____"
],
[
"We will start by creating a Kubeflow client to pilot the Kubeflow cluster:",
"_____no_output_____"
]
],
[
[
"client = # TODO: create a Kubeflow client",
"_____no_output_____"
]
],
[
[
"Let's look at the experiments that are running on this cluster. Since you just launched it, you should see only a single \"Default\" experiment:",
"_____no_output_____"
]
],
[
[
"client.list_experiments()",
"_____no_output_____"
]
],
[
[
"Now let's create a 'taxifare' experiment where we could look at all the various runs of our taxifare pipeline:",
"_____no_output_____"
]
],
[
[
"exp = # TODO: create an experiment called 'taxifare'",
"_____no_output_____"
]
],
[
[
"Let's make sure the experiment has been created correctly:",
"_____no_output_____"
]
],
[
[
"client.list_experiments()",
"_____no_output_____"
]
],
[
[
"## Packaging your code into Kubeflow components",
"_____no_output_____"
],
[
"We have packaged our taxifare ml pipeline into three components:\n* `./components/bq2gcs` that creates the training and evaluation data from BigQuery and exports it to GCS\n* `./components/trainjob` that launches the training container on AI-platform and exports the model\n* `./components/deploymodel` that deploys the trained model to AI-platform as a REST API\n\nEach of these components has been wrapped into a Docker container, in the same way we did with the taxifare training code in the previous lab.\n\nIf you inspect the code in these folders, you'll notice that the `main.py` or `main.sh` files contain the code we previously executed in the notebooks (loading the data to GCS from BQ, or launching a training job to AI-platform, etc.). The last line in the `Dockerfile` tells you that these files are executed when the container is run. \nSo we just packaged our ml code into light container images for reproducibility. \n\nWe have made it simple for you to build the container images and push them to the Google Cloud image registry gcr.io in your project:",
"_____no_output_____"
]
],
[
[
"# Builds the taxifare trainer container in case you skipped the optional part\n# of lab 1\n!taxifare/scripts/build.sh",
"_____no_output_____"
],
[
"# Pushes the taxifare trainer container to gcr/io\n!taxifare/scripts/push.sh",
"_____no_output_____"
],
[
"# Builds the KF component containers and push them to gcr/io\n!cd pipelines && make components",
"_____no_output_____"
]
],
[
[
"Now that the container images are pushed to the [registry in your project](https://console.cloud.google.com/gcr), we need to create yaml files describing to Kubeflow how to use these containers. It boils down essentially to\n* describing what arguments Kubeflow needs to pass to the containers when it runs them\n* telling Kubeflow where to fetch the corresponding Docker images\n\nIn the cells below, we have three of these \"Kubeflow component description files\", one for each of our components.",
"_____no_output_____"
],
[
"**TODO 3**",
"_____no_output_____"
],
[
"**IMPORTANT: Modify the image URI in the cell \nbelow to reflect that you pushed the images into the gcr.io associated with your project.**",
"_____no_output_____"
]
],
[
[
"%%writefile bq2gcs.yaml\n\nname: bq2gcs\n \ndescription: |\n This component creates the training and\n validation datasets as BiqQuery tables and export\n them into a Google Cloud Storage bucket at\n gs://<BUCKET>/taxifare/data.\n \ninputs:\n - {name: Input Bucket , type: String, description: 'GCS directory path.'}\n\nimplementation:\n container:\n image: # TODO: Reference the image URI for taxifare-bq2gcs you just created\n args: [\"--bucket\", {inputValue: Input Bucket}]",
"_____no_output_____"
],
[
"%%writefile trainjob.yaml\n\nname: trainjob\n \ndescription: |\n This component trains a model to predict that taxi fare in NY.\n It takes as argument a GCS bucket and expects its training and\n eval data to be at gs://<BUCKET>/taxifare/data/ and will export\n the trained model at gs://<BUCKET>/taxifare/model/.\n \ninputs:\n - {name: Input Bucket , type: String, description: 'GCS directory path.'}\n\nimplementation:\n container:\n image: # TODO: Reference the image URI for taxifare-trainjob you just created\n args: [{inputValue: Input Bucket}]",
"_____no_output_____"
],
[
"%%writefile deploymodel.yaml\n\nname: deploymodel\n \ndescription: |\n This component deploys a trained taxifare model on GCP as taxifare:dnn.\n It takes as argument a GCS bucket and expects the model to deploy \n to be found at gs://<BUCKET>/taxifare/model/export/savedmodel/\n \ninputs:\n - {name: Input Bucket , type: String, description: 'GCS directory path.'}\n\nimplementation:\n container:\n image: # TODO: Reference the image URI for taxifare-deployment you just created\n args: [{inputValue: Input Bucket}]",
"_____no_output_____"
]
],
[
[
"## Create a Kubeflow pipeline",
"_____no_output_____"
],
[
"The code below creates a kubeflow pipeline by decorating a regular function with the\n`@dsl.pipeline` decorator. Now the arguments of this decorated function will be\nthe input parameters of the Kubeflow pipeline.\n\nInside the function, we describe the pipeline by\n* loading the yaml component files we created above into a Kubeflow `op`\n* specifying the order into which the Kubeflow ops should be run",
"_____no_output_____"
]
],
[
[
"# TODO 3\nPIPELINE_TAR = \"taxifare.tar.gz\"\nBQ2GCS_YAML = \"./bq2gcs.yaml\"\nTRAINJOB_YAML = \"./trainjob.yaml\"\nDEPLOYMODEL_YAML = \"./deploymodel.yaml\"\n\n\[email protected](\n name=\"Taxifare\",\n description=\"Train a ml model to predict the taxi fare in NY\",\n)\ndef pipeline(gcs_bucket_name=\"<bucket where data and model will be exported>\"):\n\n bq2gcs_op = comp.load_component_from_file(BQ2GCS_YAML)\n bq2gcs = bq2gcs_op(\n input_bucket=gcs_bucket_name,\n )\n\n\n trainjob_op = # TODO: Load the yaml file for training\n trainjob = # TODO: Add your code to run the training job\n\n deploymodel_op = # TODO: Load the yaml file for deployment\n deploymodel = # TODO: Addd your code to run model deployment\n\n # TODO: Add the code to run 'trainjob' after 'bq2gcs' in the pipeline\n # TODO: Add the code to run 'deploymodel' after 'trainjob' in the pipeline",
"_____no_output_____"
]
],
[
[
"The pipeline function above is then used by the Kubeflow compiler to create a Kubeflow pipeline artifact that can be either uploaded to the Kubeflow cluster from the UI, or programatically, as we will do below:",
"_____no_output_____"
]
],
[
[
"# TODO: Compile the pipeline functon above",
"_____no_output_____"
],
[
"ls $PIPELINE_TAR",
"_____no_output_____"
]
],
[
[
"If you untar and uzip this pipeline artifact, you'll see that the compiler has transformed the\nPython description of the pipeline into yaml description!\n\nNow let's feed Kubeflow with our pipeline and run it using our client:",
"_____no_output_____"
]
],
[
[
"# TODO 4\nrun = client.run_pipeline(\n experiment_id= # TODO: Add code for experiment id\n job_name= # TODO: Provide a jobname\n pipeline_package_path= # TODO: Add code for pipeline zip file\n params={\n \"gcs_bucket_name\": BUCKET,\n },\n)",
"_____no_output_____"
]
],
[
[
"Have a look at the link to monitor the run. ",
"_____no_output_____"
],
[
"Now all the runs are nicely organized under the experiment in the UI, and new runs can be either manually launched or scheduled through the UI in a completely repeatable and traceable way!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
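The Kubeflow lab in the record above deliberately leaves the op wiring as TODOs. The sketch below shows one possible way the three components could be chained; it is not the lab's official solution, and it assumes the `bq2gcs.yaml`, `trainjob.yaml` and `deploymodel.yaml` files written earlier in that record (whose single input is named `Input Bucket`, hence the `input_bucket=` keyword).

```python
import kfp.components as comp
import kfp.dsl as dsl

@dsl.pipeline(name="Taxifare", description="Train a ml model to predict the taxi fare in NY")
def pipeline(gcs_bucket_name="<bucket where data and model will be exported>"):
    # Each yaml file becomes a reusable op factory; calling it creates a pipeline step.
    bq2gcs = comp.load_component_from_file("./bq2gcs.yaml")(input_bucket=gcs_bucket_name)
    trainjob = comp.load_component_from_file("./trainjob.yaml")(input_bucket=gcs_bucket_name)
    deploymodel = comp.load_component_from_file("./deploymodel.yaml")(input_bucket=gcs_bucket_name)

    trainjob.after(bq2gcs)        # train only after the BigQuery -> GCS export finishes
    deploymodel.after(trainjob)   # deploy only after training finishes
```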
ec9f9e6fbe2fa440d4715f8a914fd344990b1c7b | 2,561 | ipynb | Jupyter Notebook | 00 Get Your Shit Together/Planning.ipynb | escarpa23/machine-learning-program | cd872c5f929ef486fe531fe6ef931d9a9dddf87f | [
"MIT"
] | 1 | 2022-03-16T21:19:41.000Z | 2022-03-16T21:19:41.000Z | 00 Get Your Shit Together/Planning.ipynb | escarpa23/machine-learning-program | cd872c5f929ef486fe531fe6ef931d9a9dddf87f | [
"MIT"
] | null | null | null | 00 Get Your Shit Together/Planning.ipynb | escarpa23/machine-learning-program | cd872c5f929ef486fe531fe6ef931d9a9dddf87f | [
"MIT"
] | null | null | null | 26.402062 | 73 | 0.492776 | [
[
[
"# Planning",
"_____no_output_____"
],
[
"# Lesson 1: Corregir Pandas & Object Dissection\n\n- **Topics:**\n 1. [X] corregir pandas\n 2. [ ] disecting object for data vis\n 3. tal\n- **Takeaways:**\n 1. Accessing the Object `instance[]`\n 2. Executing a function from the Object `instance.fuction()`\n 3. Accessing `subinstances` to get a special `function`\n - `~~df.item_price.split()~~`\n - `df.item_price.str.split()`\n 4. Partes de un objeto\n - serie\n - index\n - values\n - name\n - dtypes\n - index, value\n - s[[0]]\n- **Task for Next Lesson:**\n 1. [ ] 09_Time Series | Apple\n 2. [ ] 01 Chipotle Terminar",
"_____no_output_____"
],
[
"# Lesson 2: 03 Transforming Basic 24/02/22\n\n- **Topics:**\n 1. corregir pandas\n 2. [ ] 03 Transforming Basic\n 3. [ ] 04 Manipulate the DataFrame\n- **Takeaways:**\n 1. Accessing the Object `instance[]`\n 2. Executing a function from the Object `instance.fuction()`\n 3. Accessing `subinstances` to get a special `function`\n - `~~df.item_price.split()~~`\n - `df.item_price.str.split()`\n- **Task for Next Lesson:**\n 1. [ ] 09_Time Series | Apple\n 2. [ ] 03 Transforming Basic\n 3. [ ] tal",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
]
] |
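Takeaway 3 in the planning record above (the crossed-out `df.item_price.split()` versus `df.item_price.str.split()`) is about the `.str` accessor on a pandas Series. A tiny illustrative sketch; the `item_price` values are made up for the example.

```python
import pandas as pd

df = pd.DataFrame({"item_price": ["$2.39", "$3.39"]})

# df.item_price.split()   # AttributeError: a Series has no .split method
print(df.item_price.str.split("."))                                    # string methods live under .str
print(df.item_price.str.replace("$", "", regex=False).astype(float))   # 2.39, 3.39
```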
ec9fb25b93e66bd048528a66087b3a07990708f5 | 46,702 | ipynb | Jupyter Notebook | Sentiment analysis (training).ipynb | littlewine/USelections2016 | a028b521eed763a089b3b4abc6bd54a33c1350d9 | [
"MIT"
] | 2 | 2019-05-25T11:50:42.000Z | 2021-04-06T17:43:20.000Z | Sentiment analysis (training).ipynb | littlewine/USelections2016 | a028b521eed763a089b3b4abc6bd54a33c1350d9 | [
"MIT"
] | null | null | null | Sentiment analysis (training).ipynb | littlewine/USelections2016 | a028b521eed763a089b3b4abc6bd54a33c1350d9 | [
"MIT"
] | null | null | null | 40.787773 | 11,702 | 0.671663 | [
[
[
"# Training a sentiment classifier",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport time\nimport warnings\nwarnings.filterwarnings('ignore')\n\nstart_time = time.time()\n\ndataset = pd.read_csv('data/Sentiment Analysis Dataset.csv',error_bad_lines=False)\ndel dataset[\"ItemID\"]\ndel dataset['SentimentSource']\n\nelapsed_time = time.time() - start_time\nprint elapsed_time\n\nprint dataset.shape\nprint len(dataset)\ndataset.head()\n\n",
"Skipping line 8836: expected 4 fields, saw 5\n\nSkipping line 535882: expected 4 fields, saw 7\n\n"
],
[
"#dataset[:10].iterrows()\n\ntest_train = pd.DataFrame()\n\ntest_train = dataset\ntest_train.head()",
"_____no_output_____"
]
],
[
[
"# preproccessing",
"_____no_output_____"
]
],
[
[
"from nltk.tokenize import RegexpTokenizer\nfrom nltk.corpus import stopwords\nimport HTMLParser # In Python 3.4+ import html \nimport nltk\nfrom nltk.stem import PorterStemmer\nfrom nltk.tokenize import sent_tokenize, word_tokenize\n\nps = PorterStemmer()\n\ndef Clean(unescaped_tweet):\n '''This function takes a tweet as input and returns a tokenizing list.'''\n \n tokenizer = RegexpTokenizer(r'\\w+')\n \n #tokenize words\n cleaned_tweet_tokens = tokenizer.tokenize(unescaped_tweet.lower())\n #remove stop words\n #cleaned_tweet_tokens = [word for word in cleaned_tweet_tokens if word not in stopwords.words('english')]\n \n #cleaned_tweet_tokens = [ ps.stem(w) for w in cleaned_tweet_tokens]\n \n return cleaned_tweet_tokens",
"_____no_output_____"
],
[
"\n\n# start_time = time.time()\n# test_train['token'] = test_train['SentimentText'].apply(lambda tweet: Clean(tweet))\n# test_train.head()\n\n# elapsed_time = time.time() - start_time\n# print elapsed_time",
"_____no_output_____"
],
[
"test_train.head()",
"_____no_output_____"
],
[
"from sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\nfrom nltk.stem.snowball import SnowballStemmer\nimport nltk\n\nstemmer = SnowballStemmer('english')\nanalyzer = TfidfVectorizer().build_analyzer()\n\ndef stemmed_words(doc):\n return (stemmer.stem(w) for w in analyzer(doc))\n\nvect = TfidfVectorizer(analyzer=stemmed_words, \n tokenizer=nltk.tokenize.casual.TweetTokenizer,\n stop_words='english',\n #min_df = 0.001, #dont include words that appear in less than x% of tweets\n #max_df = 0.1\n )\n\n#test stemmer\n#print(vect.fit_transform(sm_set.head()[:10]))\n#print(vect.get_feature_names())",
"_____no_output_____"
],
[
"from sklearn.utils import shuffle",
"_____no_output_____"
],
[
"#keep only a sample\nsm_set = pd.DataFrame(shuffle(test_train)[:100000]).reset_index(drop=True)",
"_____no_output_____"
],
[
"sm_set.SentimentText.head()",
"_____no_output_____"
],
[
"# based on the text of each tweet, create a (sparse) matrix containing the occurencies of each word and store it into X.\n# this is going to be our feature matrix, which we will give into the classifier to \"learn\" the sentiment.\n\n#narrow down the dataset\nsm_set = test_train#[:100000]\n\nstart_time = time.time()\n#fit_transform is a method to create the feature matrix of the tweets based on word occurencies\nX = vect.fit_transform(sm_set.SentimentText)\ny = sm_set.Sentiment\n\nelapsed_time = time.time() - start_time\nprint elapsed_time,'sec to fit transform',len(sm_set),'samples'",
"255.079085827 sec to fit transform 1578612 samples\n"
],
[
"print len(vect.get_feature_names())\nprint vect.get_feature_names()",
"636007\n"
],
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"#split the dataset in a training (X_train, y_train) and test dataset (X_test,y_test)\n\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=0.33, random_state=42)\n",
"_____no_output_____"
]
],
[
[
"# train NaiveBayes without feature selection",
"_____no_output_____"
]
],
[
[
"from sklearn.naive_bayes import MultinomialNB",
"_____no_output_____"
],
[
"#train the classifier\nstart_time = time.time()\nclf = MultinomialNB()\nclf.fit(X_train, y_train)\n\nelapsed_time = time.time() - start_time\nprint elapsed_time",
"0.265272855759\n"
],
[
"print vect.get_feature_names()[::len(vect.get_feature_names())/40]",
"[u'00', u'3p9pkr', u'6p3im', u'abail', u'amaneci', u'assistiu', u'benjaminblack', u'brendensteven', u'cathyrigbi', u'clickio', u'd_ryura', u'didntwork', u'ebert', u'explosivosr', u'friendstack', u'gotki', u'hesgettingamazingreview', u'ilube', u'jaybaer', u'jorgemudri', u'kennyl98', u'lall', u'lmmeng', u'manu', u'michaeal', u'moviesss', u'nenna', u'ohscreditunion', u'pentagramdream', u'pshawww', u'renna', u'sakura0_o', u'shaunswagg', u'someh', u'sumchi', u'thanickyj', u'tootexti', u'unvibr', u'wepppaaaaaaaaaaaaaaa', u'xseifer', u'\\u02c6\\xec\\u0153\\xbc\\xeb']\n"
],
[
"len(vect.get_feature_names())",
"_____no_output_____"
],
[
"vect",
"_____no_output_____"
],
[
"from sklearn.metrics import classification_report",
"_____no_output_____"
],
[
"print \"Results for %i training samples and %i test samples (trained on %f sec)\" %(len(y_train),len(y_test),elapsed_time)\nprint classification_report(y_test,clf.predict(X_test))\n",
"Results for 1057670 training samples and 520942 test samples (trained on 0.265273 sec)\n precision recall f1-score support\n\n 0 0.74 0.82 0.78 260447\n 1 0.80 0.72 0.75 260495\n\navg / total 0.77 0.77 0.77 520942\n\n"
]
],
[
[
"# feature selection",
"_____no_output_____"
],
[
"By now, vect has a vocabulary including all words and we will trim that using a statistical method, such as chi2",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_selection import chi2,f_classif,SelectPercentile",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"y_train_bool = map(lambda x: x==1,y_train)",
"_____no_output_____"
],
[
"pd.Series(f_classif(X_train,y_train_bool)[0]).plot()\nplt.show()",
"_____no_output_____"
],
[
"pd.Series(f_classif(X_train,y_train_bool)[1]).plot()\nplt.show()",
"_____no_output_____"
],
[
"selector = SelectPercentile(chi2, percentile=1)",
"_____no_output_____"
],
[
"selector.fit(X_train,y_train)",
"_____no_output_____"
],
[
"clf.fit(selector.transform(X_train),y_train)",
"_____no_output_____"
],
[
"#predict\nresults = clf.predict(selector.transform(X_test))\nprint classification_report(y_test,results)\n",
" precision recall f1-score support\n\n 0 0.77 0.77 0.77 260447\n 1 0.77 0.77 0.77 260495\n\navg / total 0.77 0.77 0.77 520942\n\n"
]
],
[
[
"# try different models",
"_____no_output_____"
],
[
"** Logistic Regression **",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression",
"_____no_output_____"
],
[
"clf = LogisticRegression('l2')",
"_____no_output_____"
],
[
"# fit a variable selector in our data\nselector = SelectPercentile(chi2, percentile=10)\nselector.fit(X_train,y_train)",
"_____no_output_____"
],
[
"#fit the model\nclf.fit(selector.transform(X_train),y_train)",
"_____no_output_____"
],
[
"#predict\nresults = clf.predict(selector.transform(X_test))\nprint classification_report(y_test,results)",
"_____no_output_____"
]
],
[
[
"# Sentiment analysis in some sample sentences",
"_____no_output_____"
]
],
[
[
"sample_text = ['Aris is a bit dubtful about me being a smart ass',\n 'aris doesnt love sklearn yet',\n 'but he will definitely love it soon',\n 'fuck','bad','amazing', 'this is a sentence',\n 'this is a bad sentence',]",
"_____no_output_____"
],
[
"vect.transform(sample_text)",
"_____no_output_____"
],
[
"for i,sent in enumerate(sample_text):\n print sent,clf.predict(vect.transform(sample_text))[i]",
"_____no_output_____"
]
],
[
[
"# export our model",
"_____no_output_____"
]
],
[
[
"time.localtime()[1:5]",
"_____no_output_____"
],
[
"#save the clf classifier in a file to load it in a different notebook/at a different time\ntimestr = \"%i-%i_%i,%i\"%time.localtime()[1:5]\nfrom sklearn.externals import joblib\njoblib.dump(clf, 'trained models/'+'descr'+timestr+'.pkl') \njoblib.dump(vect, 'trained models/vect'+'descr'+timestr+'.pkl') \n\n#load it later with:\n#clf = joblib.load('NaiveBayesCl_67k_tweets.pkl') ",
"_____no_output_____"
],
[
"import pickle\ns = pickle.dumps(vect)\nvec2 = pickle.loads(s)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
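The sentiment-analysis record above persists `clf` and `vect` as two separate pickles and applies `SelectPercentile` by hand at predict time. A hedged design alternative: bundle the TF-IDF vectorizer, the chi2 selector and the Naive Bayes classifier used there into a single scikit-learn `Pipeline`, so one object can be saved and reloaded (the file name below is made up).

```python
from joblib import dump, load
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.naive_bayes import MultinomialNB

sentiment_pipe = Pipeline([
    ("vect", TfidfVectorizer(stop_words="english")),
    ("select", SelectPercentile(chi2, percentile=10)),
    ("clf", MultinomialNB()),
])

# sentiment_pipe.fit(train_texts, train_labels)       # raw texts in, no manual transform step
# dump(sentiment_pipe, "trained models/sentiment_pipe.pkl")
# load("trained models/sentiment_pipe.pkl").predict(["this is a bad sentence"])
```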
ec9fbb0d6477d1d4fc65aac62268e99259f10f06 | 30,702 | ipynb | Jupyter Notebook | examples/Selectors-Pipelines.ipynb | bhelfrecht/scikit-cosmo | b4e06ae67752574d48fb05a0a3e3f860488ed233 | [
"BSD-3-Clause"
] | 16 | 2020-12-07T23:27:11.000Z | 2021-12-17T22:28:33.000Z | examples/Selectors-Pipelines.ipynb | bhelfrecht/scikit-cosmo | b4e06ae67752574d48fb05a0a3e3f860488ed233 | [
"BSD-3-Clause"
] | 74 | 2020-11-30T18:51:41.000Z | 2021-12-06T20:53:31.000Z | examples/Selectors-Pipelines.ipynb | bhelfrecht/scikit-cosmo | b4e06ae67752574d48fb05a0a3e3f860488ed233 | [
"BSD-3-Clause"
] | 4 | 2020-12-02T16:42:53.000Z | 2021-11-16T15:50:33.000Z | 178.5 | 12,848 | 0.902808 | [
[
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Using-scikit-COSMO-Selectors-with-scikit-learn-Pipelines\" data-toc-modified-id=\"Using-scikit-COSMO-Selectors-with-scikit-learn-Pipelines-1\"><span class=\"toc-item-num\">1 </span>Using scikit-COSMO Selectors with scikit-learn Pipelines</a></span><ul class=\"toc-item\"><li><span><a href=\"#Using-one-selector-in-a-pipeline\" data-toc-modified-id=\"Using-one-selector-in-a-pipeline-1.1\"><span class=\"toc-item-num\">1.1 </span>Using one selector in a pipeline</a></span></li><li><span><a href=\"#Stacking-selectors-one-after-another\" data-toc-modified-id=\"Stacking-selectors-one-after-another-1.2\"><span class=\"toc-item-num\">1.2 </span>Stacking selectors one after another</a></span></li></ul></li></ul></div>",
"_____no_output_____"
],
[
"Using scikit-COSMO selectors with scikit-learn pipelines\n========================================================",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom matplotlib import pyplot as plt\n\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.linear_model import RidgeCV\nfrom sklearn.datasets import load_boston\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\n\nfrom skcosmo.feature_selection import FPS, CUR",
"_____no_output_____"
]
],
[
[
"## Simple integration of scikit-COSMO selectors\nThis example shows how to use FPS to subselect features before training a RidgeCV.",
"_____no_output_____"
]
],
[
[
"scaler = StandardScaler()\nselector = FPS(n_to_select=6)\nridge = RidgeCV(cv=2, alphas=np.logspace(-8, 2, 10))\n\nX, y = load_boston(return_X_y=True)\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)\n\npipe = Pipeline([(\"scaler\", scaler), (\"selector\", selector), (\"ridge\", ridge)])\npipe.fit(X_train.copy(), y_train.copy())\n\nplt.scatter(y_test, pipe.predict(X_test))\nplt.gca().set_aspect(\"equal\")\nplt.plot(plt.xlim(), plt.xlim(), \"r--\")\nplt.xlabel('True Values')\nplt.ylabel('Predicted Values')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Stacking selectors one after another\n\nThis example shows how to use an FPS, then CUR selector\nto subselect features before training a RidgeCV.",
"_____no_output_____"
]
],
[
[
"scaler = StandardScaler()\nfps = FPS(n_to_select=10)\ncur = CUR(n_to_select=4)\nridge = RidgeCV(cv=2, alphas=np.logspace(-8, 2, 10))\n\nX, y = load_boston(return_X_y=True)\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)\n\npipe = Pipeline(\n [(\"scaler\", scaler), (\"selector1\", fps), (\"selector2\", cur), (\"ridge\", ridge)]\n)\npipe.fit(X_train.copy(), y_train.copy())\n\nplt.scatter(y_test, pipe.predict(X_test))\nplt.gca().set_aspect(\"equal\")\nplt.plot(plt.xlim(), plt.xlim(), \"r--\")\nplt.xlabel('True Values')\nplt.ylabel('Predicted Values')\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9fbb7235245172f8957191c2b524eebfe1a751 | 759,603 | ipynb | Jupyter Notebook | 2_TCN/TCN_MPQA.ipynb | jrderek/Text-Classification- | 384448c37d9619490fac29b4731084eea8ef4007 | [
"MIT"
] | null | null | null | 2_TCN/TCN_MPQA.ipynb | jrderek/Text-Classification- | 384448c37d9619490fac29b4731084eea8ef4007 | [
"MIT"
] | null | null | null | 2_TCN/TCN_MPQA.ipynb | jrderek/Text-Classification- | 384448c37d9619490fac29b4731084eea8ef4007 | [
"MIT"
] | null | null | null | 69.338476 | 447 | 0.488298 | [
[
[
"# TCN Classification with MPQA Dataset\n<hr>\n\nWe will build a text classification model using TCN model on the MPQA Dataset. Since there is no standard train/test split for this dataset, we will use 10-Fold Cross Validation (CV). \n\n\n## Load the library",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport re\nimport nltk\nimport random\n# from nltk.tokenize import TweetTokenizer\nfrom sklearn.model_selection import KFold\n\n%config IPCompleter.greedy=True\n%config IPCompleter.use_jedi=False\n# nltk.download('twitter_samples')",
"_____no_output_____"
],
[
"tf.config.list_physical_devices('GPU') ",
"_____no_output_____"
]
],
[
[
"## Load the Dataset",
"_____no_output_____"
]
],
[
[
"corpus = pd.read_pickle('../../../0_data/MPQA/MPQA.pkl')\ncorpus.label = corpus.label.astype(int)\nprint(corpus.shape)\ncorpus",
"(10606, 3)\n"
],
[
"corpus.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 10606 entries, 0 to 10605\nData columns (total 3 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 sentence 10606 non-null object\n 1 label 10606 non-null int32 \n 2 split 10606 non-null object\ndtypes: int32(1), object(2)\nmemory usage: 207.3+ KB\n"
],
[
"corpus.groupby( by='label').count()",
"_____no_output_____"
],
[
"# Separate the sentences and the labels\nsentences, labels = list(corpus.sentence), list(corpus.label)",
"_____no_output_____"
],
[
"sentences[0]",
"_____no_output_____"
]
],
[
[
"<!--## Split Dataset-->",
"_____no_output_____"
],
[
"# Data Preprocessing\n<hr>\n\nPreparing data for word embedding, especially for pre-trained word embedding like Word2Vec or GloVe, __don't use standard preprocessing steps like stemming or stopword removal__. Compared to our approach on cleaning the text when doing word count based feature extraction (e.g. TFIDF) such as removing stopwords, stemming etc, now we will keep these words as we do not want to lose such information that might help the model learn better.\n\n__Tomas Mikolov__, one of the developers of Word2Vec, in _word2vec-toolkit: google groups thread., 2015_, suggests only very minimal text cleaning is required when learning a word embedding model. Sometimes, it's good to disconnect\nIn short, what we will do is:\n- Puntuations removal\n- Lower the letter case\n- Tokenization\n\nThe process above will be handled by __Tokenizer__ class in TensorFlow\n\n- <b>One way to choose the maximum sequence length is to just pick the length of the longest sentence in the training set.</b>",
"_____no_output_____"
]
],
[
[
"# Define a function to compute the max length of sequence\ndef max_length(sequences):\n '''\n input:\n sequences: a 2D list of integer sequences\n output:\n max_length: the max length of the sequences\n '''\n max_length = 0\n for i, seq in enumerate(sequences):\n length = len(seq)\n if max_length < length:\n max_length = length\n return max_length",
"_____no_output_____"
],
[
"from tensorflow.keras.preprocessing.text import Tokenizer\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences\n\ntrunc_type='post'\npadding_type='post'\noov_tok = \"<UNK>\"\n\nprint(\"Example of sentence: \", sentences[4])\n\n# Cleaning and Tokenization\ntokenizer = Tokenizer(oov_token=oov_tok)\ntokenizer.fit_on_texts(sentences)\n\n# Turn the text into sequence\ntraining_sequences = tokenizer.texts_to_sequences(sentences)\nmax_len = max_length(training_sequences)\n\nprint('Into a sequence of int:', training_sequences[4])\n\n# Pad the sequence to have the same size\ntraining_padded = pad_sequences(training_sequences, maxlen=max_len, padding=padding_type, truncating=trunc_type)\nprint('Into a padded sequence:', training_padded[4])",
"Example of sentence: no quick fix\nInto a sequence of int: [25, 945, 1476]\nInto a padded sequence: [ 25 945 1476 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0]\n"
],
[
"word_index = tokenizer.word_index\n# See the first 10 words in the vocabulary\nfor i, word in enumerate(word_index):\n print(word, word_index.get(word))\n if i==9:\n break\nvocab_size = len(word_index)+1\nprint(vocab_size)",
"<UNK> 1\nthe 2\nof 3\nto 4\na 5\nand 6\nnot 7\nis 8\nin 9\nbe 10\n6236\n"
]
],
[
[
"# Model 1: Embedding Random\n<hr>",
"_____no_output_____"
],
[
"## TCN Model\n\nNow, we will build Temporal Convolutional Network (CNN) models to classify encoded documents as either positive or negative.\n\nThe model takes inspiration from https://github.com/philipperemy/keras-tcn and https://www.kaggle.com/christofhenkel/temporal-convolutional-network\n\n__Arguments__\n`TCN(nb_filters=64, kernel_size=2, nb_stacks=1, dilations=[1, 2, 4, 8, 16, 32], padding='causal', use_skip_connections=False, dropout_rate=0.0, return_sequences=True, activation='relu', kernel_initializer='he_normal', use_batch_norm=False, **kwargs)`\n\n- `nb_filters`: Integer. The number of filters to use in the convolutional layers. Would be similar to units for LSTM.\n- `kernel_size`: Integer. The size of the kernel to use in each convolutional layer.\n- `dilations`: List. A dilation list. Example is: [1, 2, 4, 8, 16, 32, 64].\n- `nb_stacks`: Integer. The number of stacks of residual blocks to use.\n- `padding`: String. The padding to use in the convolutions. 'causal' for a causal network (as in the original implementation) and - `'same' for a non-causal network.\n- `use_skip_connections`: Boolean. If we want to add skip connections from input to each residual block.\n- `return_sequences`: Boolean. Whether to return the last output in the output sequence, or the full sequence.\n- `dropout_rate`: Float between 0 and 1. Fraction of the input units to drop.\n- `activation`: The activation used in the residual blocks o = activation(x + F(x)).\n- `kernel_initializer`: Initializer for the kernel weights matrix (Conv1D).\n- `use_batch_norm`: Whether to use batch normalization in the residual layers or not.\n- `kwargs`: Any other arguments for configuring parent class Layer. For example \"name=str\", Name of the model. Use unique names when using multiple TCN.",
"_____no_output_____"
],
[
"Now, we will define our TCN model as follows:\n- One TCN layer with 100 filters, kernel size 1-6, and relu and tanh activation function;\n- Dropout size = 0.5;\n- Optimizer: Adam (The best learning algorithm so far)\n- Loss function: binary cross-entropy (suited for binary classification problem)\n\n**Note**: \n- The whole purpose of dropout layers is to tackle the problem of over-fitting and to introduce generalization to the model. Hence it is advisable to keep dropout parameter near 0.5 in hidden layers. \n- https://missinglink.ai/guides/keras/keras-conv1d-working-1d-convolutional-neural-networks-keras/",
"_____no_output_____"
]
],
[
[
"from tcn import TCN, tcn_full_summary\nfrom tensorflow.keras.layers import Input, Embedding, Dense, Dropout, SpatialDropout1D\nfrom tensorflow.keras.layers import concatenate, GlobalAveragePooling1D, GlobalMaxPooling1D\nfrom tensorflow.keras.models import Model\n\ndef define_model(kernel_size = 3, activation='relu', input_dim = None, output_dim=300, max_length = None ):\n \n inp = Input( shape=(max_length,))\n x = Embedding(input_dim=input_dim, output_dim=output_dim, input_length=max_length)(inp)\n x = SpatialDropout1D(0.1)(x)\n \n x = TCN(128,dilations = [1, 2, 4], return_sequences=True, activation = activation, name = 'tcn1')(x)\n x = TCN(64,dilations = [1, 2, 4], return_sequences=True, activation = activation, name = 'tcn2')(x)\n \n avg_pool = GlobalAveragePooling1D()(x)\n max_pool = GlobalMaxPooling1D()(x)\n \n conc = concatenate([avg_pool, max_pool])\n conc = Dense(16, activation=\"relu\")(conc)\n conc = Dropout(0.1)(conc)\n outp = Dense(1, activation=\"sigmoid\")(conc) \n\n model = Model(inputs=inp, outputs=outp)\n model.compile( loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])\n \n return model",
"_____no_output_____"
],
[
"model_0 = define_model( input_dim=1000, max_length=100)\nmodel_0.summary()",
"Model: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) [(None, 100)] 0 \n__________________________________________________________________________________________________\nembedding (Embedding) (None, 100, 300) 300000 input_1[0][0] \n__________________________________________________________________________________________________\nspatial_dropout1d (SpatialDropo (None, 100, 300) 0 embedding[0][0] \n__________________________________________________________________________________________________\ntcn1 (TCN) (None, 100, 128) 279936 spatial_dropout1d[0][0] \n__________________________________________________________________________________________________\ntcn2 (TCN) (None, 100, 64) 65984 tcn1[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling1d (Globa (None, 64) 0 tcn2[0][0] \n__________________________________________________________________________________________________\nglobal_max_pooling1d (GlobalMax (None, 64) 0 tcn2[0][0] \n__________________________________________________________________________________________________\nconcatenate (Concatenate) (None, 128) 0 global_average_pooling1d[0][0] \n global_max_pooling1d[0][0] \n__________________________________________________________________________________________________\ndense (Dense) (None, 16) 2064 concatenate[0][0] \n__________________________________________________________________________________________________\ndropout (Dropout) (None, 16) 0 dense[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 1) 17 dropout[0][0] \n==================================================================================================\nTotal params: 648,001\nTrainable params: 648,001\nNon-trainable params: 0\n__________________________________________________________________________________________________\n"
],
[
"# tcn_full_summary(model_0)",
"_____no_output_____"
],
[
"# class myCallback(tf.keras.callbacks.Callback):\n# # Overide the method on_epoch_end() for our benefit\n# def on_epoch_end(self, epoch, logs={}):\n# if (logs.get('accuracy') > 0.93):\n# print(\"\\nReached 93% accuracy so cancelling training!\")\n# self.model.stop_training=True\n\n\ncallbacks = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', min_delta=0, \n patience=10, verbose=2, \n mode='auto', restore_best_weights=True)",
"_____no_output_____"
]
],
[
[
"## Train and Test the Model",
"_____no_output_____"
]
],
[
[
"# Parameter Initialization\ntrunc_type='post'\npadding_type='post'\noov_tok = \"<UNK>\"\nactivations = ['relu', 'tanh']\nkernel_sizes = [1, 2, 3, 4, 5, 6]\n\ncolumns = ['Activation', 'Filters', 'acc1', 'acc2', 'acc3', 'acc4', 'acc5', 'acc6', 'acc7', 'acc8', 'acc9', 'acc10', 'AVG']\nrecord = pd.DataFrame(columns = columns)\n\n# prepare cross validation with 10 splits and shuffle = True\nkfold = KFold(10, True)\n\n# Separate the sentences and the labels\nsentences, labels = list(corpus.sentence), list(corpus.label)\n\nexp = 0\n\nfor activation in activations:\n for kernel_size in kernel_sizes:\n # kfold.split() will return set indices for each split\n exp+=1\n print('-------------------------------------------')\n print('Training {}: {} activation, {} kernel size.'.format(exp, activation, kernel_size))\n print('-------------------------------------------')\n acc_list = []\n for train, test in kfold.split(sentences):\n \n train_x, test_x = [], []\n train_y, test_y = [], []\n \n for i in train:\n train_x.append(sentences[i])\n train_y.append(labels[i])\n\n for i in test:\n test_x.append(sentences[i])\n test_y.append(labels[i])\n\n # Turn the labels into a numpy array\n train_y = np.array(train_y)\n test_y = np.array(test_y)\n\n # encode data using\n # Cleaning and Tokenization\n tokenizer = Tokenizer(oov_token=oov_tok)\n tokenizer.fit_on_texts(train_x)\n\n # Turn the text into sequence\n training_sequences = tokenizer.texts_to_sequences(train_x)\n test_sequences = tokenizer.texts_to_sequences(test_x)\n\n max_len = max_length(training_sequences)\n\n # Pad the sequence to have the same size\n Xtrain = pad_sequences(training_sequences, maxlen=max_len, padding=padding_type, truncating=trunc_type)\n Xtest = pad_sequences(test_sequences, maxlen=max_len, padding=padding_type, truncating=trunc_type)\n\n word_index = tokenizer.word_index\n vocab_size = len(word_index)+1\n\n # Define the input shape\n model = define_model(kernel_size, activation, input_dim=vocab_size, max_length=max_len)\n\n # Train the model\n model.fit(Xtrain, train_y, batch_size=50, epochs=100, verbose=1, \n callbacks=[callbacks], validation_data=(Xtest, test_y))\n\n # evaluate the model\n loss, acc = model.evaluate(Xtest, test_y, verbose=0)\n print('Test Accuracy: {}'.format(acc*100))\n\n acc_list.append(acc*100)\n \n mean_acc = np.array(acc_list).mean()\n parameters = [activation, kernel_size]\n entries = parameters + acc_list + [mean_acc]\n\n temp = pd.DataFrame([entries], columns=columns)\n record = record.append(temp, ignore_index=True)\n print()\n print(record)\n print()",
"-------------------------------------------\nTraining 1: relu activation, 1 kernel size.\n-------------------------------------------\n"
]
],
[
[
"## Summary",
"_____no_output_____"
]
],
[
[
"record.sort_values(by='AVG', ascending=False)",
"_____no_output_____"
],
[
"record[['Activation', 'AVG']].groupby(by='Activation').max().sort_values(by='AVG', ascending=False)",
"_____no_output_____"
],
[
"report = record.sort_values(by='AVG', ascending=False)\nreport = report.to_excel('TCN_MPQA.xlsx', sheet_name='random')",
"_____no_output_____"
]
],
[
[
"# Model 2: Word2Vec Static",
"_____no_output_____"
],
[
"__Using and updating pre-trained embeddings__\n* In this part, we will create an Embedding layer in Tensorflow Keras using a pre-trained word embedding called Word2Vec 300-d tht has been trained 100 bilion words from Google News.\n* In this part, we will leave the embeddings fixed instead of updating them (dynamic).",
"_____no_output_____"
],
[
"1. __Load `Word2Vec` Pre-trained Word Embedding__",
"_____no_output_____"
]
],
[
[
"from gensim.models import KeyedVectors\nword2vec = KeyedVectors.load_word2vec_format('../GoogleNews-vectors-negative300.bin', binary=True)",
"_____no_output_____"
],
[
"# Access the dense vector value for the word 'handsome'\n# word2vec.word_vec('handsome') # 0.11376953\nword2vec.word_vec('cool') # 1.64062500e-01",
"_____no_output_____"
]
],
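[
[
"# Minimal sketch (illustrative only, using a random stand-in matrix): the same\n# pre-computed embedding matrix can back a *static* layer (trainable=False, as in\n# Model 2) or a *dynamic* layer (trainable=True, as in Model 3 later on).\nimport numpy as np\nfrom tensorflow.keras.layers import Embedding\ntoy_matrix = np.random.rand(10, 300)\nstatic_emb = Embedding(input_dim=10, output_dim=300, weights=[toy_matrix], trainable=False)\ndynamic_emb = Embedding(input_dim=10, output_dim=300, weights=[toy_matrix], trainable=True)",
"_____no_output_____"
]
],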
[
[
"2. __Check number of training words present in Word2Vec__",
"_____no_output_____"
]
],
[
[
"def training_words_in_word2vector(word_to_vec_map, word_to_index):\n '''\n input:\n word_to_vec_map: a word2vec GoogleNews-vectors-negative300.bin model loaded using gensim.models\n word_to_index: word to index mapping from training set\n '''\n \n vocab_size = len(word_to_index) + 1\n count = 0\n # Set each row \"idx\" of the embedding matrix to be \n # the word vector representation of the idx'th word of the vocabulary\n for word, idx in word_to_index.items():\n if word in word_to_vec_map:\n count+=1\n \n return print('Found {} words present from {} training vocabulary in the set of pre-trained word vector'.format(count, vocab_size))",
"_____no_output_____"
],
[
"# Separate the sentences and the labels\nsentences, labels = list(corpus.sentence), list(corpus.label)\n\n# Cleaning and Tokenization\ntokenizer = Tokenizer(oov_token=oov_tok)\ntokenizer.fit_on_texts(sentences)\n\nword_index = tokenizer.word_index\ntraining_words_in_word2vector(word2vec, word_index)",
"Found 6083 words present from 6236 training vocabulary in the set of pre-trained word vector\n"
]
],
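[
[
"# Quick illustrative check (not in the original notebook): a few of the training\n# words that are NOT covered by the pre-trained word2vec vectors.\nmissing_words = [w for w in word_index if w not in word2vec]\nprint(len(missing_words), 'missing words, for example:', missing_words[:10])",
"_____no_output_____"
]
],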
[
[
"2. __Define a `pretrained_embedding_layer` function__",
"_____no_output_____"
]
],
[
[
"emb_mean = word2vec.vectors.mean()\nemb_std = word2vec.vectors.std()",
"_____no_output_____"
],
[
"from tensorflow.keras.layers import Embedding\n\ndef pretrained_embedding_matrix(word_to_vec_map, word_to_index, emb_mean, emb_std):\n '''\n input:\n word_to_vec_map: a word2vec GoogleNews-vectors-negative300.bin model loaded using gensim.models\n word_to_index: word to index mapping from training set\n '''\n np.random.seed(2021)\n \n # adding 1 to fit Keras embedding (requirement)\n vocab_size = len(word_to_index) + 1\n # define dimensionality of your pre-trained word vectors (= 300)\n emb_dim = word_to_vec_map.word_vec('handsome').shape[0]\n \n # initialize the matrix with generic normal distribution values\n embed_matrix = np.random.normal(emb_mean, emb_std, (vocab_size, emb_dim))\n \n # Set each row \"idx\" of the embedding matrix to be \n # the word vector representation of the idx'th word of the vocabulary\n for word, idx in word_to_index.items():\n if word in word_to_vec_map:\n embed_matrix[idx] = word_to_vec_map.get_vector(word)\n \n return embed_matrix",
"_____no_output_____"
],
[
"# Test the function\nw_2_i = {'<UNK>': 1, 'handsome': 2, 'cool': 3, 'shit': 4 }\nem_matrix = pretrained_embedding_matrix(word2vec, w_2_i, emb_mean, emb_std)\nem_matrix",
"_____no_output_____"
]
],
[
[
"## TCN Model",
"_____no_output_____"
]
],
[
[
"def define_model_2(kernel_size = 3, activation='relu', input_dim = None, \n output_dim=300, max_length = None, emb_matrix = None):\n \n inp = Input( shape=(max_length,))\n x = Embedding(input_dim=input_dim, \n output_dim=output_dim, \n input_length=max_length,\n # Assign the embedding weight with word2vec embedding marix\n weights = [emb_matrix],\n # Set the weight to be not trainable (static)\n trainable = False)(inp)\n \n x = SpatialDropout1D(0.1)(x)\n \n x = TCN(128,dilations = [1, 2, 4], return_sequences=True, activation = activation, name = 'tcn1')(x)\n x = TCN(64,dilations = [1, 2, 4], return_sequences=True, activation = activation, name = 'tcn2')(x)\n \n avg_pool = GlobalAveragePooling1D()(x)\n max_pool = GlobalMaxPooling1D()(x)\n \n conc = concatenate([avg_pool, max_pool])\n conc = Dense(16, activation=\"relu\")(conc)\n conc = Dropout(0.1)(conc)\n outp = Dense(1, activation=\"sigmoid\")(conc) \n\n model = Model(inputs=inp, outputs=outp)\n model.compile( loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])\n \n return model",
"_____no_output_____"
],
[
"model_0 = define_model_2( input_dim=1000, max_length=100, emb_matrix=np.random.rand(1000, 300))\nmodel_0.summary()",
"Model: \"model_121\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_122 (InputLayer) [(None, 100)] 0 \n__________________________________________________________________________________________________\nembedding_121 (Embedding) (None, 100, 300) 300000 input_122[0][0] \n__________________________________________________________________________________________________\nspatial_dropout1d_121 (SpatialD (None, 100, 300) 0 embedding_121[0][0] \n__________________________________________________________________________________________________\ntcn1 (TCN) (None, 100, 128) 279936 spatial_dropout1d_121[0][0] \n__________________________________________________________________________________________________\ntcn2 (TCN) (None, 100, 64) 65984 tcn1[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling1d_121 (G (None, 64) 0 tcn2[0][0] \n__________________________________________________________________________________________________\nglobal_max_pooling1d_121 (Globa (None, 64) 0 tcn2[0][0] \n__________________________________________________________________________________________________\nconcatenate_121 (Concatenate) (None, 128) 0 global_average_pooling1d_121[0][0\n global_max_pooling1d_121[0][0] \n__________________________________________________________________________________________________\ndense_242 (Dense) (None, 16) 2064 concatenate_121[0][0] \n__________________________________________________________________________________________________\ndropout_121 (Dropout) (None, 16) 0 dense_242[0][0] \n__________________________________________________________________________________________________\ndense_243 (Dense) (None, 1) 17 dropout_121[0][0] \n==================================================================================================\nTotal params: 648,001\nTrainable params: 348,001\nNon-trainable params: 300,000\n__________________________________________________________________________________________________\n"
]
],
[
[
"## Train and Test the Model",
"_____no_output_____"
]
],
[
[
"# class myCallback(tf.keras.callbacks.Callback):\n# # Overide the method on_epoch_end() for our benefit\n# def on_epoch_end(self, epoch, logs={}):\n# if (logs.get('accuracy') >= 0.9):\n# print(\"\\nReached 90% accuracy so cancelling training!\")\n# self.model.stop_training=True\n\ncallbacks = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', min_delta=0, \n patience=10, verbose=2, \n mode='auto', restore_best_weights=True)",
"_____no_output_____"
],
[
"# Parameter Initialization\ntrunc_type='post'\npadding_type='post'\noov_tok = \"<UNK>\"\nactivations = ['relu']\nprint('Loading embedding statistics . . .')\nemb_mean = emb_mean\nemb_std = emb_std\nprint('Done!')\nkernel_sizes = [1, 2, 3, 4, 5, 6]\n\ncolumns = ['Activation', 'Filters', 'acc1', 'acc2', 'acc3', 'acc4', 'acc5', 'acc6', 'acc7', 'acc8', 'acc9', 'acc10', 'AVG']\nrecord2 = pd.DataFrame(columns = columns)\n\n# prepare cross validation with 10 splits and shuffle = True\nkfold = KFold(10, True)\n\n# Separate the sentences and the labels\nsentences, labels = list(corpus.sentence), list(corpus.label)\nexp = 0\n\nfor activation in activations:\n for kernel_size in kernel_sizes:\n exp+=1\n print('-------------------------------------------')\n print('Training {}: {} activation, {} kernel size.'.format(exp, activation, kernel_size))\n print('-------------------------------------------')\n \n # kfold.split() will return set indices for each split\n acc_list = []\n for train, test in kfold.split(sentences):\n \n train_x, test_x = [], []\n train_y, test_y = [], []\n \n for i in train:\n train_x.append(sentences[i])\n train_y.append(labels[i])\n\n for i in test:\n test_x.append(sentences[i])\n test_y.append(labels[i])\n\n # Turn the labels into a numpy array\n train_y = np.array(train_y)\n test_y = np.array(test_y)\n\n # encode data using\n # Cleaning and Tokenization\n tokenizer = Tokenizer(oov_token=oov_tok)\n tokenizer.fit_on_texts(train_x)\n\n # Turn the text into sequence\n training_sequences = tokenizer.texts_to_sequences(train_x)\n test_sequences = tokenizer.texts_to_sequences(test_x)\n\n max_len = max_length(training_sequences)\n\n # Pad the sequence to have the same size\n Xtrain = pad_sequences(training_sequences, maxlen=max_len, padding=padding_type, truncating=trunc_type)\n Xtest = pad_sequences(test_sequences, maxlen=max_len, padding=padding_type, truncating=trunc_type)\n\n word_index = tokenizer.word_index\n vocab_size = len(word_index)+1\n \n \n emb_matrix = pretrained_embedding_matrix(word2vec, word_index, emb_mean, emb_std)\n \n # Define the input shape\n model = define_model_2(kernel_size, activation, input_dim=vocab_size, \n max_length=max_len, emb_matrix=emb_matrix)\n\n # Train the model\n model.fit(Xtrain, train_y, batch_size=50, epochs=100, verbose=1, \n callbacks=[callbacks], validation_data=(Xtest, test_y))\n\n # evaluate the model\n loss, acc = model.evaluate(Xtest, test_y, verbose=0)\n print('Test Accuracy: {}'.format(acc*100))\n\n acc_list.append(acc*100)\n \n mean_acc = np.array(acc_list).mean()\n parameters = [activation, kernel_size]\n entries = parameters + acc_list + [mean_acc]\n\n temp = pd.DataFrame([entries], columns=columns)\n record2 = record2.append(temp, ignore_index=True)\n print()\n print(record2)\n print()",
"Loading embedding statistics . . .\nDone!\n-------------------------------------------\nTraining 1: relu activation, 1 kernel size.\n-------------------------------------------\n"
]
],
[
[
"## Summary",
"_____no_output_____"
]
],
[
[
"record2.sort_values(by='AVG', ascending=False)",
"_____no_output_____"
],
[
"record2[['Activation', 'AVG']].groupby(by='Activation').max().sort_values(by='AVG', ascending=False)",
"_____no_output_____"
],
[
"report = record2.sort_values(by='AVG', ascending=False)\nreport = report.to_excel('TCN_MPQA_2.xlsx', sheet_name='static')",
"_____no_output_____"
]
],
[
[
"# Model 3: Word2Vec - Dynamic",
"_____no_output_____"
],
[
"* In this part, we will fine tune the embeddings while training (dynamic).",
"_____no_output_____"
],
[
"## CNN Model",
"_____no_output_____"
]
],
[
[
"def define_model_3(kernel_size = 3, activation='relu', input_dim = None, \n output_dim=300, max_length = None, emb_matrix = None):\n \n inp = Input( shape=(max_length,))\n x = Embedding(input_dim=input_dim, \n output_dim=output_dim, \n input_length=max_length,\n # Assign the embedding weight with word2vec embedding marix\n weights = [emb_matrix],\n # Set the weight to be not trainable (static)\n trainable = True)(inp)\n \n x = SpatialDropout1D(0.1)(x)\n \n x = TCN(128,dilations = [1, 2, 4], return_sequences=True, activation = activation, name = 'tcn1')(x)\n x = TCN(64,dilations = [1, 2, 4], return_sequences=True, activation = activation, name = 'tcn2')(x)\n \n avg_pool = GlobalAveragePooling1D()(x)\n max_pool = GlobalMaxPooling1D()(x)\n \n conc = concatenate([avg_pool, max_pool])\n conc = Dense(16, activation=\"relu\")(conc)\n conc = Dropout(0.1)(conc)\n outp = Dense(1, activation=\"sigmoid\")(conc) \n\n model = Model(inputs=inp, outputs=outp)\n model.compile( loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])\n \n return model",
"_____no_output_____"
],
[
"model_0 = define_model_3( input_dim=1000, max_length=100, emb_matrix=np.random.rand(1000, 300))\nmodel_0.summary()",
"Model: \"model_182\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_183 (InputLayer) [(None, 100)] 0 \n__________________________________________________________________________________________________\nembedding_182 (Embedding) (None, 100, 300) 300000 input_183[0][0] \n__________________________________________________________________________________________________\nspatial_dropout1d_182 (SpatialD (None, 100, 300) 0 embedding_182[0][0] \n__________________________________________________________________________________________________\ntcn1 (TCN) (None, 100, 128) 279936 spatial_dropout1d_182[0][0] \n__________________________________________________________________________________________________\ntcn2 (TCN) (None, 100, 64) 65984 tcn1[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling1d_182 (G (None, 64) 0 tcn2[0][0] \n__________________________________________________________________________________________________\nglobal_max_pooling1d_182 (Globa (None, 64) 0 tcn2[0][0] \n__________________________________________________________________________________________________\nconcatenate_182 (Concatenate) (None, 128) 0 global_average_pooling1d_182[0][0\n global_max_pooling1d_182[0][0] \n__________________________________________________________________________________________________\ndense_364 (Dense) (None, 16) 2064 concatenate_182[0][0] \n__________________________________________________________________________________________________\ndropout_182 (Dropout) (None, 16) 0 dense_364[0][0] \n__________________________________________________________________________________________________\ndense_365 (Dense) (None, 1) 17 dropout_182[0][0] \n==================================================================================================\nTotal params: 648,001\nTrainable params: 648,001\nNon-trainable params: 0\n__________________________________________________________________________________________________\n"
]
],
[
[
"## Train and Test the Model",
"_____no_output_____"
]
],
[
[
"class myCallback(tf.keras.callbacks.Callback):\n # Overide the method on_epoch_end() for our benefit\n def on_epoch_end(self, epoch, logs={}):\n if (logs.get('accuracy') > 0.93):\n print(\"\\nReached 93% accuracy so cancelling training!\")\n self.model.stop_training=True\n\ncallbacks = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', min_delta=0, \n patience=10, verbose=2, \n mode='auto', restore_best_weights=True)",
"_____no_output_____"
],
[
"# Parameter Initialization\ntrunc_type='post'\npadding_type='post'\noov_tok = \"<UNK>\"\nactivations = ['relu']\nprint('Loading embedding statistics . . .')\nemb_mean = emb_mean\nemb_std = emb_std\nprint('Done!')\nkernel_sizes = [1, 2, 3, 4, 5, 6]\n\ncolumns = ['Activation', 'Filters', 'acc1', 'acc2', 'acc3', 'acc4', 'acc5', 'acc6', 'acc7', 'acc8', 'acc9', 'acc10', 'AVG']\nrecord3 = pd.DataFrame(columns = columns)\n\n# prepare cross validation with 10 splits and shuffle = True\nkfold = KFold(10, True)\n\n# Separate the sentences and the labels\nsentences, labels = list(corpus.sentence), list(corpus.label)\n\nexp = 0\n\nfor activation in activations:\n for kernel_size in kernel_sizes:\n exp+=1\n print('-------------------------------------------')\n print('Training {}: {} activation, {} kernel size.'.format(exp, activation, kernel_size))\n print('-------------------------------------------')\n \n # kfold.split() will return set indices for each split\n acc_list = []\n for train, test in kfold.split(sentences):\n \n train_x, test_x = [], []\n train_y, test_y = [], []\n \n for i in train:\n train_x.append(sentences[i])\n train_y.append(labels[i])\n\n for i in test:\n test_x.append(sentences[i])\n test_y.append(labels[i])\n\n # Turn the labels into a numpy array\n train_y = np.array(train_y)\n test_y = np.array(test_y)\n\n # encode data using\n # Cleaning and Tokenization\n tokenizer = Tokenizer(oov_token=oov_tok)\n tokenizer.fit_on_texts(train_x)\n\n # Turn the text into sequence\n training_sequences = tokenizer.texts_to_sequences(train_x)\n test_sequences = tokenizer.texts_to_sequences(test_x)\n\n max_len = max_length(training_sequences)\n\n # Pad the sequence to have the same size\n Xtrain = pad_sequences(training_sequences, maxlen=max_len, padding=padding_type, truncating=trunc_type)\n Xtest = pad_sequences(test_sequences, maxlen=max_len, padding=padding_type, truncating=trunc_type)\n\n word_index = tokenizer.word_index\n vocab_size = len(word_index)+1\n \n \n emb_matrix = pretrained_embedding_matrix(word2vec, word_index, emb_mean, emb_std)\n \n # Define the input shape\n model = define_model_3(kernel_size, activation, input_dim=vocab_size, \n max_length=max_len, emb_matrix=emb_matrix)\n\n # Train the model\n model.fit(Xtrain, train_y, batch_size=50, epochs=100, verbose=1, \n callbacks=[callbacks], validation_data=(Xtest, test_y))\n\n # evaluate the model\n loss, acc = model.evaluate(Xtest, test_y, verbose=0)\n print('Test Accuracy: {}'.format(acc*100))\n\n acc_list.append(acc*100)\n \n mean_acc = np.array(acc_list).mean()\n parameters = [activation, kernel_size]\n entries = parameters + acc_list + [mean_acc]\n\n temp = pd.DataFrame([entries], columns=columns)\n record3 = record3.append(temp, ignore_index=True)\n print()\n print(record3)\n print()",
"Loading embedding statistics . . .\nDone!\n-------------------------------------------\nTraining 1: relu activation, 1 kernel size.\n-------------------------------------------\n"
]
],
[
[
"## Summary",
"_____no_output_____"
]
],
[
[
"record3.sort_values(by='AVG', ascending=False)",
"_____no_output_____"
],
[
"report = record3.sort_values(by='AVG', ascending=False)\nreport = report.to_excel('TCN_MPQA_3.xlsx', sheet_name='dynamic')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec9fc54a82661a5179ddae7bf1a27f5c4f6812e7 | 190,317 | ipynb | Jupyter Notebook | nicky/visualize_functional_group.ipynb | WitMaster98/capturing_CO2_working_cap_MOFs | a9ed43bd9c3abb646856109be0a49e12e57b9156 | [
"MIT"
] | null | null | null | nicky/visualize_functional_group.ipynb | WitMaster98/capturing_CO2_working_cap_MOFs | a9ed43bd9c3abb646856109be0a49e12e57b9156 | [
"MIT"
] | null | null | null | nicky/visualize_functional_group.ipynb | WitMaster98/capturing_CO2_working_cap_MOFs | a9ed43bd9c3abb646856109be0a49e12e57b9156 | [
"MIT"
] | 1 | 2021-11-24T03:42:56.000Z | 2021-11-24T03:42:56.000Z | 238.492481 | 14,308 | 0.926344 | [
[
[
"import networkx as nx",
"_____no_output_____"
],
[
"atoms = [\n 'Br','C','H','F','N','S','I','Cl','O'\n]\natoms2idx = {x:idx for idx,x in enumerate(atoms)}\nmap_bond ={\n 'S':1,\n 'D':2,\n 'T':3,\n 'A':4,\n}",
"_____no_output_____"
]
],
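[
[
"# Sketch (assumption: atoms2idx and map_bond are meant for numeric encoding; they\n# are not used elsewhere in this notebook). Encoding a small O-H graph as numbers:\nG = nx.Graph()\nG.add_node(1, atom='O')\nG.add_node(2, atom='H')\nG.add_edge(1, 2, bond_type='S')\nnode_codes = [atoms2idx[a] for _, a in G.nodes(data='atom')]\nedge_codes = [map_bond[b] for _, _, b in G.edges(data='bond_type')]\nprint(node_codes, edge_codes)",
"_____no_output_____"
]
],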
[
[
"## Br",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1,atom = 'Br')\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## Cl",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1,atom = 'Cl')\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## I",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1,atom = 'I')\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## F",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1,atom = 'F')\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## Me",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1, atom = 'C')\nG.add_node(2, atom = 'H')\nG.add_node(3, atom = 'H')\nG.add_node(4, atom = 'H')\n\nG.add_edge(1,2,bond_type='S')\nG.add_edge(1,3,bond_type='S')\nG.add_edge(1,4,bond_type='S')\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## Et(C2H5)",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1,atom = 'C')\nG.add_node(2,atom = 'C')\nG.add_node(3,atom = 'H')\nG.add_node(4,atom = 'H')\nG.add_node(5,atom = 'H')\nG.add_node(6,atom = 'H')\nG.add_node(7,atom = 'H')\n#c1\nG.add_edge(1,3,bond_type='S')\nG.add_edge(1,7,bond_type='S')\nG.add_edge(1,6,bond_type='S')\n#c2\nG.add_edge(2,4,bond_type='S')\nG.add_edge(2,5,bond_type='S')\n#c-c\nG.add_edge(2,1,bond_type='S')\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## Pr (c3h7)",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1,atom = 'C')\nG.add_node(2,atom = 'C')\nG.add_node(3,atom = 'C')\nG.add_node(4,atom = 'H')\nG.add_node(5,atom = 'H')\nG.add_node(6,atom = 'H')\nG.add_node(7,atom = 'H')\nG.add_node(8,atom = 'H')\nG.add_node(9,atom = 'H')\nG.add_node(10,atom = 'H')\n#c1\nG.add_edge(1,4,bond_type='S')\nG.add_edge(1,10,bond_type='S')\nG.add_edge(1,2,bond_type='S')\n#c2\nG.add_edge(2,9,bond_type='S')\nG.add_edge(2,5,bond_type='S')\nG.add_edge(2,3,bond_type='S')\n#c3\nG.add_edge(3,6,bond_type='S')\nG.add_edge(3,7,bond_type='S')\nG.add_edge(3,8,bond_type='S')\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## HCO",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1,atom = 'C')\nG.add_node(2,atom = 'O')\nG.add_node(3,atom = 'H')\nG.add_edge(1,2,bond_type='D')\nG.add_edge(1,3,bond_type='S')\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## OH",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1,atom = 'O')\nG.add_node(2,atom = 'H')\nG.add_edge(1,2,bond_type='S')\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## COOH",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1,atom = 'C')\nG.add_node(2,atom = 'O')\nG.add_node(3,atom = 'O')\nG.add_node(4,atom = 'H')\nG.add_edge(1,3,bond_type='S')\nG.add_edge(1,2,bond_type='D')\nG.add_edge(3,4,bond_type='S')\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
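[
[
"# Optional, illustrative only: the bond types stored on the edges (here for the\n# COOH graph above) can be drawn as edge labels alongside the atom labels.\npos = nx.spring_layout(G)\nnx.draw_networkx(G, pos, labels=nx.get_node_attributes(G, 'atom'))\nnx.draw_networkx_edge_labels(G, pos, edge_labels=nx.get_edge_attributes(G, 'bond_type'))",
"_____no_output_____"
]
],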
[
[
"## OMe (OCH3)",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1,atom = 'O')\nG.add_node(2,atom = 'C')\nG.add_node(3,atom = 'H')\nG.add_node(4,atom = 'H')\nG.add_node(5,atom = 'H')\n#c\nG.add_edge(2,1,bond_type='S')\nG.add_edge(2,4,bond_type='S')\nG.add_edge(2,5,bond_type='S')\nG.add_edge(2,3,bond_type='S')\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## OEt(OC2H5)",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1,atom = 'O')\nG.add_node(2,atom = 'C')\nG.add_node(3,atom = 'C')\nG.add_node(4,atom = 'H')\nG.add_node(5,atom = 'H')\nG.add_node(6,atom = 'H')\nG.add_node(7,atom = 'H')\nG.add_node(8,atom = 'H')\n\n#c1\nG.add_edge(2,1,bond_type='S')\nG.add_edge(2,4,bond_type='S')\nG.add_edge(2,8,bond_type='S')\nG.add_edge(2,3,bond_type='S')\n#c2\nG.add_edge(3,5,bond_type='S')\nG.add_edge(3,6,bond_type='S')\nG.add_edge(3,7,bond_type='S')\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## OPr( OC3H7)",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1,atom = 'C')\nG.add_node(2,atom = 'C')\nG.add_node(3,atom = 'C')\nG.add_node(4,atom = 'H')\nG.add_node(5,atom = 'H')\nG.add_node(6,atom = 'H')\nG.add_node(7,atom = 'H')\nG.add_node(8,atom = 'H')\nG.add_node(9,atom = 'H')\nG.add_node(10,atom = 'H')\nG.add_node(11,atom = 'O')\n\n#c1\nG.add_edge(1,4,bond_type='S')\nG.add_edge(1,10,bond_type='S')\nG.add_edge(1,2,bond_type='S')\nG.add_edge(1,11,bond_type='S')\n\n#c2\nG.add_edge(2,9,bond_type='S')\nG.add_edge(2,5,bond_type='S')\nG.add_edge(2,3,bond_type='S')\n#c3\nG.add_edge(3,6,bond_type='S')\nG.add_edge(3,7,bond_type='S')\nG.add_edge(3,8,bond_type='S')\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## CN",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1, atom = 'C')\nG.add_node(2, atom = 'N' )\nG.add_edge(1,2,bond_type='T')\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## NH2",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1, atom = 'N')\nG.add_node(2, atom = 'H' )\nG.add_node(3, atom = 'H' )\n\nG.add_edge(1,2,bond_type='S')\nG.add_edge(1,3,bond_type='S')\n\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## NHMe",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1, atom = 'N')\nG.add_node(2, atom = 'C' )\nG.add_node(3, atom = 'H' )\nG.add_node(4, atom = 'H' )\nG.add_node(5, atom = 'H' )\nG.add_node(6, atom = 'H' )\n#n\nG.add_edge(1,2,bond_type='S')\nG.add_edge(1,3,bond_type='S')\n#c\nG.add_edge(2,4,bond_type='S')\nG.add_edge(2,5,bond_type='S')\nG.add_edge(2,6,bond_type='S')\n\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## No2",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1, atom = 'N')\nG.add_node(2, atom = 'O' )\nG.add_node(3, atom = 'O' )\n\nG.add_edge(1,2,bond_type='A')\nG.add_edge(1,3,bond_type='A')\n\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## SO3H",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1, atom = 'S')\nG.add_node(2, atom = 'O')\nG.add_node(3, atom = 'O')\nG.add_node(4, atom = 'O')\nG.add_node(5, atom = 'H')\n#S\nG.add_edge(1,2,bond_type='D')\nG.add_edge(1,3,bond_type='S')\nG.add_edge(1,4,bond_type='D')\n#O\nG.add_edge(3,5,bond_type='S')\n\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
],
[
[
"## Ph (C6H5)",
"_____no_output_____"
]
],
[
[
"G = nx.Graph()\nG.add_node(1, atom = 'C')\nG.add_node(2, atom = 'C')\nG.add_node(3, atom = 'C')\nG.add_node(4, atom = 'C')\nG.add_node(5, atom = 'C')\nG.add_node(6, atom = 'C')\nG.add_node(7, atom = 'H')\nG.add_node(8, atom = 'H')\nG.add_node(9, atom = 'H')\nG.add_node(10, atom = 'H')\nG.add_node(11, atom = 'H')\n#C\nG.add_edge(1,2,bond_type='A')\nG.add_edge(2,3,bond_type='A')\nG.add_edge(3,4,bond_type='A')\nG.add_edge(4,5,bond_type='A')\nG.add_edge(5,6,bond_type='A')\nG.add_edge(6,1,bond_type='A')\n#H\nG.add_edge(2,7,bond_type='S')\nG.add_edge(3,8,bond_type='S')\nG.add_edge(4,9,bond_type='S')\nG.add_edge(5,10,bond_type='S')\nG.add_edge(6,11,bond_type='S')\n\nlabels = nx.get_node_attributes(G,'atom')\nnx.draw_networkx(G,labels = labels)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9fd26b53c94f879da18eaba51ebdcf3114b561 | 7,274 | ipynb | Jupyter Notebook | 0.13/_downloads/plot_artifacts_detection.ipynb | drammock/mne-tools.github.io | 5d3a104d174255644d8d5335f58036e32695e85d | [
"BSD-3-Clause"
] | null | null | null | 0.13/_downloads/plot_artifacts_detection.ipynb | drammock/mne-tools.github.io | 5d3a104d174255644d8d5335f58036e32695e85d | [
"BSD-3-Clause"
] | null | null | null | 0.13/_downloads/plot_artifacts_detection.ipynb | drammock/mne-tools.github.io | 5d3a104d174255644d8d5335f58036e32695e85d | [
"BSD-3-Clause"
] | null | null | null | 49.482993 | 2,809 | 0.637888 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\n\nIntroduction to artifacts and artifact detection\n================================================\n\nSince MNE supports the data of many different acquisition systems, the\nparticular artifacts in your data might behave very differently from the\nartifacts you can observe in our tutorials and examples.\n\nTherefore you should be aware of the different approaches and of\nthe variability of artifact rejection (automatic/manual) procedures described\nonwards. At the end consider always to visually inspect your data\nafter artifact rejection or correction.\n\nBackground: what is an artifact?\n--------------------------------\n\nArtifacts are signal interference that can be\nendogenous (biological) and exogenous (environmental).\nTypical biological artifacts are head movements, eye blinks\nor eye movements, heart beats. The most common environmental\nartifact is due to the power line, the so-called *line noise*.\n\nHow to handle artifacts?\n------------------------\n\nMNE deals with artifacts by first identifying them, and subsequently removing\nthem. Detection of artifacts can be done visually, or using automatic routines\n(or a combination of both). After you know what the artifacts are, you need\nremove them. This can be done by:\n\n - *ignoring* the piece of corrupted data\n - *fixing* the corrupted data\n\nFor the artifact detection the functions MNE provides depend on whether\nyour data is continuous (Raw) or epoch-based (Epochs) and depending on\nwhether your data is stored on disk or already in memory.\n\nDetecting the artifacts without reading the complete data into memory allows\nyou to work with datasets that are too large to fit in memory all at once.\nDetecting the artifacts in continuous data allows you to apply filters\n(e.g. a band-pass filter to zoom in on the muscle artifacts on the temporal\nchannels) without having to worry about edge effects due to the filter\n(i.e. filter ringing). Having the data in memory after segmenting/epoching is\nhowever a very efficient way of browsing through the data which helps\nin visualizing. So to conclude, there is not a single most optimal manner\nto detect the artifacts: it just depends on the data properties and your\nown preferences.\n\nIn this tutorial we show how to detect artifacts visually and automatically.\nFor how to correct artifacts by rejection see `tut_artifacts_reject`.\nTo discover how to correct certain artifacts by filtering see\n`tut_artifacts_filter` and to learn how to correct artifacts\nwith subspace methods like SSP and ICA see `tut_artifacts_correct_ssp`\nand `tut_artifacts_correct_ica`.\n\n\nArtifacts Detection\n-------------------\n\nThis tutorial discusses a couple of major artifacts that most analyses\nhave to deal with and demonstrates how to detect them.\n\n\n",
"_____no_output_____"
]
],
[
[
"import mne\nfrom mne.datasets import sample\nfrom mne.preprocessing import create_ecg_epochs, create_eog_epochs\n\n# getting some data ready\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\n\nraw = mne.io.read_raw_fif(raw_fname, preload=True)",
"_____no_output_____"
]
],
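[
[
"# Sketch (not part of the original tutorial): the simplest way to *ignore*\n# corrupted data, as mentioned in the background section above, is to mark\n# channels as bad; they are then excluded from most computations.\nprint(raw.info['bads'])  # channels already marked as bad in this file\n# raw.info['bads'] += ['MEG 2443']  # example channel name, illustrative only",
"_____no_output_____"
]
],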
[
[
"Low frequency drifts and line noise\n\n",
"_____no_output_____"
]
],
[
[
"(raw.copy().pick_types(meg='mag')\n .del_proj(0)\n .plot(duration=60, n_channels=100, remove_dc=False))",
"_____no_output_____"
]
],
[
[
"we see high amplitude undulations in low frequencies, spanning across tens of\nseconds\n\n",
"_____no_output_____"
]
],
[
[
"raw.plot_psd(fmax=250)",
"_____no_output_____"
]
],
[
[
"On MEG sensors we see narrow frequency peaks at 60, 120, 180, 240 Hz,\nrelated to line noise.\nBut also some high amplitude signals between 25 and 32 Hz, hinting at other\nbiological artifacts such as ECG. These can be most easily detected in the\ntime domain using MNE helper functions\n\nSee `tut_artifacts_filter`.\n\n",
"_____no_output_____"
],
[
"ECG\n---\n\nfinds ECG events, creates epochs, averages and plots\n\n",
"_____no_output_____"
]
],
[
[
"average_ecg = create_ecg_epochs(raw).average()\nprint('We found %i ECG events' % average_ecg.nave)\naverage_ecg.plot_joint()",
"_____no_output_____"
]
],
[
[
"we can see typical time courses and non dipolar topographies\nnot the order of magnitude of the average artifact related signal and\ncompare this to what you observe for brain signals\n\n",
"_____no_output_____"
],
[
"EOG\n---\n\n",
"_____no_output_____"
]
],
[
[
"average_eog = create_eog_epochs(raw).average()\nprint('We found %i EOG events' % average_eog.nave)\naverage_eog.plot_joint()",
"_____no_output_____"
]
],
[
[
"Knowing these artifact patterns is of paramount importance when\njudging about the quality of artifact removal techniques such as SSP or ICA.\nAs a rule of thumb you need artifact amplitudes orders of magnitude higher\nthan your signal of interest and you need a few of such events in order\nto find decompositions that allow you to estimate and remove patterns related\nto artifacts.\n\nConsider the following tutorials for correcting this class of artifacts:\n - `tut_artifacts_filter`\n - `tut_artifacts_correct_ica`\n - `tut_artifacts_correct_ssp`\n\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ec9fdcec2913ab0ed47bd8114c1b3ab937be6d31 | 400,121 | ipynb | Jupyter Notebook | Slutions/NN_Assignment_5b_V2.ipynb | taareek/neural_netwok | 00f1b0577aa75e68b16777e8323238394d0e48e3 | [
"MIT"
] | null | null | null | Slutions/NN_Assignment_5b_V2.ipynb | taareek/neural_netwok | 00f1b0577aa75e68b16777e8323238394d0e48e3 | [
"MIT"
] | null | null | null | Slutions/NN_Assignment_5b_V2.ipynb | taareek/neural_netwok | 00f1b0577aa75e68b16777e8323238394d0e48e3 | [
"MIT"
] | null | null | null | 277.092105 | 121,742 | 0.902522 | [
[
[
"<a href=\"https://colab.research.google.com/github/taareek/neural_netwok/blob/main/NN_Assignment_5b_V2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"*Enabling GPU*",
"_____no_output_____"
]
],
[
[
"import torch\nimport numpy as np\n\n# check if CUDA is available\ntrain_on_gpu = torch.cuda.is_available()\n\nif not train_on_gpu:\n print('CUDA is not available. Training on CPU ...')\nelse:\n print('CUDA is available! Training on GPU ...')",
"CUDA is available! Training on GPU ...\n"
]
],
[
[
"*Importing necessary libraries*",
"_____no_output_____"
]
],
[
[
"from torchvision import datasets\nimport torchvision.transforms as transforms\nfrom torch.utils.data.sampler import SubsetRandomSampler\nimport matplotlib.pyplot as plt\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport pandas as pd\nimport seaborn as sns",
"_____no_output_____"
]
],
[
[
"*Downloading CIFAR-10 Dataset*",
"_____no_output_____"
]
],
[
[
"# number of subprocesses to use for data loading\nnum_workers = 0\nbatch_size = 20\n# percentage of training set to use as validation (here I have used 20%)\nvalid_size = 0.2\ntransform = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))\n ])\n\ntrain_data = datasets.CIFAR10('data', train=True,\n download=True, transform=transform)\ntest_data = datasets.CIFAR10('data', train=False,\n download=True, transform=transform)\n\nprint('Training set samples:', len(train_data))\nprint('Test set samples:', len(test_data))\n\n",
"Files already downloaded and verified\nFiles already downloaded and verified\nTraining set samples: 50000\nTest set samples: 10000\n"
]
],
[
[
"*Splitting Training and Test Data into train, validation and test set*",
"_____no_output_____"
]
],
[
[
"num_train = len(train_data)\nindices = list(range(num_train))\nprint(len(indices))\nnp.random.shuffle(indices)\nsplit = int(np.floor(valid_size * num_train))\n#print(split)\ntrain_idx, valid_idx = indices[split:], indices[:split]\nprint(len(train_idx))\nfor i in range(10):\n print(train_idx[i])",
"50000\n40000\n12967\n40651\n17476\n44570\n34536\n4729\n47300\n19711\n6785\n26143\n"
],
[
"# obtain training indices that will be used for validation\nnum_train = len(train_data)\nindices = list(range(num_train))\nnp.random.shuffle(indices)\nsplit = int(np.floor(valid_size * num_train))\ntrain_idx, valid_idx = indices[split:], indices[:split]\n\n# define samplers for obtaining training and validation batches\ntrain_sampler = SubsetRandomSampler(train_idx)\nvalid_sampler = SubsetRandomSampler(valid_idx)\n\n# prepare data loaders (combine dataset and sampler)\ntrain_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,\n sampler=train_sampler, num_workers=num_workers)\nvalid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, \n sampler=valid_sampler, num_workers=num_workers)\ntest_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, \n num_workers=num_workers)\n\n# image classes\nclasses = ['airplane', 'automobile', 'bird', 'cat', 'deer',\n 'dog', 'frog', 'horse', 'ship', 'truck']",
"_____no_output_____"
]
],
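[
[
"# Quick sanity check (illustrative only): the two index sets built above should\n# be disjoint and together cover all 50000 training images.\nprint(len(train_idx), len(valid_idx), len(set(train_idx) & set(valid_idx)))",
"_____no_output_____"
]
],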
[
[
"*Visualize Training data (1 batch)*",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\n# helper function to display an image\ndef imshow(img):\n img = img / 2 + 0.5 # unnormalize\n plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image",
"_____no_output_____"
],
[
"# obtain one batch of training images\ndataiter = iter(train_loader)\nimages, labels = dataiter.next()\nimages = images.numpy() # Converted Image to numpy\n\n# plot the images in the batch, along with the corresponding labels\n\nfig = plt.figure(figsize=(25, 4))\nfor idx in np.arange(20):\n ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])\n imshow(images[idx])\n ax.set_title(classes[labels[idx]])",
"_____no_output_____"
]
],
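[
[
"# Optional check (illustrative only): CIFAR-10 is balanced, so each of the 10\n# classes should appear 5000 times in the training set.\nimport collections\nprint(collections.Counter(train_data.targets))",
"_____no_output_____"
]
],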
[
[
"***Defining CNN Model***",
"_____no_output_____"
]
],
[
[
"class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n # convolutional layer 1 (sees 32x32x3 image tensor)\n self.conv1 = nn.Conv2d(3, 16, kernel_size= 3, padding=1)\n # convolutional layer 2\n self.conv2 = nn.Conv2d(16, 32, kernel_size= 3, padding=1)\n # convulation layer 3\n self.conv3 = nn.Conv2d(32, 64, kernel_size= 3, padding=1)\n # max pooling layer\n self.pool = nn.MaxPool2d(2, 2)\n # linear layer (64 * 4 * 4 -> 500)\n self.fc1 = nn.Linear(64 * 4 * 4, 1000)\n # linear layer (1000 -> 500)\n self.fc2 = nn.Linear(1000, 500)\n # linear layer (500 -> 10)\n self.fc3 = nn.Linear(500,10)\n # dropout layer (p=0.25)\n self.dropout = nn.Dropout(0.25)\n\n def forward(self, x):\n # adding sequence of convolutional and max pooling layers\n x = self.pool(F.relu(self.conv1(x)))\n x = self.pool(F.relu(self.conv2(x)))\n x = self.pool(F.relu(self.conv3(x)))\n # flatten image input\n x = x.view(-1, 64 * 4 * 4)\n # add dropout layer\n x = self.dropout(x)\n # add 1st hidden layer, with sigmoid activation function\n x = F.sigmoid(self.fc1(x))\n # add dropout layer\n x = self.dropout(x)\n # add 2nd hidden layer, with sigmoid activation function\n x = F.sigmoid(self.fc2(x))\n # add dropout layer\n x = self.dropout(x)\n # add 3rd layer\n x = self.fc3(x)\n return x",
"_____no_output_____"
],
[
"# create a complete CNN\nmodel = Net()\nprint(model)\n\n# move tensors to GPU if CUDA is available\nif train_on_gpu:\n model.cuda()",
"Net(\n (conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n (fc1): Linear(in_features=1024, out_features=1000, bias=True)\n (fc2): Linear(in_features=1000, out_features=500, bias=True)\n (fc3): Linear(in_features=500, out_features=10, bias=True)\n (dropout): Dropout(p=0.25, inplace=False)\n)\n"
]
],
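[
[
"# Rough sanity check (illustrative only): total number of trainable parameters\n# in the network printed above.\nn_params = sum(p.numel() for p in model.parameters() if p.requires_grad)\nprint('trainable parameters:', n_params)",
"_____no_output_____"
]
],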
[
[
"*Loss function and optimizer*",
"_____no_output_____"
]
],
[
[
"import torch.optim as optim\n\n# specify loss function (categorical cross-entropy)\ncriterion = nn.CrossEntropyLoss()\n\n# specify optimizer\noptimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay= 5e-4)",
"_____no_output_____"
]
],
[
[
"***Training our Model***",
"_____no_output_____"
]
],
[
[
"# number of epochs to train the model\nn_epochs = 50\n\nvalid_loss_min = np.Inf # track change in validation loss\n\nfor epoch in range(1, n_epochs+1):\n\n # keep track of training and validation loss\n train_loss = 0.0\n valid_loss = 0.0\n \n ###################\n # train the model #\n ###################\n model.train()\n for data, target in train_loader:\n # move tensors to GPU if CUDA is available\n if train_on_gpu:\n data, target = data.cuda(), target.cuda()\n\n optimizer.zero_grad()\n output = model(data)\n loss = criterion(output, target)\n loss.backward()\n optimizer.step()\n train_loss += loss.item()*data.size(0)\n \n ###################### \n # validate the model #\n ######################\n model.eval()\n for data, target in valid_loader:\n # move tensors to GPU if CUDA is available\n if train_on_gpu:\n data, target = data.cuda(), target.cuda()\n \n output = model(data)\n loss = criterion(output, target)\n valid_loss += loss.item()*data.size(0)\n \n # calculate average losses\n train_loss = train_loss/len(train_loader.sampler)\n valid_loss = valid_loss/len(valid_loader.sampler)\n \n # print training/validation statistics \n print('Epoch: {} \\tTraining Loss: {:.6f} \\tValidation Loss: {:.6f}'.format(\n epoch, train_loss, valid_loss))\n \n # save model if validation loss has decreased\n if valid_loss <= valid_loss_min:\n print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(\n valid_loss_min,\n valid_loss))\n torch.save(model.state_dict(), 'model_cifar.pt')\n valid_loss_min = valid_loss",
"/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:1805: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\n warnings.warn(\"nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\")\n"
]
],
[
[
"*Load model with minimum loss*",
"_____no_output_____"
]
],
[
[
"model.load_state_dict(torch.load('model_cifar.pt'))",
"_____no_output_____"
]
],
[
[
"*Test the train Network And Accuracy*",
"_____no_output_____"
]
],
[
[
"# track test loss\ntest_loss = 0.0\nclass_correct = list(0. for i in range(10))\nclass_total = list(0. for i in range(10))\n\nmodel.eval()\n# iterate over test data\nfor data, target in test_loader:\n # move tensors to GPU if CUDA is available\n if train_on_gpu:\n data, target = data.cuda(), target.cuda()\n # forward pass: compute predicted outputs by passing inputs to the model\n output = model(data)\n # calculate the batch loss\n loss = criterion(output, target)\n # update test loss \n test_loss += loss.item()*data.size(0)\n # convert output probabilities to predicted class\n _, pred = torch.max(output, 1) \n # compare predictions to true label\n correct_tensor = pred.eq(target.data.view_as(pred))\n correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())\n # calculate test accuracy for each object class\n for i in range(batch_size):\n label = target.data[i]\n class_correct[label] += correct[i].item()\n class_total[label] += 1\n\n# average test loss\ntest_loss = test_loss/len(test_loader.dataset)\nprint('Test Loss: {:.6f}\\n'.format(test_loss))\n\nfor i in range(10):\n if class_total[i] > 0:\n print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (\n classes[i], 100 * class_correct[i] / class_total[i],\n np.sum(class_correct[i]), np.sum(class_total[i])))\n else:\n print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))\n\nprint('\\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (\n 100. * np.sum(class_correct) / np.sum(class_total),\n np.sum(class_correct), np.sum(class_total)))",
"/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:1805: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\n warnings.warn(\"nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\")\n"
]
],
[
[
"*Visualize Test results with sample*",
"_____no_output_____"
]
],
[
[
"# obtain one batch of test images\ndataiter = iter(test_loader)\nimages, labels = dataiter.next()\nimages.numpy()\n\n# move model inputs to cuda, if GPU available\nif train_on_gpu:\n images = images.cuda()\n\n# get sample outputs\noutput = model(images)\n# convert output probabilities to predicted class\n_, preds_tensor = torch.max(output, 1)\npreds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())\n\n# plot the images in the batch, along with predicted and true labels\nfig = plt.figure(figsize=(25, 4))\nfor idx in np.arange(20):\n ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])\n imshow(images[idx] if not train_on_gpu else images[idx].cpu())\n ax.set_title(\"{} ({})\".format(classes[preds[idx]], classes[labels[idx]]),\n color=(\"green\" if preds[idx]==labels[idx].item() else \"red\"))",
"/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:1805: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\n warnings.warn(\"nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\")\n"
],
[
"",
"_____no_output_____"
]
],
[
[
"***Confusion Matrix***",
"_____no_output_____"
]
],
[
[
"print(len(train_data))\nprint(len(train_data.targets))",
"50000\n50000\n"
],
[
"# Function for prediction\n# Doesn't work in our scenario\[email protected]_grad()\ndef get_all_preds(cnn_model, loader):\n all_preds = torch.tensor([])\n for batch in loader:\n images, labels = batch\n\n preds = cnn_model(images)\n all_preds = torch.cat(\n (all_preds, preds)\n ,dim=0\n )\n return all_preds",
"_____no_output_____"
],
[
"# Function to plot confusion matrix\n\nfrom sklearn.metrics import confusion_matrix\nimport itertools\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues):\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n print(\"Normalized confusion matrix\")\n else:\n print('Confusion matrix, without normalization')\n\n #print(cm)\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks, classes, rotation=45)\n plt.yticks(tick_marks, classes)\n\n fmt = '.2f' if normalize else 'd'\n thresh = cm.max() / 2.\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, format(cm[i, j], fmt), horizontalalignment=\"center\", color=\"white\" if cm[i, j] > thresh else \"black\")\n\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')",
"_____no_output_____"
],
[
"# Function for prediction\n# It works!\[email protected]_grad()\ndef prediction(cnn, batch_loader):\n total_preds = torch.tensor([]).cuda()\n for data, target in test_loader:\n # move tensors to GPU if CUDA is available\n if train_on_gpu:\n data, target = data.cuda(), target.cuda()\n # forward pass: compute predicted outputs by passing inputs to the model\n output = model(data)\n # convert output probabilities to predicted class\n _,pred = torch.max(output, 1) \n total_preds = torch.cat(\n (total_preds, output)\n ,dim=0\n )\n return total_preds",
"_____no_output_____"
],
[
"#test_prediction_loader = torch.utils.data.DataLoader(test_data, batch_size=20)\nwith torch.no_grad():\n test_prediction_loader = torch.utils.data.DataLoader(test_data, batch_size=20)\n test_preds = prediction(model, test_prediction_loader)",
"/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:1805: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\n warnings.warn(\"nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\")\n"
],
[
"len(test_preds)",
"_____no_output_____"
],
[
"test_preds = test_preds.cpu()\nconf_matrix = confusion_matrix(test_data.targets, test_preds.argmax(dim= 1))\nprint(conf_matrix)",
"[[691 27 50 23 6 4 3 13 121 62]\n [ 9 851 1 7 1 2 3 2 28 96]\n [ 76 11 560 105 59 45 52 53 22 17]\n [ 17 12 52 640 35 101 45 48 17 33]\n [ 20 3 86 114 569 25 48 113 13 9]\n [ 12 5 42 295 25 507 10 76 12 16]\n [ 5 7 43 107 29 9 765 5 13 17]\n [ 14 5 25 63 28 38 3 797 8 19]\n [ 28 37 6 16 1 3 2 3 878 26]\n [ 10 88 4 21 2 4 1 8 26 836]]\n"
],
[
"plt.figure(figsize=(10,10))\nplot_confusion_matrix(conf_matrix, train_data.classes)",
"Confusion matrix, without normalization\n"
]
],
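[
[
"# Additional summary (illustrative, not in the original notebook): per-class\n# precision and recall computed from the same predictions as the confusion matrix.\nfrom sklearn.metrics import classification_report\nprint(classification_report(test_data.targets, test_preds.argmax(dim=1), target_names=train_data.classes))",
"_____no_output_____"
]
],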
[
[
"# Draft",
"_____no_output_____"
]
],
[
[
"len(train_preds)\nprint(train_preds[3])",
"tensor([ 0.0464, 0.0321, 0.6740, 0.2034, -0.3273, 0.0766, 0.3750, 0.3106,\n -0.2834, 0.3333])\n"
],
[
"# Function to get number correct classification\n# jhamela ache ekahne\n\ndef get_num_correct(preds, labels):\n num_correct = 0\n for i in range(len(preds)):\n preds_label = preds[i].argmax()\n if preds_label == labels[i]:\n num_correct+= 1\n return num_correct\n",
"_____no_output_____"
],
[
"# Predictions on train data\nmodel1 = Net()\nmodel1 = model1.cpu() # moving our model to CPU as we trained our model in GPU\nwith torch.no_grad():\n prediction_loader = torch.utils.data.DataLoader(train_data, batch_size=20)\n train_preds = get_all_preds(model1, prediction_loader)",
"/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:1805: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\n warnings.warn(\"nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\")\n"
],
[
"# Getting accuracy on train data\npreds_correct = get_num_correct(train_preds, train_data.targets)\nprint(\"Total corrects: \", preds_correct)\nprint(\"Accuracy: \", (preds_correct/ len(train_data.targets)) *100)",
"Total corrects: 4985\nAccuracy: 9.969999999999999\n"
],
[
"for i in range(5):\n print(train_preds[i])",
"tensor([ 0.1023, 0.5309, 0.9629, -0.0568, -0.1206, -0.2918, 0.2548, 0.4906,\n -0.3196, -0.0019])\ntensor([ 0.2161, 0.0955, 0.9204, -0.0459, -0.3321, -0.1419, -0.0113, 0.3711,\n -0.2925, 0.0858])\ntensor([ 0.3101, -0.0921, 0.8460, 0.1361, -0.3745, 0.1984, 0.3655, -0.0416,\n 0.1336, 0.0707])\ntensor([ 0.2471, 0.1450, 0.5735, 0.1945, -0.4120, 0.0687, 0.1969, 0.1027,\n -0.1359, 0.1466])\ntensor([ 0.3683, 0.2769, 0.8806, 0.1002, -0.3488, -0.0588, -0.0721, 0.4447,\n -0.2287, -0.0722])\n"
],
[
" # Prediction on test data\n with torch.no_grad():\n test_prediction_loader = torch.utils.data.DataLoader(test_data, batch_size=20)\n test_preds = get_all_preds(model, test_prediction_loader)",
"/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:1805: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\n warnings.warn(\"nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\")\n"
],
[
"# Getting accuracy on Test data\ntest_preds_correct = get_num_correct(test_preds, test_data.targets)\nprint(\"Total corrects: \", test_preds_correct)\nprint(\"Accuracy: \", (test_preds_correct/ len(test_data.targets)) *100)",
"Total corrects: 1038\nAccuracy: 10.38\n"
],
[
"import matplotlib.pyplot as plt\n\nfrom sklearn.metrics import confusion_matrix\n#from resources.plotcm import plot_confusion_matrix",
"_____no_output_____"
],
[
"cm = confusion_matrix(train_data.targets, train_preds.argmax(dim=1))",
"_____no_output_____"
],
[
"print(type(cm))",
"<class 'numpy.ndarray'>\n"
],
[
"cm",
"_____no_output_____"
],
[
"import itertools\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues):\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n print(\"Normalized confusion matrix\")\n else:\n print('Confusion matrix, without normalization')\n\n print(cm)\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks, classes, rotation=45)\n plt.yticks(tick_marks, classes)\n\n fmt = '.2f' if normalize else 'd'\n thresh = cm.max() / 2.\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, format(cm[i, j], fmt), horizontalalignment=\"center\", color=\"white\" if cm[i, j] > thresh else \"black\")\n\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,10))\nplot_confusion_matrix(cm, train_data.classes)",
"Confusion matrix, without normalization\n[[ 63 35 4747 7 0 5 27 102 1 13]\n [ 67 41 4701 11 0 6 46 115 2 11]\n [ 84 40 4713 12 0 8 28 103 2 10]\n [ 70 45 4707 8 0 7 35 113 2 13]\n [ 74 34 4685 12 0 10 33 133 3 16]\n [ 63 34 4702 10 0 8 30 143 0 10]\n [ 64 29 4743 12 0 6 39 98 0 9]\n [ 68 26 4730 13 0 3 37 109 0 14]\n [ 78 34 4715 9 0 4 26 123 1 10]\n [ 77 39 4722 13 0 3 25 115 3 3]]\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9fe0b42031ef8215110244153d67ec3fc98f76 | 441,735 | ipynb | Jupyter Notebook | notebooks/PDF test cases.ipynb | a-regal/tesis_pregrado | 501d3a137f305d53e8b4eaec7c4ba6f18d7b7706 | [
"MIT"
] | 1 | 2019-11-16T02:32:48.000Z | 2019-11-16T02:32:48.000Z | notebooks/PDF test cases.ipynb | a-regal/tesis_pregrado | 501d3a137f305d53e8b4eaec7c4ba6f18d7b7706 | [
"MIT"
] | null | null | null | notebooks/PDF test cases.ipynb | a-regal/tesis_pregrado | 501d3a137f305d53e8b4eaec7c4ba6f18d7b7706 | [
"MIT"
] | 1 | 2020-09-13T16:17:18.000Z | 2020-09-13T16:17:18.000Z | 451.671779 | 128,068 | 0.924869 | [
[
[
"# Main imports",
"_____no_output_____"
]
],
[
[
"import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport os\nimport sys\nimport re\nimport pandas as pd\nimport numpy as np\nimport _pickle as cPickle\nimport matplotlib.pyplot as plt\nimport plotly\nimport plotly.graph_objs as go\nfrom matplotlib.patches import Circle, RegularPolygon\nfrom matplotlib.path import Path\nfrom matplotlib.projections.polar import PolarAxes\nfrom matplotlib.projections import register_projection\nfrom matplotlib.spines import Spine\nfrom matplotlib.transforms import Affine2D\nfrom plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\n\nif 'SUMO_HOME' in os.environ:\n tools = os.path.join(os.environ['SUMO_HOME'], 'tools')\n sys.path.append(tools)\n import traci\nelse:\n sys.exit(\"please declare environment variable 'SUMO_HOME'\")\n\nfrom model_classes import ActorCriticNetwork, Agent, DQN",
"_____no_output_____"
]
],
[
[
"## Global variables for both models",
"_____no_output_____"
]
],
[
[
"n_actions = 5981\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")",
"_____no_output_____"
],
[
"experiment_dict = {\n 'dqn_base': {\n 'weight_path_1_layer': './resultados_centro_1mlp/escenario0/1_layer/policy_net_weights_experiment_ep_2999.pt',\n 'weight_path_2_layer': './resultados_centro_1mlp/escenario0/2_layer/policy_net_weights_experiment_ep_2999.pt',\n 'weight_path_3_layer': './resultados_centro_1mlp/escenario0/3_layer/policy_net_weights_experiment_ep_2999.pt',\n 'config': \"../sumo_simulation/sim_config/km2_centro/scenario/osm.sumocfg\",\n 'plot_name': 'dqn_scenario_0'\n },\n 'dqn_case_2x': {\n 'weight_path_1_layer': './resultados_centro_1mlp/escenario1/1_layer/policy_net_weights_experiment_ep_2999.pt',\n 'weight_path_2_layer': './resultados_centro_1mlp/escenario1/2_layer/policy_net_weights_experiment_ep_2999.pt',\n 'weight_path_3_layer': './resultados_centro_1mlp/escenario1/3_layer/policy_net_weights_experiment_ep_2999.pt',\n 'config': \"../sumo_simulation/sim_config/km2_centro/scenario_2/osm.sumocfg\",\n 'plot_name': 'dqn_scenario_2',\n },\n 'dqn_case_4x': {\n 'weight_path_1_layer': './resultados_centro_1mlp/escenario2/1_layer/policy_net_weights_experiment_ep_2999.pt',\n 'weight_path_2_layer': './resultados_centro_1mlp/escenario2/2_layer/policy_net_weights_experiment_ep_2999.pt',\n 'weight_path_3_layer': './resultados_centro_1mlp/escenario2/3_layer/policy_net_weights_experiment_ep_2999.pt',\n 'config': \"../sumo_simulation/sim_config/km2_centro/scenario_3/osm.sumocfg\",\n 'plot_name': 'dqn_scenario_3'\n },\n 'pg_base': {\n 'weight_path_1_layer': './resultados_pg/escenario0/1_layer/ac_weights_experiment_ep_2999.pt',\n 'weight_path_2_layer': './resultados_pg/escenario0/2_layer/ac_weights_experiment_ep_2999.pt',\n 'weight_path_3_layer': './resultados_pg/escenario0/3_layer/ac_weights_experiment_ep_2999.pt',\n 'config': \"../sumo_simulation/sim_config/km2_centro/scenario/osm.sumocfg\",\n 'plot_name': 'scenario_0_pg'\n },\n 'pg_2x': {\n 'weight_path_1_layer': './resultados_pg/escenario1/1_layer/ac_weights_experiment_ep_2999.pt',\n 'weight_path_2_layer': './resultados_pg/escenario1/2_layer/ac_weights_experiment_ep_2999.pt',\n 'weight_path_3_layer': './resultados_pg/escenario1/3_layer/ac_weights_experiment_ep_1293.pt',\n 'config': \"../sumo_simulation/sim_config/km2_centro/scenario_2/osm.sumocfg\",\n 'plot_name': 'scenario_1_pg'\n },\n 'pg_4x': {\n 'weight_path_1_layer': './resultados_pg/escenario2/1_layer/ac_weights_experiment_ep_2999.pt',\n 'weight_path_2_layer': './resultados_pg/escenario2/2_layer/ac_weights_experiment_ep_2999.pt',\n 'weight_path_3_layer': './resultados_pg/escenario2/3_layer/ac_weights_experiment_ep_2192.pt',\n 'config': \"../sumo_simulation/sim_config/km2_centro/scenario_3/osm.sumocfg\",\n 'plot_name': 'scenario_2_pg'\n }\n}",
"_____no_output_____"
],
[
"def load_dqn_sd(state_dict, num_layers):\n state_dict[\"mlp1.weight\"] = state_dict['module.mlp1.weight']\n state_dict[\"mlp1.bias\"] = state_dict['module.mlp1.bias']\n state_dict[\"head.weight\"] = state_dict['module.head.weight']\n state_dict[\"head.bias\"] = state_dict['module.head.bias']\n \n del state_dict['module.mlp1.weight'], state_dict['module.mlp1.bias'], state_dict['module.head.weight'], state_dict['module.head.bias']\n \n if num_layers == '1':\n policy_net = DQN(n_actions, False, False).to(device)\n \n elif num_layers == '2':\n policy_net = DQN(n_actions, True, False).to(device)\n state_dict[\"mlp2.weight\"] = state_dict['module.mlp2.weight']\n state_dict[\"mlp2.bias\"] = state_dict['module.mlp2.bias']\n \n del state_dict['module.mlp2.weight'], state_dict['module.mlp2.bias']\n \n else:\n policy_net = DQN(n_actions, True, True).to(device)\n state_dict[\"mlp2.weight\"] = state_dict['module.mlp2.weight']\n state_dict[\"mlp2.bias\"] = state_dict['module.mlp2.bias']\n state_dict[\"mlp3.weight\"] = state_dict['module.mlp3.weight']\n state_dict[\"mlp3.bias\"] = state_dict['module.mlp3.bias']\n \n del state_dict['module.mlp2.weight'], state_dict['module.mlp2.bias'], state_dict['module.mlp3.weight'], state_dict['module.mlp3.bias']\n \n\n policy_net.load_state_dict(state_dict)\n\n policy_net.eval()\n \n return policy_net",
"_____no_output_____"
],
[
"def load_pg_sd(state_dict, num_layers):\n if num_layers == '1':\n agent = Agent(alpha=0.001, input_dims=[6], gamma=0.001,\n n_actions=n_actions, layer2_size=False, layer3_size=False)\n \n elif num_layers == '2':\n agent = Agent(alpha=0.001, input_dims=[6], gamma=0.001,\n n_actions=n_actions, layer3_size=False)\n else:\n agent = Agent(alpha=0.001, input_dims=[6], gamma=0.001,\n n_actions=n_actions)\n\n agent.actor_critic.load_state_dict(state_dict)\n\n agent.actor_critic.eval()\n \n return agent",
"_____no_output_____"
],
[
"def run_agent_simulation(key, weight_path, sumoCmd):\n \n num_layers = re.findall('\\d', weight_path)\n \n if 'dqn' in key:\n num_layers = num_layers[2]\n else:\n num_layers = num_layers[1]\n \n state_dict = torch.load(weight_path)\n \n if 'dqn' in key:\n agent_kind = 'dqn'\n agent = load_dqn_sd(state_dict, num_layers)\n else:\n agent_kind = 'pg'\n agent = load_pg_sd(state_dict, num_layers)\n\n traci.start(sumoCmd)\n\n action_dict = cPickle.load(open('../sumo_simulation/input/action_to_zone_km2_centro.pkl', 'rb'))\n state = torch.zeros([1,6], device=device)\n traci_ep = 0\n lane_id_list = traci.lane.getIDList()\n\n states_agent = []\n states_agent_mean = []\n truck_emissions_agent = []\n\n for e in range(86400):\n\n if traci_ep % 3600 == 0 and traci_ep != 0:\n\n #Start agent interaction\n \n if agent_kind == 'dqn':\n action = agent(state)\n else:\n action = agent.choose_action(state)\n\n #Apply regulation and run steps\n reg_action = action > 0\n\n #print(reg_action.view(-1))\n\n for index, lane_id in enumerate(reg_action.view(-1)):\n #for lane_id in lane_indices:\n if lane_id.item() == 1:\n if action_dict[index] is not None:\n traci.lane.setDisallowed(action_dict[index], ['truck'])\n else:\n pass\n else:\n if action_dict[index] is not None:\n traci.lane.setAllowed(action_dict[index], ['truck'])\n else:\n pass \n\n vehicle_id_list = traci.vehicle.getIDList()\n \n vehicle_types = [traci.vehicle.getTypeID(v_id) for v_id in vehicle_id_list]\n vehicle_co2 = [traci.vehicle.getCO2Emission(v_id) for i, v_id in enumerate(vehicle_id_list) \n if 'truck' in vehicle_types[i]]\n\n try:\n truck_emissions_agent.append(sum(vehicle_co2)/len(vehicle_co2))\n except:\n truck_emissions_agent.append(0)\n\n #Get simulation values\n co2 = [traci.lane.getCO2Emission(edge_id) for edge_id in lane_id_list]\n co = [traci.lane.getCOEmission(edge_id) for edge_id in lane_id_list]\n nox = [traci.lane.getNOxEmission(edge_id) for edge_id in lane_id_list]\n pmx = [traci.lane.getPMxEmission(edge_id) for edge_id in lane_id_list]\n noise = [traci.lane.getNoiseEmission(edge_id) for edge_id in lane_id_list]\n fuel = [traci.lane.getFuelConsumption(edge_id) for edge_id in lane_id_list]\n\n sim_results = np.array([co2, co, pmx, nox, noise, fuel])\n\n next_state = np.transpose(sim_results).mean(axis=0)\n \n states_agent.append(np.transpose(sim_results).sum(axis=0))\n states_agent_mean.append(next_state)\n\n next_state = torch.from_numpy(next_state).to(device).float()\n\n state += next_state\n\n traci.simulationStep()\n traci_ep += 1\n traci.close(False)\n \n if agent_kind == 'dqn':\n values = [agent(torch.from_numpy(state).float().to(device).view(-1,6)) \n for state in states_agent_mean]\n else:\n values = [agent.choose_action(torch.from_numpy(state).float().to(device).view(-1,6)) \n for state in states_agent_mean]\n \n \n #values = torch.cat(values).view(-1).detach().cpu().numpy()\n \n return states_agent, truck_emissions_agent, values",
"_____no_output_____"
],
[
"result_dict = {'{}_{}_layers'.format(algorithm, layer):'' \n for algorithm in experiment_dict.keys() for layer in range(1,4)}",
"_____no_output_____"
],
[
"result_dict",
"_____no_output_____"
],
[
"for key in result_dict.keys():\n result_dict[key] = {k: '' for k in ['cauchy', 'chi_squared', 'gaussian']}",
"_____no_output_____"
],
[
"result_dict",
"_____no_output_____"
],
[
"plot_lanes = list(result_dict.keys())\n#plot_lanes = ['dqn_base_2_layers', 'dqn_case_2x_3_layers', 'dqn_case_4x_3_layers', 'pg_base_1_layers']",
"_____no_output_____"
],
[
"for key in experiment_dict.keys():\n for path in experiment_dict[key]:\n if 'weight' in path:\n for test in ['cauchy', 'chi_squared', 'gaussian']:\n sumoCmd = ['/usr/bin/sumo/bin/sumo','-c',\n '/home/andres/Documents/tesis_pregrado/sumo_simulation/sim_config/test_cases/{}/osm.sumocfg'.format(test),\n '-e', '86400']\n if '_'.join([key,path.split('_')[2], 'layers']) in plot_lanes:\n print(key, path, test)\n result_dict['_'.join([key,path.split('_')[2], 'layers'])][test] = run_agent_simulation(key, experiment_dict[key][path], sumoCmd)",
"dqn_base weight_path_1_layer cauchy\n Retrying in 1 seconds\ndqn_base weight_path_1_layer chi_squared\n Retrying in 1 seconds\ndqn_base weight_path_1_layer gaussian\n Retrying in 1 seconds\ndqn_base weight_path_2_layer cauchy\n Retrying in 1 seconds\ndqn_base weight_path_2_layer chi_squared\n Retrying in 1 seconds\ndqn_base weight_path_2_layer gaussian\n Retrying in 1 seconds\ndqn_base weight_path_3_layer cauchy\n Retrying in 1 seconds\ndqn_base weight_path_3_layer chi_squared\n Retrying in 1 seconds\ndqn_base weight_path_3_layer gaussian\n Retrying in 1 seconds\ndqn_case_2x weight_path_1_layer cauchy\n Retrying in 1 seconds\ndqn_case_2x weight_path_1_layer chi_squared\n Retrying in 1 seconds\ndqn_case_2x weight_path_1_layer gaussian\n Retrying in 1 seconds\ndqn_case_2x weight_path_2_layer cauchy\n Retrying in 1 seconds\ndqn_case_2x weight_path_2_layer chi_squared\n Retrying in 1 seconds\ndqn_case_2x weight_path_2_layer gaussian\n Retrying in 1 seconds\ndqn_case_2x weight_path_3_layer cauchy\n Retrying in 1 seconds\ndqn_case_2x weight_path_3_layer chi_squared\n Retrying in 1 seconds\ndqn_case_2x weight_path_3_layer gaussian\n Retrying in 1 seconds\ndqn_case_4x weight_path_1_layer cauchy\n Retrying in 1 seconds\ndqn_case_4x weight_path_1_layer chi_squared\n Retrying in 1 seconds\ndqn_case_4x weight_path_1_layer gaussian\n Retrying in 1 seconds\ndqn_case_4x weight_path_2_layer cauchy\n Retrying in 1 seconds\ndqn_case_4x weight_path_2_layer chi_squared\n Retrying in 1 seconds\ndqn_case_4x weight_path_2_layer gaussian\n Retrying in 1 seconds\ndqn_case_4x weight_path_3_layer cauchy\n Retrying in 1 seconds\ndqn_case_4x weight_path_3_layer chi_squared\n Retrying in 1 seconds\ndqn_case_4x weight_path_3_layer gaussian\n Retrying in 1 seconds\npg_base weight_path_1_layer cauchy\n Retrying in 1 seconds\n"
],
[
"result_dict.keys()",
"_____no_output_____"
],
[
"for test in ['cauchy', 'gaussian', 'chi_squared']:\n for key in result_dict.keys():\n if '_'.join([key, 'layers']) in plot_lanes:\n plt.plot(result_dict[key][test][1], label='_'.join([key,test]))\n plt.legend()\n plt.show()",
"_____no_output_____"
],
[
"for tensor in result_dict['pg_base_1_layers']['gaussian'][2]:\n print(tensor.sum())",
"tensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\ntensor(1., device='cuda:0', grad_fn=<SumBackward0>)\n"
],
[
"for test in ['cauchy', 'gaussian', 'chi_squared']:\n for key in result_dict.keys():\n if 'pg' in key and ('2' in key or '3' in key):\n continue\n if key in plot_lanes:\n if 'pg' in key:\n actions = [(t[0] > 0).sum().item() for t in result_dict[key][test][2]]\n else:\n actions = [(t > 0).sum().item() for t in result_dict[key][test][2]]\n \n plt.plot(actions, label='_'.join([key,test]))\n plt.legend()\n plt.show()",
"_____no_output_____"
],
[
"for test in ['cauchy', 'gaussian', 'chi_squared']:\n for key in result_dict.keys():\n if key in plot_lanes:\n plt.plot(result_dict[key][test][1], label='_'.join([key,test]))\n plt.legend()\n plt.show()",
"_____no_output_____"
],
[
"for test in ['cauchy', 'gaussian', 'chi_squared']:\n for key in result_dict.keys():\n if key in plot_lanes:\n print(key, test)\n print(list(np.array(result_dict[key][test][0]).sum(axis=0)))",
"dqn_base_1_layers cauchy\n[380284.5711304015, 20350.675060837948, 11.303034915985634, 168.1272203123832, 4834.847449702776, 163.4748317001701]\ndqn_base_2_layers cauchy\n[216502.92906936756, 11302.00569633783, 6.292213280066899, 95.5382955965804, 3464.5846944698837, 93.06966414896004]\ndqn_base_3_layers cauchy\n[171863.07317399586, 8697.093358970298, 4.929754804050995, 74.9728875707299, 2811.233019588197, 73.87999569116907]\ndqn_case_2x_1_layers cauchy\n[380284.5711304015, 20350.675060837948, 11.303034915985634, 168.1272203123832, 4834.847449702776, 163.4748317001701]\ndqn_case_2x_2_layers cauchy\n[244348.94259967294, 12836.378285637482, 7.181351500968048, 107.39011601800193, 3354.109044886825, 105.03963766651813]\ndqn_case_2x_3_layers cauchy\n[44159.49374485842, 1546.7836265149301, 1.061878355832132, 18.26210940440237, 1135.0203897562394, 18.98348883677111]\ndqn_case_4x_1_layers cauchy\n[387187.04150843737, 20872.200309012278, 11.54992417580039, 171.3108828594158, 4663.497599842953, 166.44199195800016]\ndqn_case_4x_2_layers cauchy\n[244348.94259967294, 12836.378285637482, 7.181351500968048, 107.39011601800193, 3354.109044886825, 105.03963766651813]\ndqn_case_4x_3_layers cauchy\n[44159.49374485842, 1546.7836265149301, 1.061878355832132, 18.26210940440237, 1135.0203897562394, 18.98348883677111]\npg_base_1_layers cauchy\n[22427.68815107366, 310.8975456733897, 0.39577728851755967, 8.506193602859202, 682.3132655479353, 9.641527287181432]\npg_base_2_layers cauchy\n[27509.478103364036, 687.4407872244391, 0.5301900877307291, 11.3433585310635, 847.0463226205273, 11.825836882664573]\npg_base_3_layers cauchy\n[22427.68815107366, 310.8975456733897, 0.39577728851755967, 8.506193602859202, 682.3132655479353, 9.641527287181432]\npg_2x_1_layers cauchy\n[22427.68815107366, 310.8975456733897, 0.39577728851755967, 8.506193602859202, 682.3132655479353, 9.641527287181432]\npg_2x_2_layers cauchy\n[71329.00959902181, 903.7739406265298, 1.0796712734855602, 28.049315620928077, 2278.5632530599337, 30.663492024576215]\npg_2x_3_layers cauchy\n[74175.8363762399, 946.234328493047, 1.1446584476726853, 29.202678652486224, 2410.9839267471216, 31.888894773820844]\npg_4x_1_layers cauchy\n[22427.68815107366, 310.8975456733897, 0.39577728851755967, 8.506193602859202, 682.3132655479353, 9.641527287181432]\npg_4x_2_layers cauchy\n[27182.811436697368, 635.2463427799946, 0.5505234210640624, 11.095580753285722, 848.7299266684861, 11.685450539448041]\npg_4x_3_layers cauchy\n[31745.455337934633, 847.3300136059448, 0.6708634618452659, 12.80054679393032, 953.8786838787536, 13.646894958495924]\ndqn_base_1_layers gaussian\n[1128651.6274838434, 63692.30969338663, 63.271944086712764, 376.73268005929395, 26063.817087213345, 485.21944886359785]\ndqn_base_2_layers gaussian\n[912065.4700398103, 54019.635271812724, 56.93074254336739, 303.7238701260771, 24852.665509653158, 392.1110136324884]\ndqn_base_3_layers gaussian\n[914579.6964676006, 52415.676147336635, 54.78701626771709, 313.0447597188148, 24170.61713160202, 393.1939221290869]\ndqn_case_2x_1_layers gaussian\n[1128651.6274838434, 63692.30969338663, 63.271944086712764, 376.73268005929395, 26063.817087213345, 485.21944886359785]\ndqn_case_2x_2_layers gaussian\n[971761.5326244787, 55310.60753241262, 59.72969619417961, 346.7517412647803, 24133.601385239916, 417.7757772130938]\ndqn_case_2x_3_layers gaussian\n[699033.7725482461, 31565.24152015176, 39.05307096701716, 215.91817647266774, 20825.224262838787, 300.5278610751887]\ndqn_case_4x_1_layers gaussian\n[1151902.2854680116, 
66447.08947731074, 69.35902364255766, 418.3313153783283, 26369.784042867886, 495.2154543105865]\ndqn_case_4x_2_layers gaussian\n[971761.5326244787, 55310.60753241262, 59.72969619417961, 346.7517412647803, 24133.601385239916, 417.7757772130938]\ndqn_case_4x_3_layers gaussian\n[699033.7725482461, 31565.24152015176, 39.05307096701716, 215.91817647266774, 20825.224262838787, 300.5278610751887]\npg_base_1_layers gaussian\n[573401.5583827976, 18597.197711333105, 27.56638538689138, 183.4039413022703, 18958.893528790817, 246.51295860151282]\npg_base_2_layers gaussian\n[646407.267202908, 20196.34330326081, 30.528057084979004, 199.58597665689862, 19720.57595483407, 277.90021631044107]\npg_base_3_layers gaussian\n[573401.5583827976, 18597.197711333105, 27.56638538689138, 183.4039413022703, 18958.893528790817, 246.51295860151282]\npg_2x_1_layers gaussian\n[573401.5583827976, 18597.197711333105, 27.56638538689138, 183.4039413022703, 18958.893528790817, 246.51295860151282]\npg_2x_2_layers gaussian\n[537604.2795812032, 17813.665761721262, 25.35658744646362, 165.33096082457737, 18741.373848670133, 231.12402701213622]\npg_2x_3_layers gaussian\n[554538.4659900531, 17384.093050723222, 26.498316739214165, 157.72426271862145, 18990.11763359924, 238.40388862178358]\npg_4x_1_layers gaussian\n[566983.8424238159, 17880.21484718008, 26.558728742490757, 152.06513769368627, 19078.167308198248, 243.7544160741084]\npg_4x_2_layers gaussian\n[618780.0233534531, 21130.572102319613, 30.590206923974193, 184.6319881665213, 19949.879599004606, 266.02143189784795]\npg_4x_3_layers gaussian\n[573401.5583827976, 18597.197711333105, 27.56638538689138, 183.4039413022703, 18958.893528790817, 246.51295860151282]\ndqn_base_1_layers chi_squared\n[1218582.5617294786, 72771.93148469955, 73.67207913186478, 431.91285282320786, 28056.97883998736, 523.8883270320688]\ndqn_base_2_layers chi_squared\n[934016.6467623435, 50663.92298933144, 55.93628359082181, 319.4350951054316, 25599.945129581323, 401.5516731300209]\ndqn_base_3_layers chi_squared\n[954845.6998552164, 56832.46498354472, 58.84592660726331, 312.465494352692, 26603.66365573891, 410.507026899297]\ndqn_case_2x_1_layers chi_squared\n[1218582.5617294786, 72771.93148469955, 73.67207913186478, 431.91285282320786, 28056.97883998736, 523.8883270320688]\ndqn_case_2x_2_layers chi_squared\n[1036755.2590878205, 54676.87692365128, 59.49843249221362, 337.58639096408876, 25605.06021845662, 445.715864132699]\ndqn_case_2x_3_layers chi_squared\n[712177.7007170564, 32898.8564225648, 41.37132179785077, 224.73940035722998, 22156.572716807263, 306.18292477598396]\ndqn_case_4x_1_layers chi_squared\n[1160887.0650983814, 70030.820914271, 68.77451779474286, 409.86923187416636, 27457.728909668, 499.0860453784211]\ndqn_case_4x_2_layers chi_squared\n[1036755.2590878205, 54676.87692365128, 59.49843249221362, 337.58639096408876, 25605.06021845662, 445.715864132699]\ndqn_case_4x_3_layers chi_squared\n[712177.7007170564, 32898.8564225648, 41.37132179785077, 224.73940035722998, 22156.572716807263, 306.18292477598396]\npg_base_1_layers chi_squared\n[706409.3302879711, 21871.89229636783, 32.86774576191395, 200.52200575065206, 20450.831943000507, 303.6953342735713]\npg_base_2_layers chi_squared\n[677058.1636516453, 21251.59300317727, 30.209292209078388, 204.03289542612964, 20562.057355162437, 291.07554485851176]\npg_base_3_layers chi_squared\n[706409.3302879711, 21871.89229636783, 32.86774576191395, 200.52200575065206, 20450.831943000507, 303.6953342735713]\npg_2x_1_layers chi_squared\n[706409.3302879711, 
21871.89229636783, 32.86774576191395, 200.52200575065206, 20450.831943000507, 303.6953342735713]\npg_2x_2_layers chi_squared\n[610579.6598904465, 20221.910805845775, 30.77731281243733, 201.96547420387782, 20559.979008452003, 262.4977470322945]\npg_2x_3_layers chi_squared\n[593260.6219082118, 19592.13791677465, 28.523207819969326, 180.51272388213536, 19543.180597396502, 255.0506693687429]\npg_4x_1_layers chi_squared\n[654974.9535735981, 20526.021836805907, 30.514387041614345, 194.70374807077144, 19733.486559229183, 281.5811978665806]\npg_4x_2_layers chi_squared\n[637499.9352346509, 20800.296465940326, 29.44561690792933, 188.73047469526142, 20146.883530987532, 274.06942223973255]\npg_4x_3_layers chi_squared\n[706409.3302879711, 21871.89229636783, 32.86774576191395, 200.52200575065206, 20450.831943000507, 303.6953342735713]\n"
],
[
"import pandas as pd",
"_____no_output_____"
],
[
"d = pd.DataFrame([np.array(result_dict[key]['cauchy'][0]).sum(axis=0) for key in result_dict.keys()],\n columns=['CO2', 'CO', 'PMx', 'NOx', 'Ruido', 'Consumo de Gasolina'])",
"_____no_output_____"
],
[
"d.to_latex()\n",
"_____no_output_____"
],
[
"pd.DataFrame([np.array(result_dict[key]['gaussian'][0]).sum(axis=0) for key in result_dict.keys()],\n columns=['CO2', 'CO', 'PMx', 'NOx', 'Ruido', 'Consumo de Gasolina']).to_latex()",
"_____no_output_____"
],
[
"pd.DataFrame([np.array(result_dict[key]['chi_squared'][0]).sum(axis=0) for key in result_dict.keys()],\n columns=['CO2', 'CO', 'PMx', 'NOx', 'Ruido', 'Consumo de Gasolina']).to_latex()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9ff29d508a140031844595e0e88c85a93c9052 | 50,161 | ipynb | Jupyter Notebook | notebooks/clases/1_pytorch_tutorial.ipynb | Patricio96/INFO257-1 | 4fff8b1c45492f5668a404424e7fbca42924779c | [
"CC0-1.0"
] | null | null | null | notebooks/clases/1_pytorch_tutorial.ipynb | Patricio96/INFO257-1 | 4fff8b1c45492f5668a404424e7fbca42924779c | [
"CC0-1.0"
] | null | null | null | notebooks/clases/1_pytorch_tutorial.ipynb | Patricio96/INFO257-1 | 4fff8b1c45492f5668a404424e7fbca42924779c | [
"CC0-1.0"
] | null | null | null | 47.500947 | 14,996 | 0.688403 | [
[
[
"%autosave 0\n%matplotlib notebook\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.display import display\nimport ipywidgets as widgets\nfrom matplotlib import animation\nfrom functools import partial\nslider_layout = widgets.Layout(width='600px', height='20px')\nslider_style = {'description_width': 'initial'}\nIntSlider_nice = partial(widgets.IntSlider, style=slider_style, layout=slider_layout, continuous_update=False)\nFloatSlider_nice = partial(widgets.FloatSlider, style=slider_style, layout=slider_layout, continuous_update=False)\nSelSlider_nice = partial(widgets.SelectionSlider, style=slider_style, layout=slider_layout, continuous_update=False)",
"_____no_output_____"
]
],
[
[
"# Instalación de PyTorch\n\nLa forma más recomendada es usando el manejador de ambientes y paquetes `conda`\n\nSi no conoces conda por favor revisa esta breve clase de INFO147: https://github.com/magister-informatica-uach/INFO147/blob/master/unidad1/02_ambientes_virtuales.ipynb\n\nUna vez hayas creado un ambiente de `conda` debes escoger entre instalar pytorch con soporte de GPU\n\n conda install pytorch torchvision cudatoolkit=10.2 ignite -c pytorch\n\no sin soporte GPU \n\n conda install pytorch torchvision cpuonly ignite -c pytorch\n\n",
"_____no_output_____"
],
[
"# Breve tutorial de [PyTorch](https://pytorch.org) \n\nPyTorch es una librería de alto nivel para Python que provee \n1. Una clase tensor para hacer cómputo de alto rendimiento \n1. Un plataforma para crear y entrenar redes neuronales\n\n### Torch Tensor\n\nLa clase [`torch.Tensor`](https://pytorch.org/docs/stable/tensors.html) es muy similar en uso al `ndarray` de [*NumPy*](https://numpy.org/)\n\nUn tensor corresponde a una matriz o arreglo n-dimensional con tipo definido que soporta operaciónes vectoriales tipo SIMD y broadcasting\n\n\n\n\nLa documentación de la clase con todas las operaciones que soporta: https://pytorch.org/docs/stable/tensors.html\n\nA continuación revisaremos las más fundamentales",
"_____no_output_____"
]
],
[
[
"import torch\ntorch.__version__",
"_____no_output_____"
]
],
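Editor's note: the install section above distinguishes a GPU build from a CPU-only build; a quick, hedged way to confirm which one is active (device names will vary by machine) is the following sketch, which is not part of the original notebook.

```python
import torch

# Report the installed version and whether the CUDA build can actually be used
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # Name of the first visible GPU (index 0)
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("Running on CPU only")
```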
[
[
"#### Creación de tensores\n\nUn tensor puede crearse usando constructores de torch o a partir de datos existentes: lista de Python o *ndarray* de NumPy",
"_____no_output_____"
]
],
[
[
"# Un tensor de 10 ceros\ndisplay(torch.zeros(10))\n# Un tensor de 10 unos\ndisplay(torch.ones(10))\n# Un tensor de números linealmente espaciados\ndisplay(torch.linspace(0, 9, steps=10))\n# Un tensor de 10 números aleatorios con distribución N(0, 1)\ndisplay(torch.randn(10))\n# Un tensor creado a partir de una lista\ndisplay(torch.Tensor([0, 1, 2, 3, 4, 5, 6]))\n# Un tensor creado a partir de un ndarray\nnumpy_array = np.random.randn(10)\ndisplay(torch.from_numpy(numpy_array))",
"_____no_output_____"
]
],
[
[
"#### Atributos importantes de los tensores\n\nUn tensor tiene un tamaño (dimesiones) y tipo específico\n\nUn tensor puede estar alojado en la memoria del sistema ('cpu') o en la memoria de dispositivo ('gpu')",
"_____no_output_____"
]
],
[
[
"a = torch.randn(10, 20, 30)\ndisplay(a.shape)\ndisplay(a.dtype)\ndisplay(a.device)\ndisplay(a.requires_grad)",
"_____no_output_____"
]
],
[
[
"Cuando se crea un tensor se puede especificar el tipo y el dispositivo",
"_____no_output_____"
]
],
[
[
"a = torch.zeros(10, dtype=torch.int32, device='cuda')\ndisplay(a)",
"_____no_output_____"
]
],
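Since the cell above hard-codes `device='cuda'`, it fails on a CPU-only install. A common device-agnostic pattern (an added sketch, not part of the original notebook) picks the device once and reuses it:

```python
import torch

# Fall back to the CPU when no GPU is present
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# The same constructor arguments work on either device
a = torch.zeros(10, dtype=torch.int32, device=device)
print(a.device, a.dtype)
```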
[
[
"#### Manipulación de tensores\n\nPodemos manipular la forma de un tensor usando: reshape, flatten, roll, traspose, unsqueeze, entre otros",
"_____no_output_____"
]
],
[
[
"a = torch.linspace(0, 9, 10)\ndisplay(a)\ndisplay(a.reshape(2, 5))\ndisplay(a.reshape(2, 5).t())\ndisplay(a.reshape(2, 5).flatten())\ndisplay(a.roll(2))\ndisplay(a.unsqueeze(1))",
"_____no_output_____"
]
],
[
[
"#### Cálculos con tensores\n\nUn tensor soporta operaciones aritméticas y lógicas\n\nSi el tensor está en memoria de sistema entonces las operaciones son realizadas por la CPU ",
"_____no_output_____"
]
],
[
[
"a = torch.linspace(0, 5, steps=6)\ndisplay(a)\n# aritmética y funciones\ndisplay(a + 5)\ndisplay(2*a)\ndisplay(a.pow(2))\ndisplay(a.log())\n# máscaras booleanas\ndisplay(a[a>3])\n# Operaciones con otros tensores\nb = torch.ones(6)\ndisplay(a + b)\ndisplay(a*b)\n# broadcasting\ndisplay(a.unsqueeze(1)*b.unsqueeze(0))",
"_____no_output_____"
]
],
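To make the broadcasting step at the end of the previous cell explicit, this small editor-added check prints the shapes involved: a (6, 1) column times a (1, 6) row broadcasts to a (6, 6) outer product.

```python
import torch

a = torch.linspace(0, 5, steps=6)
b = torch.ones(6)

col = a.unsqueeze(1)   # shape (6, 1)
row = b.unsqueeze(0)   # shape (1, 6)
outer = col * row      # shapes broadcast elementwise to (6, 6)

print(col.shape, row.shape, outer.shape)
```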
[
[
"#### Cálculos en GPU\n\nUsando el atributo `to` podemos intercambiar un tensor entre GPU ('device') y CPU ('host')\n\nCuando todos los tensores involucrados en una operaciones están en memoria de dispositivo entonces el cálculo lo hace la GPU\n\nLa siguiente nota indica las opciones para intercambiar datos entre GPU y CPU que ofrece PyTorch: https://pytorch.org/docs/stable/notes/cuda.html \n\n##### Breve nota: \nUna *Graphical Processing Unit* (GPU) o tarjeta de video es un hardware para hacer cálculos sobre mallas tridimensionales, generación de imágenes (rendering) y otras tareas gráficas. A diferencia de la CPU, la GPU es especialista en cálculo paralelo y tiene miles de nucleos (NVIDIA RTX 2080: 2944 nucleos)",
"_____no_output_____"
]
],
[
[
"a = torch.zeros(10)\ndisplay(a.device)\na = a.to('cuda')\ndisplay(a.device)",
"_____no_output_____"
]
],
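As a rough illustration of why moving tensors to the GPU matters, the following hedged, editor-added sketch times the same matrix product on both devices; it only exercises the GPU branch when CUDA is available, and the absolute numbers depend entirely on the hardware.

```python
import time
import torch

x_cpu = torch.randn(2000, 2000)
t0 = time.time()
y_cpu = x_cpu @ x_cpu
print(f"CPU matmul: {time.time() - t0:.3f} s")

if torch.cuda.is_available():
    x_gpu = x_cpu.to('cuda')
    torch.cuda.synchronize()            # wait until the host-to-device copy finishes
    t0 = time.time()
    y_gpu = x_gpu @ x_gpu
    torch.cuda.synchronize()            # wait for the kernel before reading the clock
    print(f"GPU matmul: {time.time() - t0:.3f} s")
    # Note: the two results may differ slightly due to floating-point accumulation order
```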
[
[
"### Auto-diferenciación\n\nLas redes neuronales se entrenan usando **Gradiente descedente**\n\n> Necesitamos calcular las derivadas de la función de costo para todos los parámetros de la red\n\nEsto puede ser complejo si nuestra red es grande y tiene distintos tipos de capas\n\nPyTorch viene incorporado con un sistema de diferenciación automática denominado [`autograd`](https://pytorch.org/docs/stable/autograd.html) \n\nPara poder derivar una función en pytorch\n\n1. Se necesita que su entrada sean tensores con el atributo `requires_grad=True`\n1. Luego llamamos la función `backward()` de la función\n1. El resultado queda guardado en el atributo `grad` de la entrada (nodo hoja)",
"_____no_output_____"
]
],
[
[
"x = torch.linspace(0, 10, steps=1000, requires_grad=True)\n\ny = 5*x -20\n#y = torch.sin(2.0*np.pi*x)*torch.exp(-(x-5).pow(2)/3)\n#dydx = 2*np.pi*torch.cos(2.0*np.pi*x)*torch.exp(-(x-5).pow(2)/3) - 2/3*(x-5)*torch.sin(2.0*np.pi*x)*torch.exp(-(x-5).pow(2)/3)\n\nfig, ax = plt.subplots(figsize=(6, 3))\nax.plot(x.detach().numpy(), y.detach().numpy(), label='y')\n\ny.backward(torch.ones_like(x))\n\nax.plot(x.detach().numpy(), x.grad.detach().numpy(), label='dy/dx')\n#ax.plot(x.detach().numpy(), dydx.detach().numpy())\nplt.legend();",
"_____no_output_____"
]
],
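A minimal scalar version of the same idea (added for illustration): for y = x², calling `backward()` should leave dy/dx = 2x in `x.grad` at the chosen point.

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x**2          # scalar output, so backward() needs no extra argument
y.backward()

print(x.grad)     # tensor(6.), i.e. 2 * x evaluated at x = 3
```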
[
[
"### Grafo de cómputo\n\nCuando contatenamos operaciones PyTorch construye internamente un \"grafo de cómputo\"\n\n$$\nx \\to z = f_1(x) \\to y = f_2(z)\n$$\n\nLa función `backward` calcula los gradientes y los almacena en los nodo hoja que tengan `requires_grad=True`\n\nPor ejemplo\n\n y.backward : Guarda dy/dx en x.grad\n \n z.backward : Guarda dz/dx en x.grad\n\nBasicamente `backward` implementa la regla de la cadena de las derivadas\n\n`backward` recibe una entrada: La derivada de la etapa superior de la cadena. Por defecto usa `torch.ones([1])`, asume que se está en el nivel superior y que la salida es escalar (unidimensional)",
"_____no_output_____"
]
],
[
[
"x = torch.linspace(0, 10, steps=1000, requires_grad=True) # Nodo hoja\ndisplay(x.grad_fn)\nz = torch.sin(2*x)\ndisplay(z.grad_fn)\ny = z.pow(2)/2\ndisplay(y.grad_fn)\n\nfig, ax = plt.subplots(figsize=(6, 3), tight_layout=True)\nax.plot(x.detach().numpy(), z.detach().numpy(), label='z')\nax.plot(x.detach().numpy(), y.detach().numpy(), label='y')\n\n# Derivada dy/dx\ny.backward(torch.ones_like(x), create_graph=True)\nax.plot(x.detach().numpy(), x.grad.detach().numpy(), label='dy/dx')\n\n# Borro el resultado en x.grad\nx.grad = None\n\n# Derivada dz/dx\nz.backward(torch.ones_like(x))\nax.plot(x.detach().numpy(), x.grad.detach().numpy(), label='dz/dx')\nplt.legend();",
"_____no_output_____"
]
],
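A complementary, editor-added sketch: `torch.autograd.grad` returns the same chain-rule derivatives directly instead of accumulating them in `x.grad`, which avoids having to reset the gradient between the two `backward` calls used above.

```python
import torch

x = torch.linspace(0, 10, steps=5, requires_grad=True)
z = torch.sin(2*x)
y = z.pow(2)/2

# dy/dx is returned as a tuple and x.grad is left untouched
dydx, = torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y), retain_graph=True)
# dz/dx can be taken from the same graph because retain_graph=True kept it alive
dzdx, = torch.autograd.grad(z, x, grad_outputs=torch.ones_like(z))

print(dydx)
print(dzdx)
```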
[
[
"# Definir una red neuronal en PyTorch\n\nPyTorch nos ofrece la clase tensor y las funcionalidades de autograd\n\nEstas poderosas herramientas nos dan todo lo necesario para construir y entrenar redes neuronales artificiales\n\nPara facilitar aun más estas tareas PyTorch tiene módulos de alto nivel que implementan\n\n1. Modelo base de red neuronal: `torch.nn.Module`\n1. Distintos tipos de capas, funciones de activación y funciones de costo: [`torch.nn`](https://pytorch.org/docs/stable/nn.html)\n1. Distintos algoritmos de optimización basados en gradiente descedente: [`torch.optim`](https://pytorch.org/docs/stable/optim.html)\n\n\nUna red neuronal en PyTorch se implementa\n1. Heredando de [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#module)\n1. Especificando las funciones `__init__` y `forward`\n\nOtra opción es usar [`torch.nn.Sequential`](https://pytorch.org/docs/stable/nn.html#sequential) y especificar una lista de capas\n\n\n#### Red MLP en pytorch:\n\nHeredamos de `Module` y especificamos el constructor y la función `forward`\n\nCreamos una red de dos entradas, una capa oculta y una neurona salida\n\nLa capa `torch.nn.Linear` con parámetro $W$ y $B$ realiza la siguiente operación sobre la entrada $X$\n\n$$\nZ = WX + B\n$$\n\ncorresponde a la capa completamente conectada (*fully-connected*)",
"_____no_output_____"
]
],
[
[
"import torch\n\nclass MultiLayerPerceptron(torch.nn.Module):\n\n # Constructor: Crea las capas de la red\n def __init__(self): \n # Mandatorio: Llamar al constructor del padre:\n super(type(self), self).__init__() \n # Creamos dos capas completamente conectadas\n ?\n ?\n # Función de activación sigmoide\n ?\n \n # Forward: Conecta la entrada con la salida\n def forward(self, x):\n # Pasamos x por la primera capa y luego aplicamos función de activación\n ?\n # Pasamos el resultado por la segunda capa y lo retornamos\n return ?\n ",
"_____no_output_____"
]
],
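The markdown above also mentions `torch.nn.Sequential`; a sketch of the same two-layer architecture written that way might look as follows (editor-added; the 2-D input and single logit output mirror how the class is used later in this notebook).

```python
import torch

hidden_dim = 2
model_seq = torch.nn.Sequential(
    torch.nn.Linear(2, hidden_dim),   # fully-connected hidden layer
    torch.nn.Sigmoid(),               # hidden activation
    torch.nn.Linear(hidden_dim, 1),   # output layer, returns logits
)

x = torch.randn(4, 2)
print(model_seq(x).shape)             # torch.Size([4, 1])
```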
[
[
"Al crear una capa `Linear` de forma interna se registran los parámetros `weight` y `bias`, ambos con `requires_grad=True`\n\nInicialmente los parámetros tienen valores aleatorios",
"_____no_output_____"
]
],
[
[
"model = MultiLayerPerceptron(hidden_dim=2)\ndisplay(model.hidden.weight)\ndisplay(model.hidden.bias)",
"_____no_output_____"
]
],
[
[
"Podemos evaluar el modelo sobre un tensor ",
"_____no_output_____"
]
],
[
[
"X = 10*torch.rand(10000, 2) - 5\nY = model.forward(X)\n\nfig, ax = plt.subplots()\nX_numpy = X.detach().numpy()\nY_numpy = Y.detach().numpy()\nax.scatter(X_numpy[:, 0], X_numpy[:, 1], s=10, c=Y_numpy[:, 0], cmap=plt.cm.RdBu_r);",
"_____no_output_____"
]
],
[
[
"# Entrenamiento de redes neuronales en Pytorch\n\nPara entrenar la neurona debemos definir \n\n1. Una función de costo: La \"qué\" vamos a minimizar\n1. Un algoritmo de optimización: El \"cómo\" vamos a minimizar\n\nLas funciones de costo están implementadas [`torch.nn`](https://pytorch.org/docs/stable/nn.html#loss-functions)\n\ny tienen la siguiente sintaxis\n```python\n>>> # Definimos el criterio\n>>> criterion = torch.nn.CrossEntropyLoss(reduction, # String, opciones 'sum', 'mean' o 'none'\n weight=None, # Ponderación opcional para las clases\n ...\n )\n>>> # Lo usamos para calcular la calidad de la predicción\n>>> loss = criterion(output, # Minibatch de datos\n target # Minibatch de etiquetas\n ) \n```\n\nLas funciones de costo más comunes son\n\n| Problema | Recibe | Etiqueta | Función de costo | \n|----| ----| ---- | ---- |\n|Regresión | Real | Real | `torch.nn.MSELoss()` | \n|Clasificación | [0, 1] | {0, 1} | `torch.nn.BCELoss()` | \n|Clasificación | Real | {0, 1} | `torch.nn.BCEWithLogitsLoss()` |\n|Clasificación (multiclase) | LogSoftmax | Entero |`torch.nn.LogLoss()` |\n|Clasificación (multiclase) | Real | Entero |`torch.nn.CrossEntropyLoss()` |\n\n\n> También puedes escribir tu mismo una función de costo usando aritmética y funciones como `torch.sum` o `torch.mean`\n\nLos algoritmos de optimización están implementados en el módulo [`torch.optim`](https://pytorch.org/docs/stable/optim.html?highlight=optim#module-torch.optim)\n\nLa sintaxis general es\n\n```python\n>>> # Definimos un modelo de torch\n>>> model = ClaseTorchModule(...) \n>>> # Creamos el optimizador y le entregamos los parámetros que queremos que actualice\n>>> optimizer = torch.optim.SGD(model.parameters(), # Los parámetros del modelo\n lr=0.01, # La tasa de aprendizaje\n weight_decay=0, # Penalización L2 de los parámetros (lambda)\n momentum=0.9, # Otros parámetros específicos de cada optimizer\n ...\n )\n```\nLos optimizadores más comunes son\n\n| Optimizador | Descripción |\n| ---- | ---- |\n| `torch.optim.SGD` | Gradiente descedente estocástico con momentum|\n| `torch.optim.Adam` | Gradiente descedente con tasa de aprendizaje y momentum adaptivo |",
"_____no_output_____"
]
],
[
[
"# Función de costo entropía cruzada binaria\ncriterion = torch.nn.BCEWithLogitsLoss(reduction='sum')\n\n# Algoritmo de optimización Gradiente Descendente Estocástico\noptimizer = torch.optim.SGD(model.parameters(), lr=1e-2)",
"_____no_output_____"
]
],
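For the multiclass rows of the loss table above, a small editor-added check: `CrossEntropyLoss` applied to raw scores is equivalent to `LogSoftmax` followed by `NLLLoss` on integer labels.

```python
import torch

logits = torch.randn(8, 3)                 # raw scores for 3 classes
targets = torch.randint(0, 3, (8,))        # integer labels in {0, 1, 2}

ce = torch.nn.CrossEntropyLoss()
nll = torch.nn.NLLLoss()
log_softmax = torch.nn.LogSoftmax(dim=1)

loss_a = ce(logits, targets)
loss_b = nll(log_softmax(logits), targets)
print(torch.allclose(loss_a, loss_b))      # True
```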
[
[
"### Esquema general de entrenamiento\n\n\n\n``` python\n>>> for epoch in range(num_epochs): # Durante un cierto número de épocas\n for minibatch in data: # Para cada minibatch de datos\n optimizer.zero_grad() # Limpiamos los gradientes\n x, y = minibatch # Desempaquetamos\n yhat = model.forward(x) # Predecimos\n loss = criterion(yhat, y) # Evaluamos\n loss.backward() # Calculamos los gradientes\n optimizer.step() # Actualizamos los parámetros\n```\n\n- Una época es una presentación completa del conjunto de entrenamiento\n- Un minibatch es un subconjunto del conjunto de entrenamiento",
"_____no_output_____"
],
[
"### Entrenando con un solo dato\n\nDigamos que tenemos un dato $X$ y una etiqueta $Y$\n\n> Calculemos el error y actualizemos nuestro modelo",
"_____no_output_____"
]
],
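To make the generic training scheme above concrete, here is a tiny self-contained run on synthetic data (an editor-added sketch; the random data, sizes and learning rate are arbitrary choices, not values from the notebook).

```python
import torch

# Synthetic binary-classification data: label is 1 when the first feature is positive
X = torch.randn(64, 2)
Y = (X[:, :1] > 0).float()

model = torch.nn.Linear(2, 1)
criterion = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(5):                     # a handful of epochs is enough to see the loss drop
    optimizer.zero_grad()                  # clear gradients from the previous step
    loss = criterion(model(X), Y)          # forward pass + evaluation
    loss.backward()                        # compute gradients
    optimizer.step()                       # update parameters
    print(epoch, loss.item())
```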
[
[
"X = torch.tensor([[-1.0, 1.0]])\nY = torch.tensor([[0.]])\n\nhatY = model.forward(X)\ndisplay(hatY)\nloss = criterion(hatY, Y)\ndisplay(loss)",
"_____no_output_____"
]
],
[
[
"Una vez que calculamos la loss podemos calcular el gradiente",
"_____no_output_____"
]
],
[
[
"loss.backward()\ndisplay(model.output.weight.grad)\ndisplay(model.output.bias.grad)",
"_____no_output_____"
]
],
[
[
"Finalmente actualizamos los parámetros usando la función `step` de nuestro optimizador",
"_____no_output_____"
]
],
[
[
"# Resetea los gradientes\ndisplay(model.output.weight)\ndisplay(model.output.bias)\noptimizer.step()\ndisplay(model.output.weight)\ndisplay(model.output.bias)",
"_____no_output_____"
]
],
[
[
"Repetimos este proceso a través de varias \"épocas\" de entrenamiento",
"_____no_output_____"
]
],
[
[
"for nepoch in range(10):\n # Calculamos la salida del modelo\n hatY = model.forward(X)\n # Reseteamos los gradientes de la iteración anterior\n optimizer.zero_grad()\n # Calculamos la función de costo\n loss = criterion(hatY, Y)\n # Calculamos su gradiente\n loss.backward()\n # Actualizamos los parámetros\n optimizer.step()\n print(\"%d w:%f %f b:%f\" %(nepoch, model.output.weight[0, 0], model.output.weight[0, 1], model.output.bias))",
"_____no_output_____"
]
],
[
[
"### Entrenando en un conjunto de datos\n\nConsideremos un conjunto de entrenamiento con datos bidimensionales y dos clases como el siguiente\n\n\n\n> Notemos que no es linealmente separable",
"_____no_output_____"
]
],
[
[
"import sklearn.datasets\ndata, labels = sklearn.datasets.make_circles(n_samples=1000, noise=0.2, factor=0.25)\n#data, labels = sklearn.datasets.make_moons(n_samples=1000, noise=0.2)\n#data, labels = sklearn.datasets.make_blobs(n_samples=[250]*4, n_features=2, cluster_std=0.5,\n# centers=np.array([[-1, 1], [1, 1], [-1, -1], [1, -1]]))\n#labels[labels==2] = 1; labels[labels==3] = 0;\n\nfig, ax = plt.subplots(figsize=(8, 4))\nfor k, marker in enumerate(['x', 'o']):\n ax.scatter(data[labels==k, 0], data[labels==k, 1], s=20, marker=marker, alpha=0.75)\n \n# Para las gráficas\nx_min, x_max = data[:, 0].min() - 0.5, data[:, 0].max() + 0.5\ny_min, y_max = data[:, 1].min() - 0.5, data[:, 1].max() + 0.5\nxx, yy = np.meshgrid(np.arange(x_min, x_max, 0.05), np.arange(y_min, y_max, 0.05))",
"_____no_output_____"
]
],
[
[
"1. Antes de empezar el entrenamiento convertimos los datos a formato tensor de PyTorch\n1. Luego presentamos los datos en *mini-batches* a la red neuronal en cada época del entrenamiento\n\nPyTorch provee las clases `DataSet` y `DataLoader` para lograr estos objetivos\n\nEstas clases son parte del módulo data: https://pytorch.org/docs/stable/data.html\n\n\nCrearemos un set a partir de tensores usando una clase que hereda de `DataSet`\n\n```python\n torch.utils.data.TensorDataset(*tensors # Una secuencia de tensores\n )\n``` \nLuego crearemos conjuntos de entrenamiento y validación usando \n\n```python\n torch.utils.data.Subset(dataset, # Un objeto que herede de DataSet\n indices # Un conjunto de índices para seleccionar un subconjunto de dataset\n )\n``` \nFinalmente crearemos dataloaders usando\n\n```python\n torch.utils.data.DataLoader(dataset, # Un objeto que herede de DataSet\n batch_size=1, # Tamaño del minibatch \n shuffle=False, # Entregar los minibatches desordenados\n sampler=None, # O especificar una muestreador customizado\n num_workers=0, # Cuantos nucleos de CPU usar\n ...\n )\n```\n",
"_____no_output_____"
]
],
[
[
"import sklearn.model_selection\n# Separamos el data set en entrenamiento y validación\ntrain_idx, valid_idx = next(sklearn.model_selection.ShuffleSplit(train_size=0.6).split(data, labels))\n\n\n# Crear conjuntos de entrenamiento y prueba\nfrom torch.utils.data import DataLoader, TensorDataset, Subset \n\n# Creamos un conjunto de datos en formato tensor\ntorch_set = TensorDataset(torch.from_numpy(data.astype('float32')), \n torch.from_numpy(labels.astype('float32')))\n\n# Data loader de entrenamiento\ntorch_train_loader = DataLoader(Subset(torch_set, train_idx), shuffle=True, batch_size=32)\n# Data loader de validación\ntorch_valid_loader = DataLoader(Subset(torch_set, valid_idx), shuffle=False, batch_size=256)",
"_____no_output_____"
]
],
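The split above relies on scikit-learn's `ShuffleSplit` plus `Subset`; as an editor-added alternative, `torch.utils.data.random_split` gives a similar 60/40 partition using PyTorch alone (the toy tensors below merely stand in for the notebook's `data` and `labels`).

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# Placeholder tensors with the same shapes as the notebook's data and labels
data_t = torch.randn(1000, 2)
labels_t = (torch.rand(1000) > 0.5).float()

full_set = TensorDataset(data_t, labels_t)
n_train = int(0.6 * len(full_set))
train_set, valid_set = random_split(full_set, [n_train, len(full_set) - n_train])

train_loader = DataLoader(train_set, shuffle=True, batch_size=32)
valid_loader = DataLoader(valid_set, shuffle=False, batch_size=256)
print(len(train_set), len(valid_set))
```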
[
[
"Los `DataLoader` se ocupan como iteradores de Python",
"_____no_output_____"
]
],
[
[
"for sample_data, sample_label in torch_train_loader:\n display(sample_data.shape)\n display(sample_label.shape)\n display(sample_label)\n break",
"_____no_output_____"
]
],
[
[
"#### Recordemos\n\nPara cada dato de entrenamiento:\n- Calculamos gradientes: `loss.backward`\n- Actualizamos parámetros `optimizer.step`\n\nPara cada dato de validación\n- Evaluamos la *loss* para detectar sobre-ajuste\n\n\n\n#### ¿Cuándo nos detenemos?\n\nLo ideal es detener el entrenamiento cuando la loss de validación no haya disminuido durante una cierta cantidad de épocas\n\nPodemos usar [`save`](https://pytorch.org/tutorials/beginner/saving_loading_models.html) para ir guardando los parámetros del mejor modelo de validación\n\nUsamos un número fijo de épocas como resguardo: Si el modelo no ha convergido entonces debemos incrementarlo\n\n#### ¿Cómo afecta el resultado el número de neuronas en la capa oculta?",
"_____no_output_____"
]
],
[
[
"model = MultiLayerPerceptron(hidden_dim=3)\ncriterion = torch.nn.BCEWithLogitsLoss(reduction='sum')\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-2)\nn_epochs = 200\nrunning_loss = np.zeros(shape=(n_epochs, 2))\n\nbest_valid = np.inf",
"_____no_output_____"
],
[
"def train_one_epoch(k, model, criterion, optimizer):\n global best_valid\n train_loss, valid_loss = 0.0, 0.0\n \n # Loop de entrenamiento\n for sample_data, sample_label in torch_train_loader:\n output = model.forward(sample_data)\n optimizer.zero_grad() \n loss = criterion(output, sample_label.unsqueeze(1)) \n train_loss += loss.item()\n loss.backward()\n optimizer.step()\n \n # Loop de validación\n for sample_data, sample_label in torch_valid_loader:\n output = model.forward(sample_data)\n loss = criterion(output, sample_label.unsqueeze(1)) \n valid_loss += loss.item()\n \n # Guardar modelo si es el mejor hasta ahora\n if k % 10 == 0:\n if valid_loss < best_valid:\n best_valid = valid_loss\n torch.save({'epoca': k,\n 'model_state_dict': model.state_dict(),\n 'optimizer_state_dict': optimizer.state_dict(),\n 'loss': valid_loss}, '/home/phuijse/modelos/best_model.pt')\n \n return train_loss/torch_train_loader.dataset.__len__(), valid_loss/torch_valid_loader.dataset.__len__()",
"_____no_output_____"
],
[
"def update_plot(k):\n global model, running_loss\n [ax_.cla() for ax_ in ax]\n running_loss[k, 0], running_loss[k, 1] = train_one_epoch(k, model, criterion, optimizer)\n Z = model.forward(torch.from_numpy(np.c_[xx.ravel(), yy.ravel()].astype('float32')))\n Z = Z.detach().numpy().reshape(xx.shape)\n ax[0].contourf(xx, yy, Z, cmap=plt.cm.RdBu_r, alpha=1., vmin=0, vmax=1)\n for i, (marker, name) in enumerate(zip(['o', 'x'], ['Train', 'Test'])):\n ax[0].scatter(data[labels==i, 0], data[labels==i, 1], color='k', s=10, marker=marker, alpha=0.5)\n ax[1].plot(np.arange(0, k+1, step=1), running_loss[:k+1, i], '-', label=name+\" cost\")\n plt.legend(); ax[1].grid()\n\nfig, ax = plt.subplots(1, 2, figsize=(8, 3.5), tight_layout=True)\nupdate_plot(0)\nanim = animation.FuncAnimation(fig, update_plot, frames=n_epochs, \n interval=10, repeat=False, blit=False)",
"_____no_output_____"
]
],
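The markdown above recommends stopping once the validation loss has stopped improving, but `train_one_epoch` only saves the best checkpoint. Below is a hedged sketch of a patience counter built on top of it; it reuses the notebook's `model`, `criterion`, `optimizer`, `n_epochs` and `train_one_epoch`, and the `patience` value is an arbitrary assumption by the editor.

```python
# Early stopping on top of the notebook's training helper (assumes the objects above exist)
patience = 20                        # epochs tolerated without validation improvement
best_valid_loss = float('inf')
epochs_without_improvement = 0

for epoch in range(n_epochs):
    train_loss, valid_loss = train_one_epoch(epoch, model, criterion, optimizer)
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        epochs_without_improvement = 0
    else:
        epochs_without_improvement += 1
    if epochs_without_improvement >= patience:
        print(f"Early stopping at epoch {epoch}")
        break
```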
[
[
"Neuronas de capa oculta:",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(1, model.hidden.out_features, figsize=(8, 3), tight_layout=True)\n\nZ = model.hidden(torch.from_numpy(np.c_[xx.ravel(), yy.ravel()].astype('float32'))).detach().numpy()\nZ = 1/(1+np.exp(-Z))\nfor i in range(model.hidden.out_features):\n ax[i].contourf(xx, yy, Z[:, i].reshape(xx.shape), \n cmap=plt.cm.RdBu_r, alpha=1., vmin=np.amin(Z), vmax=np.amax(Z))",
"_____no_output_____"
]
],
[
[
"#### Recuperando el mejor modelo",
"_____no_output_____"
]
],
[
[
"model = MultiLayerPerceptron(hidden_dim=3)\n\nprint(\"state_dict del módelo:\")\nfor param_tensor in model.state_dict():\n print(param_tensor, \"\\t\", model.state_dict()[param_tensor].size())\n print(param_tensor, \"\\t\", model.state_dict()[param_tensor])\n\n\n \nmodel.load_state_dict(torch.load('/home/phuijse/modelos/best_model.pt')['model_state_dict'])\n\nprint(\" \")\nprint(\"state_dict del módelo recuperado:\")\nfor param_tensor in model.state_dict():\n print(param_tensor, \"\\t\", model.state_dict()[param_tensor].size())\n print(param_tensor, \"\\t\", model.state_dict()[param_tensor])\n\n",
"_____no_output_____"
]
],
[
[
"## Diagnósticos a partir de curvas de aprendizaje\n\nPodemos diagnosticar el entrenamiento observando la evolución de la loss \n\nSiempre visualiza la loss en ambos conjuntos: entrenamiento y validación\n\nAlgunos ejemplos\n\n#### Ambas curvas en descenso\n\n- Entrena por más épocas",
"_____no_output_____"
]
],
[
[
"epochs = np.arange(1, 500)\nloss_train = (epochs)**(-1/10) + 0.01*np.random.randn(len(epochs))\nloss_valid = (epochs)**(-1/10) + 0.01*np.random.randn(len(epochs)) + 0.1\nfig, ax = plt.subplots(figsize=(6, 3))\nax.plot(epochs, loss_train, lw=2, label='entrenamiento')\nax.plot(epochs, loss_valid, lw=2, label='validación')\nax.set_ylim([0.5, 1.05])\nplt.legend();",
"_____no_output_____"
]
],
[
[
"#### Muy poca diferencia entre error de entrenamiento y validación\n\n- Entrena por más épocas\n- Usa un modelo más complejo",
"_____no_output_____"
]
],
[
[
"epochs = np.arange(1, 500)\nloss_train = (epochs)**(-1/10) + 0.01*np.random.randn(len(epochs))\nloss_valid = (epochs)**(-1/10) + 0.01*np.random.randn(len(epochs)) + 0.01\nfig, ax = plt.subplots(figsize=(6, 3))\nax.plot(epochs, loss_train, lw=2, label='entrenamiento')\nax.plot(epochs, loss_valid, lw=2, label='validación')\nax.set_ylim([0.5, 1.05])\nplt.legend();",
"_____no_output_____"
]
],
[
[
"#### Sobreajuste temprano\n\n- Usa un modelo más sencillo\n- Usa más datos (aumentación)\n- Usa regularización (dropout, L2)",
"_____no_output_____"
]
],
[
[
"epochs = np.arange(1, 500)\nloss_train = (epochs)**(-1/10) + 0.01*np.random.randn(len(epochs))\nloss_valid = (epochs)**(-1/10) + 0.00001*(epochs)**2 +0.01*np.random.randn(len(epochs)) + 0.01\nfig, ax = plt.subplots(figsize=(6, 3))\nax.plot(epochs, loss_train, lw=2, label='entrenamiento')\nax.plot(epochs, loss_valid, lw=2, label='validación')\nax.set_ylim([0.5, 1.05])\nplt.legend();",
"_____no_output_____"
]
],
[
[
"#### Error en el código o mal punto de partida\n\n- Revisa que tu código no tenga bugs\n - Función de costo\n - Optimizador\n- Mala inicialización, reinicia el entrenamiento",
"_____no_output_____"
]
],
[
[
"epochs = np.arange(1, 500)\nloss_train = 1.0 + 0.01*np.random.randn(len(epochs))\nloss_valid = 1.0 + 0.01*np.random.randn(len(epochs)) + 0.01\nfig, ax = plt.subplots(figsize=(6, 3))\nax.plot(epochs, loss_train, lw=2, label='entrenamiento')\nax.plot(epochs, loss_valid, lw=2, label='validación')\n#ax.set_ylim([0.5, 1.05])\nplt.legend();",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
eca0131c8a0c15585e0333ed3d838983a0b0aaab | 173,522 | ipynb | Jupyter Notebook | notebooks/cats_vs_dogs/model_from_scratch.ipynb | ephes/data_science_tutorial | 16cc55e8ff74934d4fe4d5bd78259c5b33d5b5ae | [
"MIT"
] | 6 | 2019-05-18T18:48:58.000Z | 2021-12-28T15:58:16.000Z | notebooks/cats_vs_dogs/model_from_scratch.ipynb | ephes/data_science_tutorial | 16cc55e8ff74934d4fe4d5bd78259c5b33d5b5ae | [
"MIT"
] | null | null | null | notebooks/cats_vs_dogs/model_from_scratch.ipynb | ephes/data_science_tutorial | 16cc55e8ff74934d4fe4d5bd78259c5b33d5b5ae | [
"MIT"
] | 3 | 2020-04-13T09:47:46.000Z | 2020-09-04T08:50:17.000Z | 377.221739 | 145,952 | 0.907793 | [
[
[
"%config InlineBackend.figure_format = 'retina'",
"_____no_output_____"
],
[
"# import os\n# os.environ[\"KERAS_BACKEND\"] = \"plaidml.keras.backend\"\n# del(os.environ[\"KERAS_BACKEND\"])\n# print(os.environ[\"KERAS_BACKEND\"])",
"_____no_output_____"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"# Set up directories",
"_____no_output_____"
]
],
[
[
"from pathlib import Path\ndata_root = Path.home() / \"data\" / \"tmp\"\nsample_dir = data_root / \"cats_vs_dogs_sample\"",
"_____no_output_____"
]
],
[
[
"# Train model from scratch",
"_____no_output_____"
],
[
"## Create model",
"_____no_output_____"
]
],
[
[
"from keras import layers\nfrom keras import models\n\n\nmodel = models.Sequential()\nmodel.add(layers.Conv2D(32, (3, 3), activation='relu',\n input_shape=(150, 150, 3)))\nmodel.add(layers.MaxPooling2D((2, 2)))\nmodel.add(layers.Conv2D(64, (3, 3), activation='relu'))\nmodel.add(layers.MaxPooling2D((2, 2)))\nmodel.add(layers.Conv2D(128, (3, 3), activation='relu'))\nmodel.add(layers.MaxPooling2D((2, 2)))\nmodel.add(layers.Conv2D(128, (3, 3), activation='relu'))\nmodel.add(layers.MaxPooling2D((2, 2)))\nmodel.add(layers.Flatten())\nmodel.add(layers.Dense(512, activation='relu'))\nmodel.add(layers.Dense(1, activation='sigmoid'))\n\n#from keras.utils import multi_gpu_model\n#model = multi_gpu_model(model, gpus=2)",
"Using plaidml.keras.backend backend.\nINFO:plaidml:Opening device \"metal_amd_radeon_pro_5500m.0\"\n"
],
[
"model.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_1 (Conv2D) (None, 148, 148, 32) 896 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 74, 74, 32) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 72, 72, 64) 18496 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 36, 36, 64) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 34, 34, 128) 73856 \n_________________________________________________________________\nmax_pooling2d_3 (MaxPooling2 (None, 17, 17, 128) 0 \n_________________________________________________________________\nconv2d_4 (Conv2D) (None, 15, 15, 128) 147584 \n_________________________________________________________________\nmax_pooling2d_4 (MaxPooling2 (None, 7, 7, 128) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 6272) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 512) 3211776 \n_________________________________________________________________\ndense_2 (Dense) (None, 1) 513 \n=================================================================\nTotal params: 3,453,121\nTrainable params: 3,453,121\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"from keras import optimizers\n\nmodel.compile(loss='binary_crossentropy',\n optimizer=optimizers.RMSprop(lr=1e-4),\n metrics=['acc'])",
"_____no_output_____"
]
],
[
[
"## Create training data generator",
"_____no_output_____"
]
],
[
[
"from keras.preprocessing.image import ImageDataGenerator\n\n# All images will be rescaled by 1./255\ntrain_datagen = ImageDataGenerator(rescale=1./255)\ntest_datagen = ImageDataGenerator(rescale=1./255)\n\ntrain_generator = train_datagen.flow_from_directory(\n # This is the target directory\n str(sample_dir / \"train\"),\n # All images will be resized to 150x150\n target_size=(150, 150),\n batch_size=20,\n # Since we use binary_crossentropy loss, we need binary labels\n class_mode='binary')\n\nvalidation_generator = test_datagen.flow_from_directory(\n str(sample_dir / \"validation\"),\n target_size=(150, 150),\n batch_size=20,\n class_mode='binary')",
"Found 2000 images belonging to 2 classes.\nFound 1000 images belonging to 2 classes.\n"
]
],
[
[
"## Fit model",
"_____no_output_____"
]
],
[
[
"%%time\nhistory = model.fit_generator(\n train_generator,\n steps_per_epoch=100,\n epochs=30,\n validation_data=validation_generator,\n validation_steps=50)",
"WARNING:tensorflow:From <timed exec>:1: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use Model.fit, which supports generators.\nEpoch 1/30\n100/100 [==============================] - 25s 249ms/step - loss: 0.6901 - acc: 0.5190 - val_loss: 0.6783 - val_acc: 0.5040\nEpoch 2/30\n100/100 [==============================] - 25s 254ms/step - loss: 0.6475 - acc: 0.6170 - val_loss: 0.6270 - val_acc: 0.6580\nEpoch 3/30\n100/100 [==============================] - 26s 259ms/step - loss: 0.5940 - acc: 0.6790 - val_loss: 0.6057 - val_acc: 0.6610\nEpoch 4/30\n100/100 [==============================] - 27s 267ms/step - loss: 0.5692 - acc: 0.6965 - val_loss: 0.6255 - val_acc: 0.6450\nEpoch 5/30\n100/100 [==============================] - 27s 268ms/step - loss: 0.5399 - acc: 0.7225 - val_loss: 0.5920 - val_acc: 0.6730\nEpoch 6/30\n100/100 [==============================] - 27s 265ms/step - loss: 0.5141 - acc: 0.7380 - val_loss: 0.5840 - val_acc: 0.6770\nEpoch 7/30\n100/100 [==============================] - 27s 272ms/step - loss: 0.4843 - acc: 0.7710 - val_loss: 0.6777 - val_acc: 0.6230\nEpoch 8/30\n100/100 [==============================] - 27s 271ms/step - loss: 0.4720 - acc: 0.7670 - val_loss: 0.6074 - val_acc: 0.6960\nEpoch 9/30\n100/100 [==============================] - 26s 261ms/step - loss: 0.4387 - acc: 0.8005 - val_loss: 0.5916 - val_acc: 0.6960\nEpoch 10/30\n100/100 [==============================] - 27s 266ms/step - loss: 0.4153 - acc: 0.8120 - val_loss: 0.5744 - val_acc: 0.6970\nEpoch 11/30\n100/100 [==============================] - 27s 267ms/step - loss: 0.3969 - acc: 0.8190 - val_loss: 0.5969 - val_acc: 0.6940\nEpoch 12/30\n100/100 [==============================] - 27s 271ms/step - loss: 0.3687 - acc: 0.8405 - val_loss: 0.5674 - val_acc: 0.7150\nEpoch 13/30\n100/100 [==============================] - 27s 269ms/step - loss: 0.3457 - acc: 0.8595 - val_loss: 0.5948 - val_acc: 0.7060\nEpoch 14/30\n100/100 [==============================] - 28s 279ms/step - loss: 0.3237 - acc: 0.8715 - val_loss: 0.5981 - val_acc: 0.7060\nEpoch 15/30\n100/100 [==============================] - 27s 269ms/step - loss: 0.2971 - acc: 0.8770 - val_loss: 0.6218 - val_acc: 0.7080\nEpoch 16/30\n100/100 [==============================] - 27s 271ms/step - loss: 0.2801 - acc: 0.8760 - val_loss: 0.5893 - val_acc: 0.7230\nEpoch 17/30\n100/100 [==============================] - 27s 272ms/step - loss: 0.2583 - acc: 0.8935 - val_loss: 0.6282 - val_acc: 0.7330\nEpoch 18/30\n100/100 [==============================] - 27s 269ms/step - loss: 0.2391 - acc: 0.9040 - val_loss: 0.6258 - val_acc: 0.7280\nEpoch 19/30\n100/100 [==============================] - 27s 269ms/step - loss: 0.2195 - acc: 0.9195 - val_loss: 0.6333 - val_acc: 0.7280\nEpoch 20/30\n100/100 [==============================] - 27s 269ms/step - loss: 0.1903 - acc: 0.9320 - val_loss: 0.6951 - val_acc: 0.7230\nEpoch 21/30\n100/100 [==============================] - 27s 270ms/step - loss: 0.1811 - acc: 0.9360 - val_loss: 0.6552 - val_acc: 0.7210\nEpoch 22/30\n100/100 [==============================] - 28s 278ms/step - loss: 0.1568 - acc: 0.9480 - val_loss: 0.7465 - val_acc: 0.7160\nEpoch 23/30\n100/100 [==============================] - 27s 273ms/step - loss: 0.1421 - acc: 0.9490 - val_loss: 0.6936 - val_acc: 0.7270\nEpoch 24/30\n100/100 [==============================] - 27s 273ms/step - loss: 0.1196 - acc: 0.9615 - val_loss: 0.8914 - val_acc: 0.7130\nEpoch 
25/30\n100/100 [==============================] - 28s 276ms/step - loss: 0.1064 - acc: 0.9660 - val_loss: 0.7572 - val_acc: 0.7310\nEpoch 26/30\n100/100 [==============================] - 28s 276ms/step - loss: 0.0960 - acc: 0.9705 - val_loss: 0.8235 - val_acc: 0.7220\nEpoch 27/30\n100/100 [==============================] - 27s 269ms/step - loss: 0.0806 - acc: 0.9775 - val_loss: 0.8910 - val_acc: 0.7190\nEpoch 28/30\n100/100 [==============================] - 28s 275ms/step - loss: 0.0639 - acc: 0.9845 - val_loss: 0.9062 - val_acc: 0.7240\nEpoch 29/30\n100/100 [==============================] - 27s 274ms/step - loss: 0.0618 - acc: 0.9830 - val_loss: 0.8904 - val_acc: 0.7260\nEpoch 30/30\n100/100 [==============================] - 27s 271ms/step - loss: 0.0520 - acc: 0.9825 - val_loss: 1.0649 - val_acc: 0.7100\nCPU times: user 1h 42min 35s, sys: 10min 48s, total: 1h 53min 24s\nWall time: 13min 37s\n"
],
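The warning above notes that `fit_generator` is deprecated in favour of `Model.fit`, which accepts generators directly. A minimal sketch of the equivalent call (an illustration, not part of the recorded run, assuming the same `model`, `train_generator` and `validation_generator` as above):

```python
# Equivalent training call with the non-deprecated API (TF 2.x Model.fit
# accepts data generators directly, with the same step/epoch arguments).
history = model.fit(
    train_generator,
    steps_per_epoch=100,
    epochs=30,
    validation_data=validation_generator,
    validation_steps=50)
```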
[
"%%time\nhistory = model.fit_generator(\n train_generator,\n steps_per_epoch=100,\n epochs=30,\n validation_data=validation_generator,\n validation_steps=50)",
"Epoch 1/30\n100/100 [==============================] - 14s 136ms/step - loss: 0.6865 - acc: 0.5585 - val_loss: 0.6944 - val_acc: 0.5230\nEpoch 2/30\n100/100 [==============================] - 13s 128ms/step - loss: 0.6533 - acc: 0.6220 - val_loss: 0.6273 - val_acc: 0.6520\nEpoch 3/30\n100/100 [==============================] - 13s 126ms/step - loss: 0.5953 - acc: 0.6810 - val_loss: 0.6368 - val_acc: 0.6300\nEpoch 4/30\n 28/100 [=======>......................] - ETA: 7s - loss: 0.5739 - acc: 0.6857"
],
[
"import json\n\nmodels_dir = data_root / \"models\" \nmodels_dir.mkdir(exist_ok=True)\nmodel.save(str(models_dir / \"cats_and_dogs_small_from_scratch.h5\"))\n\nhistory_path = models_dir / \"cats_and_dogs_small_from_scratch_history.json\"\nwith open(str(history_path), \"w\") as f:\n json.dump(history.history, f)",
"_____no_output_____"
],
[
"history = json.load(open(str(history_path)))",
"_____no_output_____"
]
],
[
[
"## Plot training vs test accuracy",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\nacc = history['acc']\nval_acc = history['val_acc']\nloss = history['loss']\nval_loss = history['val_loss']\n\nepochs = range(len(acc))\n\nplt.figure(figsize=(15, 20))\n\nplt.subplot(211)\n\nplt.plot(epochs, acc, 'bo', label='Training acc')\nplt.plot(epochs, val_acc, 'b', label='Validation acc')\nplt.title('Training and validation accuracy')\nplt.legend()\n\nplt.subplot(212)\n\nplt.plot(epochs, loss, 'bo', label='Training loss')\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and validation loss')\nplt.legend()\n\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
eca01afb8adf5449ca917235f3693a7f779a499d | 34,382 | ipynb | Jupyter Notebook | examples/notebooks/unicycle_towards_origin.ipynb | julesser/crocoddyl | 67aec60a305621129f390f9bee387d3a00a56437 | [
"BSD-3-Clause"
] | 2 | 2020-09-23T12:57:29.000Z | 2021-05-30T16:27:14.000Z | examples/notebooks/unicycle_towards_origin.ipynb | julesser/crocoddyl | 67aec60a305621129f390f9bee387d3a00a56437 | [
"BSD-3-Clause"
] | null | null | null | examples/notebooks/unicycle_towards_origin.ipynb | julesser/crocoddyl | 67aec60a305621129f390f9bee387d3a00a56437 | [
"BSD-3-Clause"
] | 1 | 2021-03-26T14:31:17.000Z | 2021-03-26T14:31:17.000Z | 113.847682 | 10,852 | 0.875662 | [
[
[
"# Starting example: the unicycle\n\n\n\nAn unicycle represents a kinematic model of a car where it's only possible to move in two directions, i.e. it drives forward and turns on the spot. Its dynamics has nonholonomic constraints because it cannot move sideways. Remember that nonholonomic constraints are nonintegral and has the form $\\mathbf{f(q,\\dot{q})=0}$.\n\nIn this example, we define an optimal-control problem for the classical unicycle problem. Our goal is to drive the unicycle towards the origin but at the same time not too fast. For that, the cost function is described as the sum between the distance to the origin and the system speed.\n\n\n**Issue on Ubuntu 16.04**\n\nYou need to upgrate the matplotlib library, please do:\n - pip install --upgrade --user pip\n - pip install --upgrade --user matplotlib\n\nBasically, our optimal control problem has the following dynamical model and cost function:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nx = np.random.rand(3) # 3 DOF [x,y,theta]\nu = np.random.rand(2) # 2 Actuators [thrust, steering]\n\n# Unicycle dynamical model\nv, w = u\nc, s = np.cos(x[2]), np.sin(x[2])\ndt = 1e-2\ndx = np.array([v * c, v * s, w])\nxnext = x + dx * dt\n\n# Cost function: driving to origin (state) and reducing speed (control)\nstateWeight = 1 # penalizing bad performance\nctrlWeight = 1 # penalizing high actuator effort\ncostResiduals = np.concatenate([stateWeight*x,ctrlWeight*u])\ncost = .5* sum(costResiduals**2)",
"_____no_output_____"
]
],
[
[
"For this basic example, the unicycle model is coded in the library. We will just load it and use it. If you are very curious, have a look! It is in `crocoddyl/unicycle.py`. ",
"_____no_output_____"
],
[
"We create such a model with the following lines:",
"_____no_output_____"
]
],
[
[
"import crocoddyl\nmodel = crocoddyl.ActionModelUnicycle()\ndata = model.createData()",
"_____no_output_____"
]
],
[
[
"The action model contains ... well ... the description of the dynamics and cost function. There you find also the action model parameters (here the time step and the cost weights). On the other hand, the data has the buffers where the results of the calculus are stored.\n\nWe decided for this separation for an obvious reason that is given just below.",
"_____no_output_____"
]
],
[
[
"model.costWeights = np.matrix([\n 3, # state weight\n 1 # control weight\n]).T",
"_____no_output_____"
]
],
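A minimal usage sketch of the model/data split described above (an illustration, not part of the original notebook; it assumes the standard crocoddyl Python API in which `calc()` fills the data buffers with the cost and the next state):

```python
# Evaluate the unicycle action model once: the model holds the dynamics,
# cost and parameters, while the data object stores the computed results.
import numpy as np
import crocoddyl

demo_model = crocoddyl.ActionModelUnicycle()
demo_data = demo_model.createData()

x = np.matrix([1., 0., 0.]).T   # state: x, y, theta
u = np.matrix([.5, .1]).T       # control: driving speed, turning rate
demo_model.calc(demo_data, x, u)  # fills the buffers in demo_data
print(demo_data.cost)             # scalar cost for this (x, u)
print(demo_data.xnext)            # next state after one time step
```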
[
[
"**You can further understand the mathematical definition of action models see introduction_to_crocoddyl.ipynb**",
"_____no_output_____"
],
[
"## I. Defining the shooting problem\nA shooting problem is defined by the initial state from which computing the rollout and a sequence of action models.\n",
"_____no_output_____"
]
],
[
[
"x0 = np.matrix([ -4, -4, 0 ]).T #x,y,theta\nT = 20\nproblem = crocoddyl.ShootingProblem(x0, [ model ] * T, model)",
"_____no_output_____"
]
],
[
[
"Here we define a problem starting from $\\mathbf{x}_0$ with 20 timesteps (of 0.1 sec by default implementation of unicycle). The terminal action model is defined using the running action model.\n\nThis defines the model, not any algorithm to solve it. The only computation that the problem can provide is to integrate the system for a given sequence of controls.",
"_____no_output_____"
]
],
[
[
"us = [np.matrix([1., 1.]).T for _ in range(T)]\nxs = problem.rollout(us)",
"_____no_output_____"
]
],
[
[
"The plotUnicycle function plots the system as two arrows that represent the wheels",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nfrom unicycle_utils import plotUnicycle\nfor x in xs: plotUnicycle(x)\nplt.axis([-5.,1.,-5.,1.])",
"_____no_output_____"
]
],
[
[
"## II. Solve the OCP\nThe main solver is named SolverDDP. It is initialized from the problem object and mostly contains the ddp.solve method. We can warm start it and tune the parameters, but for the simple unicycle, let's just solve it!",
"_____no_output_____"
]
],
[
[
"ddp = crocoddyl.SolverDDP(problem)\ndone = ddp.solve()\nassert done",
"_____no_output_____"
],
[
"plt.clf()\nfor x in ddp.xs: plotUnicycle(x)\nplt.axis([-5.,1.,-5.,1.])",
"_____no_output_____"
]
],
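A small sketch of the warm start mentioned above (an illustration only; it assumes the `solve(init_xs, init_us, maxiter)` signature used throughout the crocoddyl examples and reuses the constant-control rollout computed earlier):

```python
# Warm start a fresh solver with the rollout obtained from the constant
# controls `us`, and cap the number of iterations; CallbackVerbose prints
# per-iteration information.
ddp2 = crocoddyl.SolverDDP(problem)
ddp2.setCallbacks([crocoddyl.CallbackVerbose()])
init_xs = problem.rollout(us)            # T+1 states from the earlier rollout
init_us = us                             # the same constant controls
done = ddp2.solve(init_xs, init_us, 100)
print(done, ddp2.xs[-1])                 # convergence flag and terminal state
```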
[
[
"and the final state is:",
"_____no_output_____"
]
],
[
[
"print(ddp.xs[-1])",
"[[ 0.03379973]\n [-0.30646301]\n [ 0.02773356]]\n"
]
],
[
[
"# Well, the terminal state is not so nicely in the origin.\n\nQuestion 1: why?\n\nQuestion 2: How can you change this?\n\nQuestion 3: by changing the cost parameters, the time horizon and the initial position, can you trigger a maneuver?",
"_____no_output_____"
]
]
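One way to start exploring Question 3 (a sketch; the particular weights, horizon and initial pose are only an illustration):

```python
# Increase the state weight and the horizon, start from a different pose,
# then re-solve and inspect the terminal state.
model.costWeights = np.matrix([10., 1.]).T   # heavier penalty on the state
T = 50                                       # longer horizon
x0 = np.matrix([-1., -3., 1.]).T             # a different initial pose
problem = crocoddyl.ShootingProblem(x0, [model] * T, model)
ddp = crocoddyl.SolverDDP(problem)
ddp.solve()
print(ddp.xs[-1])                            # should now be closer to the origin
```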
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
eca0344edda5ab829a362a151a9df000f293dfd9 | 239,273 | ipynb | Jupyter Notebook | 19 - Credit Risk Modeling in Python/3_Dataset description/1_Our example: consumer loans. A first look at the dataset (3:11)/Credit%20Risk%20Modeling%20-%20Preparation%20-%20With%20Comments%20-%203-1.ipynb | olayinka04/365-data-science-courses | 7d71215432f0ef07fd3def559d793a6f1938d108 | [
"Apache-2.0"
] | null | null | null | 19 - Credit Risk Modeling in Python/3_Dataset description/1_Our example: consumer loans. A first look at the dataset (3:11)/Credit%20Risk%20Modeling%20-%20Preparation%20-%20With%20Comments%20-%203-1.ipynb | olayinka04/365-data-science-courses | 7d71215432f0ef07fd3def559d793a6f1938d108 | [
"Apache-2.0"
] | null | null | null | 19 - Credit Risk Modeling in Python/3_Dataset description/1_Our example: consumer loans. A first look at the dataset (3:11)/Credit%20Risk%20Modeling%20-%20Preparation%20-%20With%20Comments%20-%203-1.ipynb | olayinka04/365-data-science-courses | 7d71215432f0ef07fd3def559d793a6f1938d108 | [
"Apache-2.0"
] | null | null | null | 938.32549 | 136,520 | 0.789909 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
eca0464564517d44357eff59c68afd6b143d014e | 57,892 | ipynb | Jupyter Notebook | finmath/termstructure/nss_ettj_example.ipynb | gusamarante/Quantequim | 3968d9965e8e2c3b5850f1852b56c485859a9c89 | [
"MIT"
] | 296 | 2018-10-19T21:00:53.000Z | 2022-03-29T21:50:55.000Z | finmath/termstructure/nss_ettj_example.ipynb | gusamarante/Quantequim | 3968d9965e8e2c3b5850f1852b56c485859a9c89 | [
"MIT"
] | 11 | 2019-06-18T11:43:35.000Z | 2021-11-14T21:39:20.000Z | finmath/termstructure/nss_ettj_example.ipynb | gusamarante/Quantequim | 3968d9965e8e2c3b5850f1852b56c485859a9c89 | [
"MIT"
] | 102 | 2018-10-18T14:14:34.000Z | 2022-03-06T00:34:53.000Z | 285.182266 | 53,152 | 0.92657 | [
[
[
"# Brazilian bond and Nelson-Siegel-Svensson class\n## Author: Gustavo Soares",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom finmath.brazilian_bonds.government_bonds import LTN, NTNF\nfrom finmath.termstructure.curve_models import NelsonSiegelSvensson as NSS",
"_____no_output_____"
]
],
[
[
"### LTNs (zero coupon bonds)",
"_____no_output_____"
]
],
[
[
"ref_date = pd.to_datetime('2021-02-05').date()\n\nltn_expires = [\n '2021-04-01',\n '2021-07-01',\n '2021-10-01',\n '2022-01-01',\n '2022-04-01',\n '2022-07-01',\n '2022-10-01',\n '2023-01-01',\n '2023-07-01',\n '2024-01-01',\n '2024-07-01',\n]\n\nltn_yields = [\n 0.020580,\n 0.023885,\n 0.029904,\n 0.034463,\n 0.040148,\n 0.044847,\n 0.049137,\n 0.052500,\n 0.057519,\n 0.061150,\n 0.064247,\n]\n\nltn_prices = []\nltn_cash_flows = []\nfor T, y in zip(ltn_expires, ltn_yields):\n ltn = LTN(expiry=T, rate=y, ref_date=ref_date)\n ltn_prices += [ltn.price]\n ltn_cash_flows += [pd.Series(index=[pd.to_datetime(T)], data=[ltn.principal])]",
"_____no_output_____"
]
],
[
[
"### NTNFs (coupon paying bonds)",
"_____no_output_____"
]
],
[
[
"ntnf_expires = [\n '2023-01-01',\n '2025-01-01',\n '2027-01-01',\n '2029-01-01',\n '2031-01-01',\n]\n\nntnf_yields = [\n 0.05113,\n 0.06215,\n 0.06869,\n 0.07317,\n 0.07639,\n]\n\nntnf_prices = []\nntnf_cash_flows = []\nfor T, y in zip(ntnf_expires, ntnf_yields):\n ntnf = NTNF(expiry=T, rate=y, ref_date=ref_date)\n ntnf_prices += [ntnf.price]\n ntnf_cash_flows += [ntnf.cash_flows]",
"_____no_output_____"
]
],
[
[
"### Nelson-Siegel-Svensson model estimation",
"_____no_output_____"
]
],
[
[
"all_bond_prices = ltn_prices + ntnf_prices\nall_bond_cash_flows = ltn_cash_flows + ntnf_cash_flows\n\nnss = NSS(prices=all_bond_prices, cash_flows=all_bond_cash_flows, ref_date=ref_date)\nnss.betas",
"_____no_output_____"
]
],
[
[
"#### Plot curves",
"_____no_output_____"
]
],
[
[
"curves = pd.DataFrame(index=pd.to_datetime(ltn_expires + ntnf_expires),\n columns=['yield_curve'],\n data=ltn_yields + ntnf_yields).sort_index()\ncurves['nss_zero_curve'] = [nss.rate_for_ytm(betas=nss.betas, ytm=nss.dc.tf(nss.ref_date, x)) for x in curves.index]\ncurves.plot(figsize=(15,10), fontsize=16, marker='o')\nplt.title('Curves on %s' % nss.ref_date.strftime('%d-%b-%y'), fontsize=20)\nplt.legend(fontsize=20)\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
eca04d42d93687c5840cef6aa53ae60f39e06ca1 | 20,334 | ipynb | Jupyter Notebook | starter_notebook_edsa_classification.ipynb | vpana/kaggle-api | 0e7da24c9bf426c99dda44ac7311f78b88ff9c71 | [
"Apache-2.0"
] | null | null | null | starter_notebook_edsa_classification.ipynb | vpana/kaggle-api | 0e7da24c9bf426c99dda44ac7311f78b88ff9c71 | [
"Apache-2.0"
] | null | null | null | starter_notebook_edsa_classification.ipynb | vpana/kaggle-api | 0e7da24c9bf426c99dda44ac7311f78b88ff9c71 | [
"Apache-2.0"
] | null | null | null | 30.623494 | 1,005 | 0.420035 | [
[
[
"<a href=\"https://colab.research.google.com/github/vpana/kaggle-api/blob/master/starter_notebook_edsa_classification.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"## Import the necessary libraries",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport nltk\nimport string\nimport re\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier\nfrom sklearn.svm import LinearSVC\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.naive_bayes import MultinomialNB\nfrom nltk.stem import WordNetLemmatizer\nfrom nltk.corpus import stopwords\n\nfrom sklearn.metrics import f1_score\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Load in your data from kaggle. \nBy working in a kaggle kernel, you can access the data directly from the competition, as well as make your submission without downloading your output file",
"_____no_output_____"
]
],
[
[
"train = pd.read_csv('/content/train.csv')\ntest = pd.read_csv('/content/test.csv')",
"_____no_output_____"
]
],
[
[
"train.sentiment.value_counts()",
"_____no_output_____"
]
],
[
[
"train.sentiment.value_counts()",
"_____no_output_____"
],
[
"train.head()",
"_____no_output_____"
]
],
[
[
"## Splitting out the X variable from the target",
"_____no_output_____"
]
],
[
[
"y = train['sentiment']\nX = train['message']",
"_____no_output_____"
],
[
"from nltk.stem import PorterStemmer\n\n# init stemmer\nporter_stemmer=PorterStemmer()\n\ndef my_cool_preprocessor(text):\n \n text=text.lower() \n text=re.sub(\"\\\\W\",\" \",text) # remove special chars\n text=re.sub(\"\\\\s+(in|the|all|for|and|on)\\\\s+\",\" _connector_ \",text) # normalize certain words\n \n # stem words\n words=re.split(\"\\\\s+\",text)\n stemmed_words=[porter_stemmer.stem(word=word) for word in words]\n return ' '.join(stemmed_words)",
"_____no_output_____"
],
[
"def my_tokenizer(text):\n # create a space between special characters \n text=re.sub(\"(\\\\W)\",\" \\\\1 \",text)\n\n # split based on whitespace\n return re.split(\"\\\\s+\",text)",
"_____no_output_____"
]
],
[
[
"## Turning text into something your model can read",
"_____no_output_____"
]
],
[
[
"vectorizer = TfidfVectorizer(ngram_range=(1,2),tokenizer=my_tokenizer, min_df=2,max_df=0.70,analyzer='word',smooth_idf=False, preprocessor=my_cool_preprocessor,stop_words=\"english\")\nX_vectorized = vectorizer.fit_transform(X)\n, \n#stop_words=\"english\",max_df=0.85, preprocessor=my_cool_preprocessor,\"all\",\"in\",\"the\",\"is\",\"and\"preprocessor=my_cool_preprocessor",
"/usr/local/lib/python3.6/dist-packages/sklearn/feature_extraction/text.py:385: UserWarning: Your stop_words may be inconsistent with your preprocessing. Tokenizing the stop words generated tokens ['abov', 'afterward', 'alon', 'alreadi', 'alway', 'ani', 'anoth', 'anyon', 'anyth', 'anywher', 'becam', 'becaus', 'becom', 'befor', 'besid', 'cri', 'describ', 'dure', 'els', 'elsewher', 'empti', 'everi', 'everyon', 'everyth', 'everywher', 'fifti', 'formerli', 'forti', 'ha', 'henc', 'hereaft', 'herebi', 'hi', 'howev', 'hundr', 'inde', 'latterli', 'mani', 'meanwhil', 'moreov', 'mostli', 'nobodi', 'noon', 'noth', 'nowher', 'onc', 'onli', 'otherwis', 'ourselv', 'perhap', 'pleas', 'seriou', 'sever', 'sinc', 'sincer', 'sixti', 'someon', 'someth', 'sometim', 'somewher', 'themselv', 'thenc', 'thereaft', 'therebi', 'therefor', 'thi', 'thu', 'togeth', 'twelv', 'twenti', 'veri', 'wa', 'whatev', 'whenc', 'whenev', 'wherea', 'whereaft', 'wherebi', 'wherev', 'whi', 'yourselv'] not in stop_words.\n 'stop_words.' % sorted(inconsistent))\n"
],
[
"#vectorizer.vocabulary_\n#vectorizer.stop_words_",
"_____no_output_____"
]
],
[
[
"## Splitting the training data into a training and validation set",
"_____no_output_____"
]
],
[
[
"X_train,X_val,y_train,y_val = train_test_split(X_vectorized,y,test_size=0.25,shuffle=True, random_state=25)",
"_____no_output_____"
]
],
[
[
"## Training the model and evaluating using the validation set ",
"_____no_output_____"
]
],
[
[
"lsvc = LinearSVC()\nlsvc.fit(X_train, y_train)\nlsvc_pred = lsvc.predict(X_val)",
"_____no_output_____"
]
],
[
[
"## Checking the performance of our model on the validation set",
"_____no_output_____"
]
],
[
[
"f1_score(y_val, lsvc_pred, average=\"macro\")",
"_____no_output_____"
],
[
"from sklearn import metrics\n\nprint(metrics.classification_report(y_val, lsvc_pred))",
" precision recall f1-score support\n\n -1 0.70 0.39 0.50 247\n 0 0.60 0.45 0.52 466\n 1 0.76 0.87 0.81 1749\n 2 0.76 0.75 0.75 702\n\n accuracy 0.74 3164\n macro avg 0.71 0.62 0.65 3164\nweighted avg 0.73 0.74 0.73 3164\n\n"
]
],
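As an optional robustness check (a sketch, not part of the original notebook; it reuses the `cross_val_score` import from the top of the notebook and the vectorized training data `X_vectorized`, `y`):

```python
# 5-fold cross-validated macro F1 for the same linear SVM on the full
# vectorized training set; compare the mean with the single-split score above.
scores = cross_val_score(LinearSVC(), X_vectorized, y, cv=5, scoring='f1_macro')
print(scores)
print('mean: %.4f, std: %.4f' % (scores.mean(), scores.std()))
```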
[
[
"## Getting our test set ready ",
"_____no_output_____"
]
],
[
[
"testx = test['message']\ntest_vect = vectorizer.transform(testx)",
"_____no_output_____"
]
],
[
[
"## Making predictions on the test set and adding a sentiment column to our original test df",
"_____no_output_____"
]
],
[
[
"y_pred = lsvc.predict(test_vect)",
"_____no_output_____"
],
[
"test['sentiment'] = y_pred",
"_____no_output_____"
],
[
"test.head()",
"_____no_output_____"
]
],
[
[
"## Creating an output csv for submission",
"_____no_output_____"
]
],
[
[
"test[['tweetid','sentiment']].to_csv('testsubmission_8.csv', index=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
eca04e65c0cf7028a2f998bd6253218514faa23a | 409,463 | ipynb | Jupyter Notebook | .ipynb_checkpoints/main-checkpoint.ipynb | fokoa/linear_regression | 64c95bcc0769b901e20cc61354753025a69b27cf | [
"MIT"
] | null | null | null | .ipynb_checkpoints/main-checkpoint.ipynb | fokoa/linear_regression | 64c95bcc0769b901e20cc61354753025a69b27cf | [
"MIT"
] | null | null | null | .ipynb_checkpoints/main-checkpoint.ipynb | fokoa/linear_regression | 64c95bcc0769b901e20cc61354753025a69b27cf | [
"MIT"
] | null | null | null | 1,811.783186 | 159,008 | 0.95857 | [
[
[
"# Load libraries\nimport numpy as np;\nimport pandas as pd;\nimport seaborn as sns;\nimport matplotlib.pyplot as plt;\n\nfrom sklearn.linear_model import LinearRegression;\nfrom sklearn.metrics import mean_squared_error;\nfrom sklearn.model_selection import train_test_split, KFold, cross_val_score;\n\nfrom linear_regression import LinearRegression as LR; # Our implementation\n\nimport warnings;\n\npd.set_option('max_column', None);\nwarnings.filterwarnings('ignore');",
"_____no_output_____"
],
[
"# Load data\nfish = pd.read_csv('data/pre_fish.csv');\nmetro = pd.read_csv('data/pre_metro.csv');\nair = pd.read_csv('data/pre_air.csv');\nhousing = pd.read_csv('data/pre_housing.csv');",
"_____no_output_____"
],
[
"# Split data\nX_fish = fish[fish.columns.difference(['Weight'])];\ny_fish = fish['Weight'];\n\nX_metro = metro[metro.columns.difference(['date_time', 'hour', 'traffic_volume'])];\ny_metro = metro['traffic_volume'];\n\nX_air = air[air.columns.difference(['C6H6(GT)', 'Date'])];\ny_air = air['C6H6(GT)']\n\nX_housing = housing[housing.columns.difference(['MEDV'])];\ny_housing = housing['MEDV'];",
"_____no_output_____"
],
[
"data = [(X_fish, y_fish, 'Fish'), (X_metro, y_metro, 'Metro'),\n (X_air, y_air, 'Air'), (X_housing, y_housing, 'Housing')];",
"_____no_output_____"
],
[
"def fiting(data, n_iter=100, lr=0.001, moment=0.95, batch=10, scoring='rmse'):\n \n plt.figure(1, figsize=(15, 12));\n \n for idx, (X, y, name) in enumerate(data):\n \n # Gradient Descent\n gd = LR(n_iter=n_iter, learning_rate=lr, opt='gd', scoring=scoring);\n gd.fit(X, y);\n\n # Stochastic Gradient Descent\n sgd = LR(n_iter=n_iter, learning_rate=lr, opt='sgd', batch=1, scoring=scoring);\n sgd.fit(X, y);\n\n # Mini-Batch Stochastic Gradient Descent\n sgd_m = LR(n_iter=n_iter, learning_rate=lr, opt='sgd', batch=batch, scoring=scoring);\n sgd_m.fit(X, y);\n\n # Momentum\n mom = LR(n_iter=n_iter, learning_rate=lr, opt='momentum', momentum=moment, batch=batch, scoring=scoring);\n mom.fit(X, y);\n\n # Nesterov Accelerated Gradient Descent\n nest = LR(n_iter=n_iter, learning_rate=lr, opt='nesterov', momentum=moment, batch=batch, scoring=scoring);\n nest.fit(X, y);\n\n # Ploting Descend Gradient\n plt.subplot(2, 2, idx+1);\n plt.plot(np.arange(len(gd.costs_)), gd.costs_, label='GD');\n plt.plot(np.arange(len(sgd.costs_)), sgd.costs_, label='SGD');\n plt.plot(np.arange(len(sgd_m.costs_)), sgd_m.costs_, label='SGD-MiniBatch');\n plt.plot(np.arange(len(mom.costs_)), mom.costs_, label='Momentum');\n plt.plot(np.arange(len(nest.costs_)), nest.costs_, label='Nesterov');\n\n plt.title(name);\n plt.xlabel('Iterations');\n plt.ylabel('Cost');\n plt.legend();\n \n plt.show()\n ",
"_____no_output_____"
],
[
"fiting(data, lr=0.01, batch=8)",
"_____no_output_____"
],
[
"fiting(data, lr=0.001, batch=8)",
"_____no_output_____"
],
[
"fiting(data, lr=0.0001, batch=8)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca05179591b58acfb74b706b0fe6b27ec82afe7 | 191,124 | ipynb | Jupyter Notebook | 5b.Network-Algorithms-II.ipynb | npshub/network-and-complex-system | 108488ead64b2f9fe6b948cffb4e03acf0831928 | [
"MIT"
] | 1 | 2020-10-17T12:29:18.000Z | 2020-10-17T12:29:18.000Z | 5b.Network-Algorithms-II.ipynb | npshub/network-and-complex-system | 108488ead64b2f9fe6b948cffb4e03acf0831928 | [
"MIT"
] | null | null | null | 5b.Network-Algorithms-II.ipynb | npshub/network-and-complex-system | 108488ead64b2f9fe6b948cffb4e03acf0831928 | [
"MIT"
] | 1 | 2021-05-27T01:30:58.000Z | 2021-05-27T01:30:58.000Z | 290.90411 | 57,048 | 0.922401 | [
[
[
"## Algorithm - II",
"_____no_output_____"
],
[
"### Clustering, Link Analysis, Node Classification, Link Prediction",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport networkx as nx\nimport seaborn as sns\nsns.set()\n%matplotlib inline",
"_____no_output_____"
],
[
"import warnings\nimport matplotlib.cbook\nwarnings.filterwarnings(\"ignore\",category=matplotlib.cbook.mplDeprecation)",
"_____no_output_____"
],
[
"G = nx.karate_club_graph()\nnx.draw(G, node_size = 500, node_color = \"lightblue\", with_labels = True)",
"_____no_output_____"
]
],
[
[
"### Clustering\n\nAlgorithms to characterize the number of triangles in a graph.\n\n- ```triangles(G[, nodes])``` \tCompute the number of triangles.\n- ```transitivity(G)``` \tCompute graph transitivity, the fraction of all possible triangles present in G.\n- ```clustering(G[, nodes, weight])``` \tCompute the clustering coefficient for nodes.\n- ```average_clustering(G[, nodes, weight, …])``` \tCompute the average clustering coefficient for the graph G.\n- ```square_clustering(G[, nodes])``` \tCompute the squares clustering coefficient for nodes.\n- ```generalized_degree(G[, nodes])``` \tCompute the generalized degree for nodes.",
"_____no_output_____"
]
],
[
[
"nx.triangles(G)",
"_____no_output_____"
],
[
"nx.transitivity(G)",
"_____no_output_____"
],
[
"nx.clustering(G)",
"_____no_output_____"
]
],
[
[
"--------------",
"_____no_output_____"
],
[
"### Link Analysis\n\n#### PageRank\n\nPageRank analysis of graph structure.\n\n- ```pagerank(G[, alpha, personalization, …])``` \tReturns the PageRank of the nodes in the graph.\n- ```pagerank_numpy(G[, alpha, personalization, …])``` \tReturns the PageRank of the nodes in the graph.\n- ```pagerank_scipy(G[, alpha, personalization, …])``` \tReturns the PageRank of the nodes in the graph.\n- ```google_matrix(G[, alpha, personalization, …])``` \tReturns the Google matrix of the graph.",
"_____no_output_____"
]
],
[
[
"nx.pagerank(G)",
"_____no_output_____"
],
[
"nx.google_matrix(G)",
"_____no_output_____"
]
],
[
[
"------------",
"_____no_output_____"
],
[
"#### Hits\n\nHubs and authorities analysis of graph structure.\n\n- ```hits(G[, max_iter, tol, nstart, normalized])``` \tReturns HITS hubs and authorities values for nodes.\n- ```hits_numpy(G[, normalized])``` \tReturns HITS hubs and authorities values for nodes.\n- ```hits_scipy(G[, max_iter, tol, normalized])``` \tReturns HITS hubs and authorities values for nodes.\n- ```hub_matrix(G[, nodelist])``` \tReturns the HITS hub matrix.\n- ```authority_matrix(G[, nodelist])``` \tReturns the HITS authority matrix.\n",
"_____no_output_____"
]
],
[
[
"nx.hits(G)",
"_____no_output_____"
],
[
"nx.hub_matrix(G)",
"_____no_output_____"
],
[
"nx.authority_matrix(G)",
"_____no_output_____"
]
],
[
[
"----------------",
"_____no_output_____"
],
[
"### Node Classification\n\nThis module provides the functions for node classification problem.\n\nThe functions in this module are not imported into the top level networkx namespace. You can access these functions by importing the ```networkx.algorithms.node_classification``` modules, then accessing the functions as attributes of node_classification. For example:",
"_____no_output_____"
]
],
[
[
"import networkx as nx\nfrom networkx.algorithms import node_classification\nG = nx.balanced_tree(3,3)\nnx.draw(G, node_size = 500, node_color = \"lightgreen\", with_labels = True)",
"_____no_output_____"
],
[
"G.node[1]['label'] = 'A'\nG.node[2]['label'] = 'B'\nG.node[3]['label'] = 'C'\nL = node_classification.harmonic_function(G)\nprint(L)",
"['A', 'A', 'B', 'C', 'A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C']\n"
],
[
"LL = {}\nfor n,l in zip(G.nodes(),L):\n LL.update({n:l})",
"_____no_output_____"
],
[
"nx.draw(G, node_size = 500, labels = LL, node_color = \"lightgreen\", with_labels = True)",
"_____no_output_____"
]
],
[
[
"--------------",
"_____no_output_____"
],
[
"### Link Prediction\n\nLink prediction algorithms.\n\n- ```resource_allocation_index(G[, ebunch])``` \tCompute the resource allocation index of all node pairs in ebunch.\n- ```jaccard_coefficient(G[, ebunch])``` \tCompute the Jaccard coefficient of all node pairs in ebunch.\n- ```adamic_adar_index(G[, ebunch])``` \tCompute the Adamic-Adar index of all node pairs in ebunch.\n- ```preferential_attachment(G[, ebunch])``` \tCompute the preferential attachment score of all node pairs in ebunch.\n- ```cn_soundarajan_hopcroft(G[, ebunch, community])``` \tCount the number of common neighbors of all node pairs in ebunch\n- ```ra_index_soundarajan_hopcroft(G[, ebunch, …])``` \tCompute the resource allocation index of all node pairs in ebunch using community information.\n- ```within_inter_cluster(G[, ebunch, delta, …])``` \tCompute the ratio of within- and inter-cluster common neighb",
"_____no_output_____"
]
],
[
[
"G = nx.karate_club_graph()\nnx.draw(G, node_size = 500, node_color = \"lightblue\", with_labels = True)",
"_____no_output_____"
],
[
"preds = nx.resource_allocation_index(G, [(0,10),(9, 18), (11, 12),(30,27),(16,26)])\nfor u, v, p in preds:\n print('(%d, %d) -> %.8f' % (u, v, p))\n",
"(0, 10) -> 0.58333333\n(9, 18) -> 0.05882353\n(11, 12) -> 0.06250000\n(30, 27) -> 0.05882353\n(16, 26) -> 0.00000000\n"
]
]
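For comparison, a short sketch with `jaccard_coefficient` on the same node pairs (it uses the same `(u, v, p)` generator interface as `resource_allocation_index`):

```python
# Jaccard coefficient for the same candidate links in the karate-club graph.
preds = nx.jaccard_coefficient(G, [(0, 10), (9, 18), (11, 12), (30, 27), (16, 26)])
for u, v, p in preds:
    print('(%d, %d) -> %.8f' % (u, v, p))
```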
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
eca067bb3ca61729ed98714d52b5ac2554fda0e6 | 11,873 | ipynb | Jupyter Notebook | python/inflearn/Chapter 10. Objective-Oriented Programming.ipynb | mindudekim/TIL | 759d7ba770f9e12b4e2a3a4f5044150811fcbcf3 | [
"MIT"
] | 1 | 2019-10-10T02:15:33.000Z | 2019-10-10T02:15:33.000Z | python/inflearn/Chapter 10. Objective-Oriented Programming.ipynb | mindudekim/TIL | 759d7ba770f9e12b4e2a3a4f5044150811fcbcf3 | [
"MIT"
] | null | null | null | python/inflearn/Chapter 10. Objective-Oriented Programming.ipynb | mindudekim/TIL | 759d7ba770f9e12b4e2a3a4f5044150811fcbcf3 | [
"MIT"
] | null | null | null | 22.965184 | 125 | 0.493304 | [
[
[
"# class ClassName(object):",
"_____no_output_____"
],
[
"# __init __",
"_____no_output_____"
]
],
[
[
"class SoccerPlayer(object):\n def __init__(self, name, position, back_number):\n self.name = name\n self.position = position\n self.back_number = back_number",
"_____no_output_____"
]
],
[
[
"# Add Function",
"_____no_output_____"
]
],
[
[
"class SoccerPlayer(object):\n def change_back_number(self, new_number):\n print(\"Change the back number of a player: From %d to %d\" % (self.back_number, new_number))\n self.back_number = new_number",
"_____no_output_____"
]
],
[
[
"# Use Objects(Instance)",
"_____no_output_____"
]
],
[
[
"class SoccerPlayer(object):\n def __init__(self, name, position, back_number):\n self.name = name\n self.position = position\n self.back_number = back_number\n\n def change_back_number(self, new_number):\n print(\"Change the back number of a player : From %d to %d\" % (self.back_number, new_number))\n self.back_number = new_number\n def __str__(self):\n return \"Hello, My name is %s. I play in %s in center \" % (self.name, self.position)",
"_____no_output_____"
],
[
"jinhyun = SoccerPlayer(\"Jinhyun\", \"MF\", 10)\nprint(jinhyun)",
"Hello, My name is Jinhyun. I play in MF in center \n"
],
[
"print(\"Back Number for the player is :\", jinhyun.back_number)",
"Back Number for the player is : 10\n"
],
[
"jinhyun.change_back_number(5)\nprint(\"Back Number for the player is :\", jinhyun.back_number)",
"Change the back number of a player : From 10 to 5\nBack Number for the player is : 5\n"
]
],
[
[
"## Input five soccer players using class",
"_____no_output_____"
]
],
[
[
"names = [\"Jin\", \"Sungchul\", \"Ronaldo\", \"Hong\", \"Seo\"]\npositions = [\"MF\",\"DF\", \"CF\", \"WF\", \"GK\"]\nnumbers = [10, 15, 20, 3, 1]",
"_____no_output_____"
],
[
"class SoccerPlayer(object):\n def __init__(self, name, position, back_number):\n self.name = name\n self.position = position\n self.back_number = back_number\n\n def change_back_number(self, new_number):\n print(\"Change the back number of a player : From %d to %d\" % (self.back_number, new_number))\n self.back_number = new_number\n\n def __str__(self):\n return \"Hello, My name is %s. I play in %s in center \" % (self.name, self.position)",
"_____no_output_____"
],
[
"player_objects = [SoccerPlayer(name, position, number) for name, position, number in zip(names, positions, numbers)]\nprint(player_objects[0])",
"Hello, My name is Jin. I play in MF in center \n"
]
],
[
[
"# Lab : Create Notebook OOP",
"_____no_output_____"
]
],
[
[
"class Note(object):\n def __init__(self, contents):\n self.contents = contents\n\n def get_number_of_lines(self):\n return self.contents.count(\"\\n\")\n\n def get_number_of_characters(self):\n return len(self.contents)\n\n def remove(self):\n self.contents = \"삭제된 노트입니다\"\n\n def __str__(self):\n return self.contents\n\n\nclass NoteBook(object):\n def __init__(self, name):\n self.name = name\n self.pages = 0\n self.notes = {}\n\n def add_note(self, note, page_number=0):\n if len(self.notes.keys()) < 300:\n if page_number == 0:\n if self.pages < 301:\n self.notes[self.pages] = note\n self.pages += 1\n else:\n for i in range(300):\n if i not in list(self.notes.keys()):\n self.notes[self.pages] = note\n else:\n if page_number not in self.notes.keys():\n self.notes[page_number] = note\n else:\n print(\"해당 페이지에는 이미 노트가 존재합니다\")\n else:\n print(\"더 이상 노트를 추가하지 못합니다.\")\n\n def remove_note(self, page_number):\n del self.notes[page_number]\n\n def get_number_of_all_lines(self):\n result = 0\n for k in self.notes.keys():\n result += self.notes[k].get_number_of_lines()\n return result\n\n def get_number_of_all_characters(self):\n result = 0\n for k in self.notes.keys():\n result += self.notes[k].get_number_of_characters()\n return result\n\n def get_number_of_all_pages(self):\n return len(self.notes.keys())\n\n def __str__(self):\n return self.name",
"_____no_output_____"
],
[
"good_sentence = \"\"\"세상사는데 도움이되는 명언들 힘이되는 명언 용기를 주는 명언 위로가되는 명언 좋은명언 글귀 모음 100가지 자주 보면 좋을듯 하여 선별 했습니다.\"\"\"\nnote_1 = Note(good_sentence)\nprint(note_1)",
"세상사는데 도움이되는 명언들 힘이되는 명언 용기를 주는 명언 위로가되는 명언 좋은명언 글귀 모음 100가지 자주 보면 좋을듯 하여 선별 했습니다.\n"
],
[
"note_1.remove()\nprint(note_1)",
"삭제된 노트입니다\n"
],
[
"good_sentence = \"\"\"삶이 있는 한 희망은 있다 -키케로 \"\"\"\nnote_2 = Note(good_sentence)",
"_____no_output_____"
],
[
"good_sentence = \"\"\"하루에 3시간을 걸으면 7년 후에 지구를 한바퀴 돌 수 있다.-사무엘존슨\"\"\"\nnote_3 = Note(good_sentence)",
"_____no_output_____"
],
[
"good_sentence = \"\"\"행복의 문이 하나 닫히면 다른 문이 열린다 그러나 우리는 종종 닫힌 문을 멍하니 바라보다가\n\n 우리를 향해 열린 문을 보지 못하게 된다 - 헬렌켈러\"\"\"\nnote_4 = Note(good_sentence)",
"_____no_output_____"
],
[
"wise_saying_notebook = NoteBook(\"명언노트\")\nwise_saying_notebook.add_note(note_1)\nprint(wise_saying_notebook.get_number_of_all_pages())",
"1\n"
],
[
"wise_saying_notebook.add_note(note_2)\nprint(wise_saying_notebook.get_number_of_all_pages())",
"2\n"
],
[
"wise_saying_notebook.add_note(note_3)\nwise_saying_notebook.add_note(note_4)\nprint(wise_saying_notebook.get_number_of_all_pages())\nprint(wise_saying_notebook.get_number_of_all_characters())",
"4\n152\n"
],
[
"wise_saying_notebook.remove_note(3)\nprint(wise_saying_notebook.get_number_of_all_pages())",
"3\n"
],
[
"wise_saying_notebook.add_note(note_1, 100)",
"_____no_output_____"
],
[
"wise_saying_notebook.add_note(note_1, 100)",
"해당 페이지에는 이미 노트가 존재합니다\n"
],
[
"for i in range(300):\n wise_saying_notebook.add_note(note_1, i)",
"해당 페이지에는 이미 노트가 존재합니다\n해당 페이지에는 이미 노트가 존재합니다\n해당 페이지에는 이미 노트가 존재합니다\n해당 페이지에는 이미 노트가 존재합니다\n"
],
[
"print(wise_saying_notebook.get_number_of_all_pages())",
"300\n"
],
[
"wise_saying_notebook.add_note(note_1)",
"더 이상 노트를 추가하지 못합니다.\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca07884c6893ed22ce26f5c39d98f099985b935 | 158,607 | ipynb | Jupyter Notebook | s16_create_wavelet_plots.ipynb | JelmerBot/qm-dipole-localisation | 14c215f0e89ab411787db14e88fec57de2199e8d | [
"MIT"
] | null | null | null | s16_create_wavelet_plots.ipynb | JelmerBot/qm-dipole-localisation | 14c215f0e89ab411787db14e88fec57de2199e8d | [
"MIT"
] | null | null | null | s16_create_wavelet_plots.ipynb | JelmerBot/qm-dipole-localisation | 14c215f0e89ab411787db14e88fec57de2199e8d | [
"MIT"
] | null | null | null | 300.392045 | 24,956 | 0.90868 | [
[
[
"# Create publication plots",
"_____no_output_____"
],
[
"#### Configure matplotlib for final figure styles",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n# Configure matplotlib\nfrom s90_helper_functions import *\nconfigure_matplotlib()\n\n# Other imports\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"#### 3D wavelet",
"_____no_output_____"
]
],
[
[
"# Compute the wavelets\nN=1024; # #of points used for wavelet \nb=0; # Source location along x-axis \nrhorange=2; # Range of rho values (from - to +)\nC=1; # Overall normalization constant\nlin=np.linspace(1,2*N+1,2*N+1); # Creates an array (linear increase without offset)\nx=lin-N-1 ; # Symmetric x-vector\nd=N/rhorange; # Source distance to array\nrho=(x-b)/d; # Normalization of x with respect to distance\nx_plot=np.linspace(-rhorange, rhorange, len(x)); # X-axis values for the plot\n\n# wavelet values\ndenom=(1+rho**2)**(5/2); # Denominator of the wavelets\npsi_e=(2*rho**2-1)/denom; # Even wavelet\npsi_o=-(3*rho)/denom; # Odd wavelet\npsi_n=(2-rho**2)/denom; # Navelet\n# envenope values\nenv_vx=np.sqrt(psi_e**2+psi_o**2); # Envelope of vx\nenv_vy=np.sqrt(psi_o**2+psi_n**2) / 2; # Envelope of vy\n# easy 3d plotting value\nzero = np.zeros(x.shape);",
"_____no_output_____"
],
[
"# Figures\nwidth, height = figure_dimensions(0.32)\nfig = plt.figure(figsize=(width, width))\nplt.plot(x_plot, psi_e, linewidth=0.5)\nplt.plot(x_plot, psi_o, linewidth=0.5)\nax = plt.gca()\nax.set_xlim([-rhorange, rhorange])\nax.set_ylim([-rhorange, rhorange])\nax.set_xlabel(r'$\\rho$')\nax.set_ylabel(r'')\nax.set_xticklabels([])\nax.set_yticklabels([])\nax.tick_params(axis='both', length=0)\nplt.legend([r'$\\psi_e$', r'$-\\psi_o$'])\nfig.savefig('images/publication/wavelet_2D')\nplt.show()\n\nwidth, height = figure_dimensions(0.32)\nmy_palette = create_my_palette()\nfig = plt.figure(figsize=(width, width))\nax = fig.gca(projection='3d')\nax.view_init(elev=-160, azim=135)\nbounds = [-2, 2]\nax.auto_scale_xyz(bounds, bounds, bounds)\nax.tick_params(length=0, width=0, color=(0, 0, 0, 0))\nplt.plot(x_plot, psi_e, psi_o, color=my_palette[2], linewidth=0.5)\nax.set_xlim([-rhorange, rhorange])\nax.set_ylim([-rhorange, rhorange])\nax.set_zlim([-rhorange, rhorange])\nax.set_xticklabels([])\nax.set_yticklabels([])\nax.set_zticklabels([])\nax.set_xlabel(r'$\\rho$', labelpad=-14)\nax.set_ylabel(r'$\\psi_e$', labelpad=-14)\nax.set_zlabel(r'$-\\psi_o$', labelpad=-16)\nax.patch.set_facecolor((0, 0, 0, 0))\nax.set_position([0.06, 0, 0.94, 1])\nplt.legend([r'$\\vec{\\psi}_{env}$'],\\\n bbox_to_anchor=(1,0.85),\\\n bbox_transform=plt.gcf().transFigure)\nfig.savefig('images/publication/wavelet_3D')\nplt.show()\n",
"_____no_output_____"
],
[
"# Compute velocities\nnp.cosd = lambda x: np.cos(np.deg2rad(x))\nnp.sind = lambda x: np.sin(np.deg2rad(x))\n\nphis = np.c_[0, 80, 160, 240, 320].T\nlight = [0.8, 1, 1.2, 1.4, 1.6, 1.8]\nlabel = ['$0^\\circ$', '$80^\\circ$', '$160^\\circ$', '$240^\\circ$', '$320^\\circ$']\npos_x = [(1.05, 0.1), (-.65, 0.6), (-.12, 0.75), (0.4, 0.7), (0.8, 0.38)]\npos_y = [(-0.45, 0.3), (-0.05, 0.85), (0.35, 0.45), (-.15, -0.75), (.3, -0.6)]\nvx = np.cosd(phis) * psi_e + np.sind(phis) * psi_o;\nvy = (np.cosd(phis) * psi_o + np.sind(phis) * psi_n) / 2;",
"_____no_output_____"
],
[
"idx = 4\nwidth, height = figure_dimensions(0.49)\nmy_palette = create_my_palette()\nfig = plt.figure(figsize=(width, height))\nplt.plot(x_plot, vy[idx], color=adjust_lightness(my_palette[1], light[idx]), linewidth=0.8)\nplt.plot(x_plot, env_vx, color=my_palette[2], linewidth=0.8)\nplt.plot(x_plot, -env_vx, color=my_palette[2], linewidth=0.8)\nphis[idx]",
"_____no_output_____"
],
[
"annotate = lambda x, pt : plt.annotate(x, \n xy=pt,\n xycoords='data',\n xytext=pt,\n textcoords='offset points',\n ha=\"center\", va=\"center\",\n fontsize=mpl.rcParams['xtick.labelsize']) \n\nwidth, height = figure_dimensions(0.49)\nmy_palette = create_my_palette()\nfig = plt.figure(figsize=(width, height))\nfor idx in range(len(phis)):\n plt.plot(x_plot, vx[idx], color=adjust_lightness(my_palette[0], light[idx]), linewidth=0.8)\n annotate(label[idx], pos_x[idx])\nplt.plot(x_plot, env_vx, color=my_palette[2], linewidth=0.8)\nplt.plot(x_plot, -env_vx, color=my_palette[2], linewidth=0.8)\n# labels\nplt.yticks([-1, -0.5, 0, 0.5, 1])\nplt.ylabel(r'$v_x$ (normalised)')\nplt.xlabel(r'$\\rho$')\n# l = plt.legend([r'$\\psi_{x,env}$', r'$\\varphi=$~\\SI{0}{\\degree}', r'$\\varphi=$~\\SI{80}{\\degree}',\n# r'$\\varphi=$~\\SI{160}{\\degree}', r'$\\varphi=$~\\SI{240}{\\degree}', r'$\\varphi=$~\\SI{320}{\\degree}'], loc='lower left')\nfig.subplots_adjust(left=0.2, right=0.99, top=0.95, bottom=0.2)\nfig.savefig('images/publication/envelope_x')\nplt.show()\n\nwidth, height = figure_dimensions(0.49)\nmy_palette = create_my_palette()\nfig = plt.figure(figsize=(width, height))\nfor idx in range(len(phis)):\n plt.plot(x_plot, vy[idx], color=adjust_lightness(my_palette[1], light[idx]), linewidth=0.8)\n annotate(label[idx], pos_y[idx])\nplt.plot(x_plot, env_vy, color=my_palette[2], linewidth=0.8)\nplt.plot(x_plot, -env_vy, color=my_palette[2], linewidth=0.8)\n# labels\nplt.yticks([-1, -0.5, 0, 0.5, 1])\nplt.ylabel(r'$v_y$ (normalised)')\nplt.xlabel(r'$\\rho$')\n# l = plt.legend([r'$\\psi_{x,env}$', r'$\\varphi=$~\\SI{0}{\\degree}', r'$\\varphi=$~\\SI{80}{\\degree}',\n# r'$\\varphi=$~\\SI{160}{\\degree}', r'$\\varphi=$~\\SI{240}{\\degree}', r'$\\varphi=$~\\SI{320}{\\degree}'], loc='lower left')\nfig.subplots_adjust(left=0.2, right=0.99, top=0.95, bottom=0.2)\nfig.savefig('images/publication/envelope_y')\nplt.show()",
"_____no_output_____"
],
[
"annotate = lambda x, pt : plt.annotate(x, \n xy=pt,\n xycoords='data',\n xytext=pt,\n textcoords='offset points',\n ha=\"center\", va=\"center\",\n fontsize=mpl.rcParams['xtick.labelsize']) \n\nwidth, height = figure_dimensions(0.49)\nmy_palette = create_my_palette()\nfig = plt.figure(figsize=(width, height))\nfor idx in range(len(phis)):\n plt.plot(x_plot, vx[idx], color=adjust_lightness(my_palette[0], light[idx]), linewidth=0.8)\n annotate(label[idx], pos_x[idx])\n# labels\nplt.yticks([-1, -0.5, 0, 0.5, 1])\nplt.ylabel(r'$v_x$ (normalised)')\nplt.xlabel(r'$x-b$ (\\si{\\meter})')\n# l = plt.legend([r'$\\psi_{x,env}$', r'$\\varphi=$~\\SI{0}{\\degree}', r'$\\varphi=$~\\SI{80}{\\degree}',\n# r'$\\varphi=$~\\SI{160}{\\degree}', r'$\\varphi=$~\\SI{240}{\\degree}', r'$\\varphi=$~\\SI{320}{\\degree}'], loc='lower left')\nfig.subplots_adjust(left=0.2, right=0.99, top=0.95, bottom=0.22)\nfig.savefig('images/publication/vx')\nplt.show()\n\nwidth, height = figure_dimensions(0.49)\nmy_palette = create_my_palette()\nfig = plt.figure(figsize=(width, height))\nfor idx in range(len(phis)):\n plt.plot(x_plot, vy[idx], color=adjust_lightness(my_palette[1], light[idx]), linewidth=0.8)\n annotate(label[idx], pos_y[idx])\n# labels\nplt.yticks([-1, -0.5, 0, 0.5, 1])\nplt.ylabel(r'$v_y$ (normalised)')\nplt.xlabel(r'$x-b$ (\\si{\\meter})')\n# l = plt.legend([r'$\\psi_{x,env}$', r'$\\varphi=$~\\SI{0}{\\degree}', r'$\\varphi=$~\\SI{80}{\\degree}',\n# r'$\\varphi=$~\\SI{160}{\\degree}', r'$\\varphi=$~\\SI{240}{\\degree}', r'$\\varphi=$~\\SI{320}{\\degree}'], loc='lower left')\nfig.subplots_adjust(left=0.2, right=0.99, top=0.95, bottom=0.22)\nfig.savefig('images/publication/vy')\nplt.show()",
"_____no_output_____"
],
[
"a = fig.get_axes()",
"_____no_output_____"
],
[
"# Quadrature curve\npsi_quad = np.sqrt(vx**2 + 1/2*((vy*2)**2))\npsi_quad = psi_quad / psi_quad[:,rho==0]\npsi_sym = ((7/4 *rho**4 - 13/4*rho**2 - 1/2) * np.cosd(2*phis) + 9/4*rho**4 + 15/4*rho**2 + 3/2) / ((1+rho**2)**5 * (1 + np.sind(phis)**2))\npsi_skew = (9/2*rho**3 * np.sind(2*phis)) / ((1+rho**2)**5 * (1 + np.sind(phis)**2))\n\nrho_anchor = 2 / np.sqrt(5)",
"_____no_output_____"
],
[
"annotate = lambda x, pt : plt.annotate(x, \n xy=pt,\n xycoords=plt.gca().get_xaxis_transform(),\n xytext=pt,\n textcoords='offset points',\n ha=\"center\", va=\"center\",\n fontsize=mpl.rcParams['xtick.labelsize']) \n\nwidth, height = figure_dimensions(0.9, 0.35)\nmy_palette = create_my_palette()\nfig = plt.figure(figsize=(width, height))\nplt.subplot(1,3,1)\nfor idx in range(len(phis)):\n plt.plot(x_plot, psi_quad[idx], color=my_palette[idx], linewidth=0.5)\n# labels\nplt.yticks([0, 0.5, 1])\nplt.ylabel(r'$\\psi_{quad,norm}$')\nplt.xlabel(r'$\\rho$')\nl = plt.legend([r'$\\varphi=$~\\SI{0}{\\degree}', r'$\\varphi=$~\\SI{80}{\\degree}',\n r'$\\varphi=$~\\SI{160}{\\degree}', r'$\\varphi=$~\\SI{240}{\\degree}', \n r'$\\varphi=$~\\SI{320}{\\degree}'], loc='upper left', bbox_to_anchor=(0.12, -0.4), ncol=5)\n\n\nplt.subplot(1,3,2)\nfor idx in range(len(phis)):\n plt.plot(x_plot, psi_sym[idx], color=my_palette[idx], linewidth=0.5)\nplt.axvline(-rho_anchor, color=[0, 0, 0], linestyle='--', linewidth=0.5)\nplt.axvline(rho_anchor, color=[0, 0, 0], linestyle='--', linewidth=0.5)\nannotate(r'$+\\rho_{anch}', (rho_anchor, -0.05))\nannotate(r'$-\\rho_{anch}', (-rho_anchor,-0.05))\nplt.yticks([0, 0.5, 1])\nplt.ylabel(r'$\\Phi_{sym}$')\nplt.xlabel(r'$\\rho$')\n\nplt.subplot(1,3,3)\nfor idx in range(len(phis)):\n plt.plot(x_plot, psi_skew[idx], color=my_palette[idx], linewidth=0.5)\n# labels\nplt.yticks([-0.5, 0, 0.5])\nplt.ylabel(r'$\\Phi_{skew}$')\nplt.xlabel(r'$\\rho$')\nplt.subplots_adjust(wspace=0.4, top=0.95, left=0.1, right=0.99, bottom=0.4)\n\nfig.savefig('images/publication/quad_curve')\nplt.show()\n",
"_____no_output_____"
],
[
"# Check the width of psi_quad for different orienations\nN=1024; # #of points used for wavelet \nb=0; # Source location along x-axis \nrhorange=2; # Range of rho values (from - to +)\nlin=np.linspace(1,2*N+1,2*N+1); # Creates an array (linear increase without offset)\nx=lin-N-1 ; # Symmetric x-vector\nd=N/rhorange; # Source distance to array\nrho=(x-b)/d; # Normalization of x with respect to distance\nx_plot=np.linspace(-rhorange, rhorange, len(x)); # X-axis values for the plot\n\n# wavelet values\ndenom=(1+rho**2)**(5/2); # Denominator of the wavelets\npsi_e=(2*rho**2-1)/denom; # Even wavelet\npsi_o=(3*rho)/denom; # Odd wavelet\npsi_n=(2-rho**2)/denom; # Navelet\n\n# velocity values\nphis = np.linspace(0, 2*np.pi, 1025)[np.newaxis].T\nvx = np.cos(phis) * psi_e + np.sin(phis) * psi_o;\nvy = (np.cos(phis) * psi_o + np.sin(phis) * psi_n);\nphi_quad = np.sqrt(vx**2 + 1/2 * vy**2)\nphi_quad_norm = phi_quad / phi_quad[:,rho==0]\n\n# Intersect with sqrt(0.2096)\nrow_mins_low = abs(phi_quad_norm[:,rho < 0] - 0.4572).argmin(axis=1)\nrow_mins_high = abs(phi_quad_norm[:,rho > 0] - 0.4572).argmin(axis=1)\ncurve_width = rho[rho > 0][row_mins_high] - rho[rho < 0][row_mins_low]\n\nwidth, height = figure_dimensions(0.7)\nfig = plt.figure(figsize=(width, height))\nplt.plot(np.rad2deg(phis), curve_width, '-')\nplt.ylim([0, 2.5])\nplt.xlabel(r'$\\varphi$ (\\si{\\degree})')\nplt.ylabel(r'width at $\\psi_{quad,norm} = \\sqrt{0.210}$')\n\nfig.subplots_adjust(bottom=0.17, right=0.95, top=0.95)\nfig.savefig('images/publication/quad_curve_width')\n\nprint(curve_width.mean())\nprint(curve_width.std())",
"1.7765282012195123\n0.011233818827353019\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca0917716df66851153d85917c122fa39ab7c49 | 294,593 | ipynb | Jupyter Notebook | module3-regression-diagnostics/regression-diagnostics-assignment.ipynb | tbradshaw91/DS-Unit-2-Sprint-2-Regression | ec3a528a217b38da2ab751f5ef5ec5c1db10e70a | [
"MIT"
] | null | null | null | module3-regression-diagnostics/regression-diagnostics-assignment.ipynb | tbradshaw91/DS-Unit-2-Sprint-2-Regression | ec3a528a217b38da2ab751f5ef5ec5c1db10e70a | [
"MIT"
] | null | null | null | module3-regression-diagnostics/regression-diagnostics-assignment.ipynb | tbradshaw91/DS-Unit-2-Sprint-2-Regression | ec3a528a217b38da2ab751f5ef5ec5c1db10e70a | [
"MIT"
] | null | null | null | 288.251468 | 161,366 | 0.872149 | [
[
[
"# Regression Diagnostics\n\nThe purpose of this assigment is introduce you to a new library for linear regression called statmodels which is much better suited for inferential modeling than sklearn. This assignment is also to familiarize yourself with some of most important procedures for improving the interpretability of regression coefficients. You will also perform important statistical tests that will help establish that whether or not important assumptions that safeguard the interpretability of OLS coefficients have been met. \n\nWe will continue to use the Ames Housing Dataset so that you can focus on the techniques and not on cleaning/getting associated with a brand new dataset.",
"_____no_output_____"
],
[
"**Libraries**",
"_____no_output_____"
]
],
[
[
"import seaborn as sns\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nfrom scipy import stats",
"_____no_output_____"
]
],
[
[
"**Importing DS**",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(\"https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv\")\ndf = df.drop(['id','date','zipcode','lat','long','yr_renovated'], axis=1)",
"_____no_output_____"
]
],
[
[
"## 1.1 Choose an X and Y variable from your dataset and use them to create a Seaborn Regplot",
"_____no_output_____"
]
],
[
[
"# Need to come back through and make larger, bold font\nx = np.array(df['price'])\ny = np.array(df['sqft_living'])\n# Plotting\nfig = sns.regplot(x, y, color='hotpink').set_title('Cost Of Home Based On SQFT')",
"_____no_output_____"
]
],
[
[
"## 1.2 Now using the X variables that you feel like will be the best predictors of y use statsmodel to run the multiple regression between these variables and Y. You don't need to use every X variable in your dataset, in fact it's probably better if you don't. Just pick ones that you have already cleaned that seem the most relevant to house prices.",
"_____no_output_____"
]
],
[
[
"# Looking at what makes a great house\nx = df[['bedrooms', 'bathrooms', 'sqft_living',\n 'sqft_lot', 'floors', 'waterfront', 'condition', 'grade', 'sqft_basement', 'yr_built']] \ny = df['price']\n# Little Details AKA\n# Fitting the Model\nx = sm.add_constant(x)\nmodel = sm.OLS(y, x).fit()\npredictions = model.predict(x) \n# Summarizing\nsummarize = model.summary()\nprint(summarize)",
" OLS Regression Results \n==============================================================================\nDep. Variable: price R-squared: 0.646\nModel: OLS Adj. R-squared: 0.646\nMethod: Least Squares F-statistic: 3937.\nDate: Thu, 02 May 2019 Prob (F-statistic): 0.00\nTime: 23:20:38 Log-Likelihood: -2.9639e+05\nNo. Observations: 21613 AIC: 5.928e+05\nDf Residuals: 21602 BIC: 5.929e+05\nDf Model: 10 \nCovariance Type: nonrobust \n=================================================================================\n coef std err t P>|t| [0.025 0.975]\n---------------------------------------------------------------------------------\nconst 6.589e+06 1.31e+05 50.243 0.000 6.33e+06 6.85e+06\nbedrooms -4.217e+04 2040.277 -20.671 0.000 -4.62e+04 -3.82e+04\nbathrooms 4.787e+04 3497.180 13.690 0.000 4.1e+04 5.47e+04\nsqft_living 173.2292 3.525 49.142 0.000 166.320 180.139\nsqft_lot -0.2246 0.037 -6.080 0.000 -0.297 -0.152\nfloors 2.615e+04 3769.886 6.937 0.000 1.88e+04 3.35e+04\nwaterfront 7.189e+05 1.74e+04 41.343 0.000 6.85e+05 7.53e+05\ncondition 1.793e+04 2490.092 7.202 0.000 1.31e+04 2.28e+04\ngrade 1.3e+05 2168.191 59.979 0.000 1.26e+05 1.34e+05\nsqft_basement 16.0369 4.442 3.611 0.000 7.331 24.743\nyr_built -3790.2774 67.296 -56.322 0.000 -3922.183 -3658.372\n==============================================================================\nOmnibus: 15947.087 Durbin-Watson: 1.980\nProb(Omnibus): 0.000 Jarque-Bera (JB): 1049806.235\nSkew: 2.950 Prob(JB): 0.00\nKurtosis: 36.629 Cond. No. 3.89e+06\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n[2] The condition number is large, 3.89e+06. This might indicate that there are\nstrong multicollinearity or other numerical problems.\n"
]
],
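As a quick illustration of the assumption tests mentioned in the introduction (a sketch, not required by the assignment; it uses the fitted `model` from the cell above):

```python
# Breusch-Pagan test for heteroskedasticity on the residuals of the fitted
# model; a small p-value suggests the constant-variance assumption is violated.
from statsmodels.stats.diagnostic import het_breuschpagan

lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(model.resid, model.model.exog)
print('LM p-value: %.4g, F p-value: %.4g' % (lm_pvalue, f_pvalue))
```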
[
[
"## 1.3 Identify the standard errors and P-Values of these coefficients in the output table. What is the interpretation of the P-values here?",
"_____no_output_____"
],
[
" *Features that have p-values of < 0.05 means we will reject the null hypothesis, thus assuming that there is no significant difference between x and y and having high standard errors means that our coef are less accurate.* ",
"_____no_output_____"
],
[
"## 1.4 Remove outliers from your dataset and run the regression again. Do you see a change in some coefficients? Which seem to move the most?",
"_____no_output_____"
],
[
"**Removing Outliers**",
"_____no_output_____"
]
],
[
[
"# Looking at the original shape\nprint(df.shape)\n# Removing outliers\nnew_df = df[(np.abs(stats.zscore(df)) < 3).all(axis=1)]\n# Looking at the new shape\nprint(new_df.shape)",
"(21613, 15)\n(19628, 15)\n"
],
[
"# Same as above..\nx = new_df[['bedrooms', 'bathrooms', 'sqft_living',\n 'sqft_lot', 'floors', 'waterfront', 'condition', 'grade', 'sqft_basement', 'yr_built']] \ny = new_df['price']\n# Fitting our Model\nx = sm.add_constant(x)\nmodel = sm.OLS(y, x).fit()\npredictions = model.predict(x) \n# Summarizing\nsummarizing = model.summary()\nprint(summarizing)",
" OLS Regression Results \n==============================================================================\nDep. Variable: price R-squared: 0.573\nModel: OLS Adj. R-squared: 0.573\nMethod: Least Squares F-statistic: 2925.\nDate: Thu, 02 May 2019 Prob (F-statistic): 0.00\nTime: 23:20:46 Log-Likelihood: -2.6221e+05\nNo. Observations: 19628 AIC: 5.244e+05\nDf Residuals: 19618 BIC: 5.245e+05\nDf Model: 9 \nCovariance Type: nonrobust \n=================================================================================\n coef std err t P>|t| [0.025 0.975]\n---------------------------------------------------------------------------------\nconst 5.336e+06 9.92e+04 53.807 0.000 5.14e+06 5.53e+06\nbedrooms -2.184e+04 1675.437 -13.037 0.000 -2.51e+04 -1.86e+04\nbathrooms 3.308e+04 2749.407 12.033 0.000 2.77e+04 3.85e+04\nsqft_living 106.3765 2.980 35.703 0.000 100.536 112.217\nsqft_lot -0.6139 0.100 -6.169 0.000 -0.809 -0.419\nfloors 4.206e+04 2885.830 14.574 0.000 3.64e+04 4.77e+04\nwaterfront -8.976e-11 2.32e-12 -38.703 0.000 -9.43e-11 -8.52e-11\ncondition 1.862e+04 1862.980 9.997 0.000 1.5e+04 2.23e+04\ngrade 1.176e+05 1690.613 69.553 0.000 1.14e+05 1.21e+05\nsqft_basement 23.3834 3.625 6.451 0.000 16.278 30.488\nyr_built -3078.1737 51.026 -60.325 0.000 -3178.190 -2978.158\n==============================================================================\nOmnibus: 3455.001 Durbin-Watson: 1.960\nProb(Omnibus): 0.000 Jarque-Bera (JB): 10515.307\nSkew: 0.916 Prob(JB): 0.00\nKurtosis: 6.083 Cond. No. 3.96e+20\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n[2] The smallest eigenvalue is 3e-29. This might indicate that there are\nstrong multicollinearity problems or that the design matrix is singular.\n"
]
],
[
[
"Only notice a slight change in the coefs, sqft_basement was the one with the most significant change",
"_____no_output_____"
],
[
"## 1.5 Create a new log(y) variable and use it to run a log-linear regression of your variables using statmodels ",
"_____no_output_____"
]
],
[
[
"# Creating a new log variable\nnew_df['ln_price'] = np.log(new_df['price'])\n# Setting up x and y\nx = new_df[['bedrooms', 'bathrooms', 'sqft_living',\n 'sqft_lot', 'floors', 'waterfront', 'condition', 'grade', 'sqft_basement', 'yr_built']] \ny = new_df['ln_price']\n# Fitting our model \nx = sm.add_constant(x)\nmodel = sm.OLS(y, x).fit()\npredictions = model.predict(x) \n# Summarizing\nsummarizing = model.summary()\nprint(summarizing)",
" OLS Regression Results \n==============================================================================\nDep. Variable: ln_price R-squared: 0.560\nModel: OLS Adj. R-squared: 0.560\nMethod: Least Squares F-statistic: 2772.\nDate: Thu, 02 May 2019 Prob (F-statistic): 0.00\nTime: 23:20:51 Log-Likelihood: -4678.7\nNo. Observations: 19628 AIC: 9377.\nDf Residuals: 19618 BIC: 9456.\nDf Model: 9 \nCovariance Type: nonrobust \n=================================================================================\n coef std err t P>|t| [0.025 0.975]\n---------------------------------------------------------------------------------\nconst 21.8134 0.199 109.769 0.000 21.424 22.203\nbedrooms -0.0356 0.003 -10.599 0.000 -0.042 -0.029\nbathrooms 0.0790 0.006 14.347 0.000 0.068 0.090\nsqft_living 0.0002 5.97e-06 29.338 0.000 0.000 0.000\nsqft_lot -8.238e-07 1.99e-07 -4.131 0.000 -1.21e-06 -4.33e-07\nfloors 0.1073 0.006 18.554 0.000 0.096 0.119\nwaterfront -1.817e-16 4.65e-18 -39.109 0.000 -1.91e-16 -1.73e-16\ncondition 0.0375 0.004 10.038 0.000 0.030 0.045\ngrade 0.2296 0.003 67.775 0.000 0.223 0.236\nsqft_basement 8.412e-05 7.26e-06 11.582 0.000 6.99e-05 9.84e-05\nyr_built -0.0057 0.000 -55.807 0.000 -0.006 -0.006\n==============================================================================\nOmnibus: 75.028 Durbin-Watson: 1.970\nProb(Omnibus): 0.000 Jarque-Bera (JB): 81.471\nSkew: -0.118 Prob(JB): 2.04e-18\nKurtosis: 3.211 Cond. No. 3.96e+20\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n[2] The smallest eigenvalue is 3e-29. This might indicate that there are\nstrong multicollinearity problems or that the design matrix is singular.\n"
]
],
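[
 [
  "A short worked example (a sketch, using the `floors` coefficient of 0.1073 from the summary above) of how to read a log-linear coefficient: since the response is log(price), a one-unit increase in a regressor changes price by roughly 100*(exp(coef) - 1) percent, other variables held fixed.\n\n```python\n# approximate percent change in price per additional floor\nprint(100 * (np.exp(0.1073) - 1))  # roughly 11.3%\n```",
  "_____no_output_____"
 ]
],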
[
[
"## 2.1 Run a test for high levels of collinearity in your dataset. Calculate the Variance Inflation Factor for each X variable. Do you see VIF values greater than ten? If so try omitting those X variables and run your regression again. Do the standard errors change? Do the coefficients change? Do the coefficients seem to have an interpretation that matches your intuition?",
"_____no_output_____"
]
],
[
[
"from statsmodels.stats.outliers_influence import variance_inflation_factor",
"_____no_output_____"
],
[
"VIF = [variance_inflation_factor(x.values, i) for i in range(x.shape[1])]\nprint(VIF)",
"[8214.245306736519, 1.6943031668690007, 3.0076860370758545, 4.0450492273312815, 1.1283519115120608, 2.0114348919232468, nan, 1.199410357015754, 2.497010147657055, 1.6574551052141422, 1.8782253464772642]\n"
],
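 [
  "# Sketch of a manual cross-check of one VIF value, assuming `x` is still the\n# design matrix (with constant) from the cell above. VIF_j = 1 / (1 - R^2_j),\n# where R^2_j comes from regressing column j on all remaining columns.\n# Note: the very large VIF of the constant is expected, and the nan corresponds\n# to the degenerate waterfront column.\naux = sm.OLS(x['sqft_living'], x.drop(columns='sqft_living')).fit()\nprint(1.0 / (1.0 - aux.rsquared))  # should be close to the value reported above",
  "_____no_output_____"
 ],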
[
"# Setting up x and y\nx = new_df[['bedrooms', 'bathrooms', 'sqft_living',\n 'sqft_lot', 'floors', 'condition', 'grade', 'sqft_basement', 'yr_built']] \ny = new_df['ln_price']\n# Fitting the model\nx = sm.add_constant(x)\nmodel = sm.OLS(y, x).fit()\npredictions = model.predict(x) \n# Summarizing\nsummarizing = model.summary()\nprint(summarizing)",
" OLS Regression Results \n==============================================================================\nDep. Variable: ln_price R-squared: 0.560\nModel: OLS Adj. R-squared: 0.560\nMethod: Least Squares F-statistic: 2772.\nDate: Thu, 02 May 2019 Prob (F-statistic): 0.00\nTime: 23:20:59 Log-Likelihood: -4678.7\nNo. Observations: 19628 AIC: 9377.\nDf Residuals: 19618 BIC: 9456.\nDf Model: 9 \nCovariance Type: nonrobust \n=================================================================================\n coef std err t P>|t| [0.025 0.975]\n---------------------------------------------------------------------------------\nconst 21.8134 0.199 109.769 0.000 21.424 22.203\nbedrooms -0.0356 0.003 -10.599 0.000 -0.042 -0.029\nbathrooms 0.0790 0.006 14.347 0.000 0.068 0.090\nsqft_living 0.0002 5.97e-06 29.338 0.000 0.000 0.000\nsqft_lot -8.238e-07 1.99e-07 -4.131 0.000 -1.21e-06 -4.33e-07\nfloors 0.1073 0.006 18.554 0.000 0.096 0.119\ncondition 0.0375 0.004 10.038 0.000 0.030 0.045\ngrade 0.2296 0.003 67.775 0.000 0.223 0.236\nsqft_basement 8.412e-05 7.26e-06 11.582 0.000 6.99e-05 9.84e-05\nyr_built -0.0057 0.000 -55.807 0.000 -0.006 -0.006\n==============================================================================\nOmnibus: 75.028 Durbin-Watson: 1.970\nProb(Omnibus): 0.000 Jarque-Bera (JB): 81.471\nSkew: -0.118 Prob(JB): 2.04e-18\nKurtosis: 3.211 Cond. No. 1.40e+06\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n[2] The condition number is large, 1.4e+06. This might indicate that there are\nstrong multicollinearity or other numerical problems.\n"
]
],
[
[
"Coefs didn't change too much",
"_____no_output_____"
],
[
"## 2.2 Variables that have high levels of multicollinearity should also be highly correlated with each other. Calculate your X matrix's correlation matrix to check if the variables highlighted by the VIF test truly are highly correlated.",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(10, 10))\nsns.heatmap(x.corr());",
"_____no_output_____"
]
],
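[
 [
  "As a numeric complement to the heatmap (a sketch, assuming `x` is the same design matrix used above), the strongest pairwise correlations can also be listed explicitly:\n\n```python\n# absolute pairwise correlations between predictors, largest first\ncorr_pairs = x.drop(columns='const').corr().abs().unstack().sort_values(ascending=False)\n# drop the self-correlations of 1; note each pair appears twice, as (A, B) and (B, A)\nprint(corr_pairs[corr_pairs < 1].head(10))\n```",
  "_____no_output_____"
 ]
],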
[
[
"## 2.3 If you have variables with high Variance Inflation Factors, try excluding them from your regression. Do your standard errors improve? (get smaller). If high levels of multicollinearity are removed, the precision of the dataset should increase.",
"_____no_output_____"
],
[
"N/A.... I might have missed something...",
"_____no_output_____"
],
[
"## 2.4 Recalculate your regression using Robust Standard Errors? What happens to your standard errors?",
"_____no_output_____"
]
],
[
[
"# My x and y\nx = new_df[['bedrooms', 'bathrooms', 'sqft_living',\n 'sqft_lot', 'floors', 'condition', 'grade', 'sqft_basement', 'yr_built']] \ny = new_df['ln_price']\n# Fitting the model\nx = sm.add_constant(x)\nmodel = sm.OLS(y, x).fit(cov_type='HC3')\npredictions = model.predict(x) \n# Summarizing\nsummarizing = model.summary()\nprint(summarizing)",
"/usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py:2389: FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.\n return ptp(axis=axis, out=out, **kwargs)\n"
]
],
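[
 [
  "Because the robust-error summary above is partly hidden behind the warning, a direct comparison of the two sets of standard errors makes the effect easier to see (a sketch, assuming `x` and `y` are still defined as in the cell above):\n\n```python\nols_fit = sm.OLS(y, x).fit()\nrobust_fit = sm.OLS(y, x).fit(cov_type='HC3')\n# heteroscedasticity-consistent (HC3) standard errors are typically somewhat larger\nprint(ols_fit.bse)\nprint(robust_fit.bse)\n```",
  "_____no_output_____"
 ]
],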
[
[
"## 2.5 Use scatterplots or Seaborn's pairplot functionality to perform an eyeball test for potential variables that would be candidates for generating polynomial regressors. ",
"_____no_output_____"
]
],
[
[
"# Didn't have time to make it pink :(\n# Just the columns I would like to look at\nx_cols = ['bedrooms', 'bathrooms', 'sqft_living',\n 'sqft_lot', 'floors', 'condition', 'grade', 'sqft_basement', 'yr_built']\n# Scatterplot of x and y variables\nsns.pairplot(data=new_df, y_vars=['ln_price'], x_vars=x_cols);",
"_____no_output_____"
]
],
[
[
"## 2.6 Use seaborn's residplot to plot the distribution of each x variable's residuals. Does these plots indicate any other features that would be potential candidates for polynomial features.",
"_____no_output_____"
]
],
[
[
"fig, axs = plt.subplots(ncols=2,nrows=4)\n# Setting up my variables \nsns.residplot(x['bedrooms'], y, lowess=True, color=\"black\", ax=axs[0][0], scatter_kws={'color':'hotpink'});\nsns.residplot(x['bathrooms'], y, lowess=True, color=\"black\", ax=axs[0][1], scatter_kws={'color':'hotpink'});\nsns.residplot(x['sqft_living'], y, lowess=True, color=\"black\", ax=axs[1][0], scatter_kws={'color':'hotpink'});\nsns.residplot(x['sqft_lot'], y, lowess=True, color=\"black\", ax=axs[1][1], scatter_kws={'color':'hotpink'});\nsns.residplot(x['floors'], y, lowess=True, color=\"black\", ax=axs[2][0], scatter_kws={'color':'hotpink'});\nsns.residplot(x['condition'], y, lowess=True, color=\"black\", ax=axs[2][1], scatter_kws={'color':'hotpink'});\nsns.residplot(x['grade'], y, lowess=True, color=\"black\", ax=axs[3][0], scatter_kws={'color':'hotpink'});\nsns.residplot(x['sqft_basement'], y, lowess=True, color=\"black\", ax=axs[3][1], scatter_kws={'color':'hotpink'});\nsns.residplot(x['yr_built'], y, lowess=True, color=\"black\", ax=axs[3][0], scatter_kws={'color':'hotpink'});",
"_____no_output_____"
]
],
[
[
"## 2.6 Feature Engineer the appropriate polynomial features from your analysis above and include them in one final log-polynomial, robust standard error, regression. Do the coefficients of this most advanced regression match your intuition better than the coefficients of the very first regression that we ran with the Statmodels library?",
"_____no_output_____"
]
],
[
[
"new_df.columns",
"_____no_output_____"
],
[
"# Feature Engineering\nnew_df['Yr_Built_SQRD'] = new_df['yr_built']**2\nnew_df['Basement_SQRD'] = new_df['sqft_basement']**2\n# Some Columns\nx_cols2 = ['bedrooms', 'bathrooms', 'sqft_living',\n 'sqft_lot', 'floors', 'condition', 'grade', 'sqft_basement', 'yr_built',\n 'Yr_Built_SQRD', 'Basement_SQRD']\n# Setting up my x and y\nx = new_df[x_cols2] \ny = new_df['ln_price']\n# Fitting model\nmodel = sm.OLS(y, sm.add_constant(x))\nresults = model.fit()\nprint(results.summary())",
" OLS Regression Results \n==============================================================================\nDep. Variable: ln_price R-squared: 0.565\nModel: OLS Adj. R-squared: 0.565\nMethod: Least Squares F-statistic: 2315.\nDate: Thu, 02 May 2019 Prob (F-statistic): 0.00\nTime: 23:21:43 Log-Likelihood: -4564.4\nNo. Observations: 19628 AIC: 9153.\nDf Residuals: 19616 BIC: 9247.\nDf Model: 11 \nCovariance Type: nonrobust \n=================================================================================\n coef std err t P>|t| [0.025 0.975]\n---------------------------------------------------------------------------------\nconst 153.8343 11.392 13.504 0.000 131.505 176.164\nbedrooms -0.0298 0.003 -8.840 0.000 -0.036 -0.023\nbathrooms 0.0633 0.006 11.361 0.000 0.052 0.074\nsqft_living 0.0002 6.06e-06 30.280 0.000 0.000 0.000\nsqft_lot -2.757e-07 2.02e-07 -1.363 0.173 -6.72e-07 1.21e-07\nfloors 0.0789 0.006 12.430 0.000 0.066 0.091\ncondition 0.0450 0.004 11.996 0.000 0.038 0.052\ngrade 0.2278 0.003 67.414 0.000 0.221 0.234\nsqft_basement 0.0002 1.79e-05 13.652 0.000 0.000 0.000\nyr_built -0.1407 0.012 -12.087 0.000 -0.163 -0.118\nYr_Built_SQRD 3.449e-05 2.97e-06 11.602 0.000 2.87e-05 4.03e-05\nBasement_SQRD -1.567e-07 1.65e-08 -9.522 0.000 -1.89e-07 -1.24e-07\n==============================================================================\nOmnibus: 66.558 Durbin-Watson: 1.973\nProb(Omnibus): 0.000 Jarque-Bera (JB): 75.829\nSkew: -0.093 Prob(JB): 3.42e-17\nKurtosis: 3.241 Cond. No. 2.03e+10\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n[2] The condition number is large, 2.03e+10. This might indicate that there are\nstrong multicollinearity or other numerical problems.\n"
]
],
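[
 [
  "As a final diagnostic (a sketch, assuming `results` is the fitted model from the cell above and that `stats` and `plt` refer to scipy.stats and matplotlib.pyplot as imported earlier in this notebook), the residuals can be checked graphically for normality:\n\n```python\nfig, axes = plt.subplots(ncols=2, figsize=(10, 4))\n# roughly bell-shaped residuals support the normality assumption\naxes[0].hist(results.resid, bins=50)\naxes[0].set_title('Residual histogram')\n# Q-Q plot of the residuals against the normal distribution\nstats.probplot(results.resid, dist='norm', plot=axes[1])\nplt.show()\n```",
  "_____no_output_____"
 ]
],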
[
[
"# Stretch Goals\n\n- Research the assumptions that are required for OLS to be BLUE the \"Best Linear Unbiased Estimator\". You might try searching and trying to understand the conditions of what's called the Gauss-Markov Theorem.\n- Research other diagnostic tests. Can you show that residuals are normally distributed graphically?\n- Write a blog post about inferential modeling using linear regression.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
eca09b36c5d9c354a50b9554cb8bdd97bd0f9a08 | 8,207 | ipynb | Jupyter Notebook | day2/2_NumPy_Exercises.ipynb | vsay01/My_AI_Challeges | 6df24f3a3308556e59de212ef1baa20b9f0d01a7 | [
"MIT"
] | null | null | null | day2/2_NumPy_Exercises.ipynb | vsay01/My_AI_Challeges | 6df24f3a3308556e59de212ef1baa20b9f0d01a7 | [
"MIT"
] | null | null | null | day2/2_NumPy_Exercises.ipynb | vsay01/My_AI_Challeges | 6df24f3a3308556e59de212ef1baa20b9f0d01a7 | [
"MIT"
] | null | null | null | 20.314356 | 122 | 0.439503 | [
[
[
"References: https://www.machinelearningplus.com/python/101-numpy-exercises-python/",
"_____no_output_____"
],
[
"#### Q. Import numpy as np and print the version number.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nprint(np.__version__)",
"1.17.2\n"
]
],
[
[
"#### Q. Create a 1D array of numbers from 0 to 9\n\nDesired output:\n\n#> array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])",
"_____no_output_____"
]
],
[
[
"# solution 1\nmy_array = np.array([0,1,2,3,4,5,6,7,8,9])\nmy_array",
"_____no_output_____"
],
[
"# solution 2\nmy_array = np.arange(10)\nmy_array",
"_____no_output_____"
]
],
[
[
"#### Q. Create a 3×3 numpy array of all True’s",
"_____no_output_____"
]
],
[
[
"my_array = np.full((3,3), True, dtype=bool)\nmy_array",
"_____no_output_____"
]
],
[
[
"- numpy.full(shape, fill_value, dtype=None, order='C')\nReturn a new array of given shape and type, filled with fill_value.\n\n Parameters:\t\n shape : int or sequence of ints \n Shape of the new array, e.g., (2, 3) or 2.\n\n fill_value : scalar Fill value.\n\n dtype : data-type, optional \n The desired data-type for the array The default, None, means : np.array(fill_value).dtype.\n\n order : {‘C’, ‘F’}, optional\n Whether to store multidimensional data in C- or Fortran-contiguous (row- or column-wise) order in memory.\n\n- Returns:\t\nout : ndarray\nArray of fill_value with the given shape, dtype, and order.",
"_____no_output_____"
],
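 [
  "A couple of extra `np.full` calls as a quick illustration (a sketch):\n\n```python\nnp.full((2, 3), 7)   # 2x3 array filled with 7\nnp.full(4, True)     # length-4 boolean array; dtype is inferred from the fill value\n```",
  "_____no_output_____"
 ],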
[
"#### Q. How to extract items that satisfy a given condition from 1D array?\n\nExtract all odd numbers from arr\n\nInput:\n\narr = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\nDesired output:\n\n#> array([1, 3, 5, 7, 9])",
"_____no_output_____"
]
],
[
[
"# solution 1\narr = np.array([0,1,2,3,4,5,6,7,8,9])\ncondition = np.mod(arr, 2)!=0\nprint(condition)\nextract_arr = np.extract(condition, arr)\nextract_arr",
"[False True False True False True False True False True]\n"
],
[
"# solution 2\narr[arr % 2 == 1]",
"_____no_output_____"
]
],
[
[
"#### Q. Replace all odd numbers in arr with -1\n\nInput:\n\narr = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\nDesired Output:\n\n#> array([ 0, -1, 2, -1, 4, -1, 6, -1, 8, -1])",
"_____no_output_____"
]
],
[
[
"# solution 1\narr = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\narr[arr % 2 == 1] = -1\narr",
"_____no_output_____"
],
[
"# solution 2\narr = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\nprint(arr)\ncondition = np.mod(arr, 2) != 0\nnp.place(arr, condition, [-1])\narr",
"[0 1 2 3 4 5 6 7 8 9]\n"
]
],
[
[
"#### Q. Replace all odd numbers in arr with -1 without changing arr\n\nInput:\n\narr = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\nDesired Output:\n\nout\n\n#> array([ 0, -1, 2, -1, 4, -1, 6, -1, 8, -1])\n\narr\n#> array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])",
"_____no_output_____"
]
],
[
[
"arr = np.arange(10)\narr\nout = np.where(arr % 2 == 1, -1, arr)\nprint(arr)\nout",
"[0 1 2 3 4 5 6 7 8 9]\n"
]
],
[
[
"#### Q. Convert a 1D array to a 2D array with 2 rows\n\nInput:\n\nnp.arange(10)\n\n#> array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\nDesired Output:\n\n#> array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]])",
"_____no_output_____"
]
],
[
[
"arr = np.arange(10)\narr.reshape(2, -1)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
eca09d924329d6baca86117d8c955df843642f85 | 403 | ipynb | Jupyter Notebook | test/dssat/Cumulttfrom.ipynb | cyrillemidingoyi/SQ_Wheat_Phenology | 9f145e34eb837a7aadfb861f2c632d21b2e679f3 | [
"MIT"
] | null | null | null | test/dssat/Cumulttfrom.ipynb | cyrillemidingoyi/SQ_Wheat_Phenology | 9f145e34eb837a7aadfb861f2c632d21b2e679f3 | [
"MIT"
] | null | null | null | test/dssat/Cumulttfrom.ipynb | cyrillemidingoyi/SQ_Wheat_Phenology | 9f145e34eb837a7aadfb861f2c632d21b2e679f3 | [
"MIT"
] | null | null | null | 16.12 | 58 | 0.521092 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
eca0a165a6e6717b8b6f1212aab545684e4b56b4 | 111,616 | ipynb | Jupyter Notebook | model-lsr-quality.ipynb | meawal/SiFi-CC-neural-network2 | 6dbd6e6c7b59718ccf2be60614afa05c437657c6 | [
"MIT"
] | null | null | null | model-lsr-quality.ipynb | meawal/SiFi-CC-neural-network2 | 6dbd6e6c7b59718ccf2be60614afa05c437657c6 | [
"MIT"
] | null | null | null | model-lsr-quality.ipynb | meawal/SiFi-CC-neural-network2 | 6dbd6e6c7b59718ccf2be60614afa05c437657c6 | [
"MIT"
] | null | null | null | 213.007634 | 63,628 | 0.877697 | [
[
[
"import numpy as np\nimport uproot\nimport matplotlib.pyplot as plt\nfrom tensorflow import keras\nimport tensorflow.keras.backend as K\n\nfrom sificc_lib import DataModelQlty, AIQlty, utils, Event, Simulation, root_files\nnp.set_printoptions(precision=2, linewidth=115, suppress=True)\n\n%matplotlib inline",
"_____no_output_____"
],
[
"# model name\nmodel_name = 'model-lsr-quality'\n\n# source model name to load the network weights\nsource_model = 'model-lsr'\n\nshuffle_clusters = False\n\n# load the training data\ndata = DataModelQlty('data-qlty-top-8.npz', batch_size = 128, \n validation_percent = .1, test_percent = .2)\n\n# append an extra dimention to the features since we are using convolutional layers\ndata.append_dim = True\n\n# create an AI instance\nai = AIQlty(data, model_name)\n\n# shuffle the clusters within each event\nif shuffle_clusters:\n ai.data.shuffle_training_clusters()\n \n# balance the training data since there are too many background events\nai.data.balance_training = True\n\n# define the priority of selection\nai.data.weight_non_compton = .1",
"_____no_output_____"
],
[
"# define the learning rate scheduler\ndef lr_scheduler(epoch):\n if epoch < 20:\n return .001\n elif epoch < 40:\n return .0003\n elif epoch < 60:\n return .0001\n elif epoch < 80:\n return .00001\n else:\n return .000003\n\n# define and create the neural network architecture\nai.create_model(conv_layers=[128, 64], classifier_layers=[32], type_layers=[16, 8], \n pos_layers=[64,32], energy_layers=[32, 16], base_l2=.000, limbs_l2=.000)",
"Model: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninputs (InputLayer) [(None, 72, 1)] 0 \n__________________________________________________________________________________________________\nconv_1 (Conv1D) (None, 8, 128) 1280 inputs[0][0] \n__________________________________________________________________________________________________\nconv_2 (Conv1D) (None, 8, 64) 8256 conv_1[0][0] \n__________________________________________________________________________________________________\nflatting (Flatten) (None, 512) 0 conv_2[0][0] \n__________________________________________________________________________________________________\ndense_cluster_1 (Dense) (None, 32) 16416 flatting[0][0] \n__________________________________________________________________________________________________\ne_cluster (Dense) (None, 8) 264 dense_cluster_1[0][0] \n__________________________________________________________________________________________________\np_cluster (Dense) (None, 8) 264 dense_cluster_1[0][0] \n__________________________________________________________________________________________________\ne_hardmax (Lambda) (None, 8) 0 e_cluster[0][0] \n__________________________________________________________________________________________________\np_hardmax (Lambda) (None, 8) 0 p_cluster[0][0] \n__________________________________________________________________________________________________\njoin_layer (Concatenate) (None, 528) 0 flatting[0][0] \n e_hardmax[0][0] \n p_hardmax[0][0] \n__________________________________________________________________________________________________\ndense_type_1 (Dense) (None, 16) 8464 join_layer[0][0] \n__________________________________________________________________________________________________\ndense_pos_1 (Dense) (None, 64) 33856 join_layer[0][0] \n__________________________________________________________________________________________________\ndense_energy_1 (Dense) (None, 32) 16928 join_layer[0][0] \n__________________________________________________________________________________________________\ndense_type_2 (Dense) (None, 8) 136 dense_type_1[0][0] \n__________________________________________________________________________________________________\ndense_pos_2 (Dense) (None, 32) 2080 dense_pos_1[0][0] \n__________________________________________________________________________________________________\ndense_energy_2 (Dense) (None, 16) 528 dense_energy_1[0][0] \n__________________________________________________________________________________________________\ntype (Dense) (None, 1) 9 dense_type_2[0][0] \n__________________________________________________________________________________________________\npos_x (Dense) (None, 2) 66 dense_pos_2[0][0] \n__________________________________________________________________________________________________\npos_y (Dense) (None, 2) 66 dense_pos_2[0][0] \n__________________________________________________________________________________________________\npos_z (Dense) (None, 2) 66 dense_pos_2[0][0] \n__________________________________________________________________________________________________\nenergy (Dense) (None, 2) 34 dense_energy_2[0][0] \n==================================================================================================\nTotal params: 88,713\nTrainable params: 
88,713\nNon-trainable params: 0\n__________________________________________________________________________________________________\n"
],
[
"# #LOADING THE MODEL\n# ai.load(source_model, optimizer=False)\n# ai.extend_model(quality_layers=[64, 32, 32], plot_summary=False)\n# ai.load(model_name, optimizer=False)\n# ai.compile_model()",
"_____no_output_____"
],
[
"# TRAINING THE MODEL\n# load the weights of the source model\nai.load(source_model, optimizer=False)\nai.compile_model()",
"_____no_output_____"
],
[
"# freeze all network components\nfor layer in ai.model.layers:\n layer.trainable = False\n \n# extend the model with the quality part\nai.extend_model(quality_layers=[64, 32, 32], limbs_l2=.00003)\n\n# # eliminate the components weight not intended for training\nai.weight_type = 0\nai.weight_e_cluster = 0\nai.weight_p_cluster = 0\nai.weight_pos_x = 0\nai.weight_pos_y = 0\nai.weight_pos_z = 0\nai.weight_energy = 0\nai.weight_qlty = 1\n\n# print the trainable layers\nprint('\\ntrainable layers:')\nfor layer in ai.model.layers:\n if layer.trainable:\n print('{:17s}{}'.format(layer.name, layer.trainable))\n\n# compile the model for training\nai.compile_model(learning_rate=.0003)",
"Model: \"model_1\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninputs (InputLayer) [(None, 72, 1)] 0 \n__________________________________________________________________________________________________\nconv_1 (Conv1D) (None, 8, 128) 1280 inputs[0][0] \n__________________________________________________________________________________________________\nconv_2 (Conv1D) (None, 8, 64) 8256 conv_1[0][0] \n__________________________________________________________________________________________________\nflatting (Flatten) (None, 512) 0 conv_2[0][0] \n__________________________________________________________________________________________________\ndense_cluster_1 (Dense) (None, 32) 16416 flatting[0][0] \n__________________________________________________________________________________________________\ne_cluster (Dense) (None, 8) 264 dense_cluster_1[0][0] \n__________________________________________________________________________________________________\np_cluster (Dense) (None, 8) 264 dense_cluster_1[0][0] \n__________________________________________________________________________________________________\ne_hardmax (Lambda) (None, 8) 0 e_cluster[0][0] \n__________________________________________________________________________________________________\np_hardmax (Lambda) (None, 8) 0 p_cluster[0][0] \n__________________________________________________________________________________________________\njoin_layer (Concatenate) (None, 528) 0 flatting[0][0] \n e_hardmax[0][0] \n p_hardmax[0][0] \n__________________________________________________________________________________________________\ndense_quality_1 (Dense) (None, 64) 33856 join_layer[0][0] \n__________________________________________________________________________________________________\ndense_type_1 (Dense) (None, 16) 8464 join_layer[0][0] \n__________________________________________________________________________________________________\ndense_pos_1 (Dense) (None, 64) 33856 join_layer[0][0] \n__________________________________________________________________________________________________\ndense_energy_1 (Dense) (None, 32) 16928 join_layer[0][0] \n__________________________________________________________________________________________________\ndense_quality_2 (Dense) (None, 32) 2080 dense_quality_1[0][0] \n__________________________________________________________________________________________________\ndense_type_2 (Dense) (None, 8) 136 dense_type_1[0][0] \n__________________________________________________________________________________________________\ndense_pos_2 (Dense) (None, 32) 2080 dense_pos_1[0][0] \n__________________________________________________________________________________________________\ndense_energy_2 (Dense) (None, 16) 528 dense_energy_1[0][0] \n__________________________________________________________________________________________________\ndense_quality_3 (Dense) (None, 32) 1056 dense_quality_2[0][0] \n__________________________________________________________________________________________________\ntype (Dense) (None, 1) 9 dense_type_2[0][0] \n__________________________________________________________________________________________________\npos_x (Dense) (None, 2) 66 dense_pos_2[0][0] \n__________________________________________________________________________________________________\npos_y 
(Dense) (None, 2) 66 dense_pos_2[0][0] \n__________________________________________________________________________________________________\npos_z (Dense) (None, 2) 66 dense_pos_2[0][0] \n__________________________________________________________________________________________________\nenergy (Dense) (None, 2) 34 dense_energy_2[0][0] \n__________________________________________________________________________________________________\nquality (Dense) (None, 4) 132 dense_quality_3[0][0] \n==================================================================================================\nTotal params: 125,837\nTrainable params: 37,124\nNon-trainable params: 88,713\n__________________________________________________________________________________________________\n\ntrainable layers:\ndense_quality_1 True\ndense_quality_2 True\ndense_quality_3 True\nquality True\n"
],
[
"%%time\nl_callbacks = [\n keras.callbacks.LearningRateScheduler(lr_scheduler),\n]\n\n# start the training of the network\nai.train(epochs=100, shuffle_clusters=shuffle_clusters, \n verbose=0, callbacks = l_callbacks)",
"CPU times: user 3h 9min 38s, sys: 2min 34s, total: 3h 12min 12s\nWall time: 3h 13min 57s\n"
],
[
"ai.plot_training_loss(mode='loss-quality', summed_loss=False)",
"_____no_output_____"
],
[
"ai.evaluate(quality_filter=[.75,.8,.7,.7])",
"AI model\n Loss: 0.01567\n -Type: 0.49152 * 0.00 = 0.00000\n -Pos X: 0.04671 * 0.00 = 0.00000\n -Pos Y: 1.08707 * 0.00 = 0.00000\n -Pos Z: 0.02323 * 0.00 = 0.00000\n -Energy: 0.84564 * 0.00 = 0.00000\n -Cls e: 0.02332 * 0.00 = 0.00000\n -Cls p: 0.05627 * 0.00 = 0.00000\n -Quality: 0.01441 * 1.00 = 0.01441\n Accuracy: 0.75436\n -Precision: 0.31103\n -Recall: 0.18047\n -Cls e rate: 0.95611\n -Cls p rate: 0.86112\n Efficiency: 0.08105\n Purity: 0.13968\n Euc mean: 7.09194\n Euc std: 10.01548\n Energy mean: 0.30298\n Energy std: 0.70733\n\nReco\n Accuracy: 0.68734\n -TP rate: 0.66029\n Efficiency: 0.12303\n Purity: 0.04894\n Euc mean: 12.21821\n Euc std: 18.24261\n Energy mean: 0.44373\n Energy std: 0.94473\n"
],
[
"ai.evaluate(quality_filter=[.8,.8,.7,.7])",
"AI model\n Loss: 3.59581\n -Type: 0.49152 * 2.00 = 0.98304\n -Pos X: 0.04671 * 2.50 = 0.11678\n -Pos Y: 1.08707 * 1.00 = 1.08707\n -Pos Z: 0.02323 * 2.00 = 0.04646\n -Energy: 0.84564 * 1.50 = 1.26846\n -Cls e: 0.02332 * 1.00 = 0.02332\n -Cls p: 0.05627 * 1.00 = 0.05627\n -Quality: 0.01441 * 1.00 = 0.01441\n Accuracy: 0.75436\n -Precision: 0.31195\n -Recall: 0.16336\n -Cls e rate: 0.95611\n -Cls p rate: 0.86112\n Efficiency: 0.07554\n Purity: 0.14425\n Euc mean: 6.87115\n Euc std: 8.31707\n Energy mean: 0.30016\n Energy std: 0.68560\n\nReco\n Accuracy: 0.68734\n -TP rate: 0.66029\n Efficiency: 0.12303\n Purity: 0.04894\n Euc mean: 12.21821\n Euc std: 18.24261\n Energy mean: 0.44373\n Energy std: 0.94473\n"
],
[
"# save the trained model\nai.save(file_name=model_name)",
"_____no_output_____"
],
[
"ai.export_predictions_root('sificc-nn-llr-qlty.root', quality_filter=[.75,.8,.7,.7])",
"_____no_output_____"
],
[
"%%time\ndef plot_cross_evaluation(pos, title):\n l_quality = [None, .1, .2, .3, .4, .5, .6, .7, .8, .9]\n l_filter = [None, None, None, None]\n l_eff = []\n l_pur = []\n for i in l_quality:\n l_filter[pos] = i\n eff, pur = ai.calc_eff_pur(quality_filter=l_filter)\n l_eff.append(eff)\n l_pur.append(pur)\n\n plt.title(title)\n plt.plot(np.arange(0,1,.1), l_eff, label='Efficiency')\n plt.plot(np.arange(0,1,.1), l_pur, label='Purity')\n plt.grid()\n plt.legend()\n \nplt.figure(figsize=(16,8))\nplt.subplot(2,2,1)\nplot_cross_evaluation(0, 'e energy quality')\nplt.subplot(2,2,2)\nplot_cross_evaluation(1, 'p energy quality')\nplt.subplot(2,2,3)\nplot_cross_evaluation(2, 'e position quality')\nplt.subplot(2,2,4)\nplot_cross_evaluation(3, 'p position quality')\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca0aaf282f7e37448c3ab29ed77062fea88bb43 | 1,634 | ipynb | Jupyter Notebook | src/puzzle/examples/gph/a_glistening_occasion.ipynb | PhilHarnish/forge | 663f19d759b94d84935c14915922070635a4af65 | [
"MIT"
] | 2 | 2020-08-18T18:43:09.000Z | 2020-08-18T20:05:59.000Z | src/puzzle/examples/gph/a_glistening_occasion.ipynb | PhilHarnish/forge | 663f19d759b94d84935c14915922070635a4af65 | [
"MIT"
] | null | null | null | src/puzzle/examples/gph/a_glistening_occasion.ipynb | PhilHarnish/forge | 663f19d759b94d84935c14915922070635a4af65 | [
"MIT"
] | null | null | null | 17.569892 | 51 | 0.488984 | [
[
[
"import forge\nfrom puzzle.puzzlepedia import puzzlepedia\n\npuzzle = puzzlepedia.parse(\"\"\"\n@ 1 5 2 13 4 12 5 6 2 11\n* GABE PRUITT\n* SMOKEY\n* LEE JONES\n* TOM IZZO\n* MICHAEL BRADLEY\n* BARACK OBAMA\n* CHRISTIAN WEBSTER\n* CLEANTHONY EARLY\n* GRAYSON ALLEN\n* NRG STADIUM\n\"\"\")",
"_____no_output_____"
],
[
"puzzlepedia.interact_with(puzzle)",
"WARNING\nMax fringe size was: 277\n"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
eca0b5a1f26d3c1bcc908b8737e3862b5da8c8c7 | 15,680 | ipynb | Jupyter Notebook | Object Tracking and Localization/Robot Localization/Inexact Move Function/Inexact Move Function, exercise.ipynb | brand909/Computer-Vision | 18e5bda880e40f0a355d1df8520770df5bb1ed6b | [
"MIT"
] | null | null | null | Object Tracking and Localization/Robot Localization/Inexact Move Function/Inexact Move Function, exercise.ipynb | brand909/Computer-Vision | 18e5bda880e40f0a355d1df8520770df5bb1ed6b | [
"MIT"
] | 4 | 2021-03-19T02:34:33.000Z | 2022-03-11T23:56:20.000Z | Object Tracking and Localization/Robot Localization/Inexact Move Function/Inexact Move Function, exercise.ipynb | brand909/Computer-Vision | 18e5bda880e40f0a355d1df8520770df5bb1ed6b | [
"MIT"
] | null | null | null | 76.487805 | 9,636 | 0.798661 | [
[
[
"# Inexact Move Function\n\nLet's see how we can incorporate **uncertain** motion into our motion update. We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green. \n\nNext, you're tasked with modifying the `move` function so that it incorporates uncertainty in motion.\n\n<img src='images/uncertain_motion.png' width=50% height=50% />\n",
"_____no_output_____"
],
[
"First let's include our usual resource imports and display function.",
"_____no_output_____"
]
],
[
[
"# importing resources\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"A helper function for visualizing a distribution.",
"_____no_output_____"
]
],
[
[
"def display_map(grid, bar_width=1):\n if(len(grid) > 0):\n x_labels = range(len(grid))\n plt.bar(x_labels, height=grid, width=bar_width, color='b')\n plt.xlabel('Grid Cell')\n plt.ylabel('Probability')\n plt.ylim(0, 1) # range of 0-1 for probability values \n plt.title('Probability of the robot being at each cell in the grid')\n plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))\n plt.show()\n else:\n print('Grid is empty')\n",
"_____no_output_____"
]
],
[
[
"You are given the initial variables and the complete `sense` function, below.",
"_____no_output_____"
]
],
[
[
"# given initial variables\np=[0, 1, 0, 0, 0]\n# the color of each grid cell in the 1D world\nworld=['green', 'red', 'red', 'green', 'green']\n# Z, the sensor reading ('red' or 'green')\nZ = 'red'\npHit = 0.6\npMiss = 0.2\n\n# You are given the complete sense function\ndef sense(p, Z):\n ''' Takes in a current probability distribution, p, and a sensor reading, Z.\n Returns a *normalized* distribution after the sensor measurement has been made, q.\n This should be accurate whether Z is 'red' or 'green'. '''\n q=[]\n # loop through all grid cells\n for i in range(len(p)):\n # check if the sensor reading is equal to the color of the grid cell\n # if so, hit = 1\n # if not, hit = 0\n hit = (Z == world[i])\n q.append(p[i] * (hit * pHit + (1-hit) * pMiss))\n \n # sum up all the components\n s = sum(q)\n # divide all elements of q by the sum to normalize\n for i in range(len(p)):\n q[i] = q[i] / s\n return q\n\n# Commented out code for measurements\n# for k in range(len(measurements)):\n# p = sense(p, measurements)\n",
"_____no_output_____"
]
],
[
[
"### QUIZ: Modify the move function to accommodate the added probabilities of overshooting or undershooting the intended destination.\n\nThis function should shift a distribution with the motion, U, with some probability of under/overshooting. For the given, initial `p`, you should see the result for U = 1 and incorporated uncertainties: `[0.0, 0.1, 0.8, 0.1, 0.0]`.",
"_____no_output_____"
]
],
[
[
"## TODO: Modify the move function to accommodate the added robabilities of overshooting or undershooting \npExact = 0.8\npOvershoot = 0.1\npUndershoot = 0.1\n\n# Complete the move function\ndef move(p, U):\n q=[]\n # iterate through all values in p\n for i in range(len(p)):\n ## TODO: Modify this distribution code to incorporate values \n ## for over/undershooting the exact location\n # use the modulo operator to find the new location for a p value\n index = (i-U) % len(p)\n nextIndex = (index+1) % len(p)\n prevIndex = (index-1) % len(p)\n s = pExact * p[index]\n s = s + pOvershoot * p[nextIndex]\n s = s + pUndershoot * p[prevIndex] \n # append the correct, modified value of p to q\n q.append(s)\n return q\n\n## TODO: try this for U = 2 and see the result\np = move(p,1)\nprint(p)\ndisplay_map(p)",
"[0.0, 0.1, 0.8, 0.1, 0.0]\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
eca0bc39180e57b37de2624134dd8f41303565d7 | 22,970 | ipynb | Jupyter Notebook | examples/notebooks/tethered_bead_experiment.ipynb | cellular-nanoscience/pyotic | 4cf68d4fd4efe2f1cbb4bb6fd61a66af0d15eaff | [
"Apache-2.0"
] | 1 | 2018-06-12T11:46:54.000Z | 2018-06-12T11:46:54.000Z | examples/notebooks/tethered_bead_experiment.ipynb | cellular-nanoscience/pyotic | 4cf68d4fd4efe2f1cbb4bb6fd61a66af0d15eaff | [
"Apache-2.0"
] | 6 | 2017-09-08T09:02:20.000Z | 2018-11-14T10:22:01.000Z | examples/notebooks/tethered_bead_experiment.ipynb | cellular-nanoscience/pyotic | 4cf68d4fd4efe2f1cbb4bb6fd61a66af0d15eaff | [
"Apache-2.0"
] | 3 | 2017-09-08T11:08:28.000Z | 2019-07-17T21:40:13.000Z | 26.043084 | 176 | 0.567392 | [
[
[
"\n## Import investigator package of the PyOTIC software",
"_____no_output_____"
]
],
[
[
"# Import os to easily join names of filepaths\nimport os\n\n# Add the path of the PyOTIC Software to the system path\n# Adjust this path to where the PyOTIC Software package is located\nimport sys\nsys.path.append('../../')\n\n#Load investigator package\nimport pyoti\npyoti.info()\n\n#Create an experiment\nexperiment = pyoti.create_experiment()",
"_____no_output_____"
]
],
[
[
"## Create experiment file (or open previously saved one)",
"_____no_output_____"
]
],
[
[
"# Choose the path, were the experiment should be created (or opened from)\n#\n# datadir: The path to where the experiment (and the data) are located\n# datafile: The name of the file that contains the data. Here it is only used to generate dbfile.\n# The data is loaded further down upon creation of a Record.\n# dbfile: The name of the database file the experiment is saved to (or loaded from).\ndatadir = '../data/'\ndatafile = 'B01.bin'\n\n# For the name of the experiment, exchange the extension '.bin' with '.fs'\ndbfile = os.path.join(datadir, datafile.replace('.bin', '.fs'))\n\n# Create/open the experiment dbfile\nexperiment.open(dbfile)\n\n#datadir = '/srv/files/common/Practicals/SingleMoleculeBiophysics SS2015/ASWAD 2015-09-24/'\n#datadir = 'Z:\\\\Practicals\\\\SingleMoleculeBiophysics SS2015\\\\ASWAD 2015-09-24\\\\'",
"_____no_output_____"
],
[
"# show status of Records, Views, MultiRegions, and Modifications in experiment\nexperiment.print_status()",
"_____no_output_____"
],
[
"# cleanup/pack database file\nexperiment.cleanup()",
"_____no_output_____"
],
[
"# save the state of the experiment in the database file\nexperiment.save(pack=True)",
"_____no_output_____"
],
[
"# revert changes since last commit of experiment\nexperiment.abort()",
"_____no_output_____"
],
[
"# close database file\nexperiment.close()",
"_____no_output_____"
]
],
[
[
"## Create a calibration",
"_____no_output_____"
]
],
[
[
"# Choose the calibration type that should be created.\n# See 'pyoti/etc/calibration.cfg' for known types.\n# If you choose an unknown type, a generic calibration is created.\ncalibration_type='pyoticf'\n\n# You can provide a calibration file, where previously stored calibration values are loaded from.\n# Make sure to set a proper corresponding calibration_type, which will load the files provided.\ncalibdir = os.path.join(\"..\", \"calibration\", \"converted_data\")\n#calibfile = 'B01__hc_results.txt'\ncalibfile = datafile.replace('.bin', '__hc_results.txt')\n\n# Create a calibration and assign it to the variable 'calibration'\ncalibration = pyoti.create_calibration(calibration_type=calibration_type, filename=calibfile, directory=calibdir)\n\n#calibdir = os.path.join(datadir, 'analysis')\n#calibdir = os.path.join('/home/tobiasj/experiments/ASWAD/2013-12-18/flow_cell_c', 'hdcalibration/analysis')\n#calibdir = '/media/tobiasj/cbd_drive/data/ASWAD/2015-10-28 - unzipping/analysis/'",
"_____no_output_____"
]
],
[
[
"## Create record(s) and add to experiment",
"_____no_output_____"
],
[
"### Either: Define a generic function to read in the data and create a record:",
"_____no_output_____"
]
],
[
[
"# Define a name for the record (defaults to 'default')\nname='alpha'\n\n# Define a function that is used to read in the data you want to analyze.\n# The function needs to receive at least the positional parameter 'filename'.\n# The return value of the function needs to be the data as a numpy array.\n# You can (beside other options) use functions that the package numpy offers to read in data:\n# http://docs.scipy.org/doc/numpy/reference/routines.io.html\n#\n# One example, to read in data from a text file with 5 header lines followed by the data,\n# could look like this:\n\nimport numpy as np\nimport pyoti.data.labview as lv\nimport os\n\ndef load_data(filename):\n #data = np.loadtxt(filename, skiprows=5)\n data = lv.read_labview_bin_data(filename)[:,0:3]\n return data\n\n# Define the samplingrate (either provide a function or simply a variable).\n# The function gets executed once, upon initialisation of the record. The\n# return value of the function (or the value of the variable) gets stored in\n# the record object:\ndef samplingrate():\n samplingrate = 40000.0\n return samplingrate\n#samplingrate = 40000.0\n\n# Name the traces here, the load_data() function returns. Make sure the \n# traces are properly describing the data returned by load_data function.\n# This definition takes precedence over the traces defined in the\n# configfile (see below)\ntraces = [ 'psdX', 'psdY', 'psdZ' ]\n\n# You can provide a configfile, which, for instance, defines the traces returned by load_data().\n# If not provided, configfile defaults to '../pyoti/etc/GenericDataFile.cfg'.\n# You could also create your own setup specific configfile and use GenericDataFile as a template.\n# Make sure to also add the parameter cfgfile to the function call below, if you define a cfgfile,\n# like: experiment.create_record(cfgfile=cfgfile, ...)\n#cfgfile = '../pyoti/etc/record/GenericDataFile.cfg' \n\nrecord = experiment.create_record(name=name, calibration=calibration, traces=traces, load_data=load_data, filename=datafile, directory=datadir, samplingrate=samplingrate)",
"_____no_output_____"
]
],
[
[
"### Or: Read in a record for a predefined setup:",
"_____no_output_____"
]
],
[
[
"# Define a name for the record (defaults to 'default')\nname='alpha'\n\n# Choose the file, where standard values for the Record are defined\ncfgfile = '../pyoti/etc/record/ASWAD.cfg'\n\nexperiment.create_record(name=name, calibration=calibration, cfgfile=cfgfile, filename=datafile, directory=datadir)",
"_____no_output_____"
],
[
"# Create/load additional records (e.g. extra_unzipping or beadscan)\nname = 'beta'\nextradatadir = datadir\nextradatafile = 'B01b.bin'\n\nexperiment.create_record(name=name, calibration=calibration, cfgfile=cfgfile, filename=extradatafile, directory=extradatadir)\n#experiment.records.beta.calibration = experiment.records.alpha.calibration",
"_____no_output_____"
],
[
"name = 'generic'\ngroup = 'modification'\nparent = 'used'\n\ntraces_apply=['psdX', 'psdYZ']\nextra_mod_params='factor'\nimport numpy as np\n\ndef modify(self, data, samples, data_traces, data_index, mod_index):\n # data: Contains the data, indexed by samples and data_traces\n # samples: Is the index of the samples contained in data, which was\n # given/asked by the user/process who called _get_data().\n # data_traces: Contains a list of traces (str) existent in data, which\n # was given/asked by the user/process who called _get_data().\n # data_index: data[:,data_index] gives the data, which is modified by\n # this modification (defined by traces_apply)\n # mod_index: numpy.array(self.traces_apply)[mod_index] gives the traces,\n # which are existent in data and also modified by this modfication\n # self.mod_params[mod_index] gives the mod_params of the traces\n # self.mod_params gives a list of all available mod_parameters\n # self.get_mod_params(names=...) returns a list with the\n # mod_parameters with names=...\n # self.name gives the mod_parameter with name name\n #\n # Modify and return the data ...\n print('Modifying the data ...')\n #\n #\n # A simple example of a modification (subtraction of the mod_param multiplied with\n # the extra_mod_param factor from traces):\n #data[:, data_index] -= self.mod_params[np.newaxis, mod_index] * self.factor\n #\n return data\n\nexperiment.add_group(name, parent, group_type=group, adjust=True, modify=modify, traces_apply=traces_apply, mod_params=extra_mod_params)",
"_____no_output_____"
]
],
[
[
"## Analyse and modify data",
"_____no_output_____"
]
],
[
[
"name = 'used'\ngroup = 'selection'\nparent = 'alpha'\n\nexperiment.add_group(name, parent, group_type=group)",
"_____no_output_____"
],
[
"name = 'used_beta'\ngroup = 'selection'\nparent = 'beta'\n\nexperiment.add_group(name, parent, group_type=group)",
"_____no_output_____"
],
[
"name = 'offset'\ngroup = 'offset'\nparent = 'used'\n\nexperiment.add_group(name, parent, group_type=group)",
"_____no_output_____"
],
[
"name = 'offset_beta'\ngroup = 'offset'\nparent = 'used_beta'\n\nexperiment.add_group(name, parent, group_type=group)",
"_____no_output_____"
],
[
"experiment.concatenate('offset_concatenated', 'offset', 'offset_beta')",
"_____no_output_____"
],
[
"name = 'touchdown'\ngroup = 'touchdown'\nparent = 'offset'\n\nexperiment.add_group(name, parent, group_type=group)",
"_____no_output_____"
],
[
"experiment.replace_in('touchdown', 'offset', 'offset_concatenated')\nexperiment.replace_in('touchdown_mod', 'offset', 'offset_concatenated')",
"_____no_output_____"
],
[
"name = 'beadscan'\ngroup = 'beadscan'\nparent = 'touchdown'\n\nexperiment.add_group(name, parent, group_type=group)",
"_____no_output_____"
],
[
"experiment.remove('beadscan')\nexperiment.remove('beadscan_mod')",
"_____no_output_____"
],
[
"name = 'attachment'\ngroup = 'attachment'\nparent = 'beadscan'\n\nexperiment.add_group(name, parent, group_type=group)",
"_____no_output_____"
],
[
"name = 'attachment_2nd'\ngroup = 'attachment'\nparent = 'attachment'\n\nexperiment.add_group(name, parent, group_type=group)",
"_____no_output_____"
],
[
"name = 'baseline'\ngroup = 'baseline'\nparent = 'attachment_2nd'\n\nexperiment.add_group(name, parent, group_type=group)",
"_____no_output_____"
],
[
"name = 'rotation'\ngroup = 'rotation'\nparent = 'baseline'\n\nexperiment.add_group(name, parent, group_type=group)",
"_____no_output_____"
],
[
"name = 'rotation_2nd'\ngroup = 'rotation'\nparent = 'rotation'\n\nexperiment.add_group(name, parent, group_type=group)",
"_____no_output_____"
]
],
[
[
"## Select data to generate the results",
"_____no_output_____"
]
],
[
[
"name = 'results'\ngroup = 'selection'\nparent = 'rotation_2nd'\n\n# traces used to select data from\ntraces = ['psdXYZ', 'positionXYZ']\n\nresults_region = experiment.add_group(name, parent, group_type=group, traces=traces)\n\n# Enable caching for results region, for faster data return\nexperiment.set_cached_region(name)",
"_____no_output_____"
],
[
"# Choose resolution for presentation of data (extension, force)\nresolution = 1000 # points/s resolution\n\n# Create Result objects to obtain force and extension\ntether = pyoti.create_tether(region=results_region, resolution=resolution)\n\n# Show the autodetected minima, maxima and sections\n#tether._sf.highest_frequency=32\n#tether._sf.reduce_false_positives = True\n#tether._sf.compare_time = 0.005\ntether.update()\ntether.init_rfig(legend=True)",
"_____no_output_____"
],
[
"# Create force extension curves\nprefix = ''.join((os.path.splitext(os.path.basename(experiment.filename))[0], \"_\"))\nresultsdir = os.path.join(\"..\", \"results\")\n\n# Save force extension stress/release pair plots\ntether.save_force_extension_plots(directory=resultsdir,\n file_prefix=prefix,\n bps=9018)\n\n# Display saved force extension stress/release pair plots\n# pyoti.gui.browse_images(directory=resultsdir, prefix=prefix)",
"_____no_output_____"
],
[
"# Display force extension stress/release pair plots\ntether.init_fe_fig()\ntether.show_force_extension_plots(bps=1399, autolimit=False)",
"_____no_output_____"
],
[
"# plot Timecourse of Extension\nplt.close('all')\nplt.figure()\nplt.grid(True)\n\n# Timevector, extension and stress/release pairs\nt = tether.timevector\ne = tether.extension\npl, ps = tether.stress_release_pairs()\n\n# Plot all stress/release extension/timevector sections\nfor pl, ps in zip(pl, ps):\n plt.plot(t[pl], e[pl] * 1000, 'g.', ms=1.0)\n plt.plot(t[ps], e[ps] * 1000, 'r.', ms=1.0)\n\nplt.title('Timecourse of extension')\nplt.ylabel(\"Extension (nm)\")\nplt.xlabel(\"Time (s)\")\n\nplt.show(plt.gcf())\nplt.close()",
"_____no_output_____"
],
[
"plt.close('all')\nplt.figure()\nplt.grid(True)\n\nfXYZ = tether.forceXYZ\nrpl_ = tether.sections(direction='right', cycle='stress')\nlpl_ = tether.sections(direction='left', cycle='stress')\n\nfor rpl in rpl_:\n plt.plot(fXYZ[rpl, 1] * 1000, fXYZ[rpl, 2] * 1000, 'r')\nfor lpl in lpl_:\n plt.plot(fXYZ[lpl, 1] * 1000, fXYZ[lpl ,2] * 1000, 'g')\n\nexcited_axis = results_region.excited_axis\nplt.xlabel(''.join((\"Force (\", excited_axis,\")\")))\nplt.ylabel(\"Force (Z)\")\nplt.title(\"Y vs. Z\")\nplt.show(plt.gcf())\nplt.close()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca0d0945536bac49b1c54659cafdac4c42515be | 7,373 | ipynb | Jupyter Notebook | gabs-timetables/read-timetables.ipynb | shafmo/transport-analytics | a6961a4718b7a88b51c219f42b191ec37992d064 | [
"MIT"
] | null | null | null | gabs-timetables/read-timetables.ipynb | shafmo/transport-analytics | a6961a4718b7a88b51c219f42b191ec37992d064 | [
"MIT"
] | null | null | null | gabs-timetables/read-timetables.ipynb | shafmo/transport-analytics | a6961a4718b7a88b51c219f42b191ec37992d064 | [
"MIT"
] | null | null | null | 54.614815 | 130 | 0.754238 | [
[
[
"import PyPDF2\nimport csv\nimport glob\nimport os",
"_____no_output_____"
],
[
"path = 'D:\\\\001_Projects\\\\08 - Bellville\\\\BellvilleTT\\\\'\n\nfilenames = glob.glob(path + \"/*.pdf\")\n\nfor filename in filenames:\n try:\n PdfFileObj = open(filename, 'rb')\n name = os.path.basename(filename)\n pdfReader = PyPDF2.PdfFileReader(PdfFileObj)\n\n for i in range(num_of_pages):\n pageObj = pdfReader.getPage(i)\n text = pageObj.extractText()\n text = text.replace(' -- ','').replace(' |',',').replace('| ','\\n').replace('|',',').replace('via ','via')\n text = text.replace(',-','\\n').replace('-','').replace(', ,',',')\n\n with open(path + name + str(i) +'.csv', 'w') as csvfile:\n csvfile.write(text)\n print('successfully saved file' + name)\n except:\n print('Not sure what happened with ' + name)\n",
"successfully saved fileBELLVILLE___AIRPORT_IND_valid_from_20190401_to_99999999_004401.pdf\nsuccessfully saved fileBELLVILLE___AIRPORT_IND_valid_from_20190401_to_99999999_004401.pdf\nsuccessfully saved fileBELLVILLE___ATLANTIS_valid_from_20190401_to_99999999_010504.pdf\nsuccessfully saved fileBELLVILLE___ATLANTIS_valid_from_20190401_to_99999999_010504.pdf\nsuccessfully saved fileBELLVILLE___BLUEDOWNS_valid_from_20190401_to_99999999_007502.pdf\nsuccessfully saved fileBELLVILLE___BLUEDOWNS_valid_from_20190401_to_99999999_007502.pdf\nNot sure what happened with BELLVILLE___CAPE_GATE_valid_from_20190321_to_20190321_001902.pdf\nsuccessfully saved fileBELLVILLE___CAPE_TOWN_valid_from_20190401_to_99999999_000102.pdf\nsuccessfully saved fileBELLVILLE___CAPE_TOWN_valid_from_20190401_to_99999999_000102.pdf\nsuccessfully saved fileBELLVILLE___DELFT_valid_from_20190401_to_99999999_007502.pdf\nsuccessfully saved fileBELLVILLE___DELFT_valid_from_20190401_to_99999999_007502.pdf\nsuccessfully saved fileBELLVILLE___DURBANVILLE_valid_from_20190401_to_99999999_002402.pdf\nsuccessfully saved fileBELLVILLE___DURBANVILLE_valid_from_20190401_to_99999999_002402.pdf\nsuccessfully saved fileBELLVILLE___DURBANVILLE_valid_from_20190401_to_99999999_002502.pdf\nsuccessfully saved fileBELLVILLE___DURBANVILLE_valid_from_20190401_to_99999999_002502.pdf\nsuccessfully saved fileBELLVILLE___DURBANVILLE_valid_from_20190401_to_99999999_010504.pdf\nsuccessfully saved fileBELLVILLE___DURBANVILLE_valid_from_20190401_to_99999999_010504.pdf\nsuccessfully saved fileBELLVILLE___DURBANVILLE_valid_from_20190401_to_99999999_013302.pdf\nsuccessfully saved fileBELLVILLE___DURBANVILLE_valid_from_20190401_to_99999999_013302.pdf\nsuccessfully saved fileBELLVILLE___ELSIES_RIVER_valid_from_20190401_to_99999999_000102.pdf\nsuccessfully saved fileBELLVILLE___ELSIES_RIVER_valid_from_20190401_to_99999999_000102.pdf\nsuccessfully saved fileBELLVILLE___EVERSDAL_valid_from_20190401_to_99999999_002502.pdf\nsuccessfully saved fileBELLVILLE___EVERSDAL_valid_from_20190401_to_99999999_002502.pdf\nsuccessfully saved fileBELLVILLE___HANOVER_PARK_valid_from_20190401_to_99999999_003602.pdf\nsuccessfully saved fileBELLVILLE___HANOVER_PARK_valid_from_20190401_to_99999999_003602.pdf\nsuccessfully saved fileBELLVILLE___HARARE_valid_from_20190401_to_99999999_006202.pdf\nsuccessfully saved fileBELLVILLE___HARARE_valid_from_20190401_to_99999999_006202.pdf\nsuccessfully saved fileBELLVILLE___KENRIDGE_valid_from_20190401_to_99999999_005002.pdf\nsuccessfully saved fileBELLVILLE___KENRIDGE_valid_from_20190401_to_99999999_005002.pdf\nsuccessfully saved fileBELLVILLE___KHAYELITSHA_valid_from_20190401_to_99999999_006202.pdf\nsuccessfully saved fileBELLVILLE___KHAYELITSHA_valid_from_20190401_to_99999999_006202.pdf\nsuccessfully saved fileBELLVILLE___KHAYELITSHA_valid_from_20190401_to_99999999_006302.pdf\nsuccessfully saved fileBELLVILLE___KHAYELITSHA_valid_from_20190401_to_99999999_006302.pdf\nsuccessfully saved fileBELLVILLE___KILLARNEY_valid_from_20190401_to_99999999_010504.pdf\nsuccessfully saved fileBELLVILLE___KILLARNEY_valid_from_20190401_to_99999999_010504.pdf\nsuccessfully saved fileBELLVILLE___MAKHAZA_valid_from_20190401_to_99999999_006302.pdf\nsuccessfully saved fileBELLVILLE___MAKHAZA_valid_from_20190401_to_99999999_006302.pdf\nsuccessfully saved fileBELLVILLE___MAMRE_valid_from_20190401_to_99999999_010504.pdf\nsuccessfully saved fileBELLVILLE___MAMRE_valid_from_20190401_to_99999999_010504.pdf\nsuccessfully saved 
fileBELLVILLE___MOWBRAY_valid_from_20190401_to_99999999_002302.pdf\nsuccessfully saved fileBELLVILLE___MOWBRAY_valid_from_20190401_to_99999999_002302.pdf\nsuccessfully saved fileBELLVILLE___NYANGA_valid_from_20190401_to_99999999_004402.pdf\nsuccessfully saved fileBELLVILLE___NYANGA_valid_from_20190401_to_99999999_004402.pdf\nsuccessfully saved fileBELLVILLE___PHILIPPI_valid_from_20190401_to_99999999_013302.pdf\nsuccessfully saved fileBELLVILLE___PHILIPPI_valid_from_20190401_to_99999999_013302.pdf\nsuccessfully saved fileBELLVILLE___SCOTTSDENE_valid_from_20190401_to_99999999_002402.pdf\nsuccessfully saved fileBELLVILLE___SCOTTSDENE_valid_from_20190401_to_99999999_002402.pdf\nsuccessfully saved fileBELLVILLE___STRAND_valid_from_20190311_to_99999999_021302.pdf\nsuccessfully saved fileBELLVILLE___STRAND_valid_from_20190311_to_99999999_021302.pdf\nNot sure what happened with BELLVILLE___TYGER_VALLEY_valid_from_20190321_to_20190321_008502.pdf\nsuccessfully saved fileBELLVILLE___TYGER_VALLEY_valid_from_20190401_to_99999999_002502.pdf\nsuccessfully saved fileBELLVILLE___TYGER_VALLEY_valid_from_20190401_to_99999999_002502.pdf\nsuccessfully saved fileBELLVILLE___TYGER_VALLEY_valid_from_20190401_to_99999999_007502.pdf\nsuccessfully saved fileBELLVILLE___TYGER_VALLEY_valid_from_20190401_to_99999999_007502.pdf\n"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
eca101e594b23e817edfe105df8086a459f0523e | 26,894 | ipynb | Jupyter Notebook | docs/tutorials/tensor_nets.ipynb | towadroid/t3f | 1ce4f6985037a6aac34dd64fbe68d75a9dc1474f | [
"MIT"
] | 217 | 2017-01-19T17:56:28.000Z | 2022-03-04T08:05:55.000Z | docs/tutorials/tensor_nets.ipynb | towadroid/t3f | 1ce4f6985037a6aac34dd64fbe68d75a9dc1474f | [
"MIT"
] | 155 | 2017-03-06T11:03:18.000Z | 2022-03-21T17:52:54.000Z | docs/tutorials/tensor_nets.ipynb | towadroid/t3f | 1ce4f6985037a6aac34dd64fbe68d75a9dc1474f | [
"MIT"
] | 61 | 2017-03-07T06:24:16.000Z | 2022-03-29T09:17:23.000Z | 33.91425 | 472 | 0.474753 | [
[
[
"# Tensor Nets (compressing neural networks)\n\n[Open](https://colab.research.google.com/github/Bihaqo/t3f/blob/develop/docs/tutorials/tensor_nets.ipynb) **this page in an interactive mode via Google Colaboratory.**\n\nIn this notebook we provide an example of how to build a simple Tensor Net (see https://arxiv.org/abs/1509.06569).\n\nThe main ingredient is the so-called TT-Matrix, a generalization of the Kronecker product matrices, i.e. matrices of the form \n$$A = A_1 \\otimes A_2 \\cdots \\otimes A_n$$\n\nIn `t3f` TT-Matrices are represented using the `TensorTrain` class.",
"_____no_output_____"
]
],
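[
[
"The following small sketch is added here for illustration only (it is not part of the original tutorial and assumes nothing beyond NumPy): it shows why such structured matrices are attractive, since a Kronecker-product matrix is described by far fewer numbers than the full matrix it represents.\n\n```python\nimport numpy as np\n\n# Two small factor matrices.\nA1 = np.random.rand(4, 5)\nA2 = np.random.rand(7, 5)\n\n# Their Kronecker product is a much larger 28 x 25 matrix ...\nA = np.kron(A1, A2)\nprint(A.shape, A.size)       # (28, 25) -> 700 entries\n\n# ... yet it is fully determined by only 20 + 35 = 55 stored numbers.\nprint(A1.size + A2.size)\n```\n\nTT-Matrices generalize this idea while allowing more flexible (higher TT-rank) structure.",
"_____no_output_____"
]
],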
[
[
"# Import TF 2.\n%tensorflow_version 2.x\nimport tensorflow as tf\nimport numpy as np\nimport tensorflow.keras.backend as K\n\n# Fix seed so that the results are reproducable.\ntf.random.set_seed(0)\nnp.random.seed(0)\n\ntry:\n import t3f\nexcept ImportError:\n # Install T3F if it's not already installed.\n !git clone https://github.com/Bihaqo/t3f.git\n !cd t3f; pip install .\n import t3f",
"TensorFlow 2.x selected.\nCloning into 't3f'...\nremote: Enumerating objects: 321, done.\u001b[K\nremote: Counting objects: 100% (321/321), done.\u001b[K\nremote: Compressing objects: 100% (182/182), done.\u001b[K\nremote: Total 4715 (delta 209), reused 226 (delta 139), pack-reused 4394\u001b[K\nReceiving objects: 100% (4715/4715), 1.52 MiB | 1.26 MiB/s, done.\nResolving deltas: 100% (3203/3203), done.\nProcessing /content/t3f\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from t3f==1.1.0) (1.18.1)\nBuilding wheels for collected packages: t3f\n Building wheel for t3f (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for t3f: filename=t3f-1.1.0-cp36-none-any.whl size=75051 sha256=a20c22745abcbe82d9a467cf607135da9d5399940712bfbf134bbf7e40ac53b3\n Stored in directory: /tmp/pip-ephem-wheel-cache-vnw71g5i/wheels/66/f2/16/8d2b16c34f7e12d446db3584514f9e34e681f4c602325d175c\nSuccessfully built t3f\nInstalling collected packages: t3f\nSuccessfully installed t3f-1.1.0\n"
],
[
"W = t3f.random_matrix([[4, 7, 4, 7], [5, 5, 5, 5]], tt_rank=2)\n\nprint(W)",
"A TT-Matrix of size 784 x 625, underlying tensor shape: (4, 7, 4, 7) x (5, 5, 5, 5), TT-ranks: (1, 2, 2, 2, 1)\n"
]
],
[
[
"Using TT-Matrices we can compactly represent densely connected layers in neural networks, which allows us to greatly reduce number of parameters. Matrix multiplication can be handled by the `t3f.matmul` method which allows for multiplying dense (ordinary) matrices and TT-Matrices. Very simple neural network could look as following (for initialization several options such as `t3f.glorot_initializer`, `t3f.he_initializer` or `t3f.random_matrix` are available):",
"_____no_output_____"
]
],
[
[
"class Learner:\n def __init__(self):\n initializer = t3f.glorot_initializer([[4, 7, 4, 7], [5, 5, 5, 5]], tt_rank=2)\n self.W1 = t3f.get_variable('W1', initializer=initializer)\n self.W2 = tf.Variable(tf.random.normal([625, 10]))\n self.b2 = tf.Variable(tf.random.normal([10]))\n \n def predict(self, x):\n b1 = tf.Variable(tf.zeros([625]))\n h1 = t3f.matmul(x, W1) + b1\n h1 = tf.nn.relu(h1)\n return tf.matmul(h1, W2) + b2\n\n def loss(self, x, y):\n y_ = tf.one_hot(y, 10)\n logits = self.predict(x)\n return tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))\n",
"_____no_output_____"
]
],
[
[
"For convenience we have implemented a layer analogous to *Keras* `Dense` layer but with a TT-Matrix instead of an ordinary matrix. An example of fully trainable net is provided below.",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.datasets import mnist\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Activation, Dropout, Flatten\nfrom tensorflow.keras.utils import to_categorical\nfrom tensorflow.keras import optimizers",
"_____no_output_____"
],
[
"(x_train, y_train), (x_test, y_test) = mnist.load_data()",
"Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 0s 0us/step\n"
]
],
[
[
"Some preprocessing...",
"_____no_output_____"
]
],
[
[
"x_train = x_train / 127.5 - 1.0\nx_test = x_test / 127.5 - 1.0\n\ny_train = to_categorical(y_train, num_classes=10)\ny_test = to_categorical(y_test, num_classes=10)",
"_____no_output_____"
],
[
"model = Sequential()\nmodel.add(Flatten(input_shape=(28, 28)))\ntt_layer = t3f.nn.KerasDense(input_dims=[7, 4, 7, 4], output_dims=[5, 5, 5, 5],\n tt_rank=4, activation='relu',\n bias_initializer=1e-3)\nmodel.add(tt_layer)\nmodel.add(Dense(10))\nmodel.add(Activation('softmax'))",
"_____no_output_____"
],
[
"model.summary()",
"Model: \"sequential_12\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nflatten_12 (Flatten) (None, 784) 0 \n_________________________________________________________________\ntt_dense_1 (KerasDense) (None, 625) 1725 \n_________________________________________________________________\ndense_8 (Dense) (None, 10) 6260 \n_________________________________________________________________\nactivation_7 (Activation) (None, 10) 0 \n=================================================================\nTotal params: 7,985\nTrainable params: 7,985\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"Note that in the dense layer we only have $1725$ parameters instead of $784 * 625 = 490000$.",
"_____no_output_____"
]
],
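[
[
"As a sanity check, the count of $1725$ can be reproduced by hand. This is an added sketch (not from the original tutorial) assuming the usual TT-Matrix core shape convention, where core $k$ stores $r_{k-1} m_k n_k r_k$ numbers and the Keras layer adds one bias per output unit.\n\n```python\nin_dims, out_dims = [7, 4, 7, 4], [5, 5, 5, 5]\nranks = [1, 4, 4, 4, 1]                      # tt_rank=4 as in the layer above\n\ncore_params = sum(ranks[k] * in_dims[k] * out_dims[k] * ranks[k + 1]\n                  for k in range(len(in_dims)))\nbias_params = 5 * 5 * 5 * 5                  # one bias per output unit\nprint(core_params, bias_params, core_params + bias_params)   # 1100 625 1725\n```",
"_____no_output_____"
]
],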
[
[
"optimizer = optimizers.Adam(lr=1e-2)\nmodel.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])",
"_____no_output_____"
],
[
"model.fit(x_train, y_train, epochs=3, batch_size=64, validation_data=(x_test, y_test))",
"Train on 60000 samples, validate on 10000 samples\nEpoch 1/3\n60000/60000 [==============================] - 4s 69us/sample - loss: 0.2549 - accuracy: 0.9248 - val_loss: 0.1195 - val_accuracy: 0.9638\nEpoch 2/3\n60000/60000 [==============================] - 4s 62us/sample - loss: 0.1448 - accuracy: 0.9574 - val_loss: 0.1415 - val_accuracy: 0.9585\nEpoch 3/3\n60000/60000 [==============================] - 4s 62us/sample - loss: 0.1308 - accuracy: 0.9619 - val_loss: 0.1198 - val_accuracy: 0.9638\n"
]
],
[
[
"Compression of Dense layers\n------------------------------------------",
"_____no_output_____"
],
[
"Let us now train an ordinary DNN (without TT-Matrices) and show how we can compress it using the TT decomposition. (In contrast to directly training a TT-layer from scratch in the example above.)",
"_____no_output_____"
]
],
[
[
"model = Sequential()\nmodel.add(Flatten(input_shape=(28, 28)))\nmodel.add(Dense(625, activation='relu'))\nmodel.add(Dense(10))\nmodel.add(Activation('softmax'))",
"_____no_output_____"
],
[
"model.summary()",
"Model: \"sequential_13\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nflatten_13 (Flatten) (None, 784) 0 \n_________________________________________________________________\ndense_9 (Dense) (None, 625) 490625 \n_________________________________________________________________\ndense_10 (Dense) (None, 10) 6260 \n_________________________________________________________________\nactivation_8 (Activation) (None, 10) 0 \n=================================================================\nTotal params: 496,885\nTrainable params: 496,885\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"optimizer = optimizers.Adam(lr=1e-3)\nmodel.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])",
"_____no_output_____"
],
[
"model.fit(x_train, y_train, epochs=5, batch_size=64, validation_data=(x_test, y_test))",
"Train on 60000 samples, validate on 10000 samples\nEpoch 1/5\n60000/60000 [==============================] - 3s 57us/sample - loss: 0.2779 - accuracy: 0.9158 - val_loss: 0.1589 - val_accuracy: 0.9501\nEpoch 2/5\n60000/60000 [==============================] - 3s 52us/sample - loss: 0.1297 - accuracy: 0.9610 - val_loss: 0.1632 - val_accuracy: 0.9483\nEpoch 3/5\n60000/60000 [==============================] - 3s 53us/sample - loss: 0.0991 - accuracy: 0.9692 - val_loss: 0.1083 - val_accuracy: 0.9674\nEpoch 4/5\n60000/60000 [==============================] - 3s 54us/sample - loss: 0.0835 - accuracy: 0.9742 - val_loss: 0.1191 - val_accuracy: 0.9619\nEpoch 5/5\n60000/60000 [==============================] - 3s 55us/sample - loss: 0.0720 - accuracy: 0.9771 - val_loss: 0.0918 - val_accuracy: 0.9714\n"
]
],
[
[
"Let us convert the matrix used in the Dense layer to the TT-Matrix with tt-ranks equal to 16 (since we trained the network without the low-rank structure assumption we may wish start with high rank values).",
"_____no_output_____"
]
],
[
[
"W = model.trainable_weights[0]\nprint(W)\nWtt = t3f.to_tt_matrix(W, shape=[[7, 4, 7, 4], [5, 5, 5, 5]], max_tt_rank=16)\nprint(Wtt)",
"<tf.Variable 'dense_9/kernel:0' shape=(784, 625) dtype=float32, numpy=\narray([[-0.03238887, 0.06103956, 0.03255948, ..., -0.02577683,\n 0.06993102, -0.00263362],\n [-0.05367032, -0.0324776 , -0.04441883, ..., 0.0338573 ,\n 0.01554517, 0.04145934],\n [ 0.03441307, 0.04183276, 0.05157001, ..., 0.00082603,\n 0.03731582, -0.01392014],\n ...,\n [ 0.03070629, 0.02113252, 0.01526976, ..., -0.00541451,\n 0.03794012, 0.04027091],\n [-0.01376432, -0.0064889 , -0.03118961, ..., 0.06237663,\n -0.000577 , -0.02628548],\n [-0.01680673, 0.00364697, 0.01722438, ..., 0.01579029,\n -0.00826585, 0.03203061]], dtype=float32)>\nA TT-Matrix of size 784 x 625, underlying tensor shape: (7, 4, 7, 4) x (5, 5, 5, 5), TT-ranks: (1, 16, 16, 16, 1)\n"
]
],
[
[
"We need to evaluate the tt-cores of Wtt. We also need to store other parameters for later (biases and the second dense layer).",
"_____no_output_____"
]
],
[
[
"cores = Wtt.tt_cores\nother_params = model.get_weights()[1:]",
"_____no_output_____"
]
],
[
[
"Now we can construct a tensor network with the first Dense layer replaced by `Wtt`\ninitialized using the previously computed cores.",
"_____no_output_____"
]
],
[
[
"model = Sequential()\nmodel.add(Flatten(input_shape=(28, 28)))\ntt_layer = t3f.nn.KerasDense(input_dims=[7, 4, 7, 4], output_dims=[5, 5, 5, 5],\n tt_rank=16, activation='relu')\nmodel.add(tt_layer)\nmodel.add(Dense(10))\nmodel.add(Activation('softmax'))",
"_____no_output_____"
],
[
"optimizer = optimizers.Adam(lr=1e-3)\nmodel.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])",
"_____no_output_____"
],
[
"model.set_weights(list(cores) + other_params)",
"_____no_output_____"
],
[
"print(\"new accuracy: \", model.evaluate(x_test, y_test)[1])",
"10000/10000 [==============================] - 1s 91us/sample - loss: 1.0276 - accuracy: 0.6443\nnew accuracy: 0.6443\n"
],
[
"model.summary()",
"Model: \"sequential_16\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nflatten_16 (Flatten) (None, 784) 0 \n_________________________________________________________________\ntt_dense_2 (KerasDense) (None, 625) 15585 \n_________________________________________________________________\ndense_13 (Dense) (None, 10) 6260 \n_________________________________________________________________\nactivation_11 (Activation) (None, 10) 0 \n=================================================================\nTotal params: 21,845\nTrainable params: 21,845\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"We see that even though we now have about 5% of the original number of parameters we still achieve a relatively high accuracy.",
"_____no_output_____"
],
[
"Finetuning the model \n-------------------------------\nWe can now finetune this tensor network.",
"_____no_output_____"
]
],
[
[
"model.fit(x_train, y_train, epochs=2, batch_size=64, validation_data=(x_test, y_test))",
"Train on 60000 samples, validate on 10000 samples\nEpoch 1/2\n60000/60000 [==============================] - 5s 81us/sample - loss: 0.1349 - accuracy: 0.9594 - val_loss: 0.0982 - val_accuracy: 0.9703\nEpoch 2/2\n60000/60000 [==============================] - 5s 75us/sample - loss: 0.0822 - accuracy: 0.9750 - val_loss: 0.0826 - val_accuracy: 0.9765\n"
]
],
[
[
"We see that we were able to achieve higher validation accuracy than we had in the plain DNN, while keeping the number of parameters extremely small (21845 vs 496885 parameters in the uncompressed model).",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
eca10436c39f7184c03ba9805e856b3f064920d0 | 8,675 | ipynb | Jupyter Notebook | examples/5. Bubble Point calculation.ipynb | koroshkorosh1/phasepy | eee927d945e3f98f0bd8687395dc12ccd31113ef | [
"MIT"
] | 33 | 2019-08-29T16:36:40.000Z | 2022-03-08T11:29:45.000Z | examples/5. Bubble Point calculation.ipynb | pferna6/phasepy | 92bea943e93b1e1decc5f2c5af4c29528eb7c96c | [
"MIT"
] | 11 | 2019-08-29T16:28:31.000Z | 2022-03-09T14:49:38.000Z | examples/5. Bubble Point calculation.ipynb | pferna6/phasepy | 92bea943e93b1e1decc5f2c5af4c29528eb7c96c | [
"MIT"
] | 18 | 2019-11-13T11:11:30.000Z | 2022-03-08T22:00:46.000Z | 28.165584 | 444 | 0.500749 | [
[
[
"# Bubble Point (VLE)\n\nThe isothermal-isobaric two-phase flash is the base for the calculation Vapor-Liquid Equilibria. This calculation is based on the solution of the Rachford-Rice mass balance. \n\n$$ FO = \\sum_{i=1}^c \\left( x_i^\\beta - x_i^\\alpha \\right) = \\sum_{i=1}^c \\frac{z_i (K_i-1)}{1+\\psi (K_i-1)} $$\n\n\nWhere, $K_i = x_i^\\beta/x_i^\\alpha =\\hat{\\phi}_i^\\alpha /\\hat{\\phi}_i^\\beta $ represents the equilibrium constant and $\\psi$ the fraction of the phase $\\beta$. For bubble and dew points calculations the phase fraction $\\psi$ is known beforehand and set to 0 for bubble points (differential size bubble) and to 1 for dew point (differential size liquid drop).\n\nThe Rachford-Rice mass balance reduces to the following equations:\n\n### Bubble\n\n$$ FO = \\sum_{i=1}^c x_i (K_i-1) = \\sum_{i=1}^c y_i -1 = 0 $$\n\nThe solution of this calculations includes using accelerated successive substitution (ASS) to update the phase compositions in an inner loop and the quasi-Newton method is used to update pressure or temperature in an outer loop. If slow convergence is detected, the algorithm attempts to solve the following system of equations using equilibrium constants, $K$, as iteration variables. This is done using SciPy's optimization routines.\n\n$$ f_i = \\ln K_i + \\ln \\hat{\\phi}_i^v(\\underline{y}, T, P) -\\ln \\hat{\\phi}_i^l(\\underline{x}, T, P) \\quad i = 1,...,c $$\n$$ f_{c+1} = \\sum_{i=1}^c (y_i-x_i) $$\n\n**note:** these calculations does not check for the stability of the phases.\n\n\nIn this notebook, examples of calculation of bubble point properties using Peng-Robinson equation of state are shown. The mixing rule applied is Modified Huron Vidal combined with Modified UNIFAC.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom phasepy import mixture, component, preos\nfrom phasepy.equilibrium import bubblePy, bubbleTy",
"_____no_output_____"
]
],
[
[
"---\n# Binary mixture example\n\nThis calculation will be exemplified for the mixture of ethanol and water.",
"_____no_output_____"
]
],
[
[
"water = component(name='water', Tc=647.13, Pc=220.55, \n w=0.344861, GC={'H2O':1})\nethanol = component(name='ethanol', Tc=514.0, Pc=61.37,\n w=0.643558, GC={'CH3':1, 'CH2':1, 'OH(P)':1})\nmix = mixture(ethanol, water)\n# or\nmix = ethanol + water\nmix.unifac() \neos = preos(mix, 'mhv_unifac')",
"_____no_output_____"
]
],
[
[
"-----\n#### Bubble point algorithm x, T -> y, P",
"_____no_output_____"
]
],
[
[
"x = np.array([0.5, 0.5])\nT = 350.0\ny0 = np.array([0.5, 0.5])\nP0 = 1.0\nbubblePy(y0, P0, x, T, eos, full_output=True)",
"_____no_output_____"
]
],
[
[
"-----\n#### Bubble point algorithm x, P -> y, T",
"_____no_output_____"
]
],
[
[
"x = np.array([0.6, 0.4])\nP = 3.0\ny0 = np.array([0.5, 0.5])\nT0 = 320.0\nbubbleTy(y0, T0, x, P, eos, full_output=True)",
"_____no_output_____"
]
],
[
[
"# Ternary mixture example",
"_____no_output_____"
]
],
[
[
"mtbe = component(name='mtbe', Tc=497.1, Pc=34.3, Zc=0.273, Vc=329.0, w=0.266059,\n Ant=[9.16238246, 2541.97883529, -50.40534341], \n GC={'CH3':3, 'CH3O':1, 'C':1})\n\nethanol = component(name='ethanol', Tc=514.0, Pc=61.37, Zc=0.241, Vc=168.0, w=0.643558,\n Ant=[11.61809279, 3423.0259436, -56.48094263],\n GC={'CH3':1, 'CH2':1,'OH(P)':1})\n\nbutanol = component(name='n-Butanol', Tc=563.0, Pc=44.14, Zc=0.258, Vc=274.0, w=0.589462,\n Ant=[9.74673479, 2668.52570016, -116.66189545],\n GC={'CH3':1, 'CH2':3, 'OH(P)':1})\n\nmix = mixture(mtbe, ethanol)\nmix.add_component(butanol)\n# or \nmix = mtbe + ethanol + butanol\n\nmix.unifac()\neos = preos(mix, 'mhv_unifac')",
"_____no_output_____"
]
],
[
[
"-----\n#### Bubble point algorithm x, T -> y, P",
"_____no_output_____"
]
],
[
[
"x = np.array([0.2, 0.5, 0.3])\nT = 350.0\ny0 = np.array([0.2, 0.5, 0.3])\nP0 = 1\nbubblePy(y0, P0, x, T, eos, full_output=True)",
"_____no_output_____"
]
],
[
[
"-----\n#### Bubble point algorithm x, P -> y, T",
"_____no_output_____"
]
],
[
[
"x = np.array([0.2, 0.5, 0.3])\nP = 2.0\ny0 = np.array([0.2, 0.5, 0.3])\nT0 = 320.0\nbubbleTy(y0, T0, x, P, eos, full_output=True)",
"_____no_output_____"
]
],
[
[
"For further information please also check [official documentation](https://phasepy.readthedocs.io/), or just try:\n\n```function?```",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
eca10b93e533537747a77b7a6ff4119f8420f3f9 | 187,095 | ipynb | Jupyter Notebook | Preliminary Work/Data_prediction.ipynb | init-13/PROJECT-Analysis-and-Prediction-of-Cryptocurrency-Price-Using-RNN | bc5ca25006d57724edeb12e53da7aff1a147d68e | [
"MIT"
] | null | null | null | Preliminary Work/Data_prediction.ipynb | init-13/PROJECT-Analysis-and-Prediction-of-Cryptocurrency-Price-Using-RNN | bc5ca25006d57724edeb12e53da7aff1a147d68e | [
"MIT"
] | null | null | null | Preliminary Work/Data_prediction.ipynb | init-13/PROJECT-Analysis-and-Prediction-of-Cryptocurrency-Price-Using-RNN | bc5ca25006d57724edeb12e53da7aff1a147d68e | [
"MIT"
] | 2 | 2022-03-15T17:54:11.000Z | 2022-03-16T06:25:49.000Z | 182.888563 | 76,740 | 0.867148 | [
[
[
"#tensorflow_version 2.x\nimport json\nimport requests\nfrom keras.models import Sequential\nfrom keras.layers import Activation, Dense, Dropout, LSTM\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom sklearn.metrics import mean_absolute_error\n#matplotlib inline",
"_____no_output_____"
],
[
"endpoint = 'https://min-api.cryptocompare.com/data/histoday'\nres = requests.get(endpoint + '?fsym=BTC&tsym=CAD&limit=500')\nhist = pd.DataFrame(json.loads(res.content)['Data'])\nhist = hist.set_index('time')\nhist.index = pd.to_datetime(hist.index, unit='s')\ntarget_col = 'close'\n",
"_____no_output_____"
],
[
"print(hist)\n",
" high low open volumefrom volumeto close \\\ntime \n2020-12-08 24707.49 23425.28 24626.17 102.86 2479400.96 23626.47 \n2020-12-09 23982.97 22764.23 23626.47 145.18 3417104.26 23932.51 \n2020-12-10 23932.51 22812.36 23932.51 138.09 3238904.37 23355.09 \n2020-12-11 23396.16 22588.52 23355.09 79.81 1830605.31 23081.68 \n2020-12-12 24281.12 23081.68 23081.68 40.93 973072.53 24155.59 \n... ... ... ... ... ... ... \n2022-04-18 51662.93 48601.04 49966.46 64.73 3232703.51 51284.40 \n2022-04-19 52508.03 50856.11 51284.40 80.09 4136822.73 52224.65 \n2022-04-20 52784.10 51120.99 52224.65 65.68 3410542.93 51679.82 \n2022-04-21 53578.29 50081.09 51679.82 108.57 5651787.23 50956.68 \n2022-04-22 51381.47 50710.33 50956.68 31.09 1588420.16 51087.89 \n\n conversionType conversionSymbol \ntime \n2020-12-08 direct \n2020-12-09 direct \n2020-12-10 direct \n2020-12-11 direct \n2020-12-12 direct \n... ... ... \n2022-04-18 direct \n2022-04-19 direct \n2022-04-20 direct \n2022-04-21 direct \n2022-04-22 direct \n\n[501 rows x 8 columns]\n"
],
[
"hist.drop([\"conversionType\", \"conversionSymbol\"], axis = 'columns', inplace = True)",
"_____no_output_____"
],
[
"hist",
"_____no_output_____"
],
[
"def train_test_split(df, test_size=0.2):\n split_row = len(df) - int(test_size * len(df))\n train_data = df.iloc[:split_row]\n test_data = df.iloc[split_row:]\n return train_data, test_data",
"_____no_output_____"
],
[
"train, test = train_test_split(hist, test_size=0.2)",
"_____no_output_____"
],
[
"train",
"_____no_output_____"
],
[
"test",
"_____no_output_____"
],
[
"def line_plot(line1, line2, label1=None, label2=None, title='', lw=2):\n fig, ax = plt.subplots(1, figsize=(13, 7))\n ax.plot(line1, label=label1, linewidth=lw)\n ax.plot(line2, label=label2, linewidth=lw)\n ax.set_ylabel('price [CAD]', fontsize=14)\n ax.set_title(title, fontsize=16)\n ax.legend(loc='best', fontsize=16);",
"_____no_output_____"
],
[
"line_plot(train[target_col], test[target_col], 'training', 'test', title='')",
"_____no_output_____"
],
[
"def normalise_zero_base(df):\n return df / df.iloc[0] - 1\n\ndef normalise_min_max(df):\n return (df - df.min()) / (data.max() - df.min())",
"_____no_output_____"
],
[
"def extract_window_data(df, window_len=5, zero_base=True):\n window_data = []\n for idx in range(len(df) - window_len):\n tmp = df[idx: (idx + window_len)].copy()\n if zero_base:\n tmp = normalise_zero_base(tmp)\n window_data.append(tmp.values)\n return np.array(window_data)",
"_____no_output_____"
],
[
"def prepare_data(df, target_col, window_len=10, zero_base=True, test_size=0.2):\n train_data, test_data = train_test_split(df, test_size=test_size)\n X_train = extract_window_data(train_data, window_len, zero_base)\n X_test = extract_window_data(test_data, window_len, zero_base)\n y_train = train_data[target_col][window_len:].values\n y_test = test_data[target_col][window_len:].values\n if zero_base:\n y_train = y_train / train_data[target_col][:-window_len].values - 1\n y_test = y_test / test_data[target_col][:-window_len].values - 1\n\n return train_data, test_data, X_train, X_test, y_train, y_test",
"_____no_output_____"
],
[
"def build_lstm_model(input_data, output_size, neurons=100, activ_func='linear',\n dropout=0.2, loss='mse', optimizer='adam'):\n model = Sequential()\n model.add(LSTM(neurons, input_shape=(input_data.shape[1], input_data.shape[2])))\n model.add(Dropout(dropout))\n model.add(Dense(units=output_size))\n model.add(Activation(activ_func))\n\n model.compile(loss=loss, optimizer=optimizer)\n return model",
"_____no_output_____"
],
[
"np.random.seed(42)\nwindow_len = 5\ntest_size = 0.2\nzero_base = True\nlstm_neurons = 200\nepochs = 25\nbatch_size = 36\nloss = 'mse'\ndropout = 0.2\noptimizer = 'adam'",
"_____no_output_____"
],
[
"train, test, X_train, X_test, y_train, y_test = prepare_data(hist, target_col, window_len=window_len, zero_base=zero_base, test_size=test_size)",
"_____no_output_____"
],
[
"model = build_lstm_model(X_train, output_size=1, neurons=lstm_neurons, dropout=dropout, loss=loss,optimizer=optimizer)\nhistory = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=batch_size, verbose=1, shuffle=True)\nprint(type(model))\nprint(type(history))\nprint(type(history.history))",
"Epoch 1/25\n11/11 [==============================] - 2s 45ms/step - loss: 0.0113 - val_loss: 0.0043\nEpoch 2/25\n11/11 [==============================] - 0s 9ms/step - loss: 0.0059 - val_loss: 0.0047\nEpoch 3/25\n11/11 [==============================] - 0s 10ms/step - loss: 0.0087 - val_loss: 0.0095\nEpoch 4/25\n11/11 [==============================] - 0s 9ms/step - loss: 0.0127 - val_loss: 0.0062\nEpoch 5/25\n11/11 [==============================] - 0s 10ms/step - loss: 0.0098 - val_loss: 0.0036\nEpoch 6/25\n11/11 [==============================] - 0s 9ms/step - loss: 0.0077 - val_loss: 0.0041\nEpoch 7/25\n11/11 [==============================] - 0s 10ms/step - loss: 0.0039 - val_loss: 0.0027\nEpoch 8/25\n11/11 [==============================] - 0s 10ms/step - loss: 0.0054 - val_loss: 0.0022\nEpoch 9/25\n11/11 [==============================] - 0s 10ms/step - loss: 0.0030 - val_loss: 0.0019\nEpoch 10/25\n11/11 [==============================] - 0s 9ms/step - loss: 0.0034 - val_loss: 0.0019\nEpoch 11/25\n11/11 [==============================] - 0s 11ms/step - loss: 0.0030 - val_loss: 0.0019\nEpoch 12/25\n11/11 [==============================] - 0s 12ms/step - loss: 0.0038 - val_loss: 0.0020\nEpoch 13/25\n11/11 [==============================] - 0s 12ms/step - loss: 0.0040 - val_loss: 0.0018\nEpoch 14/25\n11/11 [==============================] - 0s 12ms/step - loss: 0.0047 - val_loss: 0.0019\nEpoch 15/25\n11/11 [==============================] - 0s 11ms/step - loss: 0.0031 - val_loss: 0.0020\nEpoch 16/25\n11/11 [==============================] - 0s 12ms/step - loss: 0.0036 - val_loss: 0.0019\nEpoch 17/25\n11/11 [==============================] - 0s 10ms/step - loss: 0.0037 - val_loss: 0.0023\nEpoch 18/25\n11/11 [==============================] - 0s 12ms/step - loss: 0.0032 - val_loss: 0.0018\nEpoch 19/25\n11/11 [==============================] - 0s 12ms/step - loss: 0.0027 - val_loss: 0.0017\nEpoch 20/25\n11/11 [==============================] - 0s 10ms/step - loss: 0.0025 - val_loss: 0.0016\nEpoch 21/25\n11/11 [==============================] - 0s 11ms/step - loss: 0.0023 - val_loss: 0.0017\nEpoch 22/25\n11/11 [==============================] - 0s 9ms/step - loss: 0.0041 - val_loss: 0.0017\nEpoch 23/25\n11/11 [==============================] - 0s 10ms/step - loss: 0.0027 - val_loss: 0.0018\nEpoch 24/25\n11/11 [==============================] - 0s 10ms/step - loss: 0.0035 - val_loss: 0.0018\nEpoch 25/25\n11/11 [==============================] - 0s 12ms/step - loss: 0.0041 - val_loss: 0.0018\n<class 'keras.engine.sequential.Sequential'>\n<class 'keras.callbacks.History'>\n<class 'dict'>\n"
],
[
"print(history)",
"<keras.callbacks.History object at 0x0000017ACB6B3BB0>\n"
],
[
"import matplotlib.pyplot as plt\nplt.plot(history.history['loss'],'r',linewidth=2, label='Train loss')\nplt.plot(history.history['val_loss'], 'g',linewidth=2, label='Validation loss')\nplt.title('LSTM')\nplt.xlabel('Epochs')\nplt.ylabel('MSE')\nplt.show()",
"_____no_output_____"
],
[
"targets = test[target_col][window_len:]\npreds = model.predict(X_test).squeeze()\nmean_absolute_error(preds, y_test)\nprint(type(targets))\nprint(type(preds))\nmean_absolute_error(preds, y_test)",
"<class 'pandas.core.series.Series'>\n<class 'numpy.ndarray'>\n"
],
[
"from sklearn.metrics import mean_squared_error\nMAE=mean_squared_error(preds, y_test)\nMAE",
"_____no_output_____"
],
[
"from sklearn.metrics import r2_score\nR2=r2_score(y_test, preds)\nR2",
"_____no_output_____"
],
[
"preds = test[target_col].values[:-window_len] * (preds + 1)\npreds = pd.Series(index=targets.index, data=preds)\nline_plot(targets, preds, 'actual', 'prediction', lw=3)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca10c2d87f85c92c547d21db2faed986759f555 | 5,446 | ipynb | Jupyter Notebook | cs_oracle_scrape_1.0.ipynb | Bohdanski/jupyter-notebooks | 37176092cf7568a4a39cb6f87bc590634fa7cb8f | [
"MIT"
] | null | null | null | cs_oracle_scrape_1.0.ipynb | Bohdanski/jupyter-notebooks | 37176092cf7568a4a39cb6f87bc590634fa7cb8f | [
"MIT"
] | null | null | null | cs_oracle_scrape_1.0.ipynb | Bohdanski/jupyter-notebooks | 37176092cf7568a4a39cb6f87bc590634fa7cb8f | [
"MIT"
] | null | null | null | 23.575758 | 114 | 0.506427 | [
[
[
"from time import sleep\nfrom selenium import webdriver\nfrom selenium.webdriver.common.keys import Keys\nfrom selenium.common.exceptions import NoSuchElementException",
"_____no_output_____"
],
[
"driver = webdriver.Chrome(\"C:\\\\Users\\\\tubxt2p\\\\Downloads\\\\chromedriver_win32\\\\chromedriver.exe\")",
"_____no_output_____"
],
[
"def log_in(username, password):\n \"\"\"\n Takes in user credentials and logs into the website\n \"\"\"\n username_field = driver.find_element_by_name(\"username\")\n username_field.send_keys(username)\n\n password_field = driver.find_element_by_name(\"password\")\n password_field.send_keys(password)\n\n password_field.send_keys(Keys.ENTER)\n sleep(5)",
"_____no_output_____"
],
[
"def check_refresh():\n \"\"\"\n Check if Oracle DB is prompting Adobe Flash Update.\n \"\"\"\n try:\n driver.find_element_by_class_name(\"HeaderLogo\")\n except NoSuchElementException:\n return True\n return False",
"_____no_output_____"
],
[
"def click_element(path, method):\n \"\"\"\n Main function to interact with different webpage elements.\n Paramaeters are the path or id of the element that is being interacted with\n as well as the method Selenium is using to find the element.\n \"\"\"\n while True:\n try:\n if method == \"xpath\":\n element = driver.find_element_by_xpath(path)\n elif method == \"class\":\n element = driver.find_element_by_class_name(path)\n elif method == \"id\":\n element = driver.find_element_by_id(path)\n else:\n raise NameError(\"The specified method is not defined.\")\n element.click()\n sleep(5)\n break\n except Exception:\n while check_refresh() == True:\n driver.refresh()\n sleep(5)",
"_____no_output_____"
],
[
"driver.get(\"https://tinyurl.com/yx6l85d8\")",
"_____no_output_____"
],
[
"log_in(\"\", \"\")",
"_____no_output_____"
],
[
"click_element('//*[@title=\"Customer Detail Export Dashboard - Standard SL Export\"]', \"xpath\")",
"_____no_output_____"
],
[
"click_element(\"promptDropDownButton\", \"class\")",
"_____no_output_____"
],
[
"click_element('//*[@title=\"WTD\"]', \"xpath\")",
"_____no_output_____"
],
[
"click_element(\"gobtn\", \"id\")",
"_____no_output_____"
],
[
"click_element('//*[@title=\"Export to different format\"]', \"xpath\")",
"_____no_output_____"
],
[
"click_element('//*[@title=\"Download columnar data\"]', \"xpath\")",
"_____no_output_____"
],
[
"click_element('//*[@aria-label=\"Tab delimited Format\"]', \"xpath\")",
"_____no_output_____"
],
[
"click_element('//*[@name=\"OK\"]', \"xpath\")",
"_____no_output_____"
],
[
"click_element(\"logout\", \"id\")",
"_____no_output_____"
],
[
"driver.delete_all_cookies()\ndriver.close()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca11559cf1bc74695af229169feaa3b13f03a44 | 22,021 | ipynb | Jupyter Notebook | examples/tutorial/03_Composing_Plots.ipynb | ablythed/holoviz | ddfbfc504ade73e24aeb66560d9d3aa6f578956b | [
"BSD-3-Clause"
] | 207 | 2019-11-14T08:41:44.000Z | 2022-03-31T11:26:18.000Z | examples/tutorial/03_Composing_Plots.ipynb | ablythed/holoviz | ddfbfc504ade73e24aeb66560d9d3aa6f578956b | [
"BSD-3-Clause"
] | 74 | 2019-11-21T16:39:45.000Z | 2022-02-15T16:46:51.000Z | examples/tutorial/03_Composing_Plots.ipynb | ablythed/holoviz | ddfbfc504ade73e24aeb66560d9d3aa6f578956b | [
"BSD-3-Clause"
] | 36 | 2020-01-17T08:01:53.000Z | 2022-03-11T01:33:47.000Z | 35.2336 | 873 | 0.636529 | [
[
[
"<style>div.container { width: 100% }</style>\n<img style=\"float:left; vertical-align:text-bottom;\" height=\"65\" width=\"172\" src=\"../assets/holoviz-logo-unstacked.svg\" />\n<div style=\"float:right; vertical-align:text-bottom;\"><h2>Tutorial 3. Composing Plots</h2></div>",
"_____no_output_____"
],
[
"So far we have generated plots using [hvPlot](http://hvplot.pyviz.org), but we haven't discussed what exactly these plots are and how they differ from the output of other libraries offering the `.plot` API. Here we will see that the `.hvplot()` output is actually a rich, composable and compositional object that can be used in many different ways, not just as an immediate plot. Specifically, hvPlot generates [HoloViews](https://holoviews.org) objects rendered using the [Bokeh](https://bokeh.org) backend so that they support interactive hovering, panning, and zooming. \n\nIn this notebook, we'll examine the output of hvPlot calls to take a look at individual HoloViews objects. Then we will see how these \"elements\" offer us powerful ways of combining and composing layered visualizations.",
"_____no_output_____"
],
[
"### Read in the data\n\nWe'll read in the data as before, and also reindex by time so that we can more easily do resampling. ",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
],
[
"%%time\ndf = pd.read_parquet('../data/earthquakes-projected.parq')\ndf.time = df.time.astype('datetime64[ns]')\ndf = df.set_index(df.time)",
"_____no_output_____"
]
],
[
[
"### Composing plots\nIn this section we'll start looking at how we can group plots to gain a deeper understanding of the data. We'll start by resampling the data to explore patterns in number and magnitude of earthquakes over time. ",
"_____no_output_____"
]
],
[
[
"import hvplot.pandas # noqa: adds hvplot method to pandas objects",
"_____no_output_____"
],
[
"weekly_count = df.id.resample('1W').count().rename('count')\nweekly_count_plot = weekly_count.hvplot(title='weekly count')",
"_____no_output_____"
]
],
[
[
"The first thing to note is that with `hvplot`, it is common to grab a handle on the returned output, which isn't necessarily displayed right away. The value returned from the Matplotlib based `.plot` API of Pandas is an axis object that is typically discarded, with plotting display occurring only as a side effect if `%matplotlib inline` is loaded. With hvPlot, there are no side effects; the plot is displayed only if it is returned as the last value in the Jupyter cell (and thus no plot is visible in the above cell's output). hvPlot objects thus work like any other normal Python object; just as you don't expect `x=2` to display anything, `x=df.hvplot()` will not display anything; both just assign a value to `x`.\n\nOnce you have a handle on it, however, you can plot it if you wish to:",
"_____no_output_____"
]
],
[
[
"weekly_count_plot",
"_____no_output_____"
]
],
[
[
"But we can also do other things, such as look at its textual representation by printing it:",
"_____no_output_____"
]
],
[
[
"print(weekly_count_plot)",
"_____no_output_____"
]
],
[
[
"\"`:Curve [time] (count)`\" is HoloViews notation for saying that this object is a `Curve` element with `time` as a key dimension (`kdim`, in square brackets) and `count` as a value dimension (`vdim`, in parentheses). In other contexts, key dimensions are also called index dimensions or independent variables, while value dimensions are also called dependent variables. HoloViews will be discussed in more detail in a [later section](./07_Custom_Interactivity.ipynb).\n\nNow let's do a similar resampling, but for magnitude:",
"_____no_output_____"
]
],
[
[
"weekly_mean_magnitude = df.mag.resample('1W').mean()\nweekly_mean_magnitude_plot = weekly_mean_magnitude.hvplot(title='weekly mean magnitude')\nweekly_mean_magnitude_plot",
"_____no_output_____"
],
[
"print(weekly_mean_magnitude_plot)",
"_____no_output_____"
]
],
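[
[
"As an aside, to make the `:Curve [kdim] (vdim)` notation concrete, here is a small added sketch (it uses a toy DataFrame rather than the earthquake data) that builds the same kind of element explicitly with HoloViews:\n\n```python\nimport pandas as pd\nimport holoviews as hv\nhv.extension('bokeh')\n\ntoy = pd.DataFrame({'time': pd.date_range('2000-01-01', periods=3, freq='W'),\n                    'count': [2, 5, 3]})\ncurve = hv.Curve(toy, kdims=['time'], vdims=['count'])\nprint(curve)   # :Curve   [time]   (count)\n```",
"_____no_output_____"
]
],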
[
[
"This plot has time on the x axis like the other, but the value dimension is magnitude rather than count. HoloViews objects can be composed into an overlay using `*` symbol, with a legend to distinguish them:",
"_____no_output_____"
]
],
[
[
"weekly_mean_magnitude_plot * weekly_count_plot",
"_____no_output_____"
]
],
[
[
"The two timeseries have quite different value ranges, making it very hard to see the fluctuations in magnitude with an overlay like this. It is possible to zoom in to see the detail in `mag`, but a more useful form of composition here is a layout of separate plots, using a `+` symbol to compose HoloViews objects side-by-side with axes linked for any shared dimensions:",
"_____no_output_____"
]
],
[
[
"(weekly_mean_magnitude_plot + weekly_count_plot).cols(1)",
"_____no_output_____"
]
],
[
[
"Try zooming in and out (including on the axes) to explore the linking between the plots above.\n\nInterestingly, there are three clear peaks in the monthly counts, and two of them correspond to sudden dips in the mean magnitude, while the third corresponds to a peak in the mean magnitude.",
"_____no_output_____"
],
[
"### Adding a third dimension\n\nNow let's filter the earthquakes to only include the really high intensity ones. Using the pandas `.plot()` API, we can add extra dimensions to the visualization by using color to represent magnitude in addition to the x and y locations:",
"_____no_output_____"
]
],
[
[
"most_severe = df[df.mag >= 7]\n\n%matplotlib inline\nmost_severe.plot.scatter(x='longitude', y='latitude', c='mag');",
"_____no_output_____"
]
],
[
[
"Here is the analogous version using `hvplot` where we grab the handle `high_mag_scatter` so we can inspect the return value:",
"_____no_output_____"
]
],
[
[
"high_mag_scatter = most_severe.hvplot.scatter(x='longitude', y='latitude', c='mag')\nhigh_mag_scatter",
"_____no_output_____"
]
],
[
[
"As always, this return value is actually a HoloViews element which has a printed representation:",
"_____no_output_____"
]
],
[
[
"print(high_mag_scatter)",
"_____no_output_____"
]
],
[
[
"This representation reveals that even though the scatterplot here looks ok, it's actually flawed in a subtle way. Earthquakes occur at a particular 2D longitude, latitude location on the earth's surface and have a measured magnitude, i.e., they are 2D points with some associated value, and should have a representation like \"[longitude,latitude] (mag). That is, there should be two key dimensions (independent variables), with the magnitude being a dependent variable (\"value dimension\"). The problem here is that the Pandas .scatter call makes no distinction between these types of dimensions, which will confuse HoloViews when it is doing automatic linking and other operations that depend on the interpretation of the data. For this purpose, HoloViews provides a separate `.hvplot.points` call that has the same visual representation but the correct semantics:",
"_____no_output_____"
]
],
[
[
"high_mag_points = most_severe.hvplot.points(x='longitude', y='latitude', c='mag')\nhigh_mag_points",
"_____no_output_____"
],
[
"print(high_mag_points)",
"_____no_output_____"
]
],
[
[
"This object now appropriately represents that latitude and longitude are the key dimensions, with one value dimension (the magnitude).\n\nWe will learn more about HoloViews objects in the next notebook. But first, let's adjust the options to create a better plot. First we'll use [colorcet](https://colorcet.pyviz.org) to get a colormap that will have a good contrast with blue oceans when we show earthquakes on a map; the default blue colormap above would get lost against the seas! We can choose one from the website and use the HoloViews/Bokeh-based `colorcet` plotting module to make sure it looks good. ",
"_____no_output_____"
]
],
[
[
"import colorcet as cc\nfrom colorcet.plotting import swatch\n\nswatch('CET_L4')",
"_____no_output_____"
]
],
[
[
"We'll reverse the colors to align dark reds with higher magnitude earthquakes for better contrast.",
"_____no_output_____"
]
],
[
[
"mag_cmap = cc.CET_L4[::-1]\nswatch(\"CET_L4_r\", mag_cmap)",
"_____no_output_____"
]
],
[
[
"In addition to fixing the colormap, we will add some additional columns to the hover text, and add a title.",
"_____no_output_____"
]
],
[
[
"high_mag_points = most_severe.hvplot.points(\n x='longitude', y='latitude', c='mag', hover_cols=['place', 'time'],\n cmap=mag_cmap, title='Earthquakes with magnitude >= 7')\n\nhigh_mag_points",
"_____no_output_____"
]
],
[
[
"When you hover over the points you'll see the place and time of the earthquake in addition to the magnitude and lat/lon. What's displayed by default corresponds to the dimensions that HoloViews is keeping track of, even though not all have been mapped to visible features of the plot:",
"_____no_output_____"
]
],
[
[
"print(high_mag_points)",
"_____no_output_____"
]
],
[
[
"#### Exercise\n\nUse the colorcet plotting function `swatches(group='linear')` to choose a different colormap and create a plot with it. \n\n\n<details><summary><i><u>(Hint)</u></i></summary><br>\n\n```python\nfrom colorcet.plotting import swatches\nswatches(group='linear')\n```\nthen \n```python\nmost_severe.hvplot.points(\n x='longitude', y='latitude', c='mag', hover_cols=['place', 'time'],\n cmap=\"bgy\", title='Earthquakes with magnitude >= 7')\n```\n\n</details>",
"_____no_output_____"
],
[
"### Overlay with a tiled map \n\nThe \"CET_L4\" colormap works well here, and we can kind of see the outlines of the continents, but the visualization would be much easier to parse if we added a base map underneath. To do this, we'll import a tile element from HoloViews, namely the `OSM` tiles from [openstreetmap](https://www.openstreetmap.org/) using the Web Mercator projection: ",
"_____no_output_____"
]
],
[
[
"from holoviews.element.tiles import OSM\nOSM()",
"_____no_output_____"
]
],
[
[
"Note that when you zoom, the map becomes more and more detailed, downloading new tiles dynamically as necessary. \n\nIn order to overlay on this basemap, we need to project our earthquakes to the Web Mercator projection system used by Bokeh. This dataset already includes columns for Web Mercator `easting` (meters East of Greenwich) and `northing` (meters north of the equator) columns calculated from `longitude` and `latitude` using the HoloViews [hv.util.transform.lon_lat_to_easting_northing](https://holoviews.org/reference_manual/holoviews.util.html#holoviews.util.transform.lon_lat_to_easting_northing) function, so we'll use those and overlay our points on top of the `OSM` tile source by using the HoloViews `*` operator:",
"_____no_output_____"
]
],
[
[
"OSM() * most_severe.hvplot.points(\n x='easting', y='northing', c='mag', hover_cols=['place', 'time'], \n cmap=mag_cmap, title='Earthquakes with magnitude >= 7', line_color='black')",
"_____no_output_____"
]
],
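[
[
"For reference, here is a minimal added sketch of how the `easting`/`northing` columns could be computed for a single longitude/latitude pair, using only the conversion function named above (the coordinate itself is just an assumed example):\n\n```python\nfrom holoviews.util.transform import lon_lat_to_easting_northing\n\neasting, northing = lon_lat_to_easting_northing(-122.4, 37.8)\nprint(easting, northing)   # Web Mercator metres east of Greenwich / north of the equator\n```",
"_____no_output_____"
]
],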
[
[
"Actually, for this special but common case of overlaying data on geographic tiles, hvPlot lets you simply specify `tiles='OSM'` as a string in the `hvplot.points` call instead of explicitly overlaying a tile source with `*`:",
"_____no_output_____"
]
],
[
[
"most_severe.hvplot.points(\n x='easting', y='northing', c='mag', hover_cols=['place', 'time'], \n cmap=mag_cmap, title='Earthquakes with magnitude >= 7', tiles='OSM',\n line_color='black')",
"_____no_output_____"
]
],
[
[
"Note that the Web Mercator projection is only one of many possible projections used when working with geospatial data. If you need to work with these different projections, you can use the [GeoViews](http://geoviews.org) extension to HoloViews that makes elements aware of the projection they are defined in and automatically projects into whatever coordinates are needed for display. \n\n\n#### Exercise\n\nImport and use different tiles. \n\n\n<details><summary><i><u>(Hint)</u></i></summary><br>\n\nTry EsriImagery or Wikipedia.\n\n</details>\n",
"_____no_output_____"
],
[
"### Overlay with a raster\nThat base map is helpful for orienting ourselves, but it isn't really adding much new information. We might instead like to overlay the earthquakes on other data, such as a map of global population. We'll start by reading in a raster of global population, to see how the events might affect people. We'll use [xarray](https://xarray.pydata.org) to load this multidimensional raster data file, as such formats are not handled well by Pandas. (Luckily, hvPlot supports xarray just as well as Pandas.)",
"_____no_output_____"
]
],
[
[
"import xarray as xr\nimport hvplot.xarray # noqa: adds hvplot method to xarray objects",
"_____no_output_____"
],
[
"ds = xr.open_dataarray('../data/raster/gpw_v4_population_density_rev11_2010_2pt5_min.nc')\nds",
"_____no_output_____"
],
[
"cleaned_ds = ds.where(ds.values != ds.nodatavals).sel(band=1)\ncleaned_ds.name = 'population'",
"_____no_output_____"
]
],
[
[
"The `xarray.plot()` Matplotlib API is fine for plotting small sections of this dataset, such as the population of Indonesia:",
"_____no_output_____"
]
],
[
[
"ROI = cleaned_ds.sel(y=slice(10, -10), x=slice(90, 110))\nROI.plot();",
"_____no_output_____"
]
],
[
[
"Matplotlib will struggle with the full dataset, but we can use hvPlot+Datashader to see it all and interactively look at patterns with different spatial scales. Here we will apply a logarithmic colormap to the population counts (similar in appearance to `eq_hist` in this case, but easier for people to interpret), and set a `clim` to exclude the lower bound of zero from the log calculation:",
"_____no_output_____"
]
],
[
[
"rasterized_pop = cleaned_ds.hvplot.image(rasterize=True, cnorm='log', clim=(1, np.nan))\nrasterized_pop",
"_____no_output_____"
]
],
[
[
"By inspecting the HoloViews object, we can see that the output isn't actually an Image; instead it is a DynamicMap of Images. This means that any particular image that is displayed is actually just one of many images that are computed on the fly (dynamically), as you can see when you zoom in.",
"_____no_output_____"
]
],
[
[
"print(rasterized_pop)",
"_____no_output_____"
]
],
[
[
"#### Exercise\n\nUse `.last` to inspect the last image from the `DynamicMap`. Zoom into the plot above and inspect `.last` again.",
"_____no_output_____"
],
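[
"One possible approach to the exercise above (a sketch that assumes the `rasterized_pop` handle from the earlier cell is still in scope):\n\n```python\nrasterized_pop.last\n```\n\nZoom or pan in the plot, then evaluate `rasterized_pop.last` again to see that a new `Image` has been computed.",
"_____no_output_____"
],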
[
"### Putting it together\n\nNow let's overlay the earthquake data on top of the population data, using different colormaps so that we can see both types of data clearly in the same plot.",
"_____no_output_____"
]
],
[
[
"title = 'High magnitude (>=7) earthquakes over population density [people/km^2] from 2000 to 2019'\n\nrasterized_pop = cleaned_ds.hvplot.image(\n rasterize=True, cmap='kbc', logz=True, clim=(1, np.nan),\n height=600, width=1000, xaxis=None, yaxis=None\n)\nhigh_mag_points = most_severe.hvplot.points(\n x='longitude', y='latitude', c='mag',\n hover_cols=['place', 'time'], cmap=mag_cmap).opts(bgcolor='black') # .opts is described in the next notebook\n\npop_and_high_mag = (rasterized_pop * high_mag_points).relabel(title)\npop_and_high_mag ",
"_____no_output_____"
]
],
[
[
"Once again we have used the `*` operator, this time to build an overlay of our earthquake points on top of the population raster.",
"_____no_output_____"
]
],
[
[
"print(pop_and_high_mag)",
"_____no_output_____"
]
],
[
[
"As you can see, the HoloViews objects returned by hvPlot are in no way dead ends -- you can flexibly compare, combine, lay out, and overlay them to reveal complex interrelationships in your data. In fact, this idea of avoiding dead ends is one of the major design principles for HoloViz tools:\n\n<img src=\"../assets/shortcuts.png\" width=70%>\n\nThat is, instead of forcing you to choose between a powerful low-level tool that requires extensive expertise even to get started, or a high-level tool that's eventually a dead end, HoloViz tools try to provide you a quick way to get to something that is at least _nearly_ what you want, while making it possible to further customize and tweak what you get so that the end result can be precisely what you want. That way you only need to learn the specific low-level bits actually required, whether that's at the HoloViews level (if you need to customize your hvPlot-generated objects), or further down to Bokeh and JavaScript.\n\nIn the next notebook we'll learn how to link up the HoloViews plots you've generated, to help you understand the relationships between the various views of your data that you create.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
eca1162715098c6eae8c58f1115336a4c8225bf2 | 12,721 | ipynb | Jupyter Notebook | PyAbst Testing.ipynb | joshuadavidwood/pyabst | 566a60bbf4e6bd8ddf62b1d681739187c59a2e0c | [
"MIT"
] | 1 | 2019-03-14T20:04:17.000Z | 2019-03-14T20:04:17.000Z | PyAbst Testing.ipynb | joshuadavidwood/pyabs | 566a60bbf4e6bd8ddf62b1d681739187c59a2e0c | [
"MIT"
] | null | null | null | PyAbst Testing.ipynb | joshuadavidwood/pyabs | 566a60bbf4e6bd8ddf62b1d681739187c59a2e0c | [
"MIT"
] | null | null | null | 44.017301 | 1,019 | 0.630689 | [
[
[
"# PyAbst Example",
"_____no_output_____"
],
[
"## Introduction",
"_____no_output_____"
]
],
[
[
"import nltk\nfrom collections import Counter\nfrom nltk.corpus import stopwords\nimport re",
"_____no_output_____"
]
],
[
[
"### Example News Article: Emma Haruka Iwao smashes pi world record with Google help",
"_____no_output_____"
],
[
"The value of the number pi has been calculated to a new world record length of 31 trillion digits, far past the previous record of 22 trillion. Emma Haruka Iwao, a Google employee from Japan, found the new digits with the help of the companys cloud computing service.<br>\n\nPi is the number you get when you divide a circles circumference by its diameter. The first digits, 3.14, are well known but the number is infinitely long. Extending the known sequence of digits in pi is very difficult because the number follows no set pattern.<br>\n\nPi is used in engineering, physics, supercomputing and space exploration - because its value can be used in calculations for waves, circles and cylinders. The pursuit of longer versions of pi is a long-standing pastime among mathematicians. And Ms Iwao said she had been fascinated by the number since she had been a child.<br>\n\nThe calculation required 170TB of data (for comparison, 200,000 music tracks take up 1TB) and took 25 virtual machines 121 days to complete.<br>\n<br>\n<i>Source: https://www.bbc.co.uk/news/technology-47524760</i>\n",
"_____no_output_____"
],
[
"### Example News Article Ingest",
"_____no_output_____"
]
],
[
[
"input_text = '''The value of the number pi has been calculated to a new world record length of 31 trillion digits, far past the previous record of 22 trillion. Emma Haruka Iwao, a Google employee from Japan, found the new digits with the help of the companys cloud computing service. Pi is the number you get when you divide a circles circumference by its diameter. The first digits, 3.14, are well known but the number is infinitely long. Extending the known sequence of digits in pi is very difficult because the number follows no set pattern. Pi is used in engineering, physics, supercomputing and space exploration - because its value can be used in calculations for waves, circles and cylinders. The pursuit of longer versions of pi is a long-standing pastime among mathematicians. And Ms Iwao said she had been fascinated by the number since she had been a child. The calculation required 170TB of data (for comparison, 200,000 music tracks take up 1TB) and took 25 virtual machines 121 days to complete.'''",
"_____no_output_____"
]
],
[
[
"### PyAbst Source Code",
"_____no_output_____"
]
],
[
[
"def PyAbst(text, target_words=[], word_weight=1):\n '''A function which returns the most important sentences from a list of sentences using common word weighting.'''\n\n\n # Define StopWords corpus.\n new_stopwords = ['said', 'so'] # Additional StopWords.\n stopwords = nltk.corpus.stopwords.words('english')\n stopwords = stopwords + new_stopwords\n\n\n # Evaluate upper, lower and capitalised combinations of target words.\n target_words_combinations = []\n for i in target_words:\n upper = i.upper() # Evaluate upper.\n target_words_combinations.append(upper)\n lower = i.lower() # Evaluate lower.\n target_words_combinations.append(lower)\n capitalise = i.capitalize() # Evaluate capitalised.\n target_words_combinations.append(capitalise)\n\n\n # Unprocessed text.\n input_text = text # Defined here to evaluate reduction percentage.\n text = text.replace('?', '.') # Replace ? character with period.\n text = text.replace('!', '.') # Replace ! character with period.\n text = [x.split('.') for x in text.split('.')] # Use period to create list of lists with period separation.\n text = [[x.lstrip() for x in listx] for listx in text] # Remove heading whitespace text for each list element.\n text = [[x + '.' for x in listx] for listx in text] # Add a period at the end of each list.\n text = text[:-1] # Remove list containing [.] at the end of list of lists.\n\n processed_text = [] #NOTE: This is a duplicate of unprocessed text with additional processing methods.\n for i in text:\n i = [re.sub(r'[^\\w\\s]', '', j).lower() for j in i] # Remove punctuation and lower all words.\n i = [nltk.word_tokenize(j) for j in i] # Tokenize words using NLTK.\n i = [item for sublist in i for item in sublist] # Flatten the list of lists.\n i = [j for j in i if j not in stopwords] # Remove StopWords using NLTK.\n processed_text.append(i)\n\n\n sentences_unpacked = [item for sublist in processed_text for item in sublist] # Unpacked list of lists.\n\n\n def replace_list_dict(list, dictionary):\n '''A function which replaces list elements with corresponding dictionary key-values.'''\n replaced = [(item, Counter(sentences_unpacked).get(item, item)) for item in list]\n return replaced\n\n\n sentences_list = []\n for i in processed_text: # Converts list (word) element to (word, frequency).\n sentences_list.append(replace_list_dict(i, Counter(sentences_unpacked)))\n\n\n weighted_sentences_list = []\n for i in sentences_list: # Replaces list (word, frequency) element with (word, frequency * weight) if word is in target list.\n weighted_sentences_list.append([(t[0], t[1] * word_weight) if t[0] in target_words_combinations else (t[0], t[1]) for t in i])\n\n\n sentences_list_scores = []\n for i in weighted_sentences_list:\n sum_score = sum(x[1] for x in i) # Evalute the sum of frequency for each sentence (list within list of lists).\n sentences_list_scores.append(sum_score) # Evaluate the sum of frequencies for each sentence.\n\n\n sentences_length = []\n for i in processed_text:\n sentence_length = len(i) # Evaluate how many words are in each sentence.\n sentences_length.append(sentence_length)\n\n\n weighted_scores = [(x, x//y) for x, y in zip(sentences_list_scores, sentences_length)] # Evaluate (score, weighted score).\n index = int(len(processed_text) * 0.4) # Evaluate fraction of sentences to return.\n reduced_indexes = sorted((sorted(range(len(sentences_list_scores)), key=lambda i: sentences_list_scores[i])[-index:]))\n\n\n reduced_text = list(text[i] for i in reduced_indexes) # Only return the sentences with index in 
reduced_indexes.\n reduced_text = [item for sublist in reduced_text for item in sublist] # Unpacked list of lists.\n reduced_text = ' '.join(reduced_text)\n\n\n # Evaluate text reduction percentage.\n original_length = len(input_text)\n reduced_length = len(reduced_text)\n percentage_diff = str(int(((original_length - reduced_length) / original_length) * 100)) + '%'\n\n\n return reduced_text, percentage_diff",
"_____no_output_____"
]
],
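An illustrative aside (not part of the original notebook): the scoring above boils down to summing corpus-wide word frequencies per sentence, with any target word's frequency multiplied by `word_weight` first. The toy sentences and numbers below are invented purely to show the arithmetic behind the suppression and amplification examples that follow.

```python
from collections import Counter

# Toy illustration of the frequency-based scoring used by PyAbst (made-up words).
sentences = [["pi", "record", "digits"], ["pi", "number", "pattern"]]
freqs = Counter(word for sentence in sentences for word in sentence)  # pi: 2, others: 1

def score(sentence, target_words=(), word_weight=1):
    # Each word contributes its corpus frequency; target words are scaled by word_weight.
    return sum(freqs[w] * word_weight if w in target_words else freqs[w] for w in sentence)

print(score(sentences[1]))                        # 2 + 1 + 1 = 4
print(score(sentences[1], ("number",), 200))      # 2 + 200 + 1 = 203 -> sentence is favoured
print(score(sentences[1], ("number",), -20000))   # 2 - 20000 + 1 < 0 -> sentence is dropped
```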
[
[
"### Example 1: Default Arguments",
"_____no_output_____"
]
],
[
[
"print(PyAbst(input_text, [], 1)[0], '\\n')\nprint('Original text reduced by: ', PyAbst(input_text, [], 1)[1])",
"The value of the number pi has been calculated to a new world record length of 31 trillion digits, far past the previous record of 22 trillion. Emma Haruka Iwao, a Google employee from Japan, found the new digits with the help of the companys cloud computing service. Extending the known sequence of digits in pi is very difficult because the number follows no set pattern. Pi is used in engineering, physics, supercomputing and space exploration - because its value can be used in calculations for waves, circles and cylinders. \n\nOriginal text reduced by: 46%\n"
]
],
[
[
"### Example 2: Suppressing the word 'Japan'",
"_____no_output_____"
]
],
[
[
"print(PyAbst(input_text, ['Japan'], -20000)[0], '\\n')\nprint('Original text reduced by: ', PyAbst(input_text, ['Japan'], -20000)[1])",
"The value of the number pi has been calculated to a new world record length of 31 trillion digits, far past the previous record of 22 trillion. Extending the known sequence of digits in pi is very difficult because the number follows no set pattern. Pi is used in engineering, physics, supercomputing and space exploration - because its value can be used in calculations for waves, circles and cylinders. The calculation required 170TB of data (for comparison, 200,000 music tracks take up 1TB) and took 25 virtual machines 121 days to complete. \n\nOriginal text reduced by: 45%\n"
]
],
[
[
"### Example 3: Amplifying the word 'number'",
"_____no_output_____"
]
],
[
[
"print(PyAbst(input_text, ['number'], 200)[0], '\\n')\nprint('Original text reduced by: ', PyAbst(input_text, ['number'], 200)[1])",
"The value of the number pi has been calculated to a new world record length of 31 trillion digits, far past the previous record of 22 trillion. Pi is the number you get when you divide a circles circumference by its diameter. Extending the known sequence of digits in pi is very difficult because the number follows no set pattern. And Ms Iwao said she had been fascinated by the number since she had been a child. \n\nOriginal text reduced by: 58%\n"
]
],
[
[
"### Summry Comparison",
"_____no_output_____"
],
[
"When you ingest the same text into Smmry ([www.smmry.com](www.smmry.com)). with select a 5 sentence argument we find the results to match Example 3. However, PyAbs replaces <i>'The first digits, 3.14, are well known but the number is infinitely long.'</i> with <i>'And Ms Iwao said she had been fascinated by the number since she had been a child.'</i>.\n\nThe value of the number pi has been calculated to a new world record length of 31 trillion digits, far past the previous record of 22 trillion. Pi is the number you get when you divide a circles circumference by its diameter. The first digits, 3.14, are well known but the number is infinitely long. Extending the known sequence of digits in pi is very difficult because the number follows no set pattern.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
eca11f4b7a0b6b3a37baaae89343126c27221422 | 108,545 | ipynb | Jupyter Notebook | docs/_build/html/notebooks/Introduction.ipynb | busyweaver/Straph | b97a7b99ffab2416eb81df11073cc927f648fa10 | [
"Apache-2.0"
] | null | null | null | docs/_build/html/notebooks/Introduction.ipynb | busyweaver/Straph | b97a7b99ffab2416eb81df11073cc927f648fa10 | [
"Apache-2.0"
] | null | null | null | docs/_build/html/notebooks/Introduction.ipynb | busyweaver/Straph | b97a7b99ffab2416eb81df11073cc927f648fa10 | [
"Apache-2.0"
] | null | null | null | 158.923865 | 23,796 | 0.883818 | [
[
[
"import matplotlib.pyplot as plt\nimport straph as sg",
"_____no_output_____"
],
[
"plt.rcParams[\"figure.figsize\"] = (12,9)",
"_____no_output_____"
]
],
[
[
"# Introduction",
"_____no_output_____"
],
[
"Formally, a stream graph $S = (T,V,W,E)$ is defined by a set of time instants $T$, a finite set of nodes $V$, a set of temporal nodes $W \\subseteq T \\times V$, and a set of temporal links $E\\subseteq T \\times V \\times V$.\nThe set of time instants $T$ can be continuous or discrete. Likewise interactions (temporal links) between two nodes can be discrete $(b,b,u,v)$ or continuous $(b,e,u,v)$ (with $b,e \\in T$ and $u,v \\in V$).",
"_____no_output_____"
],
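As an illustrative aside (not part of the original notebook), a tiny stream graph $S=(T,V,W,E)$ can be written down with plain Python containers; the values below are invented and this is not Straph's API, just the formalism made concrete.

```python
# A toy stream graph S = (T, V, W, E) expressed with plain Python objects (illustration only).
T = (0.0, 10.0)                      # time window
V = {"u", "v"}                       # nodes
W = [(0.0, 5.0, "u"),                # temporal nodes: u is present on [0, 5]
     (0.0, 10.0, "v")]               # v is present on the whole window
E = [(1.0, 3.0, "u", "v"),           # a continuous interaction over [1, 3]
     (4.0, 4.0, "u", "v")]           # a discrete interaction at t = 4 (b == e)
print(len(V), "nodes,", len(E), "temporal links")
```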
[
"Stream graphs can be used to model any connected structure evolving through time. For instance, IP traffic between entities can be modelised as follow: whenever two IP adresses exchanges packets we record a temporal link between these two nodes corresponding to the duration of the exchange.",
"_____no_output_____"
],
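A minimal sketch of that modelling step, with made-up packet records; the record format and the flat `[b0, e0, b1, e1, ...]` layout mirror the conventions used later in this notebook, but none of this is Straph code.

```python
# Made-up packet-exchange records: (start, end, src_ip, dst_ip). Illustration only.
packets = [
    (0.0, 2.5, "10.0.0.1", "10.0.0.2"),
    (4.0, 6.0, "10.0.0.1", "10.0.0.2"),
    (3.0, 3.0, "10.0.0.1", "10.0.0.3"),  # instantaneous exchange -> discrete link
]

links = {}  # (u, v) -> flat list of presence intervals [b0, e0, b1, e1, ...]
for b, e, u, v in packets:
    links.setdefault(tuple(sorted((u, v))), []).extend([b, e])

print(links)
# {('10.0.0.1', '10.0.0.2'): [0.0, 2.5, 4.0, 6.0], ('10.0.0.1', '10.0.0.3'): [3.0, 3.0]}
```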
[
"First, we load an artificial example that will be used in the following steps of analysis and visualisation.",
"_____no_output_____"
]
],
[
[
"path_directory = \"examples/\"\nS = sg.read_stream_graph(path_nodes=path_directory + \"example_nodes.sg\",\n path_links=path_directory + \"example_links.sg\")",
"_____no_output_____"
]
],
[
[
"## Basic visualisation",
"_____no_output_____"
]
],
[
[
"fig = S.plot()",
"findfont: Font family ['Garamond'] not found. Falling back to DejaVu Sans.\n"
]
],
[
[
"We refer to this [Notebook](Drawing.ipynb) for more details on visualisation.",
"_____no_output_____"
],
[
"## Stream Graph Object",
"_____no_output_____"
],
[
"**Paradigm**: In ``Straph`` simple data structures should be represented by built-in python objects, resulting in a more comprehensive and intuitive code.\n\nAfter a comparative analysis we choose to use the following data structures for manipulating stream graphs. (As numerous algorithms and basic computations are not based on vectorials operations ``Numpy`` arrays were deemed to slow.)",
"_____no_output_____"
],
[
"A ``StreamGraph`` object is constituted by five main attributes:\n\n- ```times```: the time window of the stream graph ($T$)\n- ```nodes```: the list of nodes present in the stream graph ($V$)\n- ```node_presence```: a list of list, each list corresponds to a node and contains its presence time ($W$)\n- ```links```: the list of links present in the stream graph ($E$)\n- ```link_presence```: a list of list, each list corresponds to a link and contains its presence time ($E$)",
"_____no_output_____"
]
],
[
[
"S.times",
"_____no_output_____"
]
],
[
[
"The stream graph spans from instant $0$ to $10$",
"_____no_output_____"
]
],
[
[
"S.nodes",
"_____no_output_____"
]
],
[
[
"``S`` contains $6$ nodes.\nNodes are always represented by integers, their labels can be stored in the attribute ```node_to_label```.",
"_____no_output_____"
]
],
[
[
"S.node_to_label",
"_____no_output_____"
],
[
"S.node_presence",
"_____no_output_____"
]
],
[
[
"As we can see on the above figure, node $A$ with index $0$ is present from time $0$ to $5$, absent from $5$ to $7$ and present again from $7$ to $10$.",
"_____no_output_____"
]
],
[
[
"S.links",
"_____no_output_____"
],
[
"S.link_presence",
"_____no_output_____"
]
],
[
[
"The link $(4,5)$ (corresponding to nodes $E$ and $F$) with index $5$ is active from time $0$ to $3$ and again from $7$ to $10$.",
"_____no_output_____"
],
[
"A short description of a stream graph scale can be obtained with ``.describe()``",
"_____no_output_____"
]
],
[
[
"S.describe()",
"Nb of Nodes : 6\nNb of segmented nodes : 11.0\nNb of links : 8\nNb of segmented links : 11.0\nNb of event times : 11\n"
]
],
[
[
"We can add or remove nodes and links to/from a ``stream_graph`` object (we refer to this [notebook](Readers%20and%20Writers%20(Input-Output).ipynb) for further information).\n\nWe can easily add a new node. Let's add $G$ present from $0$ to $3$ and from $8$ to $9$.",
"_____no_output_____"
]
],
[
[
"S.add_node('G',[0,3,8,9])",
"_____no_output_____"
]
],
[
[
"Likewise we can add a new link. If one of the extrimities is new, it will be added automatically for the duration of the link.\nLet's add a link $(G,H)$ from $1$ to $3$ and from $8$ to $9$.",
"_____no_output_____"
]
],
[
[
"S.add_link(('G','H'),[1,3,8,9])\n_ = S.plot()",
"_____no_output_____"
]
],
[
[
"Let's remove these new interactions between nodes $G$ and $H$.",
"_____no_output_____"
]
],
[
[
"S.remove_link(('G','H'))\n_ = S.plot()",
"_____no_output_____"
]
],
[
[
"*Note* : If we remove a node, all of its links will be automatically removed.",
"_____no_output_____"
]
],
[
[
"S.remove_node('G')\nS.remove_node('H')",
"_____no_output_____"
],
[
"S.plot()\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Manipulating a Stream Graph Object",
"_____no_output_____"
],
[
"There are several manners to manipulate a ``StreamGraph`` object:\n - Iterate on nodes \n - Iterate on links\n - Iterate on temporally ordered links\n - Iterate on temporally ordered events",
"_____no_output_____"
]
],
[
[
"for n,np in zip(S.nodes,S.node_presence):\n for b,e in zip(np[::2],np[1::2]): # Even index are arrivals and odd index departure\n print(\"Node \",S.node_to_label[n],\" is present from \",b,\" to \",e)",
"Node A is present from 0.0 to 5.0\nNode A is present from 7.0 to 10.0\nNode B is present from 0.0 to 10.0\nNode C is present from 0.0 to 1.0\nNode C is present from 4.0 to 6.0\nNode D is present from 0.0 to 1.0\nNode D is present from 2.0 to 4.0\nNode D is present from 8.0 to 10.0\nNode E is present from 0.0 to 10.0\nNode F is present from 0.0 to 4.0\nNode F is present from 6.0 to 10.0\n"
],
[
"for l,lp in zip(S.links,S.link_presence):\n for b,e in zip(lp[::2],lp[1::2]): # Even index are arrivals and odd index departure\n u,v = l\n print(\"Link \",(S.node_to_label[u],S.node_to_label[v]),\" is present from \",b,\" to \",e)",
"Link ('A', 'B') is present from 0.0 to 4.0\nLink ('A', 'B') is present from 8.0 to 9.0\nLink ('B', 'C') is present from 4.0 to 5.0\nLink ('B', 'E') is present from 7.0 to 7.0\nLink ('C', 'D') is present from 0.0 to 1.0\nLink ('C', 'E') is present from 5.0 to 5.0\nLink ('D', 'E') is present from 2.0 to 4.0\nLink ('D', 'E') is present from 8.0 to 10.0\nLink ('D', 'F') is present from 3.0 to 4.0\nLink ('E', 'F') is present from 0.0 to 4.0\nLink ('E', 'F') is present from 6.0 to 10.0\n"
],
[
"for e in S.ordered_links():\n if e[0] == 1:\n _, t0, t1, u, v = e\n print(\"Link arrival \\t:\",(t0,t1,S.node_to_label[u],S.node_to_label[v]))\n if e[0] == -1:\n _, t1, u, v = e\n print(\"Link departure \\t:\",(t1,S.node_to_label[u],S.node_to_label[v]))",
"Link arrival \t: (0.0, 4.0, 'A', 'B')\nLink arrival \t: (0.0, 1.0, 'C', 'D')\nLink arrival \t: (0.0, 4.0, 'E', 'F')\nLink departure \t: (1.0, 'C', 'D')\nLink arrival \t: (2.0, 4.0, 'D', 'E')\nLink arrival \t: (3.0, 4.0, 'D', 'F')\nLink arrival \t: (4.0, 5.0, 'B', 'C')\nLink departure \t: (4.0, 'A', 'B')\nLink departure \t: (4.0, 'D', 'E')\nLink departure \t: (4.0, 'D', 'F')\nLink departure \t: (4.0, 'E', 'F')\nLink arrival \t: (5.0, 5.0, 'C', 'E')\nLink departure \t: (5.0, 'B', 'C')\nLink departure \t: (5.0, 'C', 'E')\nLink arrival \t: (6.0, 10.0, 'E', 'F')\nLink arrival \t: (7.0, 7.0, 'B', 'E')\nLink departure \t: (7.0, 'B', 'E')\nLink arrival \t: (8.0, 9.0, 'A', 'B')\nLink arrival \t: (8.0, 10.0, 'D', 'E')\nLink departure \t: (9.0, 'A', 'B')\nLink departure \t: (10.0, 'D', 'E')\nLink departure \t: (10.0, 'E', 'F')\n"
],
[
"for e in S.ordered_events():\n if e[0] == 2:\n _,t0,t1,u = e\n print(\"Node arrival \\t:\",(t0,t1,S.node_to_label[u]))\n elif e[0] == 1:\n _, t0, t1, u, v = e\n print(\"Link arrival \\t:\",(t0,t1,S.node_to_label[u],S.node_to_label[v]))\n elif e[0] == -1:\n _, t1, u, v = e\n print(\"Link departure \\t:\",(t1,S.node_to_label[u],S.node_to_label[v]))\n elif e[0] == -2:\n _,t1,u = e\n print(\"Node departure \\t:\",(t1,S.node_to_label[u]))\n",
"Node arrival \t: (0.0, 5.0, 'A')\nNode arrival \t: (0.0, 10.0, 'B')\nNode arrival \t: (0.0, 1.0, 'C')\nNode arrival \t: (0.0, 1.0, 'D')\nNode arrival \t: (0.0, 10.0, 'E')\nNode arrival \t: (0.0, 4.0, 'F')\nLink arrival \t: (0.0, 4.0, 'A', 'B')\nLink arrival \t: (0.0, 1.0, 'C', 'D')\nLink arrival \t: (0.0, 4.0, 'E', 'F')\nLink departure \t: (1.0, 'C', 'D')\nNode departure \t: (1.0, 'C')\nNode departure \t: (1.0, 'D')\nNode arrival \t: (2.0, 4.0, 'D')\nLink arrival \t: (2.0, 4.0, 'D', 'E')\nLink arrival \t: (3.0, 4.0, 'D', 'F')\nNode arrival \t: (4.0, 6.0, 'C')\nLink arrival \t: (4.0, 5.0, 'B', 'C')\nLink departure \t: (4.0, 'A', 'B')\nLink departure \t: (4.0, 'D', 'E')\nLink departure \t: (4.0, 'D', 'F')\nLink departure \t: (4.0, 'E', 'F')\nNode departure \t: (4.0, 'D')\nNode departure \t: (4.0, 'F')\nLink arrival \t: (5.0, 5.0, 'C', 'E')\nLink departure \t: (5.0, 'B', 'C')\nLink departure \t: (5.0, 'C', 'E')\nNode departure \t: (5.0, 'A')\nNode arrival \t: (6.0, 10.0, 'F')\nLink arrival \t: (6.0, 10.0, 'E', 'F')\nNode departure \t: (6.0, 'C')\nNode arrival \t: (7.0, 10.0, 'A')\nLink arrival \t: (7.0, 7.0, 'B', 'E')\nLink departure \t: (7.0, 'B', 'E')\nNode arrival \t: (8.0, 10.0, 'D')\nLink arrival \t: (8.0, 9.0, 'A', 'B')\nLink arrival \t: (8.0, 10.0, 'D', 'E')\nLink departure \t: (9.0, 'A', 'B')\nLink departure \t: (10.0, 'D', 'E')\nLink departure \t: (10.0, 'E', 'F')\nNode departure \t: (10.0, 'A')\nNode departure \t: (10.0, 'B')\nNode departure \t: (10.0, 'D')\nNode departure \t: (10.0, 'E')\nNode departure \t: (10.0, 'F')\n"
]
],
[
[
"In ``Straph`` almost all algorithms are based on these data structures.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
eca11fe966f58d14878f97632a3a5e93ef201527 | 42,892 | ipynb | Jupyter Notebook | docs/site/tutorials/model_training_walkthrough.ipynb | texasmichelle/swift | 37821ab7a57bc3013be20565a9a52f94145e2225 | [
"Apache-2.0"
] | null | null | null | docs/site/tutorials/model_training_walkthrough.ipynb | texasmichelle/swift | 37821ab7a57bc3013be20565a9a52f94145e2225 | [
"Apache-2.0"
] | null | null | null | docs/site/tutorials/model_training_walkthrough.ipynb | texasmichelle/swift | 37821ab7a57bc3013be20565a9a52f94145e2225 | [
"Apache-2.0"
] | null | null | null | 39.278388 | 1,049 | 0.624965 | [
[
[
"##### Copyright 2018 - 2020 The TensorFlow Authors. [Licensed under the Apache License, Version 2.0](#scrollTo=y_UVSRtBBsJk).",
"_____no_output_____"
]
],
[
[
"// #@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n// https://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.",
"_____no_output_____"
]
],
[
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/swift/tutorials/model_training_walkthrough\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/swift/blob/main/docs/site/tutorials/model_training_walkthrough.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/swift/blob/main/docs/site/tutorials/model_training_walkthrough.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"# Model training walkthrough",
"_____no_output_____"
],
[
"This guide introduces Swift for TensorFlow by building a machine learning model that categorizes iris flowers by species. It uses Swift for TensorFlow to:\n1. Build a model,\n2. Train this model on example data, and\n3. Use the model to make predictions about unknown data.\n\n## TensorFlow programming\n\nThis guide uses these high-level Swift for TensorFlow concepts:\n\n* Import data with the Epochs API.\n* Build models using Swift abstractions.\n* Use Python libraries using Swift's Python interoperability when pure Swift libraries are not available.\n\nThis tutorial is structured like many TensorFlow programs:\n\n1. Import and parse the data sets.\n2. Select the type of model.\n3. Train the model.\n4. Evaluate the model's effectiveness.\n5. Use the trained model to make predictions.",
"_____no_output_____"
],
[
"## Setup program",
"_____no_output_____"
],
[
"### Configure imports\n\nImport TensorFlow and some useful Python modules.",
"_____no_output_____"
]
],
[
[
"import TensorFlow\nimport PythonKit",
"_____no_output_____"
],
[
"// This cell is here to display the plots in a Jupyter Notebook.\n// Do not copy it into another environment.\n%include \"EnableIPythonDisplay.swift\"\nprint(IPythonDisplay.shell.enable_matplotlib(\"inline\"))",
"_____no_output_____"
],
[
"let plt = Python.import(\"matplotlib.pyplot\")",
"_____no_output_____"
],
[
"import Foundation\nimport FoundationNetworking\nfunc download(from sourceString: String, to destinationString: String) {\n let source = URL(string: sourceString)!\n let destination = URL(fileURLWithPath: destinationString)\n let data = try! Data.init(contentsOf: source)\n try! data.write(to: destination)\n}",
"_____no_output_____"
]
],
[
[
"## The iris classification problem\n\nImagine you are a botanist seeking an automated way to categorize each iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal).\n\nThe Iris genus entails about 300 species, but our program will only classify the following three:\n\n* Iris setosa\n* Iris virginica\n* Iris versicolor\n\n<table>\n <tr><td>\n <img src=\"https://www.tensorflow.org/images/iris_three_species.jpg\"\n alt=\"Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor\">\n </td></tr>\n <tr><td align=\"center\">\n <b>Figure 1.</b> <a href=\"https://commons.wikimedia.org/w/index.php?curid=170298\">Iris setosa</a> (by <a href=\"https://commons.wikimedia.org/wiki/User:Radomil\">Radomil</a>, CC BY-SA 3.0), <a href=\"https://commons.wikimedia.org/w/index.php?curid=248095\">Iris versicolor</a>, (by <a href=\"https://commons.wikimedia.org/wiki/User:Dlanglois\">Dlanglois</a>, CC BY-SA 3.0), and <a href=\"https://www.flickr.com/photos/33397993@N05/3352169862\">Iris virginica</a> (by <a href=\"https://www.flickr.com/photos/33397993@N05\">Frank Mayfield</a>, CC BY-SA 2.0).<br/> \n </td></tr>\n</table>\n\nFortunately, someone has already created a [data set of 120 iris flowers](https://en.wikipedia.org/wiki/Iris_flower_data_set) with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems.",
"_____no_output_____"
],
[
"## Import and parse the training dataset\n\nDownload the dataset file and convert it into a structure that can be used by this Swift program.\n\n### Download the dataset\n\nDownload the training dataset file from http://download.tensorflow.org/data/iris_training.csv.",
"_____no_output_____"
]
],
[
[
"let trainDataFilename = \"iris_training.csv\"\ndownload(from: \"http://download.tensorflow.org/data/iris_training.csv\", to: trainDataFilename)",
"_____no_output_____"
]
],
[
[
"### Inspect the data\n\nThis dataset, `iris_training.csv`, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Let's look a the first 5 entries.",
"_____no_output_____"
]
],
[
[
"let f = Python.open(trainDataFilename)\nfor _ in 0..<5 {\n print(Python.next(f).strip())\n}\nprint(f.close())",
"_____no_output_____"
]
],
[
[
"From this view of the dataset, notice the following:\n\n1. The first line is a header containing information about the dataset:\n * There are 120 total examples. Each example has four features and one of three possible label names. \n2. Subsequent rows are data records, one *[example](https://developers.google.com/machine-learning/glossary/#example)* per line, where:\n * The first four fields are *[features](https://developers.google.com/machine-learning/glossary/#feature)*: these are characteristics of an example. Here, the fields hold float numbers representing flower measurements.\n * The last column is the *[label](https://developers.google.com/machine-learning/glossary/#label)*: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.\n\nLet's write that out in code:",
"_____no_output_____"
]
],
[
[
"let featureNames = [\"sepal_length\", \"sepal_width\", \"petal_length\", \"petal_width\"]\nlet labelName = \"species\"\nlet columnNames = featureNames + [labelName]\n\nprint(\"Features: \\(featureNames)\")\nprint(\"Label: \\(labelName)\")",
"_____no_output_____"
]
],
[
[
"Each label is associated with string name (for example, \"setosa\"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:\n\n* `0`: Iris setosa\n* `1`: Iris versicolor\n* `2`: Iris virginica\n\nFor more information about features and labels, see the [ML Terminology section of the Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology).",
"_____no_output_____"
]
],
[
[
"let classNames = [\"Iris setosa\", \"Iris versicolor\", \"Iris virginica\"]",
"_____no_output_____"
]
],
[
[
"### Create a dataset using the Epochs API\n\nSwift for TensorFlow's Epochs API is a high-level API for reading data and transforming it into a form used for training. ",
"_____no_output_____"
]
],
[
[
"let batchSize = 32\n\n/// A batch of examples from the iris dataset.\nstruct IrisBatch {\n /// [batchSize, featureCount] tensor of features.\n let features: Tensor<Float>\n\n /// [batchSize] tensor of labels.\n let labels: Tensor<Int32>\n}\n\n/// Conform `IrisBatch` to `Collatable` so that we can load it into a `TrainingEpoch`.\nextension IrisBatch: Collatable {\n public init<BatchSamples: Collection>(collating samples: BatchSamples)\n where BatchSamples.Element == Self {\n /// `IrisBatch`es are collated by stacking their feature and label tensors\n /// along the batch axis to produce a single feature and label tensor\n features = Tensor<Float>(stacking: samples.map{$0.features})\n labels = Tensor<Int32>(stacking: samples.map{$0.labels})\n }\n}",
"_____no_output_____"
]
],
[
[
"Since the datasets we downloaded are in CSV format, let's write a function to load in the data as a list of IrisBatch objects",
"_____no_output_____"
]
],
[
[
"/// Initialize an `IrisBatch` dataset from a CSV file.\nfunc loadIrisDatasetFromCSV(\n contentsOf: String, hasHeader: Bool, featureColumns: [Int], labelColumns: [Int]) -> [IrisBatch] {\n let np = Python.import(\"numpy\")\n\n let featuresNp = np.loadtxt(\n contentsOf,\n delimiter: \",\",\n skiprows: hasHeader ? 1 : 0,\n usecols: featureColumns,\n dtype: Float.numpyScalarTypes.first!)\n guard let featuresTensor = Tensor<Float>(numpy: featuresNp) else {\n // This should never happen, because we construct featuresNp in such a\n // way that it should be convertible to tensor.\n fatalError(\"np.loadtxt result can't be converted to Tensor\")\n }\n\n let labelsNp = np.loadtxt(\n contentsOf,\n delimiter: \",\",\n skiprows: hasHeader ? 1 : 0,\n usecols: labelColumns,\n dtype: Int32.numpyScalarTypes.first!)\n guard let labelsTensor = Tensor<Int32>(numpy: labelsNp) else {\n // This should never happen, because we construct labelsNp in such a\n // way that it should be convertible to tensor.\n fatalError(\"np.loadtxt result can't be converted to Tensor\")\n }\n\n return zip(featuresTensor.unstacked(), labelsTensor.unstacked()).map{IrisBatch(features: $0.0, labels: $0.1)}\n\n }",
"_____no_output_____"
]
],
[
[
"We can now use the CSV loading function to load the training dataset and create a `TrainingEpochs` object",
"_____no_output_____"
]
],
[
[
"let trainingDataset: [IrisBatch] = loadIrisDatasetFromCSV(contentsOf: trainDataFilename, \n hasHeader: true, \n featureColumns: [0, 1, 2, 3], \n labelColumns: [4])\n\nlet trainingEpochs: TrainingEpochs = TrainingEpochs(samples: trainingDataset, batchSize: batchSize)",
"_____no_output_____"
]
],
[
[
"The `TrainingEpochs` object is an infinite sequence of epochs. Each epoch contains `IrisBatch`es. Let's look at the first element of the first epoch.",
"_____no_output_____"
]
],
[
[
"let firstTrainEpoch = trainingEpochs.next()!\nlet firstTrainBatch = firstTrainEpoch.first!.collated\nlet firstTrainFeatures = firstTrainBatch.features\nlet firstTrainLabels = firstTrainBatch.labels\n\nprint(\"First batch of features: \\(firstTrainFeatures)\")\nprint(\"firstTrainFeatures.shape: \\(firstTrainFeatures.shape)\")\nprint(\"First batch of labels: \\(firstTrainLabels)\")\nprint(\"firstTrainLabels.shape: \\(firstTrainLabels.shape)\")",
"_____no_output_____"
]
],
[
[
"Notice that the features for the first `batchSize` examples are grouped together (or *batched*) into `firstTrainFeatures`, and that the labels for the first `batchSize` examples are batched into `firstTrainLabels`.\n\nYou can start to see some clusters by plotting a few features from the batch, using Python's matplotlib:",
"_____no_output_____"
]
],
[
[
"let firstTrainFeaturesTransposed = firstTrainFeatures.transposed()\nlet petalLengths = firstTrainFeaturesTransposed[2].scalars\nlet sepalLengths = firstTrainFeaturesTransposed[0].scalars\n\nplt.scatter(petalLengths, sepalLengths, c: firstTrainLabels.array.scalars)\nplt.xlabel(\"Petal length\")\nplt.ylabel(\"Sepal length\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Select the type of model\n\n### Why model?\n\nA *[model](https://developers.google.com/machine-learning/crash-course/glossary#model)* is a relationship between features and the label. For the iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.\n\nCould you determine the relationship between the four features and the iris species *without* using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach *determines the model for you*. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you.\n\n### Select the model\n\nWe need to select the kind of model to train. There are many types of models and picking a good one takes experience. This tutorial uses a neural network to solve the iris classification problem. *[Neural networks](https://developers.google.com/machine-learning/glossary/#neural_network)* can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more *[hidden layers](https://developers.google.com/machine-learning/glossary/#hidden_layer)*. Each hidden layer consists of one or more *[neurons](https://developers.google.com/machine-learning/glossary/#neuron)*. There are several categories of neural networks and this program uses a dense, or *[fully-connected neural network](https://developers.google.com/machine-learning/glossary/#fully_connected_layer)*: the neurons in one layer receive input connections from *every* neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer:\n\n<table>\n <tr><td>\n <img src=\"https://www.tensorflow.org/images/custom_estimators/full_network.png\"\n alt=\"A diagram of the network architecture: Inputs, 2 hidden layers, and outputs\">\n </td></tr>\n <tr><td align=\"center\">\n <b>Figure 2.</b> A neural network with features, hidden layers, and predictions.<br/> \n </td></tr>\n</table>\n\nWhen the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given iris species. This prediction is called *[inference](https://developers.google.com/machine-learning/crash-course/glossary#inference)*. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: `0.02` for *Iris setosa*, `0.95` for *Iris versicolor*, and `0.03` for *Iris virginica*. This means that the model predicts—with 95% probability—that an unlabeled example flower is an *Iris versicolor*.",
"_____no_output_____"
],
[
"### Create a model using the Swift for TensorFlow Deep Learning Library\n\nThe [Swift for TensorFlow Deep Learning Library](https://github.com/tensorflow/swift-apis) defines primitive layers and conventions for wiring them together, which makes it easy to build models and experiment.\n\nA model is a `struct` that conforms to [`Layer`](https://www.tensorflow.org/swift/api_docs/Protocols/Layer), which means that it defines a [`callAsFunction(_:)`](https://www.tensorflow.org/swift/api_docs/Protocols/Layer#callasfunction_:) method that maps input `Tensor`s to output `Tensor`s. The `callAsFunction(_:)` method often simply sequences the input through sublayers. Let's define an `IrisModel` that sequences the input through three [`Dense`](https://www.tensorflow.org/swift/api_docs/Structs/Dense) sublayers.",
"_____no_output_____"
]
],
[
[
"import TensorFlow\n\nlet hiddenSize: Int = 10\nstruct IrisModel: Layer {\n var layer1 = Dense<Float>(inputSize: 4, outputSize: hiddenSize, activation: relu)\n var layer2 = Dense<Float>(inputSize: hiddenSize, outputSize: hiddenSize, activation: relu)\n var layer3 = Dense<Float>(inputSize: hiddenSize, outputSize: 3)\n \n @differentiable\n func callAsFunction(_ input: Tensor<Float>) -> Tensor<Float> {\n return input.sequenced(through: layer1, layer2, layer3)\n }\n}\n\nvar model = IrisModel()",
"_____no_output_____"
]
],
[
[
"The activation function determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many available activations, but [ReLU](https://www.tensorflow.org/swift/api_docs/Functions#relu_:) is common for hidden layers.\n\nThe ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively.",
"_____no_output_____"
],
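As a brief aside (not from the original tutorial), the ReLU activation mentioned above is simply

$$\mathrm{relu}(x) = \max(0, x),$$

which zeroes out negative pre-activations and passes positive values through unchanged.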
[
"### Using the model\n\nLet's have a quick look at what this model does to a batch of features:",
"_____no_output_____"
]
],
[
[
"// Apply the model to a batch of features.\nlet firstTrainPredictions = model(firstTrainFeatures)\nprint(firstTrainPredictions[0..<5])",
"_____no_output_____"
]
],
[
[
"Here, each example returns a [logit](https://developers.google.com/machine-learning/crash-course/glossary#logits) for each class. \n\nTo convert these logits to a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossary#softmax) function:",
"_____no_output_____"
]
],
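For reference (an aside, not part of the original text), the softmax function maps a logit vector $z$ to class probabilities via

$$\mathrm{softmax}(z)_j = \frac{e^{z_j}}{\sum_k e^{z_k}},$$

so every output is positive and the outputs sum to $1$.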
[
[
"print(softmax(firstTrainPredictions[0..<5]))",
"_____no_output_____"
]
],
[
[
"Taking the `argmax` across classes gives us the predicted class index. But, the model hasn't been trained yet, so these aren't good predictions.",
"_____no_output_____"
]
],
[
[
"print(\"Prediction: \\(firstTrainPredictions.argmax(squeezingAxis: 1))\")\nprint(\" Labels: \\(firstTrainLabels)\")",
"_____no_output_____"
]
],
[
[
"## Train the model\n\n*[Training](https://developers.google.com/machine-learning/crash-course/glossary#training)* is the stage of machine learning when the model is gradually optimized, or the model *learns* the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn *too much* about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossary#overfitting)*—it's like memorizing the answers instead of understanding how to solve a problem.\n\nThe iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/#supervised_machine_learning)*: the model is trained from examples that contain labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/#unsupervised_machine_learning)*, the examples don't contain labels. Instead, the model typically finds patterns among the features.",
"_____no_output_____"
],
[
"### Choose a loss function\n\nBoth training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossary#loss)*. This measures how off a model's predictions are from the desired label, in other words, how bad the model is performing. We want to minimize, or optimize, this value.\n\nOur model will calculate its loss using the [`softmaxCrossEntropy(logits:labels:)`](https://www.tensorflow.org/swift/api_docs/Functions#/s:10TensorFlow19softmaxCrossEntropy6logits6labelsAA0A0VyxGAG_AFys5Int32VGtAA0aB13FloatingPointRzlF) function which takes the model's class probability predictions and the desired label, and returns the average loss across the examples.\n\nLet's calculate the loss for the current untrained model:",
"_____no_output_____"
]
],
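A hedged aside (not from the original tutorial): one standard way to write the quantity computed by `softmaxCrossEntropy(logits:labels:)` for a batch of $N$ examples with logits $z_i$ and integer labels $y_i$ is

$$L = -\frac{1}{N}\sum_{i=1}^{N} \log\big(\mathrm{softmax}(z_i)_{y_i}\big),$$

i.e. the mean negative log-probability assigned to the correct class; the exact reduction used by the library is assumed here to be the mean.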
[
[
"let untrainedLogits = model(firstTrainFeatures)\nlet untrainedLoss = softmaxCrossEntropy(logits: untrainedLogits, labels: firstTrainLabels)\nprint(\"Loss test: \\(untrainedLoss)\")",
"_____no_output_____"
]
],
[
[
"### Create an optimizer\n\nAn *[optimizer](https://developers.google.com/machine-learning/crash-course/glossary#optimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions.\n\n<table>\n <tr><td>\n <img src=\"https://cs231n.github.io/assets/nn3/opt1.gif\" width=\"70%\"\n alt=\"Optimization algorithms visualized over time in 3D space.\">\n </td></tr>\n <tr><td align=\"center\">\n <b>Figure 3.</b> Optimization algorithms visualized over time in 3D space.<br/>(Source: <a href=\"http://cs231n.github.io/neural-networks-3/\">Stanford class CS231n</a>, MIT License, Image credit: <a href=\"https://twitter.com/alecrad\">Alec Radford</a>)\n </td></tr>\n</table>\n\nSwift for TensorFlow has many [optimization algorithms](https://github.com/tensorflow/swift-apis/tree/master/Sources/TensorFlow/Optimizers) available for training. This model uses the SGD optimizer that implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossary#gradient_descent)* (SGD) algorithm. The `learningRate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results.",
"_____no_output_____"
]
],
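As an aside (not part of the original text), the plain SGD update applied at each step is

$$\theta \leftarrow \theta - \eta\, \nabla_{\theta} L(\theta),$$

where $\theta$ are the model parameters, $L$ the loss, and $\eta$ the `learningRate` (here $0.01$).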
[
[
"let optimizer = SGD(for: model, learningRate: 0.01)",
"_____no_output_____"
]
],
[
[
"Let's use `optimizer` to take a single gradient descent step. First, we compute the gradient of the loss with respect to the model:",
"_____no_output_____"
]
],
[
[
"let (loss, grads) = valueWithGradient(at: model) { model -> Tensor<Float> in\n let logits = model(firstTrainFeatures)\n return softmaxCrossEntropy(logits: logits, labels: firstTrainLabels)\n}\nprint(\"Current loss: \\(loss)\")",
"_____no_output_____"
]
],
[
[
"Next, we pass the gradient that we just calculated to the optimizer, which updates the model's differentiable variables accordingly:",
"_____no_output_____"
]
],
[
[
"optimizer.update(&model, along: grads)",
"_____no_output_____"
]
],
[
[
"If we calculate the loss again, it should be smaller, because gradient descent steps (usually) decrease the loss:",
"_____no_output_____"
]
],
[
[
"let logitsAfterOneStep = model(firstTrainFeatures)\nlet lossAfterOneStep = softmaxCrossEntropy(logits: logitsAfterOneStep, labels: firstTrainLabels)\nprint(\"Next loss: \\(lossAfterOneStep)\")",
"_____no_output_____"
]
],
[
[
"### Training loop\n\nWith all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:\n\n1. Iterate over each *epoch*. An epoch is one pass through the dataset.\n2. Within an epoch, iterate over each batch in the training epoch \n3. Collate the batch and grab its *features* (`x`) and *label* (`y`).\n3. Using the collated batch's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.\n4. Use gradient descent to update the model's variables.\n5. Keep track of some stats for visualization.\n6. Repeat for each epoch.\n\nThe `epochCount` variable is the number of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. `epochCount` is a *[hyperparameter](https://developers.google.com/machine-learning/glossary/#hyperparameter)* that you can tune. Choosing the right number usually requires both experience and experimentation.",
"_____no_output_____"
]
],
[
[
"let epochCount = 500\nvar trainAccuracyResults: [Float] = []\nvar trainLossResults: [Float] = []",
"_____no_output_____"
],
[
"func accuracy(predictions: Tensor<Int32>, truths: Tensor<Int32>) -> Float {\n return Tensor<Float>(predictions .== truths).mean().scalarized()\n}\n\nfor (epochIndex, epoch) in trainingEpochs.prefix(epochCount).enumerated() {\n var epochLoss: Float = 0\n var epochAccuracy: Float = 0\n var batchCount: Int = 0\n for batchSamples in epoch {\n let batch = batchSamples.collated\n let (loss, grad) = valueWithGradient(at: model) { (model: IrisModel) -> Tensor<Float> in\n let logits = model(batch.features)\n return softmaxCrossEntropy(logits: logits, labels: batch.labels)\n }\n optimizer.update(&model, along: grad)\n \n let logits = model(batch.features)\n epochAccuracy += accuracy(predictions: logits.argmax(squeezingAxis: 1), truths: batch.labels)\n epochLoss += loss.scalarized()\n batchCount += 1\n }\n epochAccuracy /= Float(batchCount)\n epochLoss /= Float(batchCount)\n trainAccuracyResults.append(epochAccuracy)\n trainLossResults.append(epochLoss)\n if epochIndex % 50 == 0 {\n print(\"Epoch \\(epochIndex): Loss: \\(epochLoss), Accuracy: \\(epochAccuracy)\")\n }\n}",
"_____no_output_____"
]
],
[
[
"### Visualize the loss function over time",
"_____no_output_____"
],
[
"While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. We can create basic charts using Python's `matplotlib` module.\n\nInterpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up.",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize: [12, 8])\n\nlet accuracyAxes = plt.subplot(2, 1, 1)\naccuracyAxes.set_ylabel(\"Accuracy\")\naccuracyAxes.plot(trainAccuracyResults)\n\nlet lossAxes = plt.subplot(2, 1, 2)\nlossAxes.set_ylabel(\"Loss\")\nlossAxes.set_xlabel(\"Epoch\")\nlossAxes.plot(trainLossResults)\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"Note that the y-axes of the graphs are not zero-based.",
"_____no_output_____"
],
[
"## Evaluate the model's effectiveness\n\nNow that the model is trained, we can get some statistics on its performance.\n\n*Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at iris classification, pass some sepal and petal measurements to the model and ask the model to predict what iris species they represent. Then compare the model's prediction against the actual label. For example, a model that picked the correct species on half the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/#accuracy)* of `0.5`. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy:\n\n<table cellpadding=\"8\" border=\"0\">\n <colgroup>\n <col span=\"4\" >\n <col span=\"1\" bgcolor=\"lightblue\">\n <col span=\"1\" bgcolor=\"lightgreen\">\n </colgroup>\n <tr bgcolor=\"lightgray\">\n <th colspan=\"4\">Example features</th>\n <th colspan=\"1\">Label</th>\n <th colspan=\"1\" >Model prediction</th>\n </tr>\n <tr>\n <td>5.9</td><td>3.0</td><td>4.3</td><td>1.5</td><td align=\"center\">1</td><td align=\"center\">1</td>\n </tr>\n <tr>\n <td>6.9</td><td>3.1</td><td>5.4</td><td>2.1</td><td align=\"center\">2</td><td align=\"center\">2</td>\n </tr>\n <tr>\n <td>5.1</td><td>3.3</td><td>1.7</td><td>0.5</td><td align=\"center\">0</td><td align=\"center\">0</td>\n </tr>\n <tr>\n <td>6.0</td> <td>3.4</td> <td>4.5</td> <td>1.6</td> <td align=\"center\">1</td><td align=\"center\" bgcolor=\"red\">2</td>\n </tr>\n <tr>\n <td>5.5</td><td>2.5</td><td>4.0</td><td>1.3</td><td align=\"center\">1</td><td align=\"center\">1</td>\n </tr>\n <tr><td align=\"center\" colspan=\"6\">\n <b>Figure 4.</b> An iris classifier that is 80% accurate.<br/> \n </td></tr>\n</table>",
"_____no_output_____"
],
[
"### Setup the test dataset\n\nEvaluating the model is similar to training the model. The biggest difference is the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossary#test_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model.\n\nThe setup for the test dataset is similar to the setup for training dataset. Download the test set from http://download.tensorflow.org/data/iris_test.csv:",
"_____no_output_____"
]
],
[
[
"let testDataFilename = \"iris_test.csv\"\ndownload(from: \"http://download.tensorflow.org/data/iris_test.csv\", to: testDataFilename)",
"_____no_output_____"
]
],
[
[
" Now load it into a an array of `IrisBatch`es:",
"_____no_output_____"
]
],
[
[
"let testDataset = loadIrisDatasetFromCSV(\n contentsOf: testDataFilename, hasHeader: true,\n featureColumns: [0, 1, 2, 3], labelColumns: [4]).inBatches(of: batchSize)",
"_____no_output_____"
]
],
[
[
"### Evaluate the model on the test dataset\n\nUnlike the training stage, the model only evaluates a single [epoch](https://developers.google.com/machine-learning/glossary/#epoch) of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set.",
"_____no_output_____"
]
],
[
[
"// NOTE: Only a single batch will run in the loop since the batchSize we're using is larger than the test set size\nfor batchSamples in testDataset {\n let batch = batchSamples.collated\n let logits = model(batch.features)\n let predictions = logits.argmax(squeezingAxis: 1)\n print(\"Test batch accuracy: \\(accuracy(predictions: predictions, truths: batch.labels))\")\n}",
"_____no_output_____"
]
],
[
[
"We can see on the first batch, for example, the model is usually correct:",
"_____no_output_____"
]
],
[
[
"let firstTestBatch = testDataset.first!.collated\nlet firstTestBatchLogits = model(firstTestBatch.features)\nlet firstTestBatchPredictions = firstTestBatchLogits.argmax(squeezingAxis: 1)\n\nprint(firstTestBatchPredictions)\nprint(firstTestBatch.labels)",
"_____no_output_____"
]
],
[
[
"## Use the trained model to make predictions\n\nWe've trained a model and demonstrated that it's good—but not perfect—at classifying iris species. Now let's use the trained model to make some predictions on [unlabeled examples](https://developers.google.com/machine-learning/glossary/#unlabeled_example); that is, on examples that contain features but not a label.\n\nIn real-life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as:\n\n* `0`: Iris setosa\n* `1`: Iris versicolor\n* `2`: Iris virginica",
"_____no_output_____"
]
],
[
[
"let unlabeledDataset: Tensor<Float> =\n [[5.1, 3.3, 1.7, 0.5],\n [5.9, 3.0, 4.2, 1.5],\n [6.9, 3.1, 5.4, 2.1]]\n\nlet unlabeledDatasetPredictions = model(unlabeledDataset)\n\nfor i in 0..<unlabeledDatasetPredictions.shape[0] {\n let logits = unlabeledDatasetPredictions[i]\n let classIdx = logits.argmax().scalar!\n print(\"Example \\(i) prediction: \\(classNames[Int(classIdx)]) (\\(softmax(logits)))\")\n}",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
eca13e75c58b8f4d6de73abcccf08567676affaf | 131,660 | ipynb | Jupyter Notebook | PyBer_Challenge.ipynb | honoruru/PyBer_Analysis | a4f8a3d70bc0fe74c005269998d7eb32ede657fe | [
"MIT"
] | null | null | null | PyBer_Challenge.ipynb | honoruru/PyBer_Analysis | a4f8a3d70bc0fe74c005269998d7eb32ede657fe | [
"MIT"
] | null | null | null | PyBer_Challenge.ipynb | honoruru/PyBer_Analysis | a4f8a3d70bc0fe74c005269998d7eb32ede657fe | [
"MIT"
] | null | null | null | 75.536431 | 79,828 | 0.73544 | [
[
[
"# Pyber Challenge",
"_____no_output_____"
],
[
"### 4.3 Loading and Reading CSV files",
"_____no_output_____"
]
],
[
[
"# Add Matplotlib inline magic command\n%matplotlib inline\n# Dependencies and Setup\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\n# File to Load (Remember to change these)\ncity_data_to_load = \"Resources/city_data.csv\"\nride_data_to_load = \"Resources/ride_data.csv\"\n\n# Read the City and Ride Data\ncity_data_df = pd.read_csv(city_data_to_load)\nride_data_df = pd.read_csv(ride_data_to_load)",
"_____no_output_____"
]
],
[
[
"### Merge the DataFrames",
"_____no_output_____"
]
],
[
[
"# Combine the data into a single dataset\npyber_data_df = pd.merge(ride_data_df, city_data_df, how=\"left\", on=[\"city\", \"city\"])\n\n# Display the data table for preview\npyber_data_df.head()",
"_____no_output_____"
],
[
"# export pyber_data_df\npyber_data_df.to_csv(r\"C:\\my house\\bootcamp\\matplotlib\\pyber_analysis\\resources\\pyber_data_df.csv\", header = True)",
"_____no_output_____"
]
],
[
[
"## Deliverable 1: Get a Summary DataFrame ",
"_____no_output_____"
]
],
[
[
"# 1. Get the total rides for each city type\ntotal_rides_by_type = pyber_data_df.groupby([\"type\"]).count()[\"ride_id\"]\ntotal_rides_by_type",
"_____no_output_____"
],
[
"# 2. Get the total drivers for each city type\ntotal_drivers_by_type = city_data_df.groupby([\"type\"]).sum()[\"driver_count\"]\ntotal_drivers_by_type",
"_____no_output_____"
],
[
"# 3. Get the total amount of fares for each city type\ntotal_fare_amt_by_type = pyber_data_df.groupby([\"type\"]).sum()[\"fare\"]\ntotal_fare_amt_by_type",
"_____no_output_____"
],
[
"import numpy as np",
"_____no_output_____"
],
[
"# 4. Get the average fare per ride for each city type. \nfare_per_ride_by_type = pyber_data_df.groupby([\"type\"]).sum()[\"fare\"] / total_rides_by_type\nfare_per_ride_by_type",
"_____no_output_____"
],
[
"# 5. Get the average fare per driver for each city type. \nfare_per_driver_by_type = pyber_data_df.groupby([\"type\"]).sum()[\"fare\"] / total_drivers_by_type\nfare_per_driver_by_type",
"_____no_output_____"
],
[
"# 6. Create a PyBer summary DataFrame. \npyber_summary_df = pd.DataFrame({\n 'Total Rides': total_rides_by_type,\n 'Total Drivers': total_drivers_by_type,\n 'Total Fares': total_fare_amt_by_type,\n 'Average Fare per Ride': fare_per_ride_by_type,\n 'Average Fare per Driver': fare_per_driver_by_type})\n\npyber_summary_df",
"_____no_output_____"
],
[
"# 7. Cleaning up the DataFrame. Delete the index name\npyber_summary_df.index.name = None\n\npyber_summary_df",
"_____no_output_____"
],
[
"# 8. Format the columns.\npyber_summary_df[\"Total Fares\"] = pyber_summary_df[\"Total Fares\"].map(\"${:,.2f}\".format)\npyber_summary_df[\"Average Fare per Ride\"] = pyber_summary_df[\"Average Fare per Ride\"].map(\"${:.2f}\".format)\npyber_summary_df[\"Average Fare per Driver\"] = pyber_summary_df[\"Average Fare per Driver\"].map(\"${:.2f}\".format)\n\npyber_summary_df",
"_____no_output_____"
],
[
"pyber_summary2_df = pd.DataFrame({\n 'Average Fare per Ride': fare_per_ride_by_type,\n 'Average Fare per Driver': fare_per_driver_by_type})\n\npyber_summary2_df",
"_____no_output_____"
],
[
"# 8. Format the columns.\n\npyber_summary2_df[\"Average Fare per Ride\"] = pyber_summary2_df[\"Average Fare per Ride\"].map(\"${:.2f}\".format)\npyber_summary2_df[\"Average Fare per Driver\"] = pyber_summary2_df[\"Average Fare per Driver\"].map(\"${:.2f}\".format)\n\npyber_summary2_df",
"_____no_output_____"
]
],
[
[
"## Deliverable 2. Create a multiple line plot that shows the total weekly of the fares for each type of city.",
"_____no_output_____"
]
],
[
[
"# 1. Read the merged DataFrame\npyber_data_df.head()",
"_____no_output_____"
]
],
[
[
"# Groupby() function",
"_____no_output_____"
]
],
[
[
"# 2. Using groupby() to create a new DataFrame showing the sum of the fares \n# for each date where the indices are the city type and date.\nnew_sum = pyber_data_df.groupby([\"date\",\"type\"]).sum()[\"fare\"]\n\nnew_pyber_data_df = pd.DataFrame({\n 'Rural': new_sum,\n 'Suburban': new_sum,\n 'Urban': new_sum}) \n \nnew_pyber_data_df.head(10)",
"_____no_output_____"
],
[
"# DELETE\n# fare_per_ride_by_type = pyber_data_df.groupby([\"type\"]).sum()[\"fare\"] / total_rides_by_type",
"_____no_output_____"
],
[
"# 3. Reset the index on the DataFrame you created in #1. This is needed to use the 'pivot()' function.\nnew_pyber_data_df = new_pyber_data_df.reset_index()\nnew_pyber_data_df.head(10)",
"_____no_output_____"
]
],
[
[
"# Pivot() function",
"_____no_output_____"
]
],
[
[
"# 4. Create a pivot table with the 'date' as the index, the columns ='type', and values='fare' \n# to get the total fares for each type of city by the date. \n\nnew_pyber_data_pivot = pyber_data_df.pivot(index='date', columns='type', values='fare')\nnew_pyber_data_pivot.head(10)",
"_____no_output_____"
]
],
[
[
"# loc method",
"_____no_output_____"
]
],
[
[
"# 5. Create a new DataFrame from the pivot table DataFrame using loc on the given dates, '2019-01-01':'2019-04-29'.\n# CHANGED ENDING DATE TO MATCH LINE GRAPH TO ANSWER KEY LOL\npyber_pivot_jan_apr = new_pyber_data_pivot.loc[\n '2019-01-01':'2019-04-28'] \npyber_pivot_jan_apr",
"_____no_output_____"
],
[
"# 6. Set the \"date\" index to datetime datatype. This is necessary to use the resample() method in Step 8.\npyber_pivot_jan_apr.index = pd.to_datetime(pyber_pivot_jan_apr.index)\n\n# df.index = pd.to_datetime(df.index)",
"_____no_output_____"
],
[
"# 7. Check that the datatype for the index is datetime using df.info()\npyber_pivot_jan_apr.info()",
"<class 'pandas.core.frame.DataFrame'>\nDatetimeIndex: 2177 entries, 2019-01-01 00:08:16 to 2019-04-27 23:52:44\nData columns (total 3 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Rural 114 non-null float64\n 1 Suburban 567 non-null float64\n 2 Urban 1496 non-null float64\ndtypes: float64(3)\nmemory usage: 68.0 KB\n"
]
],
[
[
"# resample() function",
"_____no_output_____"
]
],
[
[
"# 8. Create a new DataFrame using the \"resample()\" function by week 'W' and get the sum of the fares for each week.\nnew_pyber_data_week = pyber_pivot_jan_apr.resample('W').sum()\nnew_pyber_data_week",
"_____no_output_____"
],
[
"# 8. Using the object-oriented interface method, plot the resample DataFrame using the df.plot() function. \n\n# Import the style from Matplotlib.\nfrom matplotlib import style\n\n# Use the graph style fivethirtyeight.\nstyle.use('fivethirtyeight')\n\nax = new_pyber_data_week.plot(figsize=(20,6))\nax.set_title('Total Fare by City Type')\n\nax.set_xlabel('Month')\nax.set_ylabel('Fare ($USD)')\n\nplt.savefig(\"analysis/Pyber_fare_summary.png\")\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
eca1438db171895b9c88b50201ba34f1f656c1ff | 21,016 | ipynb | Jupyter Notebook | notebooks/7. Triple extraction, Model Runner.ipynb | dginev/experiments-on-stack-exchange-data | 5b601c0f9e10953360b2217eb0b4a5701885b7ed | [
"MIT"
] | null | null | null | notebooks/7. Triple extraction, Model Runner.ipynb | dginev/experiments-on-stack-exchange-data | 5b601c0f9e10953360b2217eb0b4a5701885b7ed | [
"MIT"
] | null | null | null | notebooks/7. Triple extraction, Model Runner.ipynb | dginev/experiments-on-stack-exchange-data | 5b601c0f9e10953360b2217eb0b4a5701885b7ed | [
"MIT"
] | 1 | 2020-08-31T17:26:54.000Z | 2020-08-31T17:26:54.000Z | 43.966527 | 246 | 0.576608 | [
[
[
"import re\nimport json\nfrom transformers import DistilBertTokenizerFast, TFDistilBertModel, DistilBertConfig\nimport tensorflow as tf\nimport tensorflow_io as tfio\n\nimport numpy as np\nimport h5py\nfrom tensorflow.keras.utils import Sequence\n\nfrom pathlib import Path\ndata_path = Path('../data') / 'span_model_oie'\nmodel_path = Path('../models')\npath_hdf5 = str(data_path/'encoded_span_oie.hdf5')",
"_____no_output_____"
],
[
"# Let's quickly get the shapes from HDF5 for bookkeeping\nif 'fp' in locals():\n fp.close()\nfp = h5py.File(path_hdf5, \"r\")\nx_train = fp['x_train']\nx_test = fp['x_test']\ny_train = fp['y_train']\ny_test = fp['y_test']\nx_train_shape = x_train.shape\ny_train_shape = y_train.shape\nx_test_shape = x_test.shape\ny_test_shape = y_test.shape\nfp.close()\nprint(\"data sizes: x_train %s, y_train %s, x_test %s, y_test %s .\" % \\\n (x_train_shape, y_train_shape, x_test_shape, y_test_shape))\nvalidation_index = 1+int(0.9*x_train_shape[0])\n\nx_test = tfio.IODataset.from_hdf5(path_hdf5, dataset='/x_test')\ny_test = tfio.IODataset.from_hdf5(path_hdf5, dataset='/y_test')\nx_train = tfio.IODataset.from_hdf5(path_hdf5, dataset='/x_train')\ny_train = tfio.IODataset.from_hdf5(path_hdf5, dataset='/y_train')\n",
"data sizes: x_train (802100, 128), y_train (802100, 64), x_test (200526, 128), y_test (200526, 64) .\n"
],
[
"# Thanks to the excellent tutorial at:\n# https://towardsdatascience.com/working-with-hugging-face-transformers-and-tf-2-0-89bf35e3555a\n# Setup the config and embedding layer, then prep data.\ndistil_bert = 'distilbert-base-uncased'\ntokenizer = DistilBertTokenizerFast.from_pretrained(distil_bert)\nmax_input_size = 128\nmax_target_size = 64\n\nconfig = DistilBertConfig(dropout=0.2, attention_dropout=0.2, trainable=False)\nconfig.output_hidden_states = False\ntransformer_model = TFDistilBertModel.from_pretrained(distil_bert, config = config)",
"Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing TFDistilBertModel: ['activation_13', 'vocab_layer_norm', 'vocab_transform', 'vocab_projector']\n- This IS expected if you are initializing TFDistilBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\n- This IS NOT expected if you are initializing TFDistilBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\nAll the weights of TFDistilBertModel were initialized from the model checkpoint at distilbert-base-uncased.\nIf your task is similar to the task the model of the ckeckpoint was trained on, you can already use TFDistilBertModel for predictions without further training.\n"
],
[
"# Instantiate the full model.\n# The crucial part for our setup is \"teacher forcing\", so as to properly teach the full triple generation\n# Great starting example at: https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq.py#L159\nvocab_size = tokenizer.vocab_size\nnum_encoder_tokens = num_decoder_tokens = vocab_size\nlatent_dim = int(max_input_size)\n\nencoder_inputs = tf.keras.layers.Input(shape=(max_input_size,), name='input_token', dtype='int32')\nencoder_masks = tf.keras.layers.Input(shape=(max_input_size,), name='masked_token', dtype='int32')\n\nlm_embedding = transformer_model(encoder_inputs, attention_mask=encoder_masks)[0]\nencoder = tf.keras.layers.LSTM(latent_dim, return_state=True)\nencoder_outputs, state_h, state_c = encoder(lm_embedding)\n# We discard `encoder_outputs` and only keep the states.\nencoder_states = [state_h, state_c]\n\n# Set up the decoder, using `encoder_states` as initial state.\ndecoder_inputs = tf.keras.layers.Input(shape=(None, num_decoder_tokens))\n# We set up our decoder to return full output sequences,\n# and to return internal states as well. We don't use the\n# return states in the training model, but we will use them in inference.\ndecoder_lstm = tf.keras.layers.LSTM(latent_dim, return_sequences=True, return_state=True)\ndecoder_outputs, _, _ = decoder_lstm(decoder_inputs,\n initial_state=encoder_states)\ndecoder_dense = tf.keras.layers.Dense(num_decoder_tokens, activation='softmax')\ndecoder_outputs = decoder_dense(decoder_outputs)\n\nmodel = tf.keras.Model([encoder_inputs, encoder_masks, decoder_inputs], decoder_outputs)\n\nfor layer in model.layers[:3]:\n layer.trainable = False\n\nprint(model.summary())\n",
"Model: \"model_3\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_token (InputLayer) [(None, 128)] 0 \n__________________________________________________________________________________________________\nmasked_token (InputLayer) [(None, 128)] 0 \n__________________________________________________________________________________________________\ntf_distil_bert_model_1 (TFDisti ((None, 128, 768),) 66362880 input_token[0][0] \n__________________________________________________________________________________________________\ninput_4 (InputLayer) [(None, None, 30522) 0 \n__________________________________________________________________________________________________\nlstm_3 (LSTM) [(None, 128), (None, 459264 tf_distil_bert_model_1[0][0] \n__________________________________________________________________________________________________\nlstm_4 (LSTM) [(None, None, 128), 15693312 input_4[0][0] \n lstm_3[0][1] \n lstm_3[0][2] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, None, 30522) 3937338 lstm_4[0][0] \n==================================================================================================\nTotal params: 86,452,794\nTrainable params: 20,089,914\nNon-trainable params: 66,362,880\n__________________________________________________________________________________________________\nNone\n"
],
[
"# Prepare training data stream and iterate efficiently via tf.data\nlabel_dim = tokenizer.vocab_size",
"_____no_output_____"
],
[
"model.load_weights(str(model_path / 'checkpoint_sample_oie'))",
"_____no_output_____"
],
[
"\n# Next: inference mode (sampling).\n# Here's the drill:\n# 1) encode input and retrieve initial decoder state\n# 2) run one step of decoder with this initial state\n# and a \"start of sequence\" token as target.\n# Output will be the next target token\n# 3) Repeat with the current target token and current states\n\n# Define sampling models\nencoder_model = tf.keras.Model([encoder_inputs,encoder_masks], encoder_states)\nprint(encoder_model.summary())\n\nlm_embedding = transformer_model(encoder_inputs, attention_mask=encoder_masks)[0]\nencoder = tf.keras.layers.LSTM(latent_dim, return_state=True)\nencoder_outputs, state_h, state_c = encoder(lm_embedding)\n\n\ndecoder_state_input_h = tf.keras.layers.Input(shape=(latent_dim,))\ndecoder_state_input_c = tf.keras.layers.Input(shape=(latent_dim,))\ndecoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]\ndecoder_outputs, state_h, state_c = decoder_lstm(\n decoder_inputs, initial_state=decoder_states_inputs)\ndecoder_states = [state_h, state_c]\ndecoder_outputs = decoder_dense(decoder_outputs)\n\ndecoder_model = tf.keras.Model(\n [decoder_inputs] + decoder_states_inputs,\n [decoder_outputs] + decoder_states)\nprint(decoder_model.summary())",
"Model: \"model_4\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_token (InputLayer) [(None, 128)] 0 \n__________________________________________________________________________________________________\nmasked_token (InputLayer) [(None, 128)] 0 \n__________________________________________________________________________________________________\ntf_distil_bert_model_1 (TFDisti ((None, 128, 768),) 66362880 input_token[0][0] \n__________________________________________________________________________________________________\nlstm_3 (LSTM) [(None, 128), (None, 459264 tf_distil_bert_model_1[0][0] \n==================================================================================================\nTotal params: 66,822,144\nTrainable params: 459,264\nNon-trainable params: 66,362,880\n__________________________________________________________________________________________________\nNone\nModel: \"model_5\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_4 (InputLayer) [(None, None, 30522) 0 \n__________________________________________________________________________________________________\ninput_5 (InputLayer) [(None, 128)] 0 \n__________________________________________________________________________________________________\ninput_6 (InputLayer) [(None, 128)] 0 \n__________________________________________________________________________________________________\nlstm_4 (LSTM) [(None, None, 128), 15693312 input_4[0][0] \n input_5[0][0] \n input_6[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, None, 30522) 3937338 lstm_4[1][0] \n==================================================================================================\nTotal params: 19,630,650\nTrainable params: 19,630,650\nNon-trainable params: 0\n__________________________________________________________________________________________________\nNone\n"
],
[
"def decode_sequence(input_seq):\n # Encode the input as state vectors.\n input_mask = np.array([0 if x==0 else 1 for x in input_seq[0]]).reshape(1,128)\n states_value = encoder_model.predict((input_seq,input_mask))\n\n # Generate empty target sequence of length 1.\n target_seq = np.zeros((1, 1, num_decoder_tokens))\n # Populate the first character of target sequence with the start character.\n CLS_id = 101 # tokenizer.encode('[CLS]') to check this\n SEP_id = 102\n PAD_id = 0 \n target_seq[0, 0, CLS_id] = 1.\n\n # Sampling loop for a batch of sequences\n # (to simplify, here we assume a batch of size 1).\n stop_condition = False\n sentence_ids = []\n \n # We will only sample from the words we were given in the input, drawing from\n # the specific goal of a \"triple extraction\" task w.r.t a sentence.\n # without the word restriction, we end up combining extraction with ~paraphrase.\n eligible_word_ids = list(input_seq[0])\n eligible_word_ids.append(SEP_id)\n eligible_word_ids.append(SEP_id)\n eligible_word_ids.append(PAD_id)\n \n while not stop_condition:\n sent_wid_set = set(eligible_word_ids)\n sent_mask = np.ones(tokenizer.vocab_size)\n sent_mask[list(sent_wid_set)] = 0\n\n output_tokens, h, c = decoder_model.predict([target_seq]+states_value)\n\n # Sample a token\n word_likelihoods = output_tokens[0, -1, :]\n sent_word_likelihoods = np.ma.masked_array(word_likelihoods, mask=sent_mask)\n sampled_token_index = np.argmax(sent_word_likelihoods)\n sentence_ids.append(sampled_token_index)\n # remove from eligible words, as each predicted word can be considered\n # a \"used\" element of the original input\n eligible_word_ids.remove(sampled_token_index)\n\n # Exit condition: either hit max length\n # or find stop character.\n if (sampled_token_index == PAD_id or\n len(sentence_ids) > max_target_size):\n stop_condition = True\n\n # Update the target sequence (of length 1).\n target_seq = np.zeros((1, 1, num_decoder_tokens))\n target_seq[0, 0, sampled_token_index] = 1.\n\n # Update states\n states_value = [h, c]\n\n return tokenizer.decode(sentence_ids)\n",
"_____no_output_____"
],
[
"def string_to_triples(text):\n text_ids = tokenizer.encode(text, max_length=max_input_size, padding='max_length')\n triples = decode_sequence(np.array([text_ids])).replace(' [PAD]','')\n print(\"---\")\n print(\"Input: %s\"%text)\n print(\"\")\n print(\"Extracted: %s\"% triples)\n \n \nstring_to_triples('The meeting was scheduled to start at half past six .')",
"---\nInput: The meeting was scheduled to start at half past six .\n\nExtracted: the meeting [SEP] was [SEP] to start at six [SEP]\n"
]
],
[
[
"### Discussion\n\n## Raw model output (6 epochs)\n\nHere is a sample of using the **raw** model after **6 epochs** of training on the full data:\n\n1. Cuisine\n> Input: a cake has multiple layers .\n>\n> Extracted: a bread [SEP] has [SEP] two pieces [SEP]\n\n2. Mathematics\n> Input: In mathematics, a monomial is, roughly speaking, a polynomial which has only one term.\n>\n> Extracted: a broad number [SEP] has [SEP] an infinite number [SEP]\n\n3. Biology\n>Input: A tiger is a cat with stripes .\n>\n>Extracted: a brother [SEP] is [SEP] a bear [SEP]\n\n4. Social\n> Input: The meeting was scheduled to start at half past six .\n> \n> Extracted: the event [SEP] was [SEP] to be answered on 12 january [SEP]\n\n## Sentence-only dictionary\n\nHere is the same checkpoint (after 6 epochs) on these examples when we first mask the predictions to only make visibles words that appeared in the original input. Much better!\n\n1. Cuisine\n> Input: a cake has multiple layers .\n>\n> Extracted: a cake [SEP] has [SEP] multiple layers [SEP]\n\n2. Mathematics\n> Input: In mathematics, a monomial is, roughly speaking, a polynomial which has only one term.\n>\n> Extracted: a monol [SEP] has [SEP] one term [SEP]\n\n3. Biology\n>Input: A tiger is a cat with stripes .\n>\n>Extracted: a tiger [SEP] is [SEP] a tiger [SEP]\n\n4. Social\n> Input: The meeting was scheduled to start at half past six .\n> \n> Extracted: the meeting [SEP] was [SEP] to start to the six [SEP]\n\n### Sentence-only with one-use-per-input-occurrence\n1. Cuisine\n> Input: a cake has multiple layers .\n>\n> Extracted: a cake [SEP] has [SEP] multiple layers [SEP]\n\n2. Mathematics\n> Input: In mathematics, a monomial is, roughly speaking, a polynomial which has only one term.\n>\n> Extracted: a monol [SEP] has [SEP] one term [SEP]\n\n3. Biology\n>Input: A tiger is a cat with stripes .\n>\n>Extracted: a tiger [SEP] is [SEP] a [SEP]\n\n4. Social\n> Input: The meeting was scheduled to start at half past six .\n> \n> Extracted: the meeting [SEP] was [SEP] to start at six [SEP]\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
eca144c54b7dff0162e69ceaaa23baa25da24094 | 2,374 | ipynb | Jupyter Notebook | src/deep_learning_basics/jupyter_notebook/practice3-boston.ipynb | 837477/keras_study | b200014bb4e2bb19ec6038bc37ab828b55b61ef5 | [
"MIT"
] | null | null | null | src/deep_learning_basics/jupyter_notebook/practice3-boston.ipynb | 837477/keras_study | b200014bb4e2bb19ec6038bc37ab828b55b61ef5 | [
"MIT"
] | null | null | null | src/deep_learning_basics/jupyter_notebook/practice3-boston.ipynb | 837477/keras_study | b200014bb4e2bb19ec6038bc37ab828b55b61ef5 | [
"MIT"
] | null | null | null | 2,374 | 2,374 | 0.645746 | [
[
[
"# 보스턴 집값 예측\n- github csv url: https://raw.githubusercontent.com/blackdew/tensorflow1/master/csv/boston.csv ",
"_____no_output_____"
]
],
[
[
"# 라이브러리 사용\nimport tensorflow as tf\nimport pandas as pd",
"_____no_output_____"
],
[
"# 1.과거의 데이터를 준비합니다.\n파일경로 = 'https://raw.githubusercontent.com/blackdew/tensorflow1/master/csv/boston.csv'\n보스턴 = pd.read_csv(파일경로)\nprint(보스턴.columns)\n보스턴.head()",
"_____no_output_____"
],
[
"# 독립변수, 종속변수 분리 \n독립 = 보스턴[['crim', 'zn', 'indus', 'chas', 'nox', 'rm', 'age', 'dis', 'rad', 'tax',\n 'ptratio', 'b', 'lstat']]\n종속 = 보스턴[['medv']]\nprint(독립.shape, 종속.shape)",
"_____no_output_____"
],
[
"# 2. 모델의 구조를 만듭니다\nX = tf.keras.layers.Input(shape=[13])\nY = tf.keras.layers.Dense(1)(X)\nmodel = tf.keras.models.Model(X, Y)\nmodel.compile(loss='mse')",
"_____no_output_____"
],
[
"# 3.데이터로 모델을 학습(FIT)합니다.\nmodel.fit(독립, 종속, epochs=1000, verbose=0)\nmodel.fit(독립, 종속, epochs=10)",
"_____no_output_____"
],
[
"# 4. 모델을 이용합니다\nprint(model.predict(독립[5:10]))\n# 종속변수 확인\nprint(종속[5:10])",
"_____no_output_____"
],
[
"# 모델의 수식 확인\nprint(model.get_weights())",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca1450964f6e30b0d2799d5df3a1e13abc63d45 | 6,124 | ipynb | Jupyter Notebook | site/en/r1/tutorials/_index.ipynb | Pandinosaurus/docs-2 | 3550667e06ea24580b6d907aaf09f0c8e0dfca23 | [
"Apache-2.0"
] | null | null | null | site/en/r1/tutorials/_index.ipynb | Pandinosaurus/docs-2 | 3550667e06ea24580b6d907aaf09f0c8e0dfca23 | [
"Apache-2.0"
] | null | null | null | site/en/r1/tutorials/_index.ipynb | Pandinosaurus/docs-2 | 3550667e06ea24580b6d907aaf09f0c8e0dfca23 | [
"Apache-2.0"
] | null | null | null | 30.929293 | 242 | 0.52776 | [
[
[
"##### Copyright 2018 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Get Started with TensorFlow 1.x",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/_index.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/_index.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"> Note: This is an archived TF1 notebook. These are configured\nto run in TF2's \n[compatibility mode](https://www.tensorflow.org/guide/migrate)\nbut will run in TF1 as well. To use TF1 in Colab, use the\n[%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb)\nmagic.",
"_____no_output_____"
],
[
"This is a [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. Python programs are run directly in the browser—a great way to learn and use TensorFlow. To run the Colab notebook:\n\n1. Connect to a Python runtime: At the top-right of the menu bar, select *CONNECT*.\n2. Run all the notebook code cells: Select *Runtime* > *Run all*.\n\nFor more examples and guides (including details for this program), see [Get Started with TensorFlow](https://www.tensorflow.org/get_started/).\n\nLet's get started, import the TensorFlow library into your program:",
"_____no_output_____"
]
],
[
[
"import tensorflow.compat.v1 as tf",
"_____no_output_____"
]
],
[
[
"Load and prepare the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. Convert the samples from integers to floating-point numbers:",
"_____no_output_____"
]
],
[
[
"mnist = tf.keras.datasets.mnist\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train, x_test = x_train / 255.0, x_test / 255.0",
"_____no_output_____"
]
],
[
[
"Build the `tf.keras` model by stacking layers. Select an optimizer and loss function used for training:",
"_____no_output_____"
]
],
[
[
"model = tf.keras.models.Sequential([\n tf.keras.layers.Flatten(input_shape=(28, 28)),\n tf.keras.layers.Dense(512, activation=tf.nn.relu),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n])\n\nmodel.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"Train and evaluate model:",
"_____no_output_____"
]
],
[
[
"model.fit(x_train, y_train, epochs=5)\n\nmodel.evaluate(x_test, y_test, verbose=2)",
"_____no_output_____"
]
],
[
[
"You’ve now trained an image classifier with ~98% accuracy on this dataset. See [Get Started with TensorFlow](https://www.tensorflow.org/get_started/) to learn more.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
eca15c65c0414a09d5d69e89aa4b0b0aa7101bce | 40,706 | ipynb | Jupyter Notebook | 1_compare_pred/2_ks60/summary_ks.ipynb | justlotw/Deep-Learning-Prediction-and-Uncertainty-Quantification-of-High-Dimensional-Time-Series-Data | 4a991e4555e477a0298ad724c0edb752bec44ee8 | [
"MIT"
] | null | null | null | 1_compare_pred/2_ks60/summary_ks.ipynb | justlotw/Deep-Learning-Prediction-and-Uncertainty-Quantification-of-High-Dimensional-Time-Series-Data | 4a991e4555e477a0298ad724c0edb752bec44ee8 | [
"MIT"
] | null | null | null | 1_compare_pred/2_ks60/summary_ks.ipynb | justlotw/Deep-Learning-Prediction-and-Uncertainty-Quantification-of-High-Dimensional-Time-Series-Data | 4a991e4555e477a0298ad724c0edb752bec44ee8 | [
"MIT"
] | null | null | null | 174.703863 | 34,776 | 0.903258 | [
[
[
"# Package\nimport sys\nsys.path.append(\"../..\")",
"_____no_output_____"
],
[
"from create_data import load_data\nfrom utils import * # Number of testing samples\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom time import time\nfrom functools import partial\n\nimport jax\nimport jax.numpy as jnp\nfrom jax.nn.initializers import glorot_normal, normal\nfrom jax.example_libraries import optimizers",
"_____no_output_____"
],
[
"SEED = 42",
"_____no_output_____"
],
[
"train, test = load_data(\"KS, L = 60\", \"../../data/ks60\", 0.5)",
"_____no_output_____"
],
[
"print(f\"Train size: {train.data.shape}\")\nprint(f\"Test size: {test.data.shape}\")",
"Train size: (90000, 240)\nTest size: (90000, 240)\n"
]
],
[
[
"**Create test set**",
"_____no_output_____"
]
],
[
[
"L_forecast_test = 400 # steps to forecast forward (when testing)",
"_____no_output_____"
],
[
"np.random.seed(1)\n\ndata_test = test.data\n\nT_test, data_dim = data_test.shape\npossible_idx = T_test - (L_forecast_test + 1) # minus number of steps forward, and the warm-up period\nT_indices = np.random.randint(0, possible_idx, size = NUM_TEST)\n\nt_past_batch = np.repeat(T_indices[:, None], WARM_UP_TEST, axis = 1).astype(int) # 2000 warmup \nt_pred_batch = (T_indices[:, None] + np.arange(1, 1 + L_forecast_test)[None, :].astype(int))\n\nX_test = data_test[t_past_batch]\ny_test = data_test[t_pred_batch]",
"_____no_output_____"
],
[
"print(f\"Test input size: {X_test.shape}\") # Number of test points x input length x dim\nprint(f\"Test output size: {y_test.shape}\") # Number of test points x horizon x dim",
"Test input size: (100, 2000, 240)\nTest output size: (100, 400, 240)\n"
]
],
[
[
"# Load Predictions",
"_____no_output_____"
]
],
[
[
"rnn = load_obj(\"results/rnn_pred.pkl\")\nlstm = load_obj(\"results/lstm_pred.pkl\")\nrc = load_obj(\"results/rc_pred.pkl\")\nkoop = load_obj(\"results/koopman_pred.pkl\")",
"_____no_output_____"
],
[
"400*.01/LORENZ_LT/.25*KS_LT",
"_____no_output_____"
],
[
"plt.plot(np.arange(294) * 0.25 / KS_LT, np.median(np.sqrt(((rnn - y_test)**2).mean(axis = 2)), axis = 0)[:294], label = \"RNN\")\nplt.plot(np.arange(294) * 0.25 / KS_LT, np.median(np.sqrt(((lstm - y_test)**2).mean(axis = 2)), axis = 0)[:294], label = \"LSTM\")\nplt.plot(np.arange(294) * 0.25 / KS_LT, np.median(np.sqrt(((rc - y_test)**2).mean(axis = 2)), axis = 0)[:294], label = \"RC\")\nplt.plot(np.arange(294) * 0.25 / KS_LT, np.median(np.sqrt(((koop - y_test)**2).mean(axis = 2)), axis = 0)[:294], label = \"Koopman\")\nplt.ylabel(\"NRMSE\")\nplt.xlabel(\"Lyapunov Time\")\nplt.axhline(0.5, c = \"purple\", ls = \"--\")\nplt.grid()\nplt.legend()\nplt.savefig(\"ks_comparison_edited.png\", facecolor = \"white\", bbox_inches = \"tight\")\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
eca16b014b3a5da9db755b446503305a75a5c10b | 474,962 | ipynb | Jupyter Notebook | notebook/S15A_PyMC3.ipynb | ashnair1/sta-663-2019 | 17eb85b644c52978c2ef3a53a80b7fb031360e3d | [
"BSD-3-Clause"
] | 68 | 2019-01-09T21:53:55.000Z | 2022-02-16T17:14:22.000Z | notebook/S15A_PyMC3.ipynb | ashnair1/sta-663-2019 | 17eb85b644c52978c2ef3a53a80b7fb031360e3d | [
"BSD-3-Clause"
] | null | null | null | notebook/S15A_PyMC3.ipynb | ashnair1/sta-663-2019 | 17eb85b644c52978c2ef3a53a80b7fb031360e3d | [
"BSD-3-Clause"
] | 62 | 2019-01-09T21:43:48.000Z | 2021-11-15T04:26:25.000Z | 197.243355 | 94,224 | 0.903106 | [
[
[
"# Using PyMC3\n\nPyMC3 is a Python package for doing MCMC using a variety of samplers, including Metropolis, Slice and Hamiltonian Monte Carlo. See [Probabilistic Programming in Python using PyMC](http://arxiv.org/abs/1507.08050) for a description. The GitHub [site](https://github.com/pymc-devs/pymc3) also has many examples and links for further exploration.",
"_____no_output_____"
],
[
"```bash\n! pip install --quiet arviz\n! pip install --quiet pymc3\n! pip install --quiet daft\n! pip install --quiet seaborn\n```",
"_____no_output_____"
],
[
"```bash\n! conda install --yes --quiet mkl-service\n```",
"_____no_output_____"
]
],
[
[
"import warnings\n\nwarnings.simplefilter('ignore', UserWarning)",
"_____no_output_____"
]
],
[
[
"**Other resources**\n\nSome examples are adapted from:\n\n- [Probabilistic Programming & Bayesian Methods for Hackers](http://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/)\n- [MCMC tutorial series](https://theclevermachine.wordpress.com/2012/11/19/a-gentle-introduction-to-markov-chain-monte-carlo-mcmc/)",
"_____no_output_____"
]
],
[
[
"%matplotlib inline",
"_____no_output_____"
],
[
"import numpy as np\nimport numpy.random as rng\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\nimport pymc3 as pm\nimport scipy.stats as stats\nimport daft\nimport arviz as az",
"/Users/cliburn/anaconda3/lib/python3.6/site-packages/dask/config.py:168: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.\n data = yaml.load(f.read()) or {}\n/Users/cliburn/anaconda3/lib/python3.6/site-packages/distributed/config.py:20: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.\n defaults = yaml.load(f)\n"
],
[
"import theano\ntheano.config.warn.round=False",
"_____no_output_____"
],
[
"sns.set_context('notebook')\nplt.style.use('seaborn-darkgrid')",
"_____no_output_____"
]
],
[
[
"## Introduction to PyMC3",
"_____no_output_____"
],
[
"### Distributions in pymc3",
"_____no_output_____"
]
],
[
[
"print('\\n'.join([d for d in dir(pm.distributions) if d[0].isupper()]))",
"AR\nAR1\nBernoulli\nBeta\nBetaBinomial\nBinomial\nBound\nCategorical\nCauchy\nChiSquared\nConstant\nConstantDist\nContinuous\nDensityDist\nDirichlet\nDiscrete\nDiscreteUniform\nDiscreteWeibull\nDistribution\nExGaussian\nExponential\nFlat\nGARCH11\nGamma\nGaussianRandomWalk\nGeometric\nGumbel\nHalfCauchy\nHalfFlat\nHalfNormal\nHalfStudentT\nInterpolated\nInverseGamma\nKroneckerNormal\nKumaraswamy\nLKJCholeskyCov\nLKJCorr\nLaplace\nLogistic\nLogitNormal\nLognormal\nMatrixNormal\nMixture\nMultinomial\nMvGaussianRandomWalk\nMvNormal\nMvStudentT\nMvStudentTRandomWalk\nNegativeBinomial\nNoDistribution\nNormal\nNormalMixture\nOrderedLogistic\nPareto\nPoisson\nRice\nSkewNormal\nStudentT\nTensorType\nTriangular\nTruncatedNormal\nUniform\nVonMises\nWald\nWeibull\nWishart\nWishartBartlett\nZeroInflatedBinomial\nZeroInflatedNegativeBinomial\nZeroInflatedPoisson\n"
],
[
"d = pm.Normal.dist(mu=0, sd=1)",
"_____no_output_____"
],
[
"d.dist()",
"_____no_output_____"
]
],
[
[
"Random samples",
"_____no_output_____"
]
],
[
[
"d.random(size=5)",
"_____no_output_____"
]
],
[
[
"Log probability",
"_____no_output_____"
]
],
[
[
"d.logp(0).eval()",
"_____no_output_____"
]
],
[
[
"#### Custom distributions\n\nThe pymc3 algorithms generally only work with the log probability of a distribution. Hence it is easy to define custom distributions to use in your models by providing a `logp` function.",
"_____no_output_____"
]
],
[
[
"def logp(x, μ=0, σ=1):\n \"\"\"Normal distribtuion.\"\"\"\n return -0.5*np.log(2*np.pi) - np.log(σ) - (x-μ)**2/(2*σ**2)",
"_____no_output_____"
],
[
"d = pm.DensityDist.dist(logp)",
"_____no_output_____"
],
[
"d.logp(0)",
"_____no_output_____"
]
],
[
[
"## Example: Estimating coin bias\n\nWe start with the simplest model - that of determining the bias of a coin from observed outcomes.",
"_____no_output_____"
],
[
"### Setting up the model ",
"_____no_output_____"
]
],
[
[
"n = 100\nheads = 61",
"_____no_output_____"
]
],
[
[
"#### Analytical solution",
"_____no_output_____"
]
],
[
[
"a, b = 10, 10\nprior = stats.beta(a, b)\npost = stats.beta(heads+a, n-heads+b)\nci = post.interval(0.95)\n\nxs = np.linspace(0, 1, 100)\nplt.plot(prior.pdf(xs), label='Prior')\nplt.plot(post.pdf(xs), c='green', label='Posterior')\nplt.axvline(100*heads/n, c='red', alpha=0.4, label='MLE')\nplt.xlim([0, 100])\nplt.axhline(0.3, ci[0], ci[1], c='black', linewidth=2, label='95% CI');\nplt.legend()\npass",
"_____no_output_____"
]
],
[
[
"#### Graphical model",
"_____no_output_____"
]
],
[
[
"pgm = daft.PGM(shape=[2.5, 3.0], origin=[0, -0.5])\n\npgm.add_node(daft.Node(\"alpha\", r\"$\\alpha$\", 0.5, 2, fixed=True))\npgm.add_node(daft.Node(\"beta\", r\"$\\beta$\", 1.5, 2, fixed=True))\npgm.add_node(daft.Node(\"p\", r\"$p$\", 1, 1))\npgm.add_node(daft.Node(\"n\", r\"$n$\", 2, 0, fixed=True))\npgm.add_node(daft.Node(\"y\", r\"$y$\", 1, 0, observed=True))\n\npgm.add_edge(\"alpha\", \"p\")\npgm.add_edge(\"beta\", \"p\")\npgm.add_edge(\"n\", \"y\")\npgm.add_edge(\"p\", \"y\")\n\npgm.render()\npass",
"_____no_output_____"
]
],
[
[
"### The Model context\n\nWhen you specify a model, you are adding nodes to a computation graph. When executed, the graph is compiled via Theno. Hence, `pymc3` uses the Model context manager to automatically add new nodes.",
"_____no_output_____"
]
],
[
[
"niter = 2000\nwith pm.Model() as coin_context:\n p = pm.Beta('p', alpha=2, beta=2)\n y = pm.Binomial('y', n=n, p=p, observed=heads)\n trace = pm.sample(niter)",
"Auto-assigning NUTS sampler...\nInitializing NUTS using jitter+adapt_diag...\nMultiprocess sampling (2 chains in 2 jobs)\nNUTS: [p]\nSampling 2 chains: 100%|██████████| 5000/5000 [00:03<00:00, 1456.13draws/s]\n"
],
[
"coin_context",
"_____no_output_____"
]
],
[
[
"#### Transformed prior variables",
"_____no_output_____"
]
],
[
[
"coin_context.free_RVs",
"_____no_output_____"
]
],
[
[
"#### Prior variables",
"_____no_output_____"
]
],
[
[
"coin_context.deterministics",
"_____no_output_____"
]
],
[
[
"#### Variables in likelihood",
"_____no_output_____"
]
],
[
[
"coin_context.observed_RVs",
"_____no_output_____"
]
],
[
[
"### Under the hood",
"_____no_output_____"
],
[
"### Theano\n\nTheano builds functions as mathematical expression graphs and compiles them into C for actual computation, making use of GPU resources where available.\n\nPerforming calculations in Theano generally follows the following 3 steps (from official docs):\n\n- declare variables (a,b) and give their types\n- build expressions for how to put those variables together\n- compile expression graphs to functions in order to use them for computation.",
"_____no_output_____"
]
],
[
[
"import theano\nimport theano.tensor as tt\ntheano.config.compute_test_value = \"off\"",
"_____no_output_____"
]
],
[
[
"This part builds symbolic expressions.",
"_____no_output_____"
]
],
[
[
"a = tt.iscalar('a')\nb = tt.iscalar('x')\nc = a + b",
"_____no_output_____"
]
],
[
[
"This step compiles a function whose inputs are [a, b] and outputs are [c].",
"_____no_output_____"
]
],
[
[
"f = theano.function([a, b], [c])",
"_____no_output_____"
],
[
"f",
"_____no_output_____"
],
[
"f(3, 4)",
"_____no_output_____"
]
],
[
[
"Within a model context, \n\n- when you add an unbounded variable, it is defined as a Theano variable and added to the prior part of the log posterior function\n- when you add a bounded variable, a transformed version is defined as a Theano variable and and added to the log posterior function\n - The inverse transformation is used to define the original variable - this is a deterministic variable\n- when you add an observed variable bound to data, the data is added to the likelihood part of the log posterior\n\nSee [PyMC3 and Theano](https://docs.pymc.io/PyMC3_and_Theano.html) for details.",
"_____no_output_____"
]
],
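[
[
"A quick way to see this bookkeeping is to build a throwaway copy of the coin model and inspect its variable lists. The sketch below is only illustrative (the `demo_model` and `q` names are made up here); it re-uses the `n` and `heads` data defined earlier.",
"_____no_output_____"
]
],
[
[
"# Minimal sketch: a bounded Beta prior is sampled on an unconstrained (log-odds) scale,\n# and the original bounded variable is recovered as a deterministic.\nwith pm.Model() as demo_model:\n    q = pm.Beta('q', alpha=2, beta=2)                    # bounded prior on (0, 1)\n    obs = pm.Binomial('obs', n=n, p=q, observed=heads)\n\nprint(demo_model.free_RVs)        # transformed, unbounded variable, e.g. [q_logodds__]\nprint(demo_model.deterministics)  # inverse-transformed original, [q]\nprint(demo_model.observed_RVs)    # the data enters the likelihood through [obs]",
"_____no_output_____"
]
],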
[
[
"help(pm.sample)",
"Help on function sample in module pymc3.sampling:\n\nsample(draws=500, step=None, init='auto', n_init=200000, start=None, trace=None, chain_idx=0, chains=None, cores=None, tune=500, nuts_kwargs=None, step_kwargs=None, progressbar=True, model=None, random_seed=None, live_plot=False, discard_tuned_samples=True, live_plot_kwargs=None, compute_convergence_checks=True, use_mmap=False, **kwargs)\n Draw samples from the posterior using the given step methods.\n \n Multiple step methods are supported via compound step methods.\n \n Parameters\n ----------\n draws : int\n The number of samples to draw. Defaults to 500. The number of tuned samples are discarded\n by default. See discard_tuned_samples.\n step : function or iterable of functions\n A step function or collection of functions. If there are variables without a step methods,\n step methods for those variables will be assigned automatically.\n init : str\n Initialization method to use for auto-assigned NUTS samplers.\n \n * auto : Choose a default initialization method automatically.\n Currently, this is `'jitter+adapt_diag'`, but this can change in the future.\n If you depend on the exact behaviour, choose an initialization method explicitly.\n * adapt_diag : Start with a identity mass matrix and then adapt a diagonal based on the\n variance of the tuning samples. All chains use the test value (usually the prior mean)\n as starting point.\n * jitter+adapt_diag : Same as `adapt_diag`, but add uniform jitter in [-1, 1] to the\n starting point in each chain.\n * advi+adapt_diag : Run ADVI and then adapt the resulting diagonal mass matrix based on the\n sample variance of the tuning samples.\n * advi+adapt_diag_grad : Run ADVI and then adapt the resulting diagonal mass matrix based\n on the variance of the gradients during tuning. This is **experimental** and might be\n removed in a future release.\n * advi : Run ADVI to estimate posterior mean and diagonal mass matrix.\n * advi_map: Initialize ADVI with MAP and use MAP as starting point.\n * map : Use the MAP as starting point. This is discouraged.\n * nuts : Run NUTS and estimate posterior mean and mass matrix from the trace.\n n_init : int\n Number of iterations of initializer. Only works for 'nuts' and 'ADVI'.\n If 'ADVI', number of iterations, if 'nuts', number of draws.\n start : dict, or array of dict\n Starting point in parameter space (or partial point)\n Defaults to trace.point(-1)) if there is a trace provided and model.test_point if not\n (defaults to empty dict). Initialization methods for NUTS (see `init` keyword) can\n overwrite the default. For 'SMC' if should be a list of dict with length `chains`.\n trace : backend, list, or MultiTrace\n This should be a backend instance, a list of variables to track, or a MultiTrace object\n with past values. If a MultiTrace object is given, it must contain samples for the chain\n number `chain`. If None or a list of variables, the NDArray backend is used.\n Passing either \"text\" or \"sqlite\" is taken as a shortcut to set up the corresponding\n backend (with \"mcmc\" used as the base name). Ignored when using 'SMC'.\n chain_idx : int\n Chain number used to store sample in backend. If `chains` is greater than one, chain\n numbers will start here. Ignored when using 'SMC'.\n chains : int\n The number of chains to sample. Running independent chains is important for some\n convergence statistics and can also reveal multiple modes in the posterior. If `None`,\n then set to either `cores` or 2, whichever is larger. 
For SMC the number of chains is the\n number of draws.\n cores : int\n The number of chains to run in parallel. If `None`, set to the number of CPUs in the\n system, but at most 4 (for 'SMC' defaults to 1). Keep in mind that some chains might\n themselves be multithreaded via openmp or BLAS. In those cases it might be faster to set\n this to 1.\n tune : int\n Number of iterations to tune, defaults to 500. Ignored when using 'SMC'. Samplers adjust\n the step sizes, scalings or similar during tuning. Tuning samples will be drawn in addition\n to the number specified in the `draws` argument, and will be discarded unless\n `discard_tuned_samples` is set to False.\n nuts_kwargs : dict\n Options for the NUTS sampler. See the docstring of NUTS for a complete list of options.\n Common options are:\n \n * target_accept: float in [0, 1]. The step size is tuned such that we approximate this\n acceptance rate. Higher values like 0.9 or 0.95 often work better for problematic\n posteriors.\n * max_treedepth: The maximum depth of the trajectory tree.\n * step_scale: float, default 0.25\n The initial guess for the step size scaled down by `1/n**(1/4)`.\n \n If you want to pass options to other step methods, please use `step_kwargs`.\n step_kwargs : dict\n Options for step methods. Keys are the lower case names of the step method, values are\n dicts of keyword arguments. You can find a full list of arguments in the docstring of the\n step methods. If you want to pass arguments only to nuts, you can use `nuts_kwargs`.\n progressbar : bool\n Whether or not to display a progress bar in the command line. The bar shows the percentage\n of completion, the sampling speed in samples per second (SPS), and the estimated remaining\n time until completion (\"expected time of arrival\"; ETA).\n model : Model (optional if in `with` context)\n random_seed : int or list of ints\n A list is accepted if `cores` is greater than one.\n live_plot : bool\n Flag for live plotting the trace while sampling. Ignored when using 'SMC'.\n live_plot_kwargs : dict\n Options for traceplot. Example: live_plot_kwargs={'varnames': ['x']}.\n Ignored when using 'SMC'\n discard_tuned_samples : bool\n Whether to discard posterior samples of the tune interval. Ignored when using 'SMC'\n compute_convergence_checks : bool, default=True\n Whether to compute sampler statistics like gelman-rubin and effective_n.\n Ignored when using 'SMC'\n use_mmap : bool, default=False\n Whether to use joblib's memory mapping to share numpy arrays when sampling across multiple\n cores. Ignored when using 'SMC'\n \n Returns\n -------\n trace : pymc3.backends.base.MultiTrace\n A `MultiTrace` object that contains the samples.\n \n Examples\n --------\n .. code:: ipython\n \n >>> import pymc3 as pm\n ... n = 100\n ... h = 61\n ... alpha = 2\n ... beta = 2\n \n .. code:: ipython\n \n >>> with pm.Model() as model: # context management\n ... p = pm.Beta('p', alpha=alpha, beta=beta)\n ... y = pm.Binomial('y', n=n, p=p, observed=h)\n ... trace = pm.sample(2000, tune=1000, cores=4)\n >>> pm.summary(trace)\n mean sd mc_error hpd_2.5 hpd_97.5\n p 0.604625 0.047086 0.00078 0.510498 0.694774\n\n"
]
],
[
[
"### Specifying sampler (step) and multiple chains",
"_____no_output_____"
]
],
[
[
"with coin_context:\n step = pm.Metropolis()\n t = pm.sample(niter, step=step, chains=8, random_seed=123)",
"Multiprocess sampling (8 chains in 2 jobs)\nMetropolis: [p]\nSampling 8 chains: 100%|██████████| 20000/20000 [00:05<00:00, 3712.51draws/s]\nThe number of effective samples is smaller than 25% for some parameters.\n"
]
],
[
[
"### Samplers available\n\nFor continuous distributions, it is hard to beat NUTS and hence this is the default. To learn more, see [A Conceptual Introduction to Hamiltonian Monte Carlo](https://arxiv.org/pdf/1701.02434.pdf).",
"_____no_output_____"
]
],
[
[
"print('\\n'.join(m for m in dir(pm.step_methods) if m[0].isupper()))",
"BinaryGibbsMetropolis\nBinaryMetropolis\nCSG\nCategoricalGibbsMetropolis\nCauchyProposal\nCompoundStep\nDEMetropolis\nElemwiseCategorical\nEllipticalSlice\nHamiltonianMC\nLaplaceProposal\nMetropolis\nMultivariateNormalProposal\nNUTS\nNormalProposal\nPoissonProposal\nSGFS\nSMC\nSlice\n"
]
],
[
[
"Generally, the sampler will be automatically selected based on the type of the variable (discrete or continuous), but there are many samples that you can explicitly specify if you want to learn more about them or understand why an alternative would be better than the default for your problem.",
"_____no_output_____"
]
],
[
[
"niter = 2000\nwith pm.Model() as normal_context:\n mu = pm.Normal('mu', mu=0, sd=100)\n sd = pm.HalfCauchy('sd', beta=2)\n y = pm.Normal('y', mu=mu, sd=sd, observed=xs)\n \n step1 = pm.Slice(vars=mu)\n step2 = pm.Metropolis(vars=sd)\n \n t = pm.sample(niter, step=[step1, step2])",
"Multiprocess sampling (2 chains in 2 jobs)\nCompoundStep\n>Slice: [mu]\n>Metropolis: [sd]\nSampling 2 chains: 100%|██████████| 5000/5000 [00:02<00:00, 1691.48draws/s]\nThe number of effective samples is smaller than 10% for some parameters.\n"
],
[
"pm.traceplot(t)\npass",
"_____no_output_____"
]
],
[
[
"#### MAP estimate",
"_____no_output_____"
]
],
[
[
"with pm.Model() as m:\n p = pm.Beta('p', alpha=2, beta=2)\n y = pm.Binomial('y', n=n, p=p, observed=heads)\n θ = pm.find_MAP()",
"/Users/cliburn/anaconda3/lib/python3.6/site-packages/pymc3/tuning/starting.py:61: UserWarning: find_MAP should not be used to initialize the NUTS sampler, simply call pymc3.sample() and it will automatically initialize NUTS in a better way.\n warnings.warn('find_MAP should not be used to initialize the NUTS sampler, simply call pymc3.sample() and it will automatically initialize NUTS in a better way.')\nlogp = -4.5407, ||grad|| = 11: 100%|██████████| 6/6 [00:00<00:00, 563.70it/s]\n"
],
[
"θ",
"_____no_output_____"
]
],
[
[
"#### Getting values from the trace",
"_____no_output_____"
],
[
"All the information about the posterior is in the trace, and it also provides statistics about the sampler.",
"_____no_output_____"
]
],
[
[
"help(trace)",
"Help on MultiTrace in module pymc3.backends.base object:\n\nclass MultiTrace(builtins.object)\n | Main interface for accessing values from MCMC results\n | \n | The core method to select values is `get_values`. The method\n | to select sampler statistics is `get_sampler_stats`. Both kinds of\n | values can also be accessed by indexing the MultiTrace object.\n | Indexing can behave in four ways:\n | \n | 1. Indexing with a variable or variable name (str) returns all\n | values for that variable, combining values for all chains.\n | \n | >>> trace[varname]\n | \n | Slicing after the variable name can be used to burn and thin\n | the samples.\n | \n | >>> trace[varname, 1000:]\n | \n | For convenience during interactive use, values can also be\n | accessed using the variable as an attribute.\n | \n | >>> trace.varname\n | \n | 2. Indexing with an integer returns a dictionary with values for\n | each variable at the given index (corresponding to a single\n | sampling iteration).\n | \n | 3. Slicing with a range returns a new trace with the number of draws\n | corresponding to the range.\n | \n | 4. Indexing with the name of a sampler statistic that is not also\n | the name of a variable returns those values from all chains.\n | If there is more than one sampler that provides that statistic,\n | the values are concatenated along a new axis.\n | \n | For any methods that require a single trace (e.g., taking the length\n | of the MultiTrace instance, which returns the number of draws), the\n | trace with the highest chain number is always used.\n | \n | Methods defined here:\n | \n | __getattr__(self, name)\n | \n | __getitem__(self, idx)\n | \n | __init__(self, straces)\n | Initialize self. See help(type(self)) for accurate signature.\n | \n | __len__(self)\n | \n | __repr__(self)\n | Return repr(self).\n | \n | add_values(self, vals, overwrite=False)\n | add variables to traces.\n | \n | Parameters\n | ----------\n | vals : dict (str: array-like)\n | The keys should be the names of the new variables. The values are expected to be\n | array-like object. For traces with more than one chain the length of each value\n | should match the number of total samples already in the trace (chains * iterations),\n | otherwise a warning is raised.\n | overwrite : bool\n | If `False` (default) a ValueError is raised if the variable already exists.\n | Change to `True` to overwrite the values of variables\n | \n | get_sampler_stats(self, varname, burn=0, thin=1, combine=True, chains=None, squeeze=True)\n | Get sampler statistics from the trace.\n | \n | Parameters\n | ----------\n | varname : str\n | sampler_idx : int or None\n | burn : int\n | thin : int\n | \n | Returns\n | -------\n | If the `sampler_idx` is specified, return the statistic with\n | the given name in a numpy array. If it is not specified and there\n | is more than one sampler that provides this statistic, return\n | a numpy array of shape (m, n), where `m` is the number of\n | such samplers, and `n` is the number of samples.\n | \n | get_values(self, varname, burn=0, thin=1, combine=True, chains=None, squeeze=True)\n | Get values from traces.\n | \n | Parameters\n | ----------\n | varname : str\n | burn : int\n | thin : int\n | combine : bool\n | If True, results from `chains` will be concatenated.\n | chains : int or list of ints\n | Chains to retrieve. If None, all chains are used. A single\n | chain value can also be given.\n | squeeze : bool\n | Return a single array element if the resulting list of\n | values only has one element. 
If False, the result will\n | always be a list of arrays, even if `combine` is True.\n | \n | Returns\n | -------\n | A list of NumPy arrays or a single NumPy array (depending on\n | `squeeze`).\n | \n | point(self, idx, chain=None)\n | Return a dictionary of point values at `idx`.\n | \n | Parameters\n | ----------\n | idx : int\n | chain : int\n | If a chain is not given, the highest chain number is used.\n | \n | points(self, chains=None)\n | Return an iterator over all or some of the sample points\n | \n | Parameters\n | ----------\n | chains : list of int or N\n | The chains whose points should be inlcuded in the iterator. If\n | chains is not given, include points from all chains.\n | \n | remove_values(self, name)\n | remove variables from traces.\n | \n | Parameters\n | ----------\n | name : str\n | Name of the variable to remove. Raises KeyError if the variable is not present\n | \n | ----------------------------------------------------------------------\n | Data descriptors defined here:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | chains\n | \n | nchains\n | \n | report\n | \n | stat_names\n | \n | varnames\n\n"
],
[
"trace.stat_names",
"_____no_output_____"
],
[
"sns.distplot(trace.get_sampler_stats('model_logp'))\npass",
"_____no_output_____"
],
[
"p = trace.get_values('p')\np.shape",
"_____no_output_____"
],
[
"trace['p'].shape",
"_____no_output_____"
]
],
[
[
"#### Convert to `pandas` data frame for downstream processing",
"_____no_output_____"
]
],
[
[
"df = pm.trace_to_dataframe(trace)\ndf.head()",
"_____no_output_____"
]
],
[
[
"#### Posterior distribution",
"_____no_output_____"
]
],
[
[
"sns.distplot(trace['p'])\npass",
"_____no_output_____"
]
],
[
[
"#### Autocorrelation plot",
"_____no_output_____"
]
],
[
[
"pm.autocorrplot(trace, varnames=['p'])\npass",
"_____no_output_____"
]
],
[
[
"#### Calculate effective sample size\n\n$$\n\\hat{n}_{eff} = \\frac{mn}{1 + 2 \\sum_{t=1}^T \\hat{\\rho}_t}\n$$\n\nwhere $m$ is the number of chains, $n$ the number of steps per chain, $T$ the time when the autocorrelation first becomes negative, and $\\hat{\\rho}_t$ the autocorrelation at lag $t$.",
"_____no_output_____"
]
],
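[
[
"As a rough sanity check, the formula above can be evaluated by hand from the coin-model `trace`. The sketch below simply follows the formula as written (average lag autocorrelation over the chains, truncated at the first negative value); it is not the estimator `pm.effective_n` uses internally, so expect only approximate agreement.",
"_____no_output_____"
]
],
[
[
"# By-hand sketch of n_eff for the parameter p, following the formula above.\nchain_vals = [trace.get_values('p', chains=[c]) for c in trace.chains]\nm_chains, n_steps = len(chain_vals), len(chain_vals[0])\n\ndef autocorr(x, lag):\n    x = x - x.mean()\n    return np.corrcoef(x[:-lag], x[lag:])[0, 1]\n\nrho_sum = 0.0\nfor lag in range(1, n_steps):\n    rho = np.mean([autocorr(c, lag) for c in chain_vals])\n    if rho < 0:   # truncate at the first negative autocorrelation\n        break\n    rho_sum += rho\n\nn_eff = m_chains * n_steps / (1 + 2 * rho_sum)\nprint(round(n_eff))",
"_____no_output_____"
]
],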
[
[
"pm.effective_n(trace)",
"_____no_output_____"
]
],
[
[
"## Evaluate convergence",
"_____no_output_____"
],
[
"##### Gelman-Rubin\n\n$$\n\\hat{R} = \\frac{\\hat{V}}{W}\n$$\n\nwhere $W$ is the within-chain variance and $\\hat{V}$ is the posterior variance estimate for the pooled traces. Values greater than one indicate that one or more chains have not yet converged.\n\nDiscrad burn-in steps for each chain. The idea is to see if the starting values of each chain come from the same distribution as the stationary state. \n\n- $W$ is the number of chains $m \\times$ the variacne of each individual chain\n- $B$ is the number of steps $n \\times$ the variance of the chain means\n- $\\hat{V}$ is the weigthed average $(1 - \\frac{1}{n})W + \\frac{1}{n}B$\n\nThe idea is that $\\hat{V}$ is an unbiased estimator of $W$ if the starting values of each chain come from the same distribution as the stationary state. Hence if $\\hat{R}$ differs significantly from 1, there is probsbly no convergence and we need more iterations. This is done for each parameter $\\theta$.",
"_____no_output_____"
]
],
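[
[
"The same statistic can be computed by hand for $p$ from the chains of `trace`, following the formula above; pymc3's own `gelman_rubin` may differ in implementation details, so treat this only as a sketch.",
"_____no_output_____"
]
],
[
[
"# By-hand sketch of the Gelman-Rubin statistic for p, following the formula above.\nchain_vals = np.array([trace.get_values('p', chains=[c]) for c in trace.chains])\nn_steps = chain_vals.shape[1]\n\nW = chain_vals.var(axis=1, ddof=1).mean()           # average within-chain variance\nB = n_steps * chain_vals.mean(axis=1).var(ddof=1)   # between-chain variance\nV_hat = (1 - 1/n_steps) * W + B/n_steps             # pooled variance estimate\nprint(V_hat / W)                                    # compare with pm.gelman_rubin below",
"_____no_output_____"
]
],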
[
[
"pm.gelman_rubin(trace)",
"_____no_output_____"
]
],
[
[
"##### Geweke\n\nCompares mean of initial with later segments of a trace for a parameter. Should have absolute value less than 1 at convergence.",
"_____no_output_____"
]
],
[
[
"plt.plot(pm.geweke(trace['p'])[:,1], 'o')\nplt.axhline(1, c='red')\nplt.axhline(-1, c='red')\nplt.gca().margins(0.05)\npass",
"_____no_output_____"
]
],
[
[
"#### Textual summary",
"_____no_output_____"
]
],
[
[
"pm.summary(trace, varnames=['p'])",
"_____no_output_____"
]
],
[
[
"#### Visual summary",
"_____no_output_____"
]
],
[
[
"pm.traceplot(trace, varnames=['p'])\npass",
"_____no_output_____"
],
[
"pm.forestplot(trace)\npass",
"_____no_output_____"
],
[
"pm.plot_posterior(trace)\npass",
"_____no_output_____"
]
],
[
[
"#### Prior predictive samples",
"_____no_output_____"
]
],
[
[
"with coin_context:\n ps = pm.sample_prior_predictive(samples=1000)",
"_____no_output_____"
],
[
"sns.distplot(ps['y'])\nplt.axvline(heads, c='red')\npass",
"_____no_output_____"
]
],
[
[
"#### Posterior predictive samples",
"_____no_output_____"
]
],
[
[
"with coin_context:\n ps = pm.sample_posterior_predictive(trace, samples=1000)",
"100%|██████████| 1000/1000 [00:00<00:00, 1386.64it/s]\n"
],
[
"sns.distplot(ps['y'])\nplt.axvline(heads, c='red')\npass",
"_____no_output_____"
]
],
[
[
"## Saving traces",
"_____no_output_____"
]
],
[
[
"pm.save_trace(trace, 'my_trace', overwrite=True)",
"_____no_output_____"
]
],
[
[
"You need to re-initialize the model when reloading.",
"_____no_output_____"
]
],
[
[
"with pm.Model() as my_trace:\n p = pm.Beta('p', alpha=2, beta=2)\n y = pm.Binomial('y', n=n, p=p, observed=heads)\n tr = pm.load_trace('my_trace')",
"_____no_output_____"
],
[
"pm.summary(tr)",
"_____no_output_____"
]
],
[
[
"It is probably a good practice to make model reuse convenient",
"_____no_output_____"
]
],
[
[
"def build_model():\n with pm.Model() as m:\n p = pm.Beta('p', alpha=2, beta=2)\n y = pm.Binomial('y', n=n, p=p, observed=heads)\n return m",
"_____no_output_____"
],
[
"m = build_model()",
"_____no_output_____"
],
[
"with m:\n tr1 = pm.load_trace('my_trace')",
"_____no_output_____"
],
[
"pm.summary(tr1)",
"_____no_output_____"
]
],
[
[
"## Sampling from prior\n\nJust omit the `observed=` argument.",
"_____no_output_____"
]
],
[
[
"with pm.Model() as prior_context:\n sigma = pm.Gamma('sigma', alpha=2.0, beta=1.0)\n mu = pm.Normal('mu', mu=0, sd=sigma)\n trace = pm.sample(niter, step=pm.Metropolis())",
"Multiprocess sampling (2 chains in 2 jobs)\nCompoundStep\n>Metropolis: [mu]\n>Metropolis: [sigma]\nSampling 2 chains: 100%|██████████| 5000/5000 [00:01<00:00, 2753.43draws/s]\nThe number of effective samples is smaller than 10% for some parameters.\n"
],
[
"pm.traceplot(trace, varnames=['mu', 'sigma'])\npass",
"_____no_output_____"
]
],
[
[
"## Sampling from posterior",
"_____no_output_____"
]
],
[
[
"niter = 2000\nwith pm.Model() as normal_context:\n mu = pm.Normal('mu', mu=0, sd=100)\n sd = pm.HalfCauchy('sd', beta=2)\n y = pm.Normal('y', mu=mu, sd=sd, observed=xs)\n trace = pm.sample(niter)",
"Auto-assigning NUTS sampler...\nInitializing NUTS using jitter+adapt_diag...\nMultiprocess sampling (2 chains in 2 jobs)\nNUTS: [sd, mu]\nSampling 2 chains: 100%|██████████| 5000/5000 [00:03<00:00, 1307.56draws/s]\n"
]
],
[
[
"#### Find Highest Posterior Density (Credible intervals)",
"_____no_output_____"
]
],
[
[
"hpd = pm.hpd(trace['mu'], alpha=0.05)\nhpd",
"_____no_output_____"
],
[
"ax = pm.traceplot(trace, varnames=['mu'],)\n\nymin, ymax = ax[0,0].get_ylim()\ny = ymin + 0.05*(ymax-ymin)\nax[0, 0].plot(hpd, [y,y], c='red')\npass",
"_____no_output_____"
]
],
[
[
"## Evaluating goodness-of-fit\n\nDIC, WAIC and BPIC are approximations to the out-of-sample error and can be used for model comparison. Likelihood is dependent on model complexity and should not be used for model comparisons.",
"_____no_output_____"
]
],
[
[
"post_mean = pm.summary(trace, varnames=trace.varnames)['mean']\npost_mean",
"_____no_output_____"
]
],
[
[
"#### Likelihood",
"_____no_output_____"
]
],
[
[
"normal_context.logp(post_mean)",
"_____no_output_____"
]
],
[
[
"#### Cross-validation",
"_____no_output_____"
]
],
[
[
"with normal_context:\n print(pm.loo(trace))",
"/Users/cliburn/anaconda3/lib/python3.6/site-packages/pymc3/stats.py:167: FutureWarning: arrays to stack must be passed as a \"sequence\" type such as list or tuple. Support for non-sequence iterables such as generators is deprecated as of NumPy 1.16 and will raise an error in the future.\n return np.stack(logp)\n"
]
],
[
[
"#### WAIC",
"_____no_output_____"
]
],
[
[
"with normal_context:\n print(pm.waic(trace))",
"WAIC_r(WAIC=40.74205569255692, WAIC_se=8.891356706673374, p_WAIC=1.3894985238213904, var_warn=0)\n"
]
],
[
[
"## Using a custom likelihood",
"_____no_output_____"
]
],
[
[
"def logp(x, μ=0, σ=1):\n \"\"\"Normal distribtuion.\"\"\"\n return -0.5*np.log(2*np.pi) - np.log(σ) - (x-μ)**2/(2*σ**2)",
"_____no_output_____"
],
[
"with pm.Model() as prior_context:\n mu = pm.Normal('mu', mu=0, sd=100)\n sd = pm.HalfCauchy('sd', beta=2)\n y = pm.DensityDist('y', logp, observed=dict(x=xs, μ=mu, σ=sd))\n custom_trace = pm.sample(niter)",
"Auto-assigning NUTS sampler...\nInitializing NUTS using jitter+adapt_diag...\nMultiprocess sampling (2 chains in 2 jobs)\nNUTS: [sd, mu]\nSampling 2 chains: 100%|██████████| 5000/5000 [00:03<00:00, 1316.85draws/s]\n"
],
[
"pm.trace_to_dataframe(custom_trace).mean()",
"_____no_output_____"
]
],
[
[
"### Variational methods available\n\nTo use a variational method, use `pm.fit` instead of `pm.sample`. We'll see examples of usage in another notebook.",
"_____no_output_____"
]
],
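[
[
"As a teaser, here is a minimal sketch of the variational route, assuming the `normal_context` model defined above: mean-field ADVI is fitted with `pm.fit`, and posterior draws are then taken from the fitted approximation. The `approx` and `approx_trace` names are only illustrative.",
"_____no_output_____"
]
],
[
[
"# Minimal ADVI sketch on the normal model defined earlier.\nwith normal_context:\n    approx = pm.fit(n=20000, method='advi')   # mean-field variational fit\n\napprox_trace = approx.sample(1000)            # draws from the fitted approximation\npm.summary(approx_trace)",
"_____no_output_____"
]
],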
[
[
"print('\\n'.join(m for m in dir(pm.variational) if m[0].isupper()))",
"ADVI\nASVGD\nApproximation\nEmpirical\nFullRank\nFullRankADVI\nGroup\nImplicitGradient\nInference\nKLqp\nMeanField\nNFVI\nNormalizingFlow\nSVGD\nStein\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
eca17a08d1ae1c544551ca1dbbfb36d3d09f79e1 | 273,906 | ipynb | Jupyter Notebook | lesson09/HuggingFace_demo.ipynb | simecek/dspracticum2021 | 5f5e88a78c439af65baaf19f854330e67e8e35aa | [
"Apache-2.0"
] | 2 | 2021-09-23T09:52:42.000Z | 2021-09-26T14:45:01.000Z | lesson09/HuggingFace_demo.ipynb | simecek/dspracticum2021 | 5f5e88a78c439af65baaf19f854330e67e8e35aa | [
"Apache-2.0"
] | null | null | null | lesson09/HuggingFace_demo.ipynb | simecek/dspracticum2021 | 5f5e88a78c439af65baaf19f854330e67e8e35aa | [
"Apache-2.0"
] | 1 | 2022-03-18T09:38:35.000Z | 2022-03-18T09:38:35.000Z | 36.711701 | 453 | 0.498562 | [
[
[
"<a href=\"https://colab.research.google.com/github/simecek/dspracticum2021/blob/main/lesson09/HuggingFace_demo.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"## Install and load 🤗",
"_____no_output_____"
]
],
[
[
"!pip install transformers datasets",
"Collecting transformers\n Downloading transformers-4.12.5-py3-none-any.whl (3.1 MB)\n\u001b[K |████████████████████████████████| 3.1 MB 4.0 MB/s \n\u001b[?25hCollecting datasets\n Downloading datasets-1.15.1-py3-none-any.whl (290 kB)\n\u001b[K |████████████████████████████████| 290 kB 48.6 MB/s \n\u001b[?25hRequirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from transformers) (21.2)\nRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (2019.12.20)\nCollecting pyyaml>=5.1\n Downloading PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)\n\u001b[K |████████████████████████████████| 596 kB 45.0 MB/s \n\u001b[?25hCollecting tokenizers<0.11,>=0.10.1\n Downloading tokenizers-0.10.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.3 MB)\n\u001b[K |████████████████████████████████| 3.3 MB 38.9 MB/s \n\u001b[?25hCollecting huggingface-hub<1.0,>=0.1.0\n Downloading huggingface_hub-0.1.2-py3-none-any.whl (59 kB)\n\u001b[K |████████████████████████████████| 59 kB 7.2 MB/s \n\u001b[?25hRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers) (2.23.0)\nCollecting sacremoses\n Downloading sacremoses-0.0.46-py3-none-any.whl (895 kB)\n\u001b[K |████████████████████████████████| 895 kB 32.8 MB/s \n\u001b[?25hRequirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from transformers) (4.8.2)\nRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers) (4.62.3)\nRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers) (3.3.2)\nRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (1.19.5)\nRequirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.7/dist-packages (from huggingface-hub<1.0,>=0.1.0->transformers) (3.10.0.2)\nRequirement already satisfied: pyparsing<3,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->transformers) (2.4.7)\nRequirement already satisfied: dill in /usr/local/lib/python3.7/dist-packages (from datasets) (0.3.4)\nRequirement already satisfied: multiprocess in /usr/local/lib/python3.7/dist-packages (from datasets) (0.70.12.2)\nCollecting xxhash\n Downloading xxhash-2.0.2-cp37-cp37m-manylinux2010_x86_64.whl (243 kB)\n\u001b[K |████████████████████████████████| 243 kB 51.9 MB/s \n\u001b[?25hCollecting fsspec[http]>=2021.05.0\n Downloading fsspec-2021.11.0-py3-none-any.whl (132 kB)\n\u001b[K |████████████████████████████████| 132 kB 53.7 MB/s \n\u001b[?25hCollecting aiohttp\n Downloading aiohttp-3.8.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)\n\u001b[K |████████████████████████████████| 1.1 MB 42.9 MB/s \n\u001b[?25hRequirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from datasets) (1.1.5)\nRequirement already satisfied: pyarrow!=4.0.0,>=1.0.0 in /usr/local/lib/python3.7/dist-packages (from datasets) (3.0.0)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2.10)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2021.10.8)\nRequirement already satisfied: chardet<4,>=3.0.2 in 
/usr/local/lib/python3.7/dist-packages (from requests->transformers) (3.0.4)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (1.24.3)\nRequirement already satisfied: charset-normalizer<3.0,>=2.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp->datasets) (2.0.7)\nCollecting aiosignal>=1.1.2\n Downloading aiosignal-1.2.0-py3-none-any.whl (8.2 kB)\nRequirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp->datasets) (21.2.0)\nCollecting async-timeout<5.0,>=4.0.0a3\n Downloading async_timeout-4.0.1-py3-none-any.whl (5.7 kB)\nCollecting frozenlist>=1.1.1\n Downloading frozenlist-1.2.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (192 kB)\n\u001b[K |████████████████████████████████| 192 kB 49.9 MB/s \n\u001b[?25hCollecting multidict<7.0,>=4.5\n Downloading multidict-5.2.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (160 kB)\n\u001b[K |████████████████████████████████| 160 kB 50.1 MB/s \n\u001b[?25hCollecting asynctest==0.13.0\n Downloading asynctest-0.13.0-py3-none-any.whl (26 kB)\nCollecting yarl<2.0,>=1.0\n Downloading yarl-1.7.2-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (271 kB)\n\u001b[K |████████████████████████████████| 271 kB 52.2 MB/s \n\u001b[?25hRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers) (3.6.0)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->datasets) (2018.9)\nRequirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->datasets) (2.8.2)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas->datasets) (1.15.0)\nRequirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (1.1.0)\nRequirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (7.1.2)\nInstalling collected packages: multidict, frozenlist, yarl, asynctest, async-timeout, aiosignal, pyyaml, fsspec, aiohttp, xxhash, tokenizers, sacremoses, huggingface-hub, transformers, datasets\n Attempting uninstall: pyyaml\n Found existing installation: PyYAML 3.13\n Uninstalling PyYAML-3.13:\n Successfully uninstalled PyYAML-3.13\nSuccessfully installed aiohttp-3.8.1 aiosignal-1.2.0 async-timeout-4.0.1 asynctest-0.13.0 datasets-1.15.1 frozenlist-1.2.0 fsspec-2021.11.0 huggingface-hub-0.1.2 multidict-5.2.0 pyyaml-6.0 sacremoses-0.0.46 tokenizers-0.10.3 transformers-4.12.5 xxhash-2.0.2 yarl-1.7.2\n"
],
[
"from transformers import pipeline",
"_____no_output_____"
]
],
[
[
"## Working with pipelines",
"_____no_output_____"
],
[
"### Sentiment analysis",
"_____no_output_____"
]
],
[
[
"classifier = pipeline(\"sentiment-analysis\")",
"No model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)\n"
],
[
"classifier(\n [\n \"I've been waiting for a HuggingFace course my whole life.\",\n \"I hate this so much!\",\n ]\n)",
"/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.\n cpuset_checked))\n"
]
],
[
[
"### Zero-shot classification",
"_____no_output_____"
]
],
[
[
"classifier = pipeline(\"zero-shot-classification\")\nclassifier(\n \"This is a course about the Transformers library\",\n candidate_labels=[\"education\", \"politics\", \"business\"],\n)",
"No model was supplied, defaulted to facebook/bart-large-mnli (https://huggingface.co/facebook/bart-large-mnli)\n"
],
[
"from transformers import ElectraForPreTraining, ElectraTokenizerFast",
"_____no_output_____"
]
],
[
[
"### Text generation",
"_____no_output_____"
]
],
[
[
"generator = pipeline(\"text-generation\", model=\"distilgpt2\")\ngenerator(\n \"In this course, we will teach you how to\",\n max_length=30,\n num_return_sequences=2,\n)",
"_____no_output_____"
]
],
[
[
"## Small-E-Czech",
"_____no_output_____"
]
],
[
[
"discriminator = ElectraForPreTraining.from_pretrained(\"Seznam/small-e-czech\")\ntokenizer = ElectraTokenizerFast.from_pretrained(\"Seznam/small-e-czech\")",
"_____no_output_____"
],
[
"sentence = \"Za hory, za doly, mé zlaté parohy\"\nfake_sentence = \"Za hory, za doly, kočka zlaté parohy\"\n\nfake_sentence_tokens = [\"[CLS]\"] + tokenizer.tokenize(fake_sentence) + [\"[SEP]\"]\nfake_inputs = tokenizer.encode(fake_sentence, return_tensors=\"pt\")\noutputs = discriminator(fake_inputs)\npredictions = 1 / (1 + np.exp(-outputs[0].detach().numpy()))\n\nfor token in fake_sentence_tokens:\n print(\"{:>7s}\".format(token), end=\"\")\nprint()\n\nfor prediction in predictions.squeeze():\n print(\"{:7.1f}\".format(prediction), end=\"\")\nprint()",
" [CLS] za hory , za dol ##y , kočka zlaté paro ##hy [SEP]\n 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.3 0.2 0.1 0.0\n"
],
[
"unmasker = pipeline(\"fill-mask\", model=\"Seznam/small-e-czech\")",
"Some weights of the model checkpoint at Seznam/small-e-czech were not used when initializing ElectraForMaskedLM: ['discriminator_predictions.dense.bias', 'discriminator_predictions.dense.weight', 'discriminator_predictions.dense_prediction.weight', 'discriminator_predictions.dense_prediction.bias']\n- This IS expected if you are initializing ElectraForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n- This IS NOT expected if you are initializing ElectraForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\nSome weights of ElectraForMaskedLM were not initialized from the model checkpoint at Seznam/small-e-czech and are newly initialized: ['generator_predictions.LayerNorm.bias', 'generator_predictions.dense.weight', 'generator_lm_head.weight', 'generator_predictions.LayerNorm.weight', 'generator_lm_head.bias', 'generator_predictions.dense.bias']\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
],
[
"# needs fine-tuning\nunmasker(\"Marečku, podejte mi [MASK].\", top_k=2)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
eca18454416fee42afe8a4116284c7813de65bf8 | 392,882 | ipynb | Jupyter Notebook | imgs_evolution.ipynb | netomap/PG-GAN-CELEBA | d975935c0dd701c51158c6f6e127eeb5d4aab703 | [
"MIT"
] | null | null | null | imgs_evolution.ipynb | netomap/PG-GAN-CELEBA | d975935c0dd701c51158c6f6e127eeb5d4aab703 | [
"MIT"
] | null | null | null | imgs_evolution.ipynb | netomap/PG-GAN-CELEBA | d975935c0dd701c51158c6f6e127eeb5d4aab703 | [
"MIT"
] | null | null | null | 3,168.403226 | 389,700 | 0.964969 | [
[
[
"import torch\nimport utils as u\nimport pathlib\nfrom models import Generator, Discriminator\nimport re\nimport matplotlib.pyplot as plt\nfrom IPython.display import clear_output",
"_____no_output_____"
],
[
"checkpoints_path = pathlib.Path('./models/').glob('*.pt')\ncheckpoints_path = [str(cp) for cp in checkpoints_path if str(cp).find('Generator') != -1]\ncheckpoints_path",
"_____no_output_____"
],
[
"nrow = 8\nnoise = u.get_noise(b_size=nrow, noise_dim=100)\n\nplt.figure(figsize=(25, 8))\nn = len(checkpoints_path)\n\nimages = []\nfor k, cp in enumerate(checkpoints_path):\n layer = int(re.findall(r'[0-9]{1,}', cp)[0])\n checkpoint = torch.load(cp, map_location=torch.device('cpu'))\n noise_dim, cfl = checkpoint['noise_dim'], checkpoint['cfl'], \n print (f'Instanciando e carregando generator com {layer} camadas.')\n generator = Generator(img_channels=3, noise_dim=noise_dim, n_camadas=layer, cfl=cfl)\n generator.load_checkpoint(cp)\n generator.eval()\n img_tensor = generator(noise)\n\n plt.subplot(n, 1, k+1)\n img_np = u.criar_grid(img_tensor, nrow, None, tipo='np')\n plt.imshow(img_np)\n plt.title('')\n plt.axis('off')\n plt.title(f'Imagens para {layer} camadas.')\n\nclear_output(wait=True)\nplt.suptitle('Treinamento de Progressive Growing Gan ao longo do acréscimo das camadas, para um mesmo conjunto de imagens')\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
eca19aed058d4f03c1828e2bb90d91f1fd9b2836 | 1,709 | ipynb | Jupyter Notebook | 335-gathering-the-beans.ipynb | arkeros/projecteuler | c95db97583034af8fc61d5786692d82eabe50c12 | [
"MIT"
] | 2 | 2017-02-19T12:37:13.000Z | 2021-01-19T04:58:09.000Z | 335-gathering-the-beans.ipynb | arkeros/projecteuler | c95db97583034af8fc61d5786692d82eabe50c12 | [
"MIT"
] | null | null | null | 335-gathering-the-beans.ipynb | arkeros/projecteuler | c95db97583034af8fc61d5786692d82eabe50c12 | [
"MIT"
] | 4 | 2018-01-05T14:29:09.000Z | 2020-01-27T13:37:40.000Z | 33.509804 | 374 | 0.591574 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
eca19d1e287f4dc21bfbc808a887eba20e12310b | 1,667 | ipynb | Jupyter Notebook | Defensive_4.ipynb | olihit/Defensive-prgramming | 7233273547a2c73790f886cf03072c5199b62755 | [
"MIT"
] | null | null | null | Defensive_4.ipynb | olihit/Defensive-prgramming | 7233273547a2c73790f886cf03072c5199b62755 | [
"MIT"
] | null | null | null | Defensive_4.ipynb | olihit/Defensive-prgramming | 7233273547a2c73790f886cf03072c5199b62755 | [
"MIT"
] | null | null | null | 17.924731 | 71 | 0.512298 | [
[
[
"!ls",
"DefensiveProgramming.ipynb\nDefensiveProgramming_2.ipynb\nDefensiveProgramming_3.ipynb\nDefensiveProgramming_3.py\nDefensive_4.ipynb\nnotes\npython-overlapping-ranges.svg\n"
],
[
"import DefensiveProgramming_3 # Can import script as a module",
"_____no_output_____"
],
[
"DefensiveProgramming_3.range_overlap([(10.0, 11.0), (9.0, 10.5)])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
eca19f7579576a9875c6fa5ef858f63589ec9690 | 351,313 | ipynb | Jupyter Notebook | repro/elegy_poetics_accuracy.ipynb | bnagy/heroides-paper | f002e6c958da3702d2b21d4528e3cb7b37f49e49 | [
"CC-BY-4.0"
] | null | null | null | repro/elegy_poetics_accuracy.ipynb | bnagy/heroides-paper | f002e6c958da3702d2b21d4528e3cb7b37f49e49 | [
"CC-BY-4.0"
] | null | null | null | repro/elegy_poetics_accuracy.ipynb | bnagy/heroides-paper | f002e6c958da3702d2b21d4528e3cb7b37f49e49 | [
"CC-BY-4.0"
] | null | null | null | 154.017098 | 120,183 | 0.847136 | [
[
[
"# Poetic Feature Accuracy (Section 6.1)\n\nThis notebook investigates the general accuracy for supervised classification algorithms when they are applied to the poetic features. Multiple classifiers are compared to try to ensure that the demonstrated accuracy is a result of the feature universe and not simply that the data is suited to one style of algorithm. Since the feature space is low dimensional (43 features) all that is done is to normalise it so that features like length (in lines) do not dominate the bulk of the features in the range \\[0,1\\].\n\nIt was initially hypothesised that restricting the contextual corpus to longer poems would improve the classification accuracy; put another way there were concerns that the results on shorter poems would be too variable to be useful. This turned out to be mostly not the case, although there are problems distinguishing between Ovid's _Tristia_ and _Ex Ponto_ (which is to be expected, as discussed in the paper).\n\nOverall, the results show that the poetic features work at least as well as the LSA analysis. Although the classification accuracy by work is slightly lower, this is mainly due to the stylistic similarity between _Tristia_ and _Ex Ponto_. The classification results by author are very good, and commensurate with the LSA analysis (roughly 92-3%). Different algorithms work in different ways. Classifiers based on decision trees work well for the poetic features, with `ExtraTrees` being the best performer by a very slight margin.",
"_____no_output_____"
]
],
[
[
"import warnings\nwarnings.filterwarnings(\"ignore\")\n\nfrom mqdq import utils, babble, elegy, ngrams\nfrom mqdq import line_analyzer as la\nfrom mqdq import mahalanobis as maha\n\nimport bs4\nimport glob\n\nimport numpy as np\nimport pandas as pd\nimport scipy as sp\n\nfrom sklearn.metrics import classification_report, accuracy_score, confusion_matrix\nfrom sklearn.ensemble import ExtraTreesClassifier\nfrom sklearn.svm import SVC, LinearSVC\nfrom sklearn.model_selection import cross_val_score, StratifiedShuffleSplit, cross_validate\nfrom sklearn.linear_model import PassiveAggressiveClassifier\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.neighbors import NearestCentroid\nfrom sklearn.preprocessing import Normalizer, StandardScaler\nfrom sklearn.pipeline import make_pipeline, Pipeline\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.decomposition import TruncatedSVD\n\nfrom xgboost import XGBClassifier",
"_____no_output_____"
],
[
"%load_ext rpy2.ipython",
"_____no_output_____"
]
],
[
[
"# Load Corpus\n\nThese XML files have all been downloaded from the [Pedecerto](http://www.pedecerto.eu/public/pagine/autori) collection, which offers fully scanned works from the [MQDQ](https://mizar.unive.it/mqdq/public/) corpus. Once again I'd like to thank those teams for providing such a fantastic resource under a permissive license. Some files in my corpus directory have been altered by me to remove a few problematic unicode characters that interfered with my own software.",
"_____no_output_____"
]
],
[
[
"collection = []\n\n# Several lines need to be manually deleted, because when we make wide vectors\n# we treat couplets as a unit (so we must have a matching number of H and P.)\n# In some poems, we have corrupt lines, and so we delete the H that matches\n# a corrupt P and vice versa.\n\nep = babble.bookbabs('corpus/OV-epis.xml', name=\"Ep.\")\nfor b in ep:\n b.author = 'Ovid'\ncollection.extend(ep)\n\ntr = babble.multi_bookbabs(sorted(glob.glob('corpus/OV-tri*.xml')), name=\"Tr.\")\nfor b in tr:\n b.author = 'Ovid'\ncollection.extend(tr)\n\nam = babble.multi_bookbabs(sorted(glob.glob('corpus/OV-amo*.xml')), name=\"Am.\")\nfor b in am:\n b.author = 'Ovid'\ncollection.extend(am)\n\ntib = babble.multi_bookbabs(sorted(glob.glob('corpus/TIB-ele*.xml')), name=\"Tib.\")\ndel tib[1].raw_source[24]\nfor b in tib:\n b.author = 'Tibullus'\ncollection.extend(tib)\n\nprop = babble.multi_bookbabs(sorted(glob.glob('corpus/PROP-ele*.xml')), name=\"Prop.\")\nfor b in prop:\n b.author = 'Propertius'\ndel prop[55].raw_source[28]\ncollection.extend(prop)\n\ncat = babble.bookbabs('corpus/CATVLL-carm.xml', name=\"Cat.\")\ncat_ele = [x for x in cat if x.elegiac and len(x) > 20]\nfor b in cat_ele:\n b.author = 'Catullus'\ndel cat_ele[3].raw_source[46]\ncollection.extend(cat_ele)\n\npon = babble.multi_bookbabs(sorted(glob.glob('corpus/OV-pon*.xml')), name=\"Pont.\")\nfor b in pon:\n b.author = 'Ovid'\ndel pon[1].raw_source[8]\ndel pon[7].raw_source[18]\ncollection.extend(pon)",
"_____no_output_____"
],
[
"vecs = elegy.vectorise_babs(collection)\nvecs",
"_____no_output_____"
]
],
[
[
"# The Poetic Features\n\nHere is a summary of the poetic features, from the paper.\n\n",
"_____no_output_____"
]
],
[
[
"# A quick look at the corpus. This table is rendered in LaTeX for the\n# paper.\n\npd.pivot_table(vecs, index=['Author','Work'], values=['LEN'],aggfunc=['count','min','max']).reset_index()",
"_____no_output_____"
]
],
[
[
"# Classifier Testing\n\nA range of different classifiers are tested. In the paper (and figures below) I only compare four, but these extra results are provided for the sake of interest. There is no particular strategy in terms of which classifiers to test; I just wanted a decent mix of different algorithm types (metric distance, decision trees, linear function fiting, SVM...). `XGBoost` is a classifier that is currently fashionable, but reliable old SVM continues to perform very well.",
"_____no_output_____"
]
],
[
[
"pa = lambda: make_pipeline(\n StandardScaler(copy=False),\n PassiveAggressiveClassifier(max_iter=1000, tol=1e-3, C=1.0, loss='squared_hinge')\n)\n\net = lambda: make_pipeline(\n StandardScaler(copy=False),\n ExtraTreesClassifier(n_estimators=5000, max_features=17, criterion='gini', n_jobs=-1)\n)\n\nsvm = lambda: make_pipeline(\n StandardScaler(copy=False),\n SVC(gamma='scale', kernel='rbf', C=128)\n)\n\nsvml = lambda: make_pipeline(\n StandardScaler(copy=False),\n LinearSVC()\n)\n\nnc = lambda: make_pipeline(\n StandardScaler(copy=False),\n NearestCentroid()\n)\n\nmnb = lambda: make_pipeline(\n # don't centre the data around 0, otherwise\n # the MultinomialNB classifier breaks\n StandardScaler(with_mean=False),\n MultinomialNB()\n)\n\nxgbl = lambda: make_pipeline(\n StandardScaler(copy=False),\n XGBClassifier(\n booster='gblinear',\n objective=\"multi:softprob\",\n eval_metric='merror',\n use_label_encoder=False\n )\n)\n\nxgbt = lambda: make_pipeline(\n StandardScaler(copy=False),\n XGBClassifier(\n booster='gbtree',\n objective=\"multi:softprob\",\n eval_metric='merror',\n use_label_encoder=False\n )\n)\n\nCLASSIFIERS = [\n ('PassiveAggressive', pa),\n ('MultinomialNB', mnb),\n ('ExtraTrees', et),\n ('XGBoost (Linear)',xgbl),\n ('XGBoost (Tree)',xgbt),\n ('SVM', svm),\n ('SVM (Linear)', svml),\n ('NearestCentroid', nc),\n\n]",
"_____no_output_____"
],
[
"def test_clfs(clfs, corp, feats, by, cutoff=0, seed=None, samps=20):\n \"\"\"\n Test the given classifiers on a corpus.\n \n Multi-label fitting strategy is up to each classifier.\n \"\"\"\n res = []\n # this makes a seeded rng, instead of seeding the ShuffleSplit\n # with the same value each time, which seems slightly cleaner.\n rng = np.random.RandomState(seed=seed)\n for (name, c) in clfs:\n trimmed = corp[corp.LEN >= cutoff]\n # XGBoost doesn't like string labels ¯\\_(ツ)_/¯ \n f, _ = trimmed[by].factorize()\n X,y = trimmed[feats], pd.Series(f)\n cv = StratifiedShuffleSplit(n_splits=samps, test_size=0.2, random_state=rng)\n # note to self: don't run parallel jobs if you have a seeded rng >:(\n jobs = -1\n if seed:\n jobs = 1\n samp_res = cross_val_score(c(), X, y, cv=cv, n_jobs=jobs, scoring='f1_weighted')\n m = sp.mean(samp_res)\n ci = sp.stats.t.interval(\n alpha=0.95,\n df=len(samp_res)-1,\n loc=m,\n scale=sp.stats.sem(samp_res)\n ) \n res.append({'Classifier':name, 'Score':m, 'Cutoff': cutoff, 'CILow':ci[0], 'CIHigh':ci[1]})\n return pd.DataFrame(res)",
"_____no_output_____"
],
[
"cols = list(vecs.columns[3:])",
"_____no_output_____"
],
[
"test_bywork = test_clfs(CLASSIFIERS, vecs, cols, 'Work', cutoff=0, samps=100)",
"_____no_output_____"
],
[
"test_bywork.sort_values(by=['Score'])",
"_____no_output_____"
],
[
"test_byauth = test_clfs(CLASSIFIERS, vecs, cols, 'Author', cutoff=0, samps=100)",
"_____no_output_____"
],
[
"test_byauth.sort_values(by=['Score'])",
"_____no_output_____"
],
[
"%%capture --no-display\n\n# How much better would the results be if we didn't have stylistic\n# confusion between Tristia and Ex Ponto? About 10%.\n\nno_ht_test = test_clfs(CLASSIFIERS, vecs[(vecs.Work != 'Pont.')], cols, 'Work', cutoff=0, samps=100)",
"_____no_output_____"
],
[
"no_ht_test.sort_values(by=['Score'])",
"_____no_output_____"
],
[
"# Based on the scores for both label sets (Work and Author) these\n# are the four classifiers I chose to graph for the minimum-size test\n\nGRAPH_CLASSIFIERS = [\n ('PassiveAggressive', pa),\n ('XGBoost',xgbl),\n ('SVM', svm),\n ('NearestCentroid', nc),\n]",
"_____no_output_____"
],
[
"dfs = []\nfor n in range(0,80,5):\n dfs.append(test_clfs(GRAPH_CLASSIFIERS, vecs, cols, 'Work', cutoff=n, samps=100))\nbywork = pd.concat(dfs)\nbywork",
"_____no_output_____"
],
[
"bywork",
"_____no_output_____"
],
[
"# How many works remain at each cutoff point?\n\ndicts = []\nfor n in range(0,80,10):\n dicts.append({'Cutoff':n, 'N':len(vecs[vecs.LEN >= n])})\nn_df = pd.DataFrame(dicts)\nn_df",
"_____no_output_____"
],
[
"%%R -i bywork,n_df -h 6 -w 6 -u in -r 144\nlibrary(ggplot2)\nlibrary(extrafont)\n\ncbbPaletteDark <- c(\"#009E73\", \"#e79f00\", \"#9ad0f3\", \"#0072B2\", \"#D55E00\", \n \"#CC79A7\", \"#F0E442\")\n\nggplot(data=bywork, aes(x=Cutoff, y=Score*100)) +\ngeom_label(\n data=n_df,\n label.size=NA,\n aes(x=Cutoff, y=53, label=sprintf(\"n=%d\",N)),\n family=\"Envy Code R\",\n size=3\n) +\ngeom_ribbon(\n aes(ymin=CILow*100, ymax=CIHigh*100, fill=Classifier), \n alpha=0.25, show.legend=FALSE) +\ngeom_line(aes(color=Classifier,linetype=Classifier), size=1.2) +\nguides(color = guide_legend(ncol=2)) +\nlabs(x=\"Minimum Poem Size (lines)\",y=expression(paste(\"% Accuracy (weighted \",F[1],\", mean of 100 trials + 95% CI)\"))) +\ntheme_bw() +\n\ntheme(\n text = element_text(size=9, family=\"Envy Code R\"),\n panel.grid.major=element_blank(),\n panel.grid.minor=element_blank(),\n legend.title=element_blank(),\n legend.position= c(0.25, 0.3),\n legend.text=element_text(size=8),\n axis.text.x = element_text(angle = 60, hjust = 1, vjust=1)\n) +\nscale_linetype_manual(values=c(\"solid\", \"solid\", \"twodash\", \"twodash\")) +\nscale_color_manual(values=cbbPaletteDark) +\nscale_fill_manual(values=cbbPaletteDark)\n\n# fn <- \"../paper/figures/poetic_acc_work.pdf\"\n# ggsave(fn, dpi=600, width=6, height=6, device=cairo_pdf)",
"_____no_output_____"
],
[
"dfs = []\nfor n in range(0,80,5):\n dfs.append(test_clfs(GRAPH_CLASSIFIERS, vecs, cols, 'Author', cutoff=n, samps=100))\nbyauth = pd.concat(dfs)\nbyauth",
"_____no_output_____"
],
[
"byauth",
"_____no_output_____"
],
[
"%%R -i byauth,n_df -h 6 -w 6 -u in -r 144\n\ncbbPaletteDark <- c(\"#009E73\", \"#e79f00\", \"#9ad0f3\", \"#0072B2\", \"#D55E00\", \n \"#CC79A7\", \"#F0E442\")\n\nggplot(data=byauth, aes(x=Cutoff, y=Score*100)) +\ngeom_label(\n data=n_df,\n label.size=NA,\n aes(x=Cutoff, y=85, label=sprintf(\"n=%d\",N)),\n family=\"Envy Code R\",\n size=3\n) +\ngeom_ribbon(\n aes(ymin=CILow*100, ymax=CIHigh*100, fill=Classifier), \n alpha=0.25, show.legend=FALSE) +\ngeom_line(aes(color=Classifier,linetype=Classifier), size=1.2) +\n\nlabs(x=\"Minimum Poem Size (lines)\",y=expression(paste(\"% Accuracy (weighted \",F[1],\", mean of 100 trials + 95% CI)\"))) +\ntheme_bw() +\nguides(color = guide_legend(ncol=2)) +\n\ntheme(\n text = element_text(size=9, family=\"Envy Code R\"),\n panel.grid.major=element_blank(),\n panel.grid.minor=element_blank(),\n legend.title=element_blank(),\n legend.position= c(0.25, 0.3),\n legend.text=element_text(size=8),\n axis.text.x = element_text(angle = 60, hjust = 1, vjust=1)\n) +\n\nscale_linetype_manual(values=c(\"solid\", \"solid\", \"twodash\", \"twodash\")) +\nscale_color_manual(values=cbbPaletteDark) +\nscale_fill_manual(values=cbbPaletteDark)\n\n# fn <- \"../paper/figures/poetic_acc_auth.pdf\"\n# ggsave(fn, dpi=600, width=6, height=6, device=cairo_pdf)",
"_____no_output_____"
]
],
[
[
"# Confusion Matrix\n\nFinally, it is often useful to draw a confusion matrix to see which labels are the most distinctive. Here I examine just the accuracy per work. The matrix itself is the mean of 100 `StratifiedShuffleSplit` trials with a 20% holdout.",
"_____no_output_____"
]
],
[
[
"conf_matrix_list_of_arrays = []\nX = vecs.drop(['Author','Work','Poem'], axis=1)\ny = vecs.Work\nnames = sorted(y.unique())\nsss = StratifiedShuffleSplit(n_splits=100, test_size=0.2, random_state=42)\nclf = make_pipeline(\n StandardScaler(),\n NearestCentroid(),\n)\n\nfor train_index, test_index in sss.split(X,y):\n X_train, X_test = X.iloc[train_index], X.iloc[test_index]\n y_train, y_test = y[train_index], y[test_index]\n clf.fit(X_train, y_train)\n conf_matrix = confusion_matrix(y_test, clf.predict(X_test))\n conf_matrix_list_of_arrays.append(conf_matrix)",
"_____no_output_____"
],
[
"conf_mat = np.mean(conf_matrix_list_of_arrays, axis=0)",
"_____no_output_____"
],
[
"# build the df for ggplot\ncm_pct = [x/x.sum()*100 for x in conf_mat]\nm = []\nnames = sorted(vecs['Work'].unique())\nfor y,arr in enumerate(cm_pct):\n for x,val in enumerate(arr):\n m.append({'x':names[x], 'y':names[y], 'val':val})\ncmdf = pd.DataFrame(m)\ncmdf.head()",
"_____no_output_____"
],
[
"%%R -i cmdf -h 7 -w 7 -u in -r 144\n\nlibrary(tidyverse)\nlibrary(extrafont)\n\ncmdf <- cmdf %>%\n mutate(x = factor(x, levels= unique(x)), # alphabetical order by default\n y = factor(y, levels = rev(unique(y)))) # force reverse alphabetical order\n \nggplot(cmdf, aes(x=x, y=y, fill=val)) +\n# slightly overlap the tiles to avoid a visible border line\ngeom_tile(width=1.01, height=1.01) +\nscale_fill_distiller(palette=\"Spectral\", direction=-1) +\nguides(fill='none') + # removing legend for `fill`\ntheme_minimal() +\n# force it to be square\ncoord_equal() +\n# supress output for 0\ngeom_text(\n aes(label=ifelse(round(val,digits=2)==0, \"\", round(val,digits=2))),\n color=\"black\",\n size=4,\n family=\"Envy Code R\") + \n\ntheme(\n text = element_text(size=16, family=\"Envy Code R\"),\n panel.grid.minor=element_blank(),\n panel.grid.major=element_blank(),\n panel.border=element_blank(),\n legend.title=element_blank(),\n axis.text.x = element_text(angle = 45, hjust = 1, vjust=0.95),\n plot.title = element_text(hjust = 0.5),\n plot.subtitle = element_text(hjust = 0.5)\n) +\nlabs(x=\"\", y=\"\")\n\n# fn <- \"../paper/figures/cm_poetics.pdf\"\n# ggsave(fn, dpi=600, width=7, height=7, device=cairo_pdf)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
eca1bc89b1084e2489deb5a3bf6f5b170a55abaf | 209,545 | ipynb | Jupyter Notebook | notebooks/exploratory/trouble_shooting_tidal_phase_bias.ipynb | kuechenrole/tidal_melting | b71eec6aa502e1eb0570e9fc4a9d0170aa4dc24b | [
"MIT"
] | 2 | 2020-08-11T09:23:27.000Z | 2021-04-25T01:30:07.000Z | notebooks/exploratory/trouble_shooting_tidal_phase_bias.ipynb | kuechenrole/tidal_melting | b71eec6aa502e1eb0570e9fc4a9d0170aa4dc24b | [
"MIT"
] | 4 | 2020-03-30T20:12:46.000Z | 2021-06-01T22:02:55.000Z | notebooks/exploratory/trouble_shooting_tidal_phase_bias.ipynb | kuechenrole/tidal_melting | b71eec6aa502e1eb0570e9fc4a9d0170aa4dc24b | [
"MIT"
] | 2 | 2019-09-29T16:40:30.000Z | 2021-04-25T01:30:09.000Z | 73.190709 | 21,792 | 0.582577 | [
[
[
"import datetime\nimport numpy as np\nimport sys\nimport os\nimport xarray as xr\nfrom dotenv import load_dotenv, find_dotenv\n\n# find .env automagically by walking up directories until it's found\ndotenv_path = find_dotenv()\nload_dotenv(dotenv_path)\n\nsys.path.append(os.environ.get('srcdir'))\n\n# always reload modules marked with \"%aimport\"\n%load_ext autoreload\n%autoreload 1\n\nimport features.compare_atg as atg\n%aimport features.compare_atg",
"_____no_output_____"
],
[
"file_path = os.path.join(os.environ.get('rawdir'),'waom10_v2.0_small','ocean_his_hourly_0001.nc')\nzeta = xr.open_dataset(file_path).zeta.sel(ocean_time=slice('2007-1-15','2007-2-15'))",
"_____no_output_____"
],
[
"file_path = os.path.join(os.environ.get('rawdir'),'gdata','waom10_v2.0_frc','waom10_small_grd.nc')\ngrid = xr.open_dataset(file_path)",
"_____no_output_____"
],
[
"sta,rms = atg.compare_atg(zeta,grid,stime=datetime.datetime(2007,1,15),station_list=np.arange(109))",
"stime = 2007-01-15 00:00:00 constits = ['M2', 'O1'] stations = [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17\n 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35\n 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53\n 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71\n 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89\n 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107\n 108]\n"
],
[
"data",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\natg_M2_phases = []\nroms_M2_phases = []\natg_O1_phases = []\nroms_O1_phases = []\nfor station,data in sta.items():\n atg_M2_phases.append(data['atg'][\"M2\"][1])\n roms_M2_phases.append(data['tt']['M2'][2])\n atg_O1_phases.append(data['atg'][\"O1\"][1])\n roms_O1_phases.append(data['tt']['O1'][2])",
"_____no_output_____"
],
[
"plt.close()\nplt.plot(atg_M2_phases,'r.',label='atg')\nplt.plot(roms_M2_phases,'b.',label='roms')\nplt.show()\nplt.plot(atg_O1_phases,'r.',label='atg')\nplt.plot(roms_O1_phases,'b.',label='roms')\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"len(roms_M2_phases) == len(atg_M2_phases)",
"_____no_output_____"
],
[
"diff_M2 = np.array(roms_M2_phases) - np.array(atg_M2_phases)\ndiff_O1 = np.array(roms_O1_phases) - np.array(atg_O1_phases)\nplt.close()\nplt.title('Phase difference at ATG stations (ROMS - ATG) in deg (upper M2, lower O1)',fontsize=16)\nfor diff in [diff_M2,diff_O1]:\n diff[diff>180.0]-=360\n diff[diff<-180.0]+=360\n plt.plot(diff,'r.')\n plt.xlabel('station')\n plt.ylabel('roms phase lag to atg in deg')\n plt.show()\n print('mean:',np.nanmean(diff),'; std: ',np.nanstd(diff),'; rms: ',np.sqrt(np.nanmean(diff**2)))",
"/home/ubuntu/bigStick/anaconda3/envs/tidal_melting/lib/python3.6/site-packages/ipykernel_launcher.py:6: RuntimeWarning: invalid value encountered in greater\n \n/home/ubuntu/bigStick/anaconda3/envs/tidal_melting/lib/python3.6/site-packages/ipykernel_launcher.py:7: RuntimeWarning: invalid value encountered in less\n import sys\n"
],
[
"signal = ds.zeta.isel(eta_rho=100,xi_rho=100)\nsignal.ocean_time[[0,-1]]",
"_____no_output_____"
],
[
"import ttide as tt\nttout = tt.t_tide?",
"_____no_output_____"
],
[
"ttout = tt.t_tide(signal.values,stime=datetime.datetime(2007,1,1),lat=-71,constitnames=['M2'])",
"-----------------------------------\nnobs = 1547 \nngood = 1547 \nrecord length (days) = 64.46\nstart time: 2007-01-01 00:00:00\nrayleigh criterion = 1.0\n\nGreenwich phase computed with nodal\n corrections applied to amplitude\n and phase relative to center time\n\nx0= -0.269 xtrend= 0\nvar(data)= 0.01 var(prediction)= 0.00 var(residual)= 0.00\nvar(prediction)/var(data) (%) = 33.7\n\n tidal amplitude and phase with 95 % CI estimates\n tide freq amp amp_err pha pha_err snr\n* M2 0.0805114 0.0657 0.025 222.74 20.16 7\n"
],
[
"import scipy.io as sio\nsio.savemat(\"test_signal\",{'signal':signal.values})",
"_____no_output_____"
],
[
"datetime.datetime.fromordinal(733043)",
"_____no_output_____"
],
[
"from datetime import datetime, timedelta\n\nmatlab_datenum = 733043\npython_datetime = datetime.fromordinal(int(matlab_datenum)) + timedelta(days=matlab_datenum%1) - timedelta(days = 366)",
"_____no_output_____"
],
[
"python_datetime",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca1c8f3148c9d50279fdce4bb5eb84100ce58f0 | 36,405 | ipynb | Jupyter Notebook | VacationPy.ipynb | angel-2022cal/World_Weather_Analysis | 01fe841c35a9ca0271824c5b5ed4904b329f7fb2 | [
"MIT"
] | null | null | null | VacationPy.ipynb | angel-2022cal/World_Weather_Analysis | 01fe841c35a9ca0271824c5b5ed4904b329f7fb2 | [
"MIT"
] | null | null | null | VacationPy.ipynb | angel-2022cal/World_Weather_Analysis | 01fe841c35a9ca0271824c5b5ed4904b329f7fb2 | [
"MIT"
] | null | null | null | 30.877863 | 134 | 0.406455 | [
[
[
"# Import the dependencies.\nimport pandas as pd\nimport gmaps\nimport requests\n# Import the API key.\nfrom config import g_key",
"_____no_output_____"
],
[
"# Store the CSV you saved created in part one into a DataFrame.\ncity_data_df = pd.read_csv(\"weather_data/cities.csv\")\ncity_data_df.head()",
"_____no_output_____"
],
[
"# Get the data types.\ncity_data_df.dtypes",
"_____no_output_____"
],
[
"# Configure gmaps to use your Google API key.\ngmaps.configure(api_key=g_key)",
"_____no_output_____"
],
[
"\n# Heatmap of temperature\n# Get the latitude and longitude.\nlocations = city_data_df[[\"Lat\", \"Lng\"]]\n# Get the maximum temperature.\nmax_temp = city_data_df[\"Max Temp\"]\n# Assign the figure variable.\nfig = gmaps.figure()\n# Assign the heatmap variable.\nheat_layer = gmaps.heatmap_layer(locations, weights=[max(temp, 0) for temp in max_temp]) #[max(0,temp) for temp in max_temp])\n# Add the heatmap layer.\nfig.add_layer(heat_layer)\n# Call the figure to plot the data.\nfig",
"_____no_output_____"
],
[
"fig = gmaps.figure(center=(30.0, 31.0), zoom_level=1.5)",
"_____no_output_____"
],
[
"# Heatmap of \nlocations = city_data_df[[\"Lat\", \"Lng\"]]\nhumidity = city_data_df[\"Humidity\"]\nfig = gmaps.figure(center=(30.0, 31.0), zoom_level=1.5)\nheat_layer = gmaps.heatmap_layer(locations, weights=humidity, dissipating=False, max_intensity=300, point_radius=4)\n\nfig.add_layer(heat_layer)\n# Call the figure to plot the data.\nfig",
"_____no_output_____"
],
[
"# Heatmap of percent humidity\nlocations = city_data_df[[\"Lat\", \"Lng\"]]\nhumidity = city_data_df[\"Humidity\"]\nfig = gmaps.figure(center=(30.0, 31.0), zoom_level=1.5)\nheat_layer = gmaps.heatmap_layer(locations, weights=humidity, dissipating=False, max_intensity=300, point_radius=4)\n\nfig.add_layer(heat_layer)\n# Call the figure to plot the data.\nfig",
"_____no_output_____"
],
[
"# Heatmap of percent Cloudiness\n# Get the latitude and longitude.\nlocations = city_data_df[[\"Lat\", \"Lng\"]]\n# Get the percent Cloudiness.\nclouds = city_data_df[\"Cloudiness\"]\n# Assign the figure variable.\nfig = gmaps.figure()\n# Assign the heatmap variable.\nheat_layer = gmaps.heatmap_layer(locations, weights=clouds, dissipating=False, max_intensity=300, point_radius=4)\n# Add the heatmap layer.\nfig.add_layer(heat_layer)\n# Call the figure to plot the data.\nfig",
"_____no_output_____"
],
[
"# Heatmap of temperature\n# Get the latitude and longitude.\nlocations = city_data_df[[\"Lat\", \"Lng\"]]\n# Get the Wind Speed.\nwind = city_data_df[\"Wind Speed\"]\n# Assign the figure variable.\nfig = gmaps.figure()\n# Assign the heatmap variable.\nheat_layer = gmaps.heatmap_layer(locations, weights=wind, dissipating=False, max_intensity=300, point_radius=4)\n# Add the heatmap layer.\nfig.add_layer(heat_layer)\n# Call the figure to plot the data.\nfig",
"_____no_output_____"
],
[
"# Ask the customer to add a minimum and maximum temperature value.\nmin_temp = float(input(\"What is the minimum temperature you would like for your trip? \"))\nmax_temp = float(input(\"What is the maximum temperature you would like for your trip? \"))",
"What is the minimum temperature you would like for your trip? 50\nWhat is the maximum temperature you would like for your trip? 90\n"
],
[
"# Filter the dataset to find the cities that fit the criteria.\npreferred_cities_df = city_data_df.loc[(city_data_df[\"Max Temp\"] <= max_temp) & \\\n (city_data_df[\"Max Temp\"] >= min_temp)]\npreferred_cities_df.head(10)",
"_____no_output_____"
],
[
"preferred_cities_df.count()",
"_____no_output_____"
],
[
"# Create DataFrame called hotel_df to store hotel names along with city, country, max temp, and coordinates.\nhotel_df = preferred_cities_df[[\"City\", \"Country\", \"Max Temp\", \"Lat\", \"Lng\"]].copy()\nhotel_df[\"Hotel Name\"] = \"\"\nhotel_df.head(10)",
"_____no_output_____"
],
[
"# Set parameters to search for a hotel.\nparams = {\n \"radius\": 5000,\n \"type\": \"lodging\",\n \"key\": g_key\n}\n# Iterate through the DataFrame.\nfor index, row in hotel_df.iterrows():\n # Get the latitude and longitude.\n lat = row[\"Lat\"]\n lng = row[\"Lng\"]\n\n # Add the latitude and longitude to the params dictionary as values to the location key.\n params[\"location\"] = f\"{lat},{lng}\"\n\n # Use the search term: \"lodging\" and our latitude and longitude.\n base_url = \"https://maps.googleapis.com/maps/api/place/nearbysearch/json\"\n # Make request and get the JSON data from the search.\n hotels = requests.get(base_url, params=params).json()\n# Grab the first hotel from the results and store the name.\n try:\n hotel_df.loc[index, \"Hotel Name\"] = hotels[\"results\"][0][\"name\"]\n except (IndexError):\n print(\"Hotel not found... skipping.\")",
"Hotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\n"
],
[
"hotel_df.head(10)",
"_____no_output_____"
],
[
"# Add a heatmap of temperature for the vacation spots.\nlocations = hotel_df[[\"Lat\", \"Lng\"]]\nmax_temp = hotel_df[\"Max Temp\"]\nfig = gmaps.figure(center=(30.0, 31.0), zoom_level=1.5)\nheat_layer = gmaps.heatmap_layer(locations, weights=max_temp, dissipating=False,\n max_intensity=300, point_radius=4)\n\nfig.add_layer(heat_layer)\n# Call the figure to plot the data.\nfig",
"_____no_output_____"
],
[
"# Add a heatmap of temperature for the vacation spots and marker for each city.\nlocations = hotel_df[[\"Lat\", \"Lng\"]]\nmax_temp = hotel_df[\"Max Temp\"]\nfig = gmaps.figure(center=(30.0, 31.0), zoom_level=1.5)\nheat_layer = gmaps.heatmap_layer(locations, weights=max_temp, \n dissipating=False, max_intensity=300, point_radius=4)\nmarker_layer = gmaps.marker_layer(locations)\nfig.add_layer(heat_layer)\nfig.add_layer(marker_layer)\n# Call the figure to plot the data.\nfig",
"_____no_output_____"
],
[
"info_box_template = \"\"\"\n<dl>\n<dt>Hotel Name</dt><dd>{Hotel Name}</dd>\n<dt>City</dt><dd>{City}</dd>\n<dt>Country</dt><dd>{Country}</dd>\n<dt>Max Temp</dt><dd>{Max Temp} °F</dd>\n</dl>\n\"\"\"\n# Store the DataFrame Row.\nhotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]",
"_____no_output_____"
],
[
"# Add a heatmap of temperature for the vacation spots and a pop-up marker for each city.\nlocations = hotel_df[[\"Lat\", \"Lng\"]]\nmax_temp = hotel_df[\"Max Temp\"]\nfig = gmaps.figure(center=(30.0, 31.0), zoom_level=1.5)\nheat_layer = gmaps.heatmap_layer(locations, weights=max_temp,dissipating=False,\n max_intensity=300, point_radius=4)\nmarker_layer = gmaps.marker_layer(locations, info_box_content=hotel_info)\nfig.add_layer(heat_layer)\nfig.add_layer(marker_layer)\n\n# Call the figure to plot the data.\nfig",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca1cec816ad86757e7ada6dec070fb42816fa7e | 7,818 | ipynb | Jupyter Notebook | 06. Binary Serach.ipynb | saugatapaul1010/Python-Snippets-for-Quick-Reference | ba3938b80bb0f14de72f42a8347f427b076911f0 | [
"MIT"
] | null | null | null | 06. Binary Serach.ipynb | saugatapaul1010/Python-Snippets-for-Quick-Reference | ba3938b80bb0f14de72f42a8347f427b076911f0 | [
"MIT"
] | null | null | null | 06. Binary Serach.ipynb | saugatapaul1010/Python-Snippets-for-Quick-Reference | ba3938b80bb0f14de72f42a8347f427b076911f0 | [
"MIT"
] | null | null | null | 18.884058 | 85 | 0.347148 | [
[
[
"import numpy as np\nimport random\n\nl = list(range(100))\nrandom.shuffle(l)\n\nl",
"_____no_output_____"
],
[
"# search for an elemnt q in the list: O(n) where n is the length of the list\nq = 31\nisFound=False;\nfor ele in l:\n if ele==31:\n print(\"Found\")\n isFound=True\n break;\nif isFound == False:\n print(\"Not Found\")\n \n \n",
"Found\n"
],
[
"#What if the list is sorted? Can we search faster?\n# Show O(log n)\n\nimport math\n\n#Source: http://www.geeksforgeeks.org/binary-search/ \n#Returns index of x in arr if present, else -1\ndef binarySearch (arr, l, r, x):\n \n # Check base case\n if r >= l:\n \n mid = l + math.floor((r - l)/2)\n \n # If element is present at the middle itself\n if arr[mid] == x:\n return mid\n \n # If element is smaller than mid, then it can only\n # be present in left subarray\n elif arr[mid] > x:\n return binarySearch(arr, l, mid-1, x)\n \n # Else the element can only be present in right subarray\n else:\n return binarySearch(arr, mid+1, r, x)\n \n else:\n # Element is not present in the array\n return -1\n\n\nl.sort();\narr = l;\nq =31;\nbinarySearch(arr,0,len(arr)-1,q)\n\n",
"_____no_output_____"
],
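[
"# Illustrative aside, not in the original notebook: the standard library's bisect\n# module gives the same O(log n) lookup on a sorted list.\nimport bisect\ni = bisect.bisect_left(arr, q)\nprint(i if i < len(arr) and arr[i] == q else -1)",
"_____no_output_____"
],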
[
"# Find elements common in two lists:\nl1 = list(range(100))\nrandom.shuffle(l1)\n\n\nl2 = list(range(50))\nrandom.shuffle(l2)\n\n# find common elements : O(n*m)\ncnt=0;\nfor i in l1:\n for j in l2:\n if i==j:\n print(i)\n cnt += 1;\nprint(\"Number of common elements:\", cnt) ",
"43\n44\n46\n38\n47\n35\n45\n36\n34\n41\n4\n28\n18\n19\n40\n29\n12\n7\n33\n23\n1\n30\n42\n20\n8\n6\n24\n22\n14\n27\n48\n26\n13\n39\n17\n11\n0\n25\n37\n10\n49\n2\n9\n31\n21\n32\n16\n3\n5\n15\nNumber of common elements: 50\n"
],
[
"# Find elements common in two lists:\nl1 = list(range(100))\nrandom.shuffle(l1)\n\n\nl2 = list(range(50))\nrandom.shuffle(l2)\n\n# find common elemnts in lists in O(n) time and O(m) space if m<n\n\n## add all elements in the smallest list into a hashtable/Dict: O(m) space\nsmallList = {}\nfor ele in l2:\n smallList[ele] = 1; # any value is OK. Key is important\n \n# Now find common element \ncnt=0;\nfor i in l1:\n if smallList.get(i) != None: # search happens in constant time.\n print(i);\n cnt += 1;\nprint(\"Number of common elements:\", cnt) ",
"21\n44\n17\n30\n23\n2\n15\n37\n24\n29\n45\n10\n32\n7\n19\n36\n47\n14\n41\n16\n5\n42\n34\n33\n39\n0\n31\n1\n27\n35\n22\n8\n46\n20\n18\n13\n25\n3\n26\n12\n40\n49\n4\n6\n11\n9\n38\n28\n48\n43\nNumber of common elements: 50\n"
]
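,
[
"# Illustrative aside, not in the original notebook: Python's built-in set type gives\n# the same result in roughly O(n + m) time.\ncommon = set(l1) & set(l2)\nprint('Number of common elements:', len(common))",
"_____no_output_____"
]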
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
eca1d2a73b19bd24666a5223b737ea5abb14064f | 14,584 | ipynb | Jupyter Notebook | concurrency/subprocess.ipynb | scotthuang1989/Python-3-Module-of-the-Week | 5f45f4602f084c899924ebc9c6b0155a6dc76f56 | [
"Apache-2.0"
] | 2 | 2018-09-17T05:52:12.000Z | 2021-11-09T17:19:29.000Z | concurrency/subprocess.ipynb | scotthuang1989/Python-3-Module-of-the-Week | 5f45f4602f084c899924ebc9c6b0155a6dc76f56 | [
"Apache-2.0"
] | null | null | null | concurrency/subprocess.ipynb | scotthuang1989/Python-3-Module-of-the-Week | 5f45f4602f084c899924ebc9c6b0155a6dc76f56 | [
"Apache-2.0"
] | 2 | 2017-10-18T09:01:27.000Z | 2018-08-22T00:41:22.000Z | 26.089445 | 758 | 0.553415 | [
[
[
"The `subprocess` module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.\n\n",
"_____no_output_____"
],
[
"## Running External Command",
"_____no_output_____"
]
],
[
[
"import subprocess",
"_____no_output_____"
],
[
"completed = subprocess.run(['ls', '-l'])\ncompleted",
"_____no_output_____"
]
],
[
[
"### Capturing Output",
"_____no_output_____"
],
[
"The standard input and output channels for the process started by run() are bound to the parent’s input and output. That means the calling program cannot capture the output of the command. Pass PIPE for the stdout and stderr arguments to capture the output for later processing.",
"_____no_output_____"
]
],
[
[
"completed = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)\ncompleted",
"_____no_output_____"
]
],
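[
[
"# Illustrative sketch, not in the original notebook: the captured output is a bytes\n# object, so decode it before any further text processing.\nprint(completed.returncode)\nprint(completed.stdout.decode('utf-8'))",
"_____no_output_____"
]
],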
[
[
"### Suppressing Output",
"_____no_output_____"
],
[
"For cases where the output should not be shown or captured, use DEVNULL to suppress an output stream. This example suppresses both the standard output and error streams.",
"_____no_output_____"
]
],
[
[
"import subprocess\n\ntry:\n completed = subprocess.run(\n 'echo to stdout; echo to stderr 1>&2; exit 1',\n shell=True,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )\nexcept subprocess.CalledProcessError as err:\n print('ERROR:', err)\nelse:\n print('returncode:', completed.returncode)\n print('stdout is {!r}'.format(completed.stdout))\n print('stderr is {!r}'.format(completed.stderr))",
"returncode: 1\nstdout is None\nstderr is None\n"
]
],
[
[
"### Execute on shell",
"_____no_output_____"
],
[
"setting the shell argument to a true value causes subprocess to spwan an intermediate shell process which runs the command. the default is to run the command directly",
"_____no_output_____"
]
],
[
[
"import subprocess\ncompleted = subprocess.run('echo $HOME', shell=True, stdout=subprocess.PIPE)\ncompleted",
"_____no_output_____"
]
],
[
[
"if you don't run this command on a shell, this is a error, because **HOME** is not defined",
"_____no_output_____"
]
],
[
[
"import subprocess\ntry:\n completed = subprocess.run('echo $HOME', stdout=subprocess.PIPE)\nexcept:\n print(\"Get Error if don't execute on shell\")",
"Get Error if don't execute on shell\n"
]
],
[
[
"## Working with Pipes Directly",
"_____no_output_____"
],
[
"The functions run(), call(), check_call(), and check_output() are wrappers around the Popen class. Using Popen directly gives more control over how the command is run, and how its input and output streams are processed. For example, by passing different arguments for stdin, stdout, and stderr it is possible to mimic the variations of os.popen().",
"_____no_output_____"
],
[
"### One-way communication With a process",
"_____no_output_____"
]
],
[
[
"import subprocess\nprint(\"read:\")\nproc = subprocess.Popen(['echo', '\"to stdout\"'],\n stdout = subprocess.PIPE)\nstdout_value = proc.communicate()[0].decode(\"utf-8\")\nprint('stdout', repr(stdout_value))",
"read:\nstdout '\"to stdout\"\\n'\n"
]
],
[
[
"## Connecting Segments of a Pipe",
"_____no_output_____"
]
],
[
[
"import subprocess\n\ncat = subprocess.Popen(\n ['cat', 'index.rst'],\n stdout=subprocess.PIPE,\n)\n\ngrep = subprocess.Popen(\n ['grep', '.. literalinclude::'],\n stdin=cat.stdout,\n stdout=subprocess.PIPE,\n)\n\ncut = subprocess.Popen(\n ['cut', '-f', '3', '-d:'],\n stdin=grep.stdout,\n stdout=subprocess.PIPE,\n)\n\nend_of_pipe = cut.stdout\n\nprint('Included files:')\nfor line in end_of_pipe:\n print(line.decode('utf-8').strip())",
"Included files:\n"
]
],
[
[
"## Interacting with Another Command",
"_____no_output_____"
],
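[
"# Illustrative sketch (this section has no example in the notebook): send data to a\n# child process over stdin and read its stdout in one step with communicate().\nproc = subprocess.Popen(\n    ['cat', '-'],\n    stdin=subprocess.PIPE,\n    stdout=subprocess.PIPE,\n)\nstdout_value, _ = proc.communicate('through stdin to stdout'.encode('utf-8'))\nprint('pass through:', repr(stdout_value.decode('utf-8')))",
"_____no_output_____"
],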
[
"## Signaling Between Processes",
"_____no_output_____"
],
[
"The process management examples for the os module include a demonstration of signaling between processes using os.fork() and os.kill(). Since each Popen instance provides a pid attribute with the process id of the child process, it is possible to do something similar with subprocess. The next example combines two scripts. This child process sets up a signal handler for the USR signal.\n\n",
"_____no_output_____"
]
],
[
[
"# %load signal_child.py\nimport os\nimport signal\nimport time\nimport sys\n\npid = os.getpid()\nreceived = False\n\n\ndef signal_usr1(signum, frame):\n \"Callback invoked when a signal is received\"\n global received\n received = True\n print('CHILD {:>6}: Received USR1'.format(pid))\n sys.stdout.flush()\n\n\nprint('CHILD {:>6}: Setting up signal handler'.format(pid))\nsys.stdout.flush()\nsignal.signal(signal.SIGUSR1, signal_usr1)\nprint('CHILD {:>6}: Pausing to wait for signal'.format(pid))\nsys.stdout.flush()\ntime.sleep(3)\n\nif not received:\n print('CHILD {:>6}: Never received signal'.format(pid))",
"_____no_output_____"
],
[
"# %load signal_parent.py\nimport os\nimport signal\nimport subprocess\nimport time\nimport sys\n\nproc = subprocess.Popen(['python3', 'signal_child.py'])\nprint('PARENT : Pausing before sending signal...')\nsys.stdout.flush()\ntime.sleep(1)\nprint('PARENT : Signaling child')\nsys.stdout.flush()\nos.kill(proc.pid, signal.SIGUSR1)",
"_____no_output_____"
],
[
"!python signal_parent.py",
"PARENT : Pausing before sending signal...\nCHILD 19430: Setting up signal handler\nCHILD 19430: Pausing to wait for signal\nPARENT : Signaling child\nCHILD 19430: Received USR1\n"
]
],
[
[
"## Process Groups / Session",
"_____no_output_____"
],
[
"If the process created by Popen spawns sub-processes, those children will not receive any signals sent to the parent. That means when using the shell argument to Popen it will be difficult to cause the command started in the shell to terminate by sending SIGINT or SIGTERM.",
"_____no_output_____"
]
],
[
[
"import os\nimport signal\nimport subprocess\nimport tempfile\nimport time\nimport sys\n\nscript = '''#!/bin/sh\necho \"Shell script in process $$\"\nset -x\npython3 signal_child.py\n'''\nscript_file = tempfile.NamedTemporaryFile('wt')\nscript_file.write(script)\nscript_file.flush()\n\nproc = subprocess.Popen(['sh', script_file.name])\nprint('PARENT : Pausing before signaling {}...'.format(\n proc.pid))\nsys.stdout.flush()\ntime.sleep(1)\nprint('PARENT : Signaling child {}'.format(proc.pid))\nsys.stdout.flush()\nos.kill(proc.pid, signal.SIGUSR1)\ntime.sleep(3)",
"PARENT : Pausing before signaling 20004...\nPARENT : Signaling child 20004\n"
]
],
[
[
"The pid used to send the signal does not match the pid of the child of the shell script waiting for the signal, because in this example there are three separate processes interacting:\n\n* The program subprocess_signal_parent_shell.py\n* The shell process running the script created by the main python program\n* The program signal_child.py",
"_____no_output_____"
],
[
"To send signals to descendants without knowing their process id, use a process group to associate the children so they can be signaled together. The process group is created with os.setpgrp(), which sets process group id to the process id of the current process. All child processes inherit their process group from their parent, and since it should only be set in the shell created by Popen and its descendants, os.setpgrp() should not be called in the same process where the Popen is created. Instead, the function is passed to Popen as the preexec_fn argument so it is run after the fork() inside the new process, before it uses exec() to run the shell. To signal the entire process group, use os.killpg() with the pid value from the Popen instance.",
"_____no_output_____"
]
],
[
[
"import os\nimport signal\nimport subprocess\nimport tempfile\nimport time\nimport sys\n\n\ndef show_setting_prgrp():\n print('Calling os.setpgrp() from {}'.format(os.getpid()))\n os.setpgrp()\n print('Process group is now {}'.format(\n os.getpid(), os.getpgrp()))\n sys.stdout.flush()\n\n\nscript = '''#!/bin/sh\necho \"Shell script in process $$\"\nset -x\npython3 signal_child.py\n'''\nscript_file = tempfile.NamedTemporaryFile('wt')\nscript_file.write(script)\nscript_file.flush()\n\nproc = subprocess.Popen(\n ['sh', script_file.name],\n preexec_fn=show_setting_prgrp,\n)\nprint('PARENT : Pausing before signaling {}...'.format(\n proc.pid))\nsys.stdout.flush()\ntime.sleep(1)\nprint('PARENT : Signaling process group {}'.format(\n proc.pid))\nsys.stdout.flush()\nos.killpg(proc.pid, signal.SIGUSR1)\ntime.sleep(3)",
"Calling os.setpgrp() from 20382\nProcess group is now 20382\nPARENT : Pausing before signaling 20382...\nPARENT : Signaling process group 20382\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
eca20466e1b6342d5838d15ffc74a75b5f63ec44 | 1,483 | ipynb | Jupyter Notebook | examples/reference/elements/plotly/HLine.ipynb | ppwadhwa/holoviews | e8e2ec08c669295479f98bb2f46bbd59782786bf | [
"BSD-3-Clause"
] | 864 | 2019-11-13T08:18:27.000Z | 2022-03-31T13:36:13.000Z | examples/reference/elements/plotly/HLine.ipynb | ppwadhwa/holoviews | e8e2ec08c669295479f98bb2f46bbd59782786bf | [
"BSD-3-Clause"
] | 1,117 | 2019-11-12T16:15:59.000Z | 2022-03-30T22:57:59.000Z | examples/reference/elements/plotly/HLine.ipynb | ppwadhwa/holoviews | e8e2ec08c669295479f98bb2f46bbd59782786bf | [
"BSD-3-Clause"
] | 180 | 2019-11-19T16:44:44.000Z | 2022-03-28T22:49:18.000Z | 23.171875 | 167 | 0.548213 | [
[
[
"#### **Title**: HLine Element\n\n**Dependencies**: Plotly\n\n**Backends**: [Bokeh](../bokeh/HLine.ipynb), [Matplotlib](../matplotlib/HLine.ipynb), [Plotly](./HLine.ipynb)",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport holoviews as hv\nfrom holoviews import opts\nhv.extension('plotly')",
"_____no_output_____"
]
],
[
[
"The ``HLine`` element is a type of annotation that marks a position along the y-axis. Here is an ``HLine`` element that marks the mean of a points distributions:",
"_____no_output_____"
]
],
[
[
"xs = np.random.normal(size=100)\nys = np.random.normal(size=100) * xs\noverlay = hv.Points((xs,ys)) * hv.HLine(ys.mean())\noverlay.opts(\n opts.HLine(line_color='blue', line_width=6), \n opts.Points(color='gray'))",
"_____no_output_____"
]
],
[
[
"For full documentation and the available style and plot options, use ``hv.help(hv.HLine).``",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
eca207cc637ce40ce7867f4e1682744c4bf52ada | 216,849 | ipynb | Jupyter Notebook | src/reddit/.ipynb_checkpoints/Reddit API-checkpoint.ipynb | berkeley-politics-capstone/politics-capstone | 503efa66a930dbf913bad67daa4b38876586949c | [
"MIT"
] | 1 | 2020-07-18T20:25:58.000Z | 2020-07-18T20:25:58.000Z | src/reddit/.ipynb_checkpoints/Reddit API-checkpoint.ipynb | berkeley-politics-capstone/politics-capstone | 503efa66a930dbf913bad67daa4b38876586949c | [
"MIT"
] | null | null | null | src/reddit/.ipynb_checkpoints/Reddit API-checkpoint.ipynb | berkeley-politics-capstone/politics-capstone | 503efa66a930dbf913bad67daa4b38876586949c | [
"MIT"
] | null | null | null | 46.049904 | 18,636 | 0.465494 | [
[
[
"# Reddit Datasets\n\n## Using [Pushshift](https://github.com/pushshift/api) to download Reddit data\n\nThere are currently 7 Reddit datasets available to download:\n\n### Article Text Parsing\n* `reddit_2016_05_31.pkl`: Contains the post id, number of comments, karma score, subreddit, number of subscribers in that subreddit, title of post link, and url of article links posted to /r/politics and a couple of the specific Republican candidates' subreddits from January 1, 2015 to May 31, 2016 (Trump won delegate majority in late May 2016).\n* `reddit_2019_06_15.pkl`: Contains the post id, number of comments, karma score, subreddit, number of subscribers in that subreddit, title of post link, and url of article links posted to /r/politics and a couple of the specific Democratic candidates; subreddits from January 1, 2019 to June 15, 2019\n\n### Headline mentions\n* `reddit_headline_counts.csv`: Contains how many times a candidate's name was mentioned in a /r/politics, /r/news, or /r/worldnews post from January 1, 2019 to June 2019.\n\nThose three files above can be found at https://berkeley-politics-capstone.s3.amazonaws.com/reddit.zip\n\n### Article Text Parsing Supplements\n* https://berkeley-politics-capstone.s3.amazonaws.com/reddit_2016_dates.pkl : Used with the 2016 Article Text Parsing data to grab the dates of the article from Reddit\n* https://berkeley-politics-capstone.s3.amazonaws.com/reddit_2019_dates.pkl : Used with the 2019 Article Text Parsing data to grab the dates of the article from Reddit\n\n### Reddit comments\n\n* https://berkeley-politics-capstone.s3.amazonaws.com/reddit_2016_comments.pkl : Comments left in subreddits that were gathered from `reddit_2016_05_31.pkl` (/r/politics and the subreddits for certain Republican candidates)\n\n* https://berkeley-politics-capstone.s3.amazonaws.com/reddit_2019_comments.pkl : Comments left in subreddits that were gathered from `reddit_2019_06_15.pkl` (/r/politics and the subreddits for certain Democratic candidates)\n\n### How to use the data\n\n* Within the data folder, make a new folder called Reddit and place the files in there.\n* The pickled files are pandas data frames when unpickled, so use the following command: `df = pd.read_pickle(file)`",
"_____no_output_____"
]
],
[
[
"# Libraries\n\nimport requests\nimport praw\nimport praw.models\nimport configparser\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport nltk\nimport json\nimport os\nimport html\nfrom bs4 import BeautifulSoup\nfrom markdown import markdown\nfrom datetime import datetime, timedelta, date\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.linear_model import LinearRegression\nfrom nltk.sentiment.vader import SentimentIntensityAnalyzer as SIA",
"/Users/kanithamann/anaconda/lib/python3.6/site-packages/nltk/twitter/__init__.py:20: UserWarning: The twython library has not been installed. Some functionality from the twitter package will not be available.\n warnings.warn(\"The twython library has not been installed. \"\n"
]
],
[
[
"## Pushshift API - a free API with no credentials needed\n\nPushshift API is more useful to gather aggregate data of a metric in a subreddit, such as number of times a query is mentioned, or to gather a field that meets certain conditions. Additionally, Pushshift is free to use and does not require any credentials.\n\nSee https://www.reddit.com/r/pushshift/comments/bcxguf/new_to_pushshift_read_this_faq/ for more helpful information.",
"_____no_output_____"
],
[
"### List of URLs\n\nMost of the time in /r/politics, users submit link posts to articles. We'd like to gather a list of these articles so that we may use the `news-please` library to grab the text and subsequently do NLP on it. To get this list, I've used Pushshift and used the following call: \n\nhttps://api.pushshift.io/reddit/submission/search/?subreddit=politics&after=2019-06-01&before=2019-06-10&is_self=false&filter=title,subreddit,url,score,num_comments,subreddit_subscribers,id&limit=1000\n\nNote about subreddits: Michael Bennet, Steve Bullock, Julián Castro, Bill de Blasio, John Delaney, John Hickenlooper, Seth Moulton, Tim Ryan, Eric Swalwell, Marianne Williamson, and Andrew Yang are candidates who do not have dedicated subreddits as of May 30, 2019.",
"_____no_output_____"
]
],
[
[
"# Initial step: Creating the list\n\noutput = pd.DataFrame(columns=['id','num_comments','score','subreddit','subreddit_subscribers','title','url'])",
"_____no_output_____"
],
[
"# Set the date range\n\nstart_date = date(2016, 5, 1)\n\n#day_count = (date(2019, 6, 15) - start_date).days + 1 # The last time I pulled the 2019 dataset together\n\nday_count = (date(2016, 5, 30) - start_date).days + 1 # For use with the 2015 dataset\n\n\n# Set the subreddits to go through\n\nsubreddits = {'Joe Biden': 'JoeBiden',\n 'Cory Booker': 'corybooker',\n 'Pete Buttigieg': 'Pete_Buttigieg',\n 'Tulsi Gabbard': 'tulsi',\n 'Kirsten Gillibrand': 'Kirsten_Gillibrand',\n 'Mike Gravel': 'gravelforpresident',\n 'Kamala Harris': 'Kamala',\n 'Jay Inslee': 'inslee2020',\n 'Amy Klobuchar': 'BaemyKlobaechar',\n 'Beto O\\'Rourke': 'Beto2020',\n 'Bernie Sanders': 'SandersForPresident',\n 'Donald Trump': 'The_Donald',\n 'Elizabeth Warren': 'ElizabethWarren',\n 'politics': 'politics'}\n\n# Subreddits for 2016 candidates\n\nsubreddits_2016 = {'Donald Trump': 'The_Donald',\n 'Ted Cruz': 'TedCruz',\n 'Jeb Bush': 'JebBush',\n 'Ben Carson': 'BenCarson',\n 'Chris Christie': 'ChrisChristie',\n 'Jon Kasich': 'KasichForPresident',\n 'Rand Paul': 'RandPaul',\n 'Marco Rubio': 'Marco_Rubio',\n 'politics': 'politics'}\n\n# For loop that iterates through the day to get 1000 link posts, then scrapes some choice domains away\n\nfor single_date in (start_date + timedelta(n) for n in range(day_count)):\n \n after = single_date.strftime(\"%Y-%m-%d\")\n before = (single_date + timedelta(1)).strftime(\"%Y-%m-%d\")\n \n for subreddit in subreddits_2016.values():\n \n url = 'https://api.pushshift.io/reddit/submission/search/?subreddit={0}&after={1}&before={2}&is_self=false&filter=id,title,subreddit,url,score,num_comments,subreddit_subscribers&limit=1000'.format(subreddit,after,before)\n\n r = requests.get(url)\n \n if r.status_code != 200:\n continue\n \n else:\n response = r.json()\n \n if bool(response['data']):\n temp = pd.DataFrame.from_dict(response['data'], orient='columns')\n output = output.append(temp, ignore_index = True)\n",
"_____no_output_____"
],
[
"# Remove non article URLs\n\nremove = ['twitter.com','hbo.com','youtube.com','youtu.be','reddit.com','streamable.com','imgur.com',\n 'i.imgur.com','forgifs.com','i.redd.it']\nsearchrm = '|'.join(remove)\n\noutput = output[~output['url'].str.contains(searchrm)].reset_index(drop=True)",
"_____no_output_____"
],
[
"# Pickling the dataframe\n\n#output.to_pickle('reddit_2016_05_31.pkl')",
"_____no_output_____"
]
],
[
[
"## Getting the dates of a post\n\nNews Please has an issue with generating dates to the articles, so we have opted to use Reddit's post datetime instead. This will be done Pushshift again; the code below is the refactored version of what is above (I originally tried PRAW but it was too slow to feed in a post ID one by one).",
"_____no_output_____"
]
],
[
[
"# Instantiate the date dataframe\ndates = pd.DataFrame(columns=['created_utc','id','url'])",
"_____no_output_____"
],
[
"# Set the date range\n\nstart_date = date(2019, 1, 1)\n\nday_count = (date(2019, 6, 15) - start_date).days + 1 # The last time I pulled the 2019 dataset together\n\n#day_count = (date(2016, 5, 31) - start_date).days + 1 # For use with the 2015 dataset\n\n\n# Set the subreddits to go through\n\nsubreddits = {'Joe Biden': 'JoeBiden',\n 'Cory Booker': 'corybooker',\n 'Pete Buttigieg': 'Pete_Buttigieg',\n 'Tulsi Gabbard': 'tulsi',\n 'Kirsten Gillibrand': 'Kirsten_Gillibrand',\n 'Mike Gravel': 'gravelforpresident',\n 'Kamala Harris': 'Kamala',\n 'Jay Inslee': 'inslee2020',\n 'Amy Klobuchar': 'BaemyKlobaechar',\n 'Beto O\\'Rourke': 'Beto2020',\n 'Bernie Sanders': 'SandersForPresident',\n 'Donald Trump': 'The_Donald',\n 'Elizabeth Warren': 'ElizabethWarren',\n 'politics': 'politics'}\n\n# Subreddits for 2016 candidates\n\nsubreddits_2016 = {'Donald Trump': 'The_Donald',\n 'Ted Cruz': 'TedCruz',\n 'Jeb Bush': 'JebBush',\n 'Ben Carson': 'BenCarson',\n 'Chris Christie': 'ChrisChristie',\n 'Jon Kasich': 'KasichForPresident',\n 'Rand Paul': 'RandPaul',\n 'Marco Rubio': 'Marco_Rubio',\n 'politics': 'politics'}\n\n# For loop that iterates through the day to get 1000 link posts, then scrapes some choice domains away\n\nfor single_date in (start_date + timedelta(n) for n in range(day_count)):\n \n after = single_date.strftime(\"%Y-%m-%d\")\n before = (single_date + timedelta(1)).strftime(\"%Y-%m-%d\")\n \n for subreddit in subreddits.values():\n \n url = 'https://api.pushshift.io/reddit/submission/search/?subreddit={0}&after={1}&before={2}&is_self=false&filter=id,created_utc,url&limit=1000'.format(subreddit,after,before)\n\n r = requests.get(url)\n \n if r.status_code != 200:\n continue\n \n else:\n response = r.json()\n \n if bool(response['data']):\n temp = pd.DataFrame.from_dict(response['data'], orient='columns')\n dates = dates.append(temp, ignore_index = True)\n",
"_____no_output_____"
],
[
"# Remove non article URLs\n\nremove = ['twitter.com','hbo.com','youtube.com','youtu.be','reddit.com','streamable.com','imgur.com',\n 'i.imgur.com','forgifs.com','i.redd.it']\nsearchrm = '|'.join(remove)\n\ndates = dates[~dates['url'].str.contains(searchrm)].reset_index(drop=True)",
"_____no_output_____"
],
[
"# Now remove the URL column, and convert the date column into datetime\ndates = dates.drop(\"url\", axis=1)\ndates['created_utc'] = dates['created_utc'].apply(lambda x: datetime.utcfromtimestamp(x))",
"_____no_output_____"
],
[
"dates",
"_____no_output_____"
],
[
"# Pickling the dataframe\n\ndates.to_pickle('reddit_2019_dates.pkl')",
"_____no_output_____"
]
],
[
[
"## Logistic Regression\n\nWill be switching over to Pushshift API for this process, which allows us to get data between dates\n\nExample call: https://api.pushshift.io/reddit/submission/search/?after=2019-06-02&before=2019-06-03&q=trump&sort_type=score&sort=desc&subreddit=politics&limit=500\n\nWhere the output is a JSON file, the after date is the date of interest, and the before date is one date in the future.\n\n### Headline mentions vs donations\n#### Consider calculating the cumulative karma score and comments along with this later on\n\nThe Pushshift API call used here will rely on `agg=subreddit` to get counts and will look like the following:\n\nhttps://api.pushshift.io/reddit/submission/search/?subreddit=politics,news,worldnews&aggs=subreddit&q=trump&size=0&after=2019-06-02&before=2019-06-03\n\nThis data will be stored in a csv file that can be pulled in later: `mentions = pd.read_csv('reddit_headline_counts.csv')`",
"_____no_output_____"
]
],
[
[
"politicians = ['williamson', 'harris', 'buttigieg', 'klobuchar', 'yang', 'gillibrand', 'delaney', 'inslee', \n 'hickenlooper', 'o\\%27rourke', 'warren', 'castro', 'sanders', 'gabbard', 'booker', 'trump', 'biden']\n\n# Set the date range\n\nstart_date = date(2019, 3, 8)\n\nday_count = (date(2019, 3, 14) - start_date).days + 1\n\n# Set the rows_list holder\nrows_list = []\n\n# For loop that iterates through the day to get 1000 link posts, then scrapes some choice domains away\n\nfor single_date in (start_date + timedelta(n) for n in range(day_count)):\n \n after = single_date.strftime(\"%Y-%m-%d\")\n before = (single_date + timedelta(1)).strftime(\"%Y-%m-%d\")\n \n for candidate in politicians:\n url = 'https://api.pushshift.io/reddit/submission/search/?subreddit=politics,news,worldnews&aggs=subreddit&q={0}&size=0&after={1}&before={2}'.format(candidate,after,before)\n response = requests.get(url).json()\n \n for thing in response['aggs']['subreddit']:\n dict1 = {}\n dict1.update({'date': after, \n 'candidate': candidate, \n 'subreddit': thing['key'], \n 'doc_count': thing['doc_count']})\n rows_list.append(dict1)\n \nmentions = pd.DataFrame(rows_list)",
"_____no_output_____"
],
[
"#mentions\n\n#mentions.to_csv('mar8tomar14.csv',index=False)\n\n# To reload, use:\nmentions = pd.read_csv('../../data/reddit/reddit_headline_counts.csv')",
"_____no_output_____"
],
[
"# O'Rourke candidate name change\n\nmentions = mentions.replace('o\\%27rourke', 'orourke')\nby_politics = mentions[mentions['subreddit']=='politics']",
"_____no_output_____"
],
[
"# From Andrew:\n# find the path to each fec file, store paths in a nested dict\nfec_2020_paths = {}\nbase_path = os.path.join(\"..\",\"..\",\"data\",\"fec\",\"2020\") # This notebook was one more level down\nfor party_dir in os.listdir(base_path):\n if(party_dir[0]!=\".\"):\n fec_2020_paths[party_dir] = {}\n for cand_dir in os.listdir(os.path.join(base_path,party_dir)):\n if(cand_dir[0]!=\".\"):\n fec_2020_paths[party_dir][cand_dir] = {}\n for csv_path in os.listdir(os.path.join(base_path,party_dir,cand_dir)):\n if(csv_path.find(\"schedule_a\")>=0):\n fec_2020_paths[party_dir][cand_dir][\"donations\"] = \\\n os.path.join(base_path,party_dir,cand_dir,csv_path)\n elif(csv_path.find(\"schedule_b\")>=0):\n fec_2020_paths[party_dir][cand_dir][\"spending\"] = \\\n os.path.join(base_path,party_dir,cand_dir,csv_path)\nprint(json.dumps(fec_2020_paths, indent=4))",
"{\n \"republican\": {\n \"trump\": {\n \"spending\": \"../../data/fec/2020/republican/trump/schedule_b-2019-05-30T16_03_37.csv\",\n \"donations\": \"../../data/fec/2020/republican/trump/schedule_a-2019-05-30T16_03_37.csv\"\n }\n },\n \"democrat\": {\n \"williamson\": {\n \"spending\": \"../../data/fec/2020/democrat/williamson/schedule_b-2019-05-30T20_41_44.csv\",\n \"donations\": \"../../data/fec/2020/democrat/williamson/schedule_a-2019-05-30T20_41_32.csv\"\n },\n \"harris\": {\n \"spending\": \"../../data/fec/2020/democrat/harris/schedule_b-2019-05-30T17_22_09.csv\",\n \"donations\": \"../../data/fec/2020/democrat/harris/schedule_a-2019-05-30T17_19_58.csv\"\n },\n \"buttigieg\": {\n \"donations\": \"../../data/fec/2020/democrat/buttigieg/schedule_a-2019-05-30T17_32_11.csv\",\n \"spending\": \"../../data/fec/2020/democrat/buttigieg/schedule_b-2019-05-30T17_32_14.csv\"\n },\n \"klobuchar\": {\n \"spending\": \"../../data/fec/2020/democrat/klobuchar/schedule_b-2019-05-30T17_28_05.csv\",\n \"donations\": \"../../data/fec/2020/democrat/klobuchar/schedule_a-2019-05-30T17_26_34.csv\"\n },\n \"yang\": {\n \"donations\": \"../../data/fec/2020/democrat/yang/schedule_a-2019-05-30T17_36_18.csv\",\n \"spending\": \"../../data/fec/2020/democrat/yang/schedule_b-2019-05-30T17_36_30.csv\"\n },\n \"gillibrand\": {\n \"donations\": \"../../data/fec/2020/democrat/gillibrand/schedule_a-2019-05-30T17_22_03.csv\",\n \"spending\": \"../../data/fec/2020/democrat/gillibrand/schedule_b-2019-05-30T17_25_26.csv\"\n },\n \"delaney\": {\n \"spending\": \"../../data/fec/2020/democrat/delaney/schedule_b-2019-05-30T17_11_01.csv\",\n \"donations\": \"../../data/fec/2020/democrat/delaney/schedule_a-2019-05-30T17_10_21.csv\"\n },\n \"inslee\": {\n \"spending\": \"../../data/fec/2020/democrat/inslee/schedule_b-2019-05-30T17_39_27.csv\",\n \"donations\": \"../../data/fec/2020/democrat/inslee/schedule_a-2019-05-30T17_39_13.csv\"\n },\n \"hickenlooper\": {\n \"spending\": \"../../data/fec/2020/democrat/hickenlooper/schedule_b-2019-05-30T20_40_10.csv\",\n \"donations\": \"../../data/fec/2020/democrat/hickenlooper/schedule_a-2019-05-30T17_42_20.csv\"\n },\n \"orourke\": {\n \"donations\": \"../../data/fec/2020/democrat/orourke/schedule_a-2019-05-30T17_25_21.csv\",\n \"spending\": \"../../data/fec/2020/democrat/orourke/schedule_b-2019-05-30T17_26_40.csv\"\n },\n \"warren\": {\n \"spending\": \"../../data/fec/2020/democrat/warren/schedule_b-2019-05-30T17_20_23.csv\",\n \"donations\": \"../../data/fec/2020/democrat/warren/schedule_a-2019-05-30T17_19_58.csv\"\n },\n \"castro\": {\n \"donations\": \"../../data/fec/2020/democrat/castro/schedule_a-2019-05-30T20_42_42.csv\",\n \"spending\": \"../../data/fec/2020/democrat/castro/schedule_b-2019-05-30T20_42_54.csv\"\n },\n \"sanders\": {\n \"spending\": \"../../data/fec/2020/democrat/sanders/schedule_b-2019-05-30T17_06_38.csv\",\n \"donations\": \"../../data/fec/2020/democrat/sanders/schedule_a-2019-05-30T17_07_08.csv\"\n },\n \"gabbard\": {\n \"spending\": \"../../data/fec/2020/democrat/gabbard/schedule_b-2019-05-30T17_33_24.csv\",\n \"donations\": \"../../data/fec/2020/democrat/gabbard/schedule_a-2019-05-30T17_33_22.csv\"\n },\n \"booker\": {\n \"spending\": \"../../data/fec/2020/democrat/booker/schedule_b-2019-05-30T17_29_42.csv\",\n \"donations\": \"../../data/fec/2020/democrat/booker/schedule_a-2019-05-30T17_30_40.csv\"\n }\n }\n}\n"
],
[
"dataset = pd.DataFrame()\nfor candid in fec_2020_paths[\"democrat\"].keys():\n if(\"donations\" in fec_2020_paths[\"democrat\"][candid].keys()):\n \n # process donations dataset\n df1 = pd.read_csv(fec_2020_paths[\"democrat\"][candid][\"donations\"])\n df1[\"contribution_receipt_date\"] = pd.to_datetime(df1[\"contribution_receipt_date\"]).dt.date\n df1 = df1.loc[df1[\"entity_type\"]==\"IND\"]\n df1 = df1.loc[df1[\"contribution_receipt_amount\"]<=2800]\n df1 = df1.groupby(by=\"contribution_receipt_date\", as_index=False)[\"contribution_receipt_amount\"].sum()\n df1.name = \"individual_donations\"\n df1 = pd.DataFrame(df1)\n df1[\"candidate\"] = candid\n \n # attaching to the mentions dataset\n #result = mentions.merge(df1, how='inner', left_on=['', 'B'])\n\n \n # append to main df\n dataset = dataset.append(df1)",
"/Users/kanithamann/anaconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py:2698: DtypeWarning: Columns (35,36,37,38) have mixed types. Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\n/Users/kanithamann/anaconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py:2698: DtypeWarning: Columns (35,36,37,38,39,42,43,44,45) have mixed types. Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\n/Users/kanithamann/anaconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py:2698: DtypeWarning: Columns (35,36,37,38,42,43,44,45) have mixed types. Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\n/Users/kanithamann/anaconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py:2698: DtypeWarning: Columns (35) have mixed types. Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\n"
],
[
"dataset.rename(index=str, columns={'contribution_receipt_date':'date'}, inplace=True)\n\nres = pd.merge(by_politics, dataset, how='outer', left_on=['date','candidate'], \n right_on = ['date','candidate'])\n\nres",
"_____no_output_____"
],
[
"# Replace NaN with 0\n\nres['contribution_receipt_amount'].fillna(0, inplace = True) \nres['doc_count'].fillna(0, inplace = True) \nres",
"_____no_output_____"
],
[
"X_train = res[\"doc_count\"].reshape(-1, 1)\ny_train = res[\"contribution_receipt_amount\"]\nX_test = np.array(range(0,400)).reshape(-1, 1)\nlinear_fit = LinearRegression().fit(X_train, y_train)\ny_pred = linear_fit.predict(X_test)",
"/Users/kanithamann/anaconda/lib/python3.6/site-packages/ipykernel_launcher.py:1: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape(...) instead\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"linear_fit.score(X_train,y_train)",
"_____no_output_____"
],
[
"fix, ax = plt.subplots(figsize=(12,8))\nplt.scatter(X_train, y_train, color='black', alpha=0.2)\n#plt.plot(X_test, y_pred, color='blue', linewidth=3)",
"_____no_output_____"
]
],
[
[
"If we want an aggregate of comments and score for this particular dataset later, I've been playing around with the following Pushshift API call: https://api.pushshift.io/reddit/submission/search/?subreddit=politics,news,worldnews&aggs=subreddit&q=trump&after=2019-06-02&before=2019-06-03&filter=subreddit,score,num_comments",
"_____no_output_____"
],
[
"## Gathering comments from the subreddits above",
"_____no_output_____"
]
],
[
[
"comments = pd.DataFrame(columns=['body','created_utc','parent_id','score'])",
"_____no_output_____"
],
[
"# Set the date range\n\nstart_date = date(2015, 1, 1)\n\n#day_count = (date(2019, 6, 15) - start_date).days + 1 # The last time I pulled the 2019 dataset together\n\nday_count = (date(2016, 5, 31) - start_date).days + 1 # For use with the 2015 dataset\n\n\n# Set the subreddits to go through\n\nsubreddits = {'Joe Biden': 'JoeBiden',\n 'Cory Booker': 'corybooker',\n 'Pete Buttigieg': 'Pete_Buttigieg',\n 'Tulsi Gabbard': 'tulsi',\n 'Kirsten Gillibrand': 'Kirsten_Gillibrand',\n 'Mike Gravel': 'gravelforpresident',\n 'Kamala Harris': 'Kamala',\n 'Jay Inslee': 'inslee2020',\n 'Amy Klobuchar': 'BaemyKlobaechar',\n 'Beto O\\'Rourke': 'Beto2020',\n 'Bernie Sanders': 'SandersForPresident',\n 'Donald Trump': 'The_Donald',\n 'Elizabeth Warren': 'ElizabethWarren',\n 'politics': 'politics'}\n\n# Subreddits for 2016 candidates\n\nsubreddits_2016 = {'Donald Trump': 'The_Donald',\n 'Ted Cruz': 'TedCruz',\n 'Jeb Bush': 'JebBush',\n 'Ben Carson': 'BenCarson',\n 'Chris Christie': 'ChrisChristie',\n 'Jon Kasich': 'KasichForPresident',\n 'Rand Paul': 'RandPaul',\n 'Marco Rubio': 'Marco_Rubio',\n 'politics': 'politics'}\n\n# For loop that iterates through the day to get 1000 link posts, then scrapes some choice domains away\n\nfor single_date in (start_date + timedelta(n) for n in range(day_count)):\n \n after = single_date.strftime(\"%Y-%m-%d\")\n before = (single_date + timedelta(1)).strftime(\"%Y-%m-%d\")\n \n for subreddit in subreddits_2016.values():\n \n url = 'https://api.pushshift.io/reddit/search/comment/?subreddit={0}&after={1}&before={2}&limit=1000&sort=desc&sort_type=score&filter=parent_id,score,body,created_utc'.format(subreddit,after,before)\n\n r = requests.get(url)\n \n if r.status_code != 200:\n continue\n \n else:\n response = r.json()\n \n temp = pd.DataFrame.from_dict(response['data'], orient='columns')\n comments = comments.append(temp, ignore_index = True)\n \n \n",
"_____no_output_____"
],
[
"# Cleaning up the dataframe by removing [removed], and converting created_utc to a date time\n\n#temp = temp[temp['parent_id'].str.startswith('t3')]\n\ncomments = comments[~comments['body'].str.startswith('[deleted]')]\n#comments['created_utc'] = comments['created_utc'].apply(lambda x:\n# datetime.utcfromtimestamp(x).strftime(\"%Y-%m-%d\"))\n\ncomments",
"_____no_output_____"
],
[
"# Pickling\n\ncomments.to_pickle('reddit_2016_comments.pkl')",
"_____no_output_____"
]
],
[
[
"# NLP\n\nBy day, we can gather:\n\n1. What the sentiment of all post titles (by subreddit, and by if the candidate was mentioned in the title)\n2. What the sentiment of the top 10 comments of each post is (by subreddit, and by if the candidate was mentioned in thetitle)\n3. How many posts in the subreddit or if the candidate was mentioned in the title are (more of a general feature, not NLP)\n\n* Day of post\n* Post ID\n* Headline\n* Subreddit\n* Entity recognition (candidates addressed in headline, binary as 1 or 0)\n* Topic recognition (This may be difficult to grab from a headline, but could try in the comments)\n* Sentiment of headline\n* Sentiment of comments",
"_____no_output_____"
],
[
"## Cleaning of comments\n\nThe Reddit comment dataset consists of the following:\n\n* body: Text itself\n* created_utc: Date of the comment\n* parent_id: The ID of the comment. **If the ID starts with t3, then it is a top-level comment, with the remainder alphanumeric characters indicating the post ID**\n* score: Karma score of the comment",
"_____no_output_____"
]
],
[
[
"# Let's begin by unpickling\n\ndf = pd.read_pickle('../../data/reddit/reddit_2019_comments.pkl')",
"_____no_output_____"
]
],
[
[
"### Convert Markdown to HTML to regular text",
"_____no_output_____"
]
],
[
[
"# Markdown clean up\n\nexample = df.iloc[12]['body']\nexample",
"_____no_output_____"
]
],
[
[
"#### For now, I will keep text inserted into blockquotes",
"_____no_output_____"
]
],
[
[
"#df['clean'] = df['body'].apply(lambda x: BeautifulSoup(markdown(html.unescape(x)),'lxml').get_text())\n#df = df.reset_index(drop=True)\n\n# Pickled it away: df.to_pickle('reddit_2019_comments_clean1.pkl')\ndf = pd.read_pickle('../../data/reddit/reddit_2019_comments_clean1.pkl')",
"_____no_output_____"
],
[
"# The above process has retained some of the newline (\\n) syntax, so let's remove those.\n\ndf['clean'] = df['clean'].apply(lambda x: x.replace('\\n\\n',' ').replace('\\n',' ').replace('\\'s','s'))\ndf",
"_____no_output_____"
]
],
[
[
"### Sentiment Analysis",
"_____no_output_____"
]
],
[
[
"sia = SIA()\n\ndf['sentiment'] = df['clean'].apply(lambda x: sia.polarity_scores(x))",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
]
],
[
[
"## PRAW API - a Python wrapper to access Reddit \n\nThis Python wrapper API can look at a subreddit and pull information such as the title, url, and body of a post (if it's not a link post). We can also get the karma score, the number of comments, and when the post was created.\n\nThis API needs credentials in order to run. I have stored the credentials away in an INI file that will not be uploaded to Github.",
"_____no_output_____"
]
],
[
[
"# The following cells uses an INI file to pull in credentials needed to access the PRAW API.\n# This INI file is stored locally only\n\nconfig = configparser.RawConfigParser()\nconfig.read(\"config.txt\")\nreddit = praw.Reddit(client_id=config.get(\"reddit\",\"client_id\"),\n client_secret=config.get(\"reddit\",\"client_secret\"),\n password=config.get(\"reddit\",\"password\"),\n user_agent=\"Political exploration\",\n username=config.get(\"reddit\",\"username\"))\n",
"_____no_output_____"
],
[
"# Example data pull\n\nposts = []\n\nfor post in reddit.subreddit('politics').hot(limit=10):\n posts.append([post.title, \n post.score, \n post.id, \n post.subreddit, \n post.url, \n post.num_comments, \n post.selftext, \n datetime.utcfromtimestamp(post.created)\n ])\ndf = pd.DataFrame(posts,\n columns=['title', \n 'score', \n 'id', \n 'subreddit', \n 'url', \n 'num_comments', \n 'body', \n 'created'\n ])\n\ndf",
"_____no_output_____"
]
],
[
[
"## Sentiment Analysis",
"_____no_output_____"
]
],
[
[
"sia = SIA()\n\nfor i in range(0,len(posts)):\n line = posts[i][0]\n pol_score = sia.polarity_scores(line)\n pol_score['headline'] = line\n #results.append(pol_score)\n print(pol_score)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
eca20ff005ec8e87212b02205a615746cc1c253c | 6,125 | ipynb | Jupyter Notebook | docs/examples/advanced/serialization.ipynb | RudyVenguswamy/DALI | 1456689cbb06a6d6f2c46c3fd231d1c296808e00 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2019-05-20T18:49:34.000Z | 2019-05-20T18:49:34.000Z | docs/examples/advanced/serialization.ipynb | RudyVenguswamy/DALI | 1456689cbb06a6d6f2c46c3fd231d1c296808e00 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | docs/examples/advanced/serialization.ipynb | RudyVenguswamy/DALI | 1456689cbb06a6d6f2c46c3fd231d1c296808e00 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-02-04T14:45:17.000Z | 2021-02-04T14:45:17.000Z | 26.746725 | 211 | 0.508245 | [
[
[
"# Serialization\n\n## Overview\n\nThis sample shows how to serialize the pipeline to a string.\n\n## Serialization\n\nIn order to use C API or TensorFlow plugin (or just to save the pipeline with a model, so the training process is fully reproducible) we need to serialize the pipeline. \n\nLet us make a simple pipeline reading from MXNet recordIO format (for example of using other data formats please see other examples in [examples](.) directory.",
"_____no_output_____"
]
],
[
[
"from nvidia.dali.pipeline import Pipeline\nimport nvidia.dali.ops as ops\nimport nvidia.dali.types as types\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport os.path\n\ntest_data_root = os.environ['DALI_EXTRA_PATH']\nbase = os.path.join(test_data_root, 'db', 'recordio')\n\nidx_files = [base + \"/train.idx\"]\nrec_files = [base + \"/train.rec\"]\n\n\nclass SerializedPipeline(Pipeline):\n def __init__(self, batch_size, num_threads, device_id, seed):\n super(SerializedPipeline, self).__init__(batch_size,\n num_threads,\n device_id,\n seed = seed)\n self.input = ops.MXNetReader(path = rec_files, index_path = idx_files)\n self.decode = ops.ImageDecoder(device = \"mixed\", output_type = types.RGB)\n self.resize = ops.Resize(device = \"gpu\",\n interp_type = types.INTERP_LINEAR)\n self.cmnp = ops.CropMirrorNormalize(device = \"gpu\",\n dtype = types.FLOAT,\n crop = (224, 224),\n mean = [0., 0., 0.],\n std = [1., 1., 1.])\n self.res_uniform = ops.random.Uniform(range = (256.,480.))\n\n def define_graph(self):\n inputs, labels = self.input(name=\"Reader\")\n images = self.decode(inputs)\n images = self.resize(images, resize_shorter = self.res_uniform())\n output = self.cmnp(images)\n return (output, labels)",
"_____no_output_____"
],
[
"batch_size = 16\n\npipe = SerializedPipeline(batch_size=batch_size, num_threads=2, device_id = 0, seed = 12)",
"_____no_output_____"
]
],
[
[
"We will now serialize this pipeline, using `serialize` function of the `Pipeline` class.",
"_____no_output_____"
]
],
[
[
"s = pipe.serialize()",
"_____no_output_____"
]
],
[
[
"In order to deserialize our pipeline in Python, we need to create another pipeline, this time using the generic `Pipeline` class. We give the same seed to the new pipeline, in order to compare the results.",
"_____no_output_____"
]
],
[
[
"pipe2 = Pipeline(batch_size = batch_size, num_threads = 2, device_id = 0, seed = 12)",
"_____no_output_____"
]
],
[
[
"Let us now use the serialized form of `pipe` object to make `pipe2` a copy of it.",
"_____no_output_____"
]
],
[
[
"pipe2.deserialize_and_build(s)",
"_____no_output_____"
]
],
[
[
"Now we can compare the results of the 2 pipelines - original and deserialized.",
"_____no_output_____"
]
],
[
[
"pipe.build()\noriginal_pipe_out = pipe.run()\nserialized_pipe_out = pipe2.run()",
"_____no_output_____"
],
[
"def check_difference(batch_1, batch_2):\n return [np.sum(np.abs(batch_1.at(i) - batch_2.at(i))) for i in range(batch_size)]",
"_____no_output_____"
],
[
"original_images, _ = original_pipe_out\nserialized_images, _ = serialized_pipe_out",
"_____no_output_____"
],
[
"check_difference(original_images.as_cpu(), serialized_images.as_cpu())",
"_____no_output_____"
]
],
[
[
"Both pipelines give exactly the same results.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
eca21d38460f76969cd86b3e6483f00195a40cdc | 4,554 | ipynb | Jupyter Notebook | Jupyter_Notes/Lecture10_Sec3-3_CofactorExpansion.ipynb | xiuquan0418/MAT341 | 2fb7ec4e5f0771f10719cb5e4a00a7ab07c49b59 | [
"MIT"
] | null | null | null | Jupyter_Notes/Lecture10_Sec3-3_CofactorExpansion.ipynb | xiuquan0418/MAT341 | 2fb7ec4e5f0771f10719cb5e4a00a7ab07c49b59 | [
"MIT"
] | null | null | null | Jupyter_Notes/Lecture10_Sec3-3_CofactorExpansion.ipynb | xiuquan0418/MAT341 | 2fb7ec4e5f0771f10719cb5e4a00a7ab07c49b59 | [
"MIT"
] | null | null | null | 20.061674 | 130 | 0.422486 | [
[
[
"# Section 3.3 $\\quad$ Cofactor Expansion",
"_____no_output_____"
],
[
"## Definition of Minor",
"_____no_output_____"
],
[
"Let $A = [a_{ij}]$ be an $n\\times n$ matrix. <br /><br /><br /><br />",
"_____no_output_____"
],
[
"## Definition of Cofactor",
"_____no_output_____"
],
[
"Let $A = [a_{ij}]$ be an $n\\times n$ matrix. <br /><br /><br /><br />",
"_____no_output_____"
],
[
"### Example 1",
"_____no_output_____"
],
[
"Find the cofactors $A_{12}$ and $A_{23}$ if\n\\begin{equation*}\n A =\n \\left[\n \\begin{array}{ccc}\n 4 & 3 & 2 \\\\\n 4 & -2 & 5 \\\\\n 2 & 4 & 6 \\\\\n \\end{array}\n \\right]\n\\end{equation*}",
"_____no_output_____"
]
],
[
[
"from sympy import *\n\nA = Matrix([[4, 3, 2], [4, -2, 5], [2, 4, 6]]);\n\nA.cofactor(0, 1), A.cofactor(1, 2)",
"_____no_output_____"
]
],
[
[
">**Theorem (cofactor expansion)** Let $A = [a_{ij}]$ be an $n\\times n$ matrix. Then<br /><br />\n\\begin{equation*}\ndet(A) = \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n\\end{equation*}\n<br /><br />\nand<br /><br />\n\\begin{equation*}\ndet(A) = \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n\\end{equation*}\n<br /><br /><br /><br />",
"_____no_output_____"
],
[
"### Example 2",
"_____no_output_____"
],
[
"Find the determinant of\n\\begin{equation*}\n A = \\left[\n \\begin{array}{cccc}\n 1 & 2 & -3 & 4 \\\\\n -4 & 2 & 1 & 3 \\\\\n 3 & 0 & 0 & -3 \\\\\n 2 & 0 & -2 & 3 \\\\\n \\end{array}\n \\right]\n\\end{equation*}",
"_____no_output_____"
]
],
[
[
"from sympy import *\n\nA = Matrix([[1, 2, -3, 4], [-4, 2, 1, 3], [3, 0, 0, -3], [2, 0, -2, 3]]);\n\nA.det()",
"_____no_output_____"
]
],
[
[
"### Example 3",
"_____no_output_____"
],
[
"Find all values of $t$ for which\n\\begin{equation*}\n det\\left[\n \\begin{array}{ccc}\n t-1 & 0 & 1 \\\\\n 2 & t+2 & -1 \\\\\n 0 & 0 & t+1 \\\\\n \\end{array}\n \\right] = 0\n\\end{equation*}",
"_____no_output_____"
]
],
[
[
"from sympy import *\n\nt = symbols('t');\nA = Matrix([[t-1, 0, 1], [2, t+2, -1], [0, 0, t+1]]);\n\nsolve(A.det())",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
eca222adb225c5bbc77c08e515da05a26e8e7f95 | 3,263 | ipynb | Jupyter Notebook | check_env.ipynb | jfcaballero/Tutorial-sobre-scikit-learn-abreviado | 1e2aa1f9132c277162135a5463068801edab8d15 | [
"CC0-1.0"
] | 4 | 2019-02-20T14:36:39.000Z | 2019-02-21T22:55:57.000Z | check_env.ipynb | jfcaballero/Tutorial-sobre-scikit-learn-abreviado | 1e2aa1f9132c277162135a5463068801edab8d15 | [
"CC0-1.0"
] | null | null | null | check_env.ipynb | jfcaballero/Tutorial-sobre-scikit-learn-abreviado | 1e2aa1f9132c277162135a5463068801edab8d15 | [
"CC0-1.0"
] | null | null | null | 29.93578 | 88 | 0.474717 | [
[
[
"from __future__ import print_function\nfrom distutils.version import LooseVersion as Version\nimport sys\n\n\ntry:\n import curses\n curses.setupterm()\n assert curses.tigetnum(\"colors\") > 2\n OK = \"\\x1b[1;%dm[ OK ]\\x1b[0m\" % (30 + curses.COLOR_GREEN)\n FAIL = \"\\x1b[1;%dm[FAIL]\\x1b[0m\" % (30 + curses.COLOR_RED)\nexcept:\n OK = '[ OK ]'\n FAIL = '[FAIL]'\n\ntry:\n import importlib\nexcept ImportError:\n print(FAIL, \"Python version 3.4 (or 2.7) is required,\"\n \" but %s is installed.\" % sys.version)\n\n \ndef import_version(pkg, min_ver, fail_msg=\"\"):\n mod = None\n try:\n mod = importlib.import_module(pkg)\n if pkg in {'PIL'}:\n ver = mod.VERSION\n else:\n ver = mod.__version__\n if Version(ver) < min_ver:\n print(FAIL, \"%s version %s or higher required, but %s installed.\"\n % (lib, min_ver, ver))\n else:\n print(OK, '%s version %s' % (pkg, ver))\n except ImportError:\n print(FAIL, '%s not installed. %s' % (pkg, fail_msg))\n return mod\n\n\n# first check the python version\nprint('Using python in', sys.prefix)\nprint(sys.version)\npyversion = Version(sys.version)\nif pyversion >= \"3\":\n if pyversion < \"3.4\":\n print(FAIL, \"Python version 3.4 (or 2.7) is required,\"\n \" but %s is installed.\" % sys.version)\nelif pyversion >= \"2\":\n if pyversion < \"2.7\":\n print(FAIL, \"Python version 2.7 is required,\"\n \" but %s is installed.\" % sys.version)\nelse:\n print(FAIL, \"Unknown Python version: %s\" % sys.version)\n\nprint()\nrequirements = {'numpy': \"1.6.1\", 'scipy': \"0.9\", 'matplotlib': \"1.0\",\n 'IPython': \"3.0\", 'sklearn': \"0.18\", 'pandas': \"0.19\",\n 'PIL': \"1.1.7\"}\n\n# now the dependencies\nfor lib, required_version in list(requirements.items()):\n import_version(lib, required_version)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
eca2371e079c4b23881296af1934dcdc078886a4 | 24,204 | ipynb | Jupyter Notebook | notebooks/viz.ipynb | PeppeSaccardi/cnn-with-pytorch | 52a3566cbaf1e99e4cb766bdd2cbd828de267838 | [
"MIT"
] | null | null | null | notebooks/viz.ipynb | PeppeSaccardi/cnn-with-pytorch | 52a3566cbaf1e99e4cb766bdd2cbd828de267838 | [
"MIT"
] | null | null | null | notebooks/viz.ipynb | PeppeSaccardi/cnn-with-pytorch | 52a3566cbaf1e99e4cb766bdd2cbd828de267838 | [
"MIT"
] | null | null | null | 77.826367 | 12,442 | 0.631425 | [
[
[
"import pandas as pd \nimport numpy as np \nimport matplotlib.pyplot as plt \n%matplotlib inline",
"_____no_output_____"
],
[
"train_data = pd.read_csv(\"../input/train_dataset.csv\")",
"_____no_output_____"
],
[
"import torch",
"_____no_output_____"
],
[
"train_data.target.value_counts()",
"_____no_output_____"
],
[
"img = np.array(train_data.iloc[0,1:-1])",
"_____no_output_____"
],
[
"img.shape",
"_____no_output_____"
],
[
"t = torch.tensor(img)",
"_____no_output_____"
],
[
"t",
"_____no_output_____"
],
[
"t.reshape(1,3,28,28)",
"_____no_output_____"
],
[
"t = t.reshape(28,28,3)",
"_____no_output_____"
],
[
"t= np.array(t)",
"_____no_output_____"
],
[
"plt.imshow(t)",
"_____no_output_____"
],
[
"_",
"_____no_output_____"
],
[
"import torch\n\nimport pandas as pd\nimport numpy as np\nimport torch.nn as nn\nimport torch.nn.functional as F \nimport torch.utils.data.dataset as dataset\n\n\nclass ShapeDataset(dataset.Dataset):\n def __init__(self, features, targets):\n self.features = features\n self.targets = targets\n \n def __len__(self):\n return len(self.targets)\n \n def __getitem__(self, item):\n target = torch.tensor(self.targets[item], dtype = torch.int)\n image = torch.tensor(\n np.array(self.features.iloc[item]), dtype = torch.double\n )\n image = image.reshape(\n 3,\n 44,\n 44,\n )\n return (image, target)\n\nclass Model(nn.Module):\n def __init__(self):\n super(Model, self).__init__()\n self.conv1 = nn.Conv2d(3, 44, 3)\n self.pool = nn.MaxPool2d(2, 2)\n self.conv2 = nn.Conv2d(44, 16, 3)\n self.fc1 = nn.Linear(16 * 3 * 3, 120)\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, 3)\n\n def forward(self, x):\n x = self.pool(F.relu(self.conv1(x)))\n x = self.pool(F.relu(self.conv2(x)))\n x = x.view(-1, 16*3*3)\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = self.fc3(x)\n return x",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca24727959da30a5396ac0cefc793853ce48f1b | 1,915 | ipynb | Jupyter Notebook | Sesi-1-matematika-string-kondisi-perulangan/code/Latihan10.ipynb | cupiz/Belajar-Bareng-Python | 83e1ef4c65e003a8f586ac44279528e95882f149 | [
"MIT"
] | 2 | 2020-11-29T10:47:47.000Z | 2020-11-30T00:24:14.000Z | Sesi-1-matematika-string-kondisi-perulangan/code/Latihan10.ipynb | cupiz/Belajar-Bareng-Python | 83e1ef4c65e003a8f586ac44279528e95882f149 | [
"MIT"
] | null | null | null | Sesi-1-matematika-string-kondisi-perulangan/code/Latihan10.ipynb | cupiz/Belajar-Bareng-Python | 83e1ef4c65e003a8f586ac44279528e95882f149 | [
"MIT"
] | null | null | null | 15.696721 | 36 | 0.448564 | [
[
[
"name = 'Iqbal'",
"_____no_output_____"
],
[
"name.lower()",
"_____no_output_____"
],
[
"name.capitalize()",
"_____no_output_____"
],
[
"name.upper()",
"_____no_output_____"
],
[
"name.count('a')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
eca24810d58135fe85acf0a01636558f50976ccc | 4,022 | ipynb | Jupyter Notebook | SVM_trainclass/notebook/T103_modified_version.ipynb | Keerthic4/Imbalance_Class_Test | 8726ea0663dd403ebfdceadd7eef09b0f2a823e7 | [
"MIT"
] | 1 | 2021-08-09T19:55:15.000Z | 2021-08-09T19:55:15.000Z | SVM_trainclass/notebook/T103_modified_version.ipynb | Keerthic4/Imbalance_Class_Test | 8726ea0663dd403ebfdceadd7eef09b0f2a823e7 | [
"MIT"
] | null | null | null | SVM_trainclass/notebook/T103_modified_version.ipynb | Keerthic4/Imbalance_Class_Test | 8726ea0663dd403ebfdceadd7eef09b0f2a823e7 | [
"MIT"
] | null | null | null | 29.144928 | 252 | 0.572601 | [
[
[
"import numpy as np\nimport pandas as pd\nfrom sklearn import svm\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.model_selection import train_test_split, GridSearchCV\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.metrics import f1_score",
"_____no_output_____"
],
[
"class Model():\n # reading train and test data and scaling them\n # CV=k value in cross validation technique\n def __init__(self,train_path,test_path,CV=5,label='Class'):\n train = pd.read_csv(train_path) #reading train data\n test = pd.read_csv(test_path) #reading test data\n self.X_train= train.iloc[:,:-1]\n self.X_test= test.iloc[:,:-1]\n self.y_train = train.iloc[:,:][label] \n self.y_test = test.iloc[:,:][label]\n sc = StandardScaler() #scaling data\n self.scaled_X_train = sc.fit_transform(self.X_train)\n self.scaled_X_test = sc.transform(self.X_test)\n self.CV=CV\n\n #definig the SVM model and corresponding parameters\n #At the end, f1 score of model will be calculated\n # hyper parameters: type of kernel, C, gamma,..\n def svm(self,hyper_parameter={}):\n \n clf = svm.SVC(**hyper_parameter, class_weight = 'balanced',\\\n random_state=0)\n clf.fit(self.scaled_X_train, self.y_train)\n self.y_pred = clf.predict(self.scaled_X_test)\n scores= cross_val_score(clf, self.scaled_X_train, self.y_train, cv=self.CV, scoring='f1_macro')\n return np. mean(scores)\n \n ",
"_____no_output_____"
],
[
"# applying model on data file\nmodel =Model(\"data_transformed.csv\",\"data_transformed.csv\")",
"_____no_output_____"
],
[
"# applting SVM on data\nmodel.svm(hyper_parameter={'kernel':'linear','C':1.0,'gamma' : 'auto'})",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_split.py:657: Warning: The least populated class in y has only 3 members, which is too few. The minimum number of members in any class cannot be less than n_splits=5.\n % (min_groups, self.n_splits)), Warning)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\metrics\\classification.py:1437: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.\n 'precision', 'predicted', average, warn_for)\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
eca24b12ccd1c442a849f8abdb058f1bc12e4200 | 5,609 | ipynb | Jupyter Notebook | playbook/tactics/privilege-escalation/T1547.006.ipynb | haresudhan/The-AtomicPlaybook | 447b1d6bca7c3750c5a58112634f6bac31aff436 | [
"MIT"
] | 8 | 2021-05-25T15:25:31.000Z | 2021-11-08T07:14:45.000Z | playbook/tactics/privilege-escalation/T1547.006.ipynb | haresudhan/The-AtomicPlaybook | 447b1d6bca7c3750c5a58112634f6bac31aff436 | [
"MIT"
] | 1 | 2021-08-23T17:38:02.000Z | 2021-10-12T06:58:19.000Z | playbook/tactics/privilege-escalation/T1547.006.ipynb | haresudhan/The-AtomicPlaybook | 447b1d6bca7c3750c5a58112634f6bac31aff436 | [
"MIT"
] | 2 | 2021-05-29T20:24:24.000Z | 2021-08-05T23:44:12.000Z | 65.22093 | 1,628 | 0.71314 | [
[
[
"# T1547.006 - Boot or Logon Autostart Execution: Kernel Modules and Extensions\nAdversaries may modify the kernel to automatically execute programs on system boot. Loadable Kernel Modules (LKMs) are pieces of code that can be loaded and unloaded into the kernel upon demand. They extend the functionality of the kernel without the need to reboot the system. For example, one type of module is the device driver, which allows the kernel to access hardware connected to the system. (Citation: Linux Kernel Programming) \n\nWhen used maliciously, LKMs can be a type of kernel-mode [Rootkit](https://attack.mitre.org/techniques/T1014) that run with the highest operating system privilege (Ring 0). (Citation: Linux Kernel Module Programming Guide) Common features of LKM based rootkits include: hiding itself, selective hiding of files, processes and network activity, as well as log tampering, providing authenticated backdoors and enabling root access to non-privileged users. (Citation: iDefense Rootkit Overview)\n\nKernel extensions, also called kext, are used for macOS to load functionality onto a system similar to LKMs for Linux. They are loaded and unloaded through <code>kextload</code> and <code>kextunload</code> commands.\n\nAdversaries can use LKMs and kexts to covertly persist on a system and elevate privileges. Examples have been found in the wild and there are some open source projects. (Citation: Volatility Phalanx2) (Citation: CrowdStrike Linux Rootkit) (Citation: GitHub Reptile) (Citation: GitHub Diamorphine)(Citation: RSAC 2015 San Francisco Patrick Wardle) (Citation: Synack Secure Kernel Extension Broken)(Citation: Securelist Ventir) (Citation: Trend Micro Skidmap)",
"_____no_output_____"
],
[
"## Atomic Tests",
"_____no_output_____"
]
],
[
[
"#Import the Module before running the tests.\n# Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.\nImport-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 - Force",
"_____no_output_____"
]
],
[
[
"### Atomic Test #1 - Linux - Load Kernel Module via insmod\nThis test uses the insmod command to load a kernel module for Linux.\n\n**Supported Platforms:** linux\nElevation Required (e.g. root or admin)\n#### Dependencies: Run with `bash`!\n##### Description: The kernel module must exist on disk at specified location\n\n##### Check Prereq Commands:\n```bash\nif [ -f /tmp/T1547.006/T1547006.ko ]; then exit 0; else exit 1; fi;\n\n```\n##### Get Prereq Commands:\n```bash\nif [ ! -d /tmp/T1547.006 ]; then mkdir /tmp/T1547.006; touch /tmp/T1547.006/safe_to_delete; fi;\ncp PathToAtomicsFolder/T1547.006/src/* /tmp/T1547.006/\ncd /tmp/T1547.006; make\nif [ ! -f /tmp/T1547.006/T1547006.ko ]; then mv /tmp/T1547.006/T1547006.ko /tmp/T1547.006/T1547006.ko; fi;\n\n```",
"_____no_output_____"
]
],
[
[
"Invoke-AtomicTest T1547.006 -TestNumbers 1 -GetPreReqs",
"_____no_output_____"
]
],
[
[
"#### Attack Commands: Run with `bash`\n```bash\nsudo insmod /tmp/T1547.006/T1547006.ko\n```",
"_____no_output_____"
]
],
[
[
"Invoke-AtomicTest T1547.006 -TestNumbers 1",
"_____no_output_____"
]
],
[
[
"## Detection\nLoading, unloading, and manipulating modules on Linux systems can be detected by monitoring for the following commands:<code>modprobe</code>, <code>insmod</code>, <code>lsmod</code>, <code>rmmod</code>, or <code>modinfo</code> (Citation: Linux Loadable Kernel Module Insert and Remove LKMs) LKMs are typically loaded into <code>/lib/modules</code> and have had the extension .ko (\"kernel object\") since version 2.6 of the Linux kernel. (Citation: Wikipedia Loadable Kernel Module)\n\nFor macOS, monitor for execution of <code>kextload</code> commands and correlate with other unknown or suspicious activity.\n\nAdversaries may run commands on the target system before loading a malicious module in order to ensure that it is properly compiled. (Citation: iDefense Rootkit Overview) Adversaries may also execute commands to identify the exact version of the running Linux kernel and/or download multiple versions of the same .ko (kernel object) files to use the one appropriate for the running system.(Citation: Trend Micro Skidmap) Many LKMs require Linux headers (specific to the target kernel) in order to compile properly. These are typically obtained through the operating systems package manager and installed like a normal package. On Ubuntu and Debian based systems this can be accomplished by running: <code>apt-get install linux-headers-$(uname -r)</code> On RHEL and CentOS based systems this can be accomplished by running: <code>yum install kernel-devel-$(uname -r)</code>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
eca25978c8fc2bdc23652f8733c35426588343cc | 13,242 | ipynb | Jupyter Notebook | site/zh-cn/tutorials/distribute/multi_worker_with_estimator.ipynb | ilyaspiridonov/docs-l10n | a061a44e40d25028d0a4458094e48ab717d3565c | [
"Apache-2.0"
] | 1 | 2021-09-23T09:56:29.000Z | 2021-09-23T09:56:29.000Z | site/zh-cn/tutorials/distribute/multi_worker_with_estimator.ipynb | ilyaspiridonov/docs-l10n | a061a44e40d25028d0a4458094e48ab717d3565c | [
"Apache-2.0"
] | null | null | null | site/zh-cn/tutorials/distribute/multi_worker_with_estimator.ipynb | ilyaspiridonov/docs-l10n | a061a44e40d25028d0a4458094e48ab717d3565c | [
"Apache-2.0"
] | 1 | 2020-06-23T07:43:49.000Z | 2020-06-23T07:43:49.000Z | 35.886179 | 280 | 0.546519 | [
[
[
"##### Copyright 2019 The TensorFlow Authors.\n",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# 利用 Estimator 进行多工作器训练\n\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://tensorflow.google.cn/tutorials/distribute/multi_worker_with_estimator\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\" />在 tensorflow.google.cn 上查看</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/distribute/multi_worker_with_estimator.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\" />在 Google Colab 运行</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/distribute/multi_worker_with_estimator.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\" />在 Github 上查看源代码</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/distribute/multi_worker_with_estimator.ipynb\"><img src=\"https://tensorflow.google.cn/images/download_logo_32px.png\" />下载此 notebook</a>\n </td>\n</table>\n",
"_____no_output_____"
],
[
"Note: 我们的 TensorFlow 社区翻译了这些文档。因为社区翻译是尽力而为, 所以无法保证它们是最准确的,并且反映了最新的\n[官方英文文档](https://tensorflow.google.cn/?hl=en)。如果您有改进此翻译的建议, 请提交 pull request 到\n[tensorflow/docs](https://github.com/tensorflow/docs) GitHub 仓库。要志愿地撰写或者审核译文,请加入\n[[email protected] Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn)。",
"_____no_output_____"
],
[
"## 概述\n本教程展示了在训练分布式多工作器(worker)时,如何使用 `tf.distribute.Strategy`。如果你的代码使用了 `tf.estimator`,而且你也对拓展单机以获取高性能有兴趣,那么这个教程就是为你准备的。\n\n在开始之前,请先阅读 [`tf.distribute.Strategy` 指南](../../guide/distribute_strategy.ipynb)。同样相关的还有 [使用多 GPU 训练教程](./keras.ipynb),因为在这个教程里也使用了相同的模型。",
"_____no_output_____"
],
[
"## 创建\n\n首先,设置好 TensorFlow 以及将会用到的输入模块。",
"_____no_output_____"
]
],
[
[
"import tensorflow_datasets as tfds\nimport tensorflow as tf\ntfds.disable_progress_bar()\n\nimport os, json",
"_____no_output_____"
]
],
[
[
"## 输入函数\n\n本教程里我们使用的是 [TensorFlow 数据集(TensorFlow Datasets)](https://tensorflow.google.cn/datasets)里的 MNIST 数据集。本教程里的代码和 [使用多 GPU 训练教程](./keras.ipynb) 类似,但有一个主要区别:当我们使用 Estimator 进行多工作器训练时,需要根据工作器的数量对数据集进行拆分,以确保模型收敛。输入的数据根据工作器其自身的索引来拆分,因此每个工作器各自负责处理该数据集 `1/num_workers` 个不同部分。",
"_____no_output_____"
]
],
[
[
"BUFFER_SIZE = 10000\nBATCH_SIZE = 64\n\ndef input_fn(mode, input_context=None):\n datasets, info = tfds.load(name='mnist',\n with_info=True,\n as_supervised=True)\n mnist_dataset = (datasets['train'] if mode == tf.estimator.ModeKeys.TRAIN else\n datasets['test'])\n\n def scale(image, label):\n image = tf.cast(image, tf.float32)\n image /= 255\n return image, label\n\n if input_context:\n mnist_dataset = mnist_dataset.shard(input_context.num_input_pipelines,\n input_context.input_pipeline_id)\n return mnist_dataset.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)",
"_____no_output_____"
]
],
[
[
"使模型收敛的另一种合理方式是在每个工作器上设置不同的随机种子,然后对数据集进行随机重排。",
"_____no_output_____"
],
[
"## 多工作器配置\n\n本教程主要的不同(区别于[使用多 GPU 训练教程](./keras.ipynb))在于多工作器的创建。明确集群中每个工作器的配置的标准方式是设置环境变量 `TF_CONFIG` 。\n\n`TF_CONFIG` 里包括了两个部分:`cluster` 和 `task`。`cluster` 提供了关于整个集群的信息,也就是集群中的工作器和参数服务器(parameter server)。`task` 提供了关于当前任务的信息。在本例中,任务的类型(type)是 worker 且该任务的索引(index)是 0。\n\n出于演示的目的,本教程展示了怎么将 `TF_CONFIG` 设置成两个本地的工作器。在实践中,你可以在外部的IP地址和端口上创建多个工作器,并为每个工作器正确地配置好 `TF_CONFIG` 变量,也就是更改任务的索引。\n\n警告:不要在 Colab 里执行以下代码。TensorFlow 的运行程序会试图在指定的 IP 地址和端口创建 gRPC 服务器,这会导致创建失败。\n\n```\nos.environ['TF_CONFIG'] = json.dumps({\n 'cluster': {\n 'worker': [\"localhost:12345\", \"localhost:23456\"]\n },\n 'task': {'type': 'worker', 'index': 0}\n})\n```",
"_____no_output_____"
],
[
"## 定义模型\n\n定义训练中用到的层,优化器和损失函数。本教程使用 Keras layers 定义模型,同[使用多 GPU 训练教程](./keras.ipynb)类似。",
"_____no_output_____"
]
],
[
[
"LEARNING_RATE = 1e-4\ndef model_fn(features, labels, mode):\n model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.Dense(10, activation='softmax')\n ])\n logits = model(features, training=False)\n\n if mode == tf.estimator.ModeKeys.PREDICT:\n predictions = {'logits': logits}\n return tf.estimator.EstimatorSpec(labels=labels, predictions=predictions)\n\n optimizer = tf.compat.v1.train.GradientDescentOptimizer(\n learning_rate=LEARNING_RATE)\n loss = tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=True, reduction=tf.keras.losses.Reduction.NONE)(labels, logits)\n loss = tf.reduce_sum(loss) * (1. / BATCH_SIZE)\n if mode == tf.estimator.ModeKeys.EVAL:\n return tf.estimator.EstimatorSpec(mode, loss=loss)\n\n return tf.estimator.EstimatorSpec(\n mode=mode,\n loss=loss,\n train_op=optimizer.minimize(\n loss, tf.compat.v1.train.get_or_create_global_step()))",
"_____no_output_____"
]
],
[
[
"注意:尽管在本例中学习率是固定的,但是通常情况下可能有必要基于全局的批次大小对学习率进行调整。",
"_____no_output_____"
],
[
"## MultiWorkerMirroredStrategy\n\n为训练模型,需要使用 `tf.distribute.experimental.MultiWorkerMirroredStrategy` 实例。`MultiWorkerMirroredStrategy` 创建了每个设备中模型层里所有变量的拷贝,且是跨工作器的。其用到了 `CollectiveOps`,这是 TensorFlow 里的一种操作,用来整合梯度以及确保变量同步。该策略的更多细节可以在 [`tf.distribute.Strategy` 指南](../../guide/distribute_strategy.ipynb)中找到。",
"_____no_output_____"
]
],
[
[
"strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()",
"_____no_output_____"
]
],
[
[
"## 训练和评估模型\n接下来,在 `RunConfig` 中为 estimator 指明分布式策略,同时通过调用 `tf.estimator.train_and_evaluate` 训练和评估模型。本教程只通过指明 `train_distribute` 进行分布式训练。但是也同样也可以通过指明 `eval_distribute` 来进行分布式评估。",
"_____no_output_____"
]
],
[
[
"config = tf.estimator.RunConfig(train_distribute=strategy)\n\nclassifier = tf.estimator.Estimator(\n model_fn=model_fn, model_dir='/tmp/multiworker', config=config)\ntf.estimator.train_and_evaluate(\n classifier,\n train_spec=tf.estimator.TrainSpec(input_fn=input_fn),\n eval_spec=tf.estimator.EvalSpec(input_fn=input_fn)\n)",
"_____no_output_____"
]
],
[
[
"# 优化训练后的模型性能\n\n现在你已经有了由 `tf.distribute.Strategy` 的模型和能支持多工作器的 Estimator。你可以尝试使用下列技巧来优化多工作器训练的性能。\n\n* *增加单批次的大小:* 此处的批次大小指的是每个 GPU 上的批次大小。通常来说,最大的批次大小应该适应 GPU 的内存大小。\n* *变量转换:* 尽可能将变量转换成 `tf.float`。官方的 ResNet 模型包括了如何完成的[样例](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.py#L466)。\n* *使用集群通信:* `MultiWorkerMirroredStrategy` 提供了好几种[集群通信的实现](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). \n * `RING` 实现了基于环状的集群,使用了 gRPC 作为跨主机通讯层。 \n * `NCCL` 使用了 [英伟达的 NCCL](https://developer.nvidia.com/nccl) 来实现集群。\n * `AUTO` 将选择延后至运行时。\n\n集群实现的最优选择不仅基于 GPU 的数量和种类,也基于集群间的通信网络。想要覆盖自动的选项,需要指明 `MultiWorkerMirroredStrategy` 的构造器里的 `communication` 参数,例如让 `communication=tf.distribute.experimental.CollectiveCommunication.NCCL` 。\n",
"_____no_output_____"
],
[
"## 更多的代码示例\n\n1. [端到端的示例](https://github.com/tensorflow/ecosystem/tree/master/distribution_strategy)里使用了 Kubernetes 模板。在这个例子里我们一开始使用了 Keras 模型,并使用了 `tf.keras.estimator.model_to_estimator` API 将其转换成了 Estimator。\n2. 官方的 [ResNet50](https://github.com/tensorflow/models/blob/master/official/resnet/imagenet_main.py) 模型,我们可以使用 `MirroredStrategy` 或 `MultiWorkerMirroredStrategy` 来训练它。",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
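The tutorial above shards the dataset with `dataset.shard(input_context.num_input_pipelines, input_context.input_pipeline_id)` so that each worker processes `1/num_workers` of the examples. As a dependency-free aside (plain Python, illustrative names only), `shard(n, i)` keeps every `n`-th element starting at index `i`:

```python
# Plain-Python illustration (not TensorFlow) of what
# dataset.shard(num_workers, worker_index) keeps: every num_workers-th
# element, starting at worker_index.
def shard(examples, num_workers, worker_index):
    """Return the subset of examples a given worker would process."""
    return examples[worker_index::num_workers]

examples = list(range(10))      # stand-in for dataset elements
print(shard(examples, 2, 0))    # worker 0 -> [0, 2, 4, 6, 8]
print(shard(examples, 2, 1))    # worker 1 -> [1, 3, 5, 7, 9]
```

Deterministic index-based sharding of this kind lets each worker see a disjoint slice of the data with no coordination beyond knowing its own index.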
eca25a7ae26ad513317c9cc903c7f98a5cd0e1cf | 167,592 | ipynb | Jupyter Notebook | examples/sample2.ipynb | Ken529n/Lassolver | f9f6997bf065622fe462b329c5cc99bd20f7d68b | [
"MIT"
] | null | null | null | examples/sample2.ipynb | Ken529n/Lassolver | f9f6997bf065622fe462b329c5cc99bd20f7d68b | [
"MIT"
] | null | null | null | examples/sample2.ipynb | Ken529n/Lassolver | f9f6997bf065622fe462b329c5cc99bd20f7d68b | [
"MIT"
] | null | null | null | 236.04507 | 48,646 | 0.919471 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom tqdm import tqdm",
"_____no_output_____"
],
[
"from lassolver.utils.func import *\nfrom lassolver.utils.signal import *\nfrom lassolver.utils.utils import *\n\nfrom lassolver.matrices.iid_gauss import iidGaussian\nfrom lassolver.matrices.uni_invar import UniInvar\n\nfrom lassolver.solver.amp import AMP\nfrom lassolver.solver.oamp import OAMP\n\nfrom lassolver.dsolver.d_amp import D_AMP\nfrom lassolver.dsolver.d_oamp import D_OAMP",
"_____no_output_____"
]
],
[
[
"# 数値実験の設定",
"_____no_output_____"
]
],
[
[
"N = 4000 # 列数\nalpha = 0.5 # 圧縮率\nM = int(alpha*N) # 行数\nrho = 0.2 # 非零成分の割合",
"_____no_output_____"
],
[
"SNR = 60 # 信号対雑音比\nkappa = 5 # 条件数\nP = 10 # ノード数\nT = 30 # 反復回数\nsim = 100 # 実験数",
"_____no_output_____"
],
[
"x = [bernouli_gaussian(N, rho) for _ in range(sim)]",
"_____no_output_____"
]
],
[
[
"# i.i.d.ガウス行列での信号再構成",
"_____no_output_____"
]
],
[
[
"MSE_iidG = np.empty((sim, 4, T+1))\nCommCost_iidG = np.empty((sim, 2, T))",
"_____no_output_____"
],
[
"for i in tqdm(range(sim)):\n iidG = iidGaussian(M, N, m=0, v=1/M)\n algo = [AMP(iidG.A, x[i], SNR), \n D_AMP(iidG.A, x[i], SNR, P), \n OAMP(iidG.A, x[i], SNR), \n D_OAMP(iidG.A, x[i], SNR, P)]\n\n algo[0].estimate(T) # AMP\n algo[1].estimate(T) # D-AMP\n algo[2].estimate(T, ord='LMMSE') # OAMP\n algo[3].estimate(T, ord='LMMSE') # D-OAMP\n \n for j in range(4):\n MSE_iidG[i, j] = algo[j].mse\n if j % 2 == 1:\n k = int(j/2)\n CommCost_iidG[i, k] = algo[j].communication_cost",
"100%|██████████| 100/100 [4:54:28<00:00, 176.69s/it]\n"
],
[
"MSE_iidG_mean = np.empty((4, T+1))\nCommCost_iidG_mean = np.empty((2, T))\n\nfor i in range(4):\n MSE_iidG_mean[i] = np.mean(MSE_iidG[:, i], axis=0)\n if i % 2 == 1:\n j = int(i/2)\n CommCost_iidG_mean[j] = np.mean(CommCost_iidG[:, j], axis=0)",
"_____no_output_____"
],
[
"plt.figure(figsize=(14, 5.5))\n\nplt.subplot(121)\nplt.title('Communication Cost(i.i.d. Gaussian)')\nplt_CC(CommCost_iidG_mean[0], 'DAMP', T, N, P, 'tab:blue')\nplt_CC(CommCost_iidG_mean[1], 'DOAMP', T, N, P, 'tab:red')\nplt.grid()\n\nplt.subplot(122)\nplt.title('MSE(i.i.d. Gaussian)')\nplt_MSE(MSE_iidG_mean[0], 'AMP', T, 'tab:cyan')\nplt_MSE(MSE_iidG_mean[1], 'DAMP', T, 'tab:blue')\nplt_MSE(MSE_iidG_mean[2], 'OAMP', T, 'tab:orange')\nplt_MSE(MSE_iidG_mean[3], 'DOAMP', T, 'tab:red')\nplt.grid()",
"_____no_output_____"
],
[
"plt_CC(CommCost_iidG_mean[0], 'DAMP', T, N, P, 'tab:blue')\nplt_CC(CommCost_iidG_mean[1], 'DOAMP', T, N, P, 'tab:red')\nplt.grid()",
"_____no_output_____"
],
[
"plt_MSE(MSE_iidG_mean[0], 'AMP', T, 'tab:cyan')\nplt_MSE(MSE_iidG_mean[1], 'DAMP', T, 'tab:blue')\nplt_MSE(MSE_iidG_mean[2], 'OAMP', T, 'tab:orange')\nplt_MSE(MSE_iidG_mean[3], 'DOAMP', T, 'tab:red')\nplt.grid()",
"_____no_output_____"
]
],
[
[
"# ユニタリ不変行列での信号再構成",
"_____no_output_____"
]
],
[
[
"MSE_UniInv = np.empty((sim, 4, T+1))\nCommCost_UniInv = np.empty((sim, 2, T))",
"_____no_output_____"
],
[
"for i in tqdm(range(sim)):\n UniInv = UniInvar(M, N, kappa)\n algo = [AMP(UniInv.A, x[i], SNR), \n D_AMP(UniInv.A, x[i], SNR, P), \n OAMP(UniInv.A, x[i], SNR), \n D_OAMP(UniInv.A, x[i], SNR, P)]\n \n algo[0].estimate(T) # AMP\n algo[1].estimate(T) # D-AMP\n algo[2].estimate(T, ord='LMMSE') # OAMP\n algo[3].estimate(T, ord='LMMSE') # D-OAMP\n \n for j in range(4):\n MSE_UniInv[i, j] = algo[j].mse\n if j % 2 == 1:\n k = int(j/2)\n CommCost_UniInv[i, k] = algo[j].communication_cost",
"100%|██████████| 100/100 [10:11:37<00:00, 366.98s/it]\n"
],
[
"MSE_UniInv_mean = np.empty((4, T+1))\nCommCost_UniInv_mean = np.empty((2, T))\n\nfor i in range(4):\n MSE_UniInv_mean[i] = np.mean(MSE_UniInv[:, i], axis=0)\n if i % 2 == 1:\n j = int(i/2)\n CommCost_UniInv_mean[j] = np.mean(CommCost_UniInv[:, j], axis=0)",
"_____no_output_____"
],
[
"plt.figure(figsize=(14, 5.5))\n\nplt.subplot(121)\nplt.title('Communication Cost(Unitary-Invariant Matrix)')\n#plt_CC(CommCost_UniInv_mean[0], 'DAMP', T, N, P, 'tab:blue')\nplt_CC(CommCost_UniInv_mean[1], 'GCOAMP', T, N, P, 'tab:red')\nplt.grid()\n\nplt.subplot(122)\nplt.title('MSE(Unitary-Invariant Matrix)')\n#plt_MSE(MSE_UniInv_mean[0], 'AMP', T, 'tab:cyan')\n#plt_MSE(MSE_UniInv_mean[1], 'DAMP', T, 'tab:blue')\nplt_MSE(MSE_UniInv_mean[2], 'OAMP', T, 'tab:orange')\nplt_MSE(MSE_UniInv_mean[3], 'GCOAMP', T, 'tab:red')\nplt.grid()",
"_____no_output_____"
],
[
"#plt_CC(CommCost_UniInv_mean[0], 'DAMP', T, N, P, 'tab:blue')\nplt_CC(CommCost_UniInv_mean[1], 'GCOAMP', T, N, P, 'tab:red')\nplt.grid()",
"_____no_output_____"
],
[
"#plt_MSE(MSE_UniInv_mean[0], 'AMP', T, 'tab:cyan')\n#plt_MSE(MSE_UniInv_mean[1], 'DAMP', T, 'tab:blue')\nplt_MSE(MSE_UniInv_mean[2], 'OAMP', T, 'tab:orange')\nplt_MSE(MSE_UniInv_mean[3], 'GCOAMP', T, 'tab:red')\nplt.grid()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca260b1775c6bc2f99de0e83efc220d1dd001e3 | 587 | ipynb | Jupyter Notebook | notebooks/eda.ipynb | dwszai/news-summarizer | 0a5583b979d8751a2af6af235b03d0ca2fa770ad | [
"MIT"
] | 1 | 2021-03-08T06:59:41.000Z | 2021-03-08T06:59:41.000Z | notebooks/eda.ipynb | dwszai/news-summarizer | 0a5583b979d8751a2af6af235b03d0ca2fa770ad | [
"MIT"
] | null | null | null | notebooks/eda.ipynb | dwszai/news-summarizer | 0a5583b979d8751a2af6af235b03d0ca2fa770ad | [
"MIT"
] | null | null | null | 17.787879 | 35 | 0.497445 | [] | [] | [] |
eca26c728006365604e7d70398e2a19d211e01c5 | 8,300 | ipynb | Jupyter Notebook | lab06/02 Fancy softmax classifier.ipynb | gomsun2/mlZeroToAll | f9c59417c3f9f6ab4bc9fcbdb937b40e7c39d487 | [
"MIT"
] | null | null | null | lab06/02 Fancy softmax classifier.ipynb | gomsun2/mlZeroToAll | f9c59417c3f9f6ab4bc9fcbdb937b40e7c39d487 | [
"MIT"
] | null | null | null | lab06/02 Fancy softmax classifier.ipynb | gomsun2/mlZeroToAll | f9c59417c3f9f6ab4bc9fcbdb937b40e7c39d487 | [
"MIT"
] | null | null | null | 39.903846 | 118 | 0.537711 | [
[
[
"import tensorflow as tf\nimport numpy as np\n\nxy = np.loadtxt('data-04-zoo.csv', delimiter=',', dtype=np.float32)\nx_data = xy[:, 0:-1]\ny_data = xy[:, [-1]]\nnb_classes = 7 # 0..6, nb_classes is represent of one-hot encoding values\n\nX=tf.placeholder(tf.float32, [None, 16])\nY = tf.placeholder(tf.int32, [None, 1]) # range of 0..6, shape=(?, 1)\nY_one_hot = tf.one_hot(Y, nb_classes) # one hot shape is (?,1,7), but need to the shape is looks like (?, 7).\nY_one_hot = tf.reshape(Y_one_hot, [-1, nb_classes]) # finally, the \"Y_one_hot \" is (?, 7) form.\n\n# input is 16, output is 7, bias is 7(because, it use to matrix adding operation)\nW = tf.Variable(tf.random_normal([16, nb_classes]), name='weight')\nb = tf.Variable(tf.random_normal([nb_classes]), name='bias')\n\n# tf.nn.softmax computes softmax activations\n# softmax = exp(logits) / reduce_sum(exp(logitis), dim)\nlogits = tf.matmul(X, W) + b\nhypothesis = tf.nn.softmax(logits)\n\n# cross entropy cost or loss\ncost_i = tf.nn.softmax_cross_entropy_with_logits_v2(logits = logits, labels=Y_one_hot)\ncost = tf.reduce_mean(cost_i)\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)\n\n# the following values are used for checking to an accuracy of prediction\nprediction = tf.argmax(hypothesis, 1)\ncorrect_prediction = tf.equal(prediction, tf.argmax(Y_one_hot, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n# launch graph\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n for step in range(2001):\n sess.run(optimizer, feed_dict={X: x_data, Y: y_data})\n if step % 100 == 0:\n loss, acc = sess.run([cost, accuracy], feed_dict={X: x_data, Y: y_data})\n print('step: {:5}\\tloss: {:.3f}\\tacc: {:.2%}'.format(step, loss, acc))\n \n pred = sess.run(prediction, feed_dict={X: x_data})\n for p, y in zip(pred, y_data.flatten()):\n print('[{}] prediction: {} True Y: {}'.format(p == int(y), p, int(y)))",
"step: 0\tloss: 5.555\tacc: 1.98%\nstep: 100\tloss: 0.749\tacc: 81.19%\nstep: 200\tloss: 0.488\tacc: 87.13%\nstep: 300\tloss: 0.373\tacc: 90.10%\nstep: 400\tloss: 0.302\tacc: 94.06%\nstep: 500\tloss: 0.254\tacc: 95.05%\nstep: 600\tloss: 0.217\tacc: 96.04%\nstep: 700\tloss: 0.189\tacc: 96.04%\nstep: 800\tloss: 0.166\tacc: 96.04%\nstep: 900\tloss: 0.147\tacc: 96.04%\nstep: 1000\tloss: 0.132\tacc: 97.03%\nstep: 1100\tloss: 0.119\tacc: 97.03%\nstep: 1200\tloss: 0.108\tacc: 99.01%\nstep: 1300\tloss: 0.099\tacc: 99.01%\nstep: 1400\tloss: 0.091\tacc: 99.01%\nstep: 1500\tloss: 0.084\tacc: 99.01%\nstep: 1600\tloss: 0.078\tacc: 100.00%\nstep: 1700\tloss: 0.073\tacc: 100.00%\nstep: 1800\tloss: 0.069\tacc: 100.00%\nstep: 1900\tloss: 0.065\tacc: 100.00%\nstep: 2000\tloss: 0.061\tacc: 100.00%\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 3 True Y: 3\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 3 True Y: 3\n[True] prediction: 3 True Y: 3\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 1 True Y: 1\n[True] prediction: 3 True Y: 3\n[True] prediction: 6 True Y: 6\n[True] prediction: 6 True Y: 6\n[True] prediction: 6 True Y: 6\n[True] prediction: 1 True Y: 1\n[True] prediction: 0 True Y: 0\n[True] prediction: 3 True Y: 3\n[True] prediction: 0 True Y: 0\n[True] prediction: 1 True Y: 1\n[True] prediction: 1 True Y: 1\n[True] prediction: 0 True Y: 0\n[True] prediction: 1 True Y: 1\n[True] prediction: 5 True Y: 5\n[True] prediction: 4 True Y: 4\n[True] prediction: 4 True Y: 4\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 5 True Y: 5\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 1 True Y: 1\n[True] prediction: 3 True Y: 3\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 1 True Y: 1\n[True] prediction: 3 True Y: 3\n[True] prediction: 5 True Y: 5\n[True] prediction: 5 True Y: 5\n[True] prediction: 1 True Y: 1\n[True] prediction: 5 True Y: 5\n[True] prediction: 1 True Y: 1\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 6 True Y: 6\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 5 True Y: 5\n[True] prediction: 4 True Y: 4\n[True] prediction: 6 True Y: 6\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 1 True Y: 1\n[True] prediction: 1 True Y: 1\n[True] prediction: 1 True Y: 1\n[True] prediction: 1 True Y: 1\n[True] prediction: 3 True Y: 3\n[True] prediction: 3 True Y: 3\n[True] prediction: 2 True Y: 2\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 1 True Y: 1\n[True] prediction: 6 True Y: 6\n[True] prediction: 3 True Y: 3\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 2 True Y: 2\n[True] prediction: 6 True Y: 6\n[True] prediction: 1 True Y: 1\n[True] prediction: 1 True Y: 1\n[True] prediction: 2 True Y: 2\n[True] prediction: 6 True Y: 6\n[True] prediction: 3 True Y: 3\n[True] prediction: 1 True Y: 1\n[True] prediction: 0 True Y: 0\n[True] prediction: 6 True Y: 6\n[True] 
prediction: 3 True Y: 3\n[True] prediction: 1 True Y: 1\n[True] prediction: 5 True Y: 5\n[True] prediction: 4 True Y: 4\n[True] prediction: 2 True Y: 2\n[True] prediction: 2 True Y: 2\n[True] prediction: 3 True Y: 3\n[True] prediction: 0 True Y: 0\n[True] prediction: 0 True Y: 0\n[True] prediction: 1 True Y: 1\n[True] prediction: 0 True Y: 0\n[True] prediction: 5 True Y: 5\n[True] prediction: 0 True Y: 0\n[True] prediction: 6 True Y: 6\n[True] prediction: 1 True Y: 1\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
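The classifier above leans on `tf.one_hot` plus `softmax_cross_entropy_with_logits_v2`. The following NumPy sketch (toy values, assumed for illustration rather than taken from the lab) spells out what those two steps compute:

```python
# NumPy sketch (toy values) of the one-hot + softmax cross-entropy steps
# that tf.one_hot and softmax_cross_entropy_with_logits_v2 perform above.
import numpy as np

def one_hot(labels, nb_classes):
    """Turn integer class labels into one-hot rows."""
    out = np.zeros((len(labels), nb_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels, nb_classes=7):
    """Per-example cross-entropy loss, matching the 7 zoo classes."""
    probs = softmax(logits)
    y = one_hot(labels, nb_classes)
    return -np.sum(y * np.log(probs), axis=1)

toy_logits = np.array([[2.0, 1.0, 0.1, 0.0, 0.0, 0.0, 0.0]])
print(cross_entropy(toy_logits, [0]))   # small loss: class 0 is already favoured
```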
eca2744e01a599581c197d4b929c6963c3f79bea | 5,899 | ipynb | Jupyter Notebook | DS_Algo/EPI_Python.ipynb | Maruf789/interview_recipes | c12c0d96908c51db9bf18411a17212d2c56e5375 | [
"MIT"
] | null | null | null | DS_Algo/EPI_Python.ipynb | Maruf789/interview_recipes | c12c0d96908c51db9bf18411a17212d2c56e5375 | [
"MIT"
] | null | null | null | DS_Algo/EPI_Python.ipynb | Maruf789/interview_recipes | c12c0d96908c51db9bf18411a17212d2c56e5375 | [
"MIT"
] | null | null | null | 29.495 | 135 | 0.454992 | [
[
[
"\"\"\" Find h-index \"\"\"\n\ndef h_index(citations):\n citations.sort()\n n = len(citations)\n print(\"citations: \", citations)\n print(\"length of citations: \", n)\n for i, c in enumerate(citations):\n print(\"i: \" + str(i) + \", c:\" + str(c))\n if c >= n - i:\n print(\"n - i: \", n-i)\n return n - i\n return 0\n\ncited = [1,4,1,4,2,1,3,5,6]\nh_index(cited)",
"_____no_output_____"
],
[
"\"\"\"Recursion:\ngcd (greatest common divisor)\"\"\"\n\ndef gcd(x: int, y: int) -> int:\n return x if y ==0 else gcd(y, x%y)\ngcd(12, 56)",
"_____no_output_____"
],
[
"\"\"\"Phone Numbers Mnemonic\"\"\"\nimport itertools\n\ndef phone_mnemonic(phone_number): # iterative\n mapping = dict(enumerate(['0', '1', 'ABC', 'DEF', 'GHI', 'JKL', 'MNO', 'PQRS', 'TUV', 'WXYZ']))\n combination = []\n if len(phone_number) > 0:\n for digit in phone_number:\n combination.append(mapping[int(digit)])\n return [''.join(word) for word in itertools.product(*combination)]\n# return [''.join(word) for word in itertools.product(*(mapping[int(digit)] for digit in phone_number))]\n \n return []\n\nprint(phone_mnemonic('2976'))",
"_____no_output_____"
],
[
"\"\"\"Recursion:\nsolve n-queens in n X n chessboard ????\"\"\"\n\ndef n_queens(n): \n def solve_n_queens(row):\n if row == n:\n # All queens are legally placed.\n result.append(list(col_placement))\n return\n for col in range(n):\n for i, c in list(enumerate(col_placement[:row])):\n print(\"enumerate(col_placement[:{row}]):{a} i: {i} c: {b} col: {c} abs({c}-{col}): {d} (0, {row}-{i}): {e} \"\n .format(row=row, a=list(enumerate(col_placement[:row])), \n i=i, b=c, c=c, col=col,\n d=abs(c-col), \n e=list(range(0, row -i))))\n # Test if a newly placed queen will conflict any earlier queens\n if all(\n abs(c - col) not in (0, row - i)\n for i, c in enumerate(col_placement[:row])):\n col_placement[row] = col\n solve_n_queens(row + 1)\n\n result = []\n col_placement = [0] * n\n solve_n_queens(0)\n return result\n\nprint(n_queens(4))",
"_____no_output_____"
],
[
"def count_bits(x): \n num_bits = 0\n while x:\n print(\"x & 1: \", x&1)\n num_bits += x & 1\n print(\"num_bits: \", num_bits)\n x >>= 1\n print(\"x: \", x)\n return num_bits\n\nprint(count_bits(101))",
"_____no_output_____"
],
[
"\"\"\"Recursion: Tower of Hanoi\"\"\"\n\ndef compute_tower_hanoi(num_pegs, num_rings):\n def compute_tower_hanoi_steps(num_rings_to_move, from_peg, to_peg,\n use_peg):\n if num_rings_to_move > 0:\n compute_tower_hanoi_steps(num_rings_to_move - 1, from_peg, use_peg,\n to_peg)\n pegs[to_peg].append(pegs[from_peg].pop())\n print(\"pegs: \", pegs)\n result.append([from_peg, to_peg])\n #print(\"result: \", result)\n compute_tower_hanoi_steps(num_rings_to_move - 1, use_peg, to_peg,\n from_peg)\n\n # Initialize pegs.\n result = []\n pegs = [list(reversed(range(1, num_rings + 1)))] + [[] for _ in range(1, num_pegs)]\n print(\"\\npegs0: \", pegs)\n compute_tower_hanoi_steps(num_rings, 0, 1, 2)\n return result\n\nnum_pegs = 3\nnum_rings = 3\nprint(compute_tower_hanoi(num_pegs, num_rings))\n\nnum_rings = 4\nprint(compute_tower_hanoi(num_pegs, num_rings))",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
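The bit-counting routine above is instrumented with debug prints; a quick sanity check of its logic, offered here as an aside rather than something from the original notebook, is to compare it against Python's built-in binary formatting:

```python
# Cross-check of the count_bits routine (debug prints removed) against
# Python's built-in binary formatting.
def count_bits(x):
    num_bits = 0
    while x:
        num_bits += x & 1   # add the lowest bit
        x >>= 1             # shift the next bit into place
    return num_bits

for value in (0, 1, 7, 101, 255, 1024):
    assert count_bits(value) == bin(value).count("1")
print("count_bits agrees with bin(value).count('1')")
```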
eca2a914157172173a2612059caa5f97d8ca5c62 | 11,235 | ipynb | Jupyter Notebook | notebooks/ExpectError.ipynb | GonzaloCastroO/fuzzingbook | 9c9ec1d0c97d1ca9c4487f0edc885362204d6559 | [
"MIT"
] | 3 | 2019-04-03T09:59:39.000Z | 2022-03-18T09:20:22.000Z | notebooks/ExpectError.ipynb | GonzaloCastroO/fuzzingbook | 9c9ec1d0c97d1ca9c4487f0edc885362204d6559 | [
"MIT"
] | null | null | null | notebooks/ExpectError.ipynb | GonzaloCastroO/fuzzingbook | 9c9ec1d0c97d1ca9c4487f0edc885362204d6559 | [
"MIT"
] | 1 | 2019-12-18T08:47:27.000Z | 2019-12-18T08:47:27.000Z | 23.955224 | 287 | 0.512773 | [
[
[
"# Error Handling\n\nThe code in this notebook helps with handling errors. Normally, an error in notebook code causes the execution of the code to stop; while an infinite loop in notebook code causes the notebook to run without end. This notebook provides two classes to help address these concerns.",
"_____no_output_____"
],
[
"**Prerequisites**\n\n* This notebook needs some understanding on advanced concepts in Python, notably \n * classes\n * the Python `with` statement\n * tracing\n * measuring time\n * exceptions",
"_____no_output_____"
],
[
"## Catching Errors\n\nThe class `ExpectError` allows to express that some code produces an exception. A typical usage looks as follows:\n\n```Python\nfrom ExpectError import ExpectError\n\nwith ExpectError():\n function_that_is_supposed_to_fail()\n```\n\nIf an exception occurs, it is printed on standard error; yet, execution continues.",
"_____no_output_____"
]
],
[
[
"import fuzzingbook_utils",
"_____no_output_____"
],
[
"import traceback\nimport sys",
"_____no_output_____"
],
[
"class ExpectError(object):\n def __init__(self, print_traceback=True, mute=False):\n self.print_traceback = print_traceback\n self.mute = mute\n\n # Begin of `with` block\n def __enter__(self):\n return self\n\n # End of `with` block\n def __exit__(self, exc_type, exc_value, tb):\n if exc_type is None:\n # No exception\n return\n\n # An exception occurred\n if self.print_traceback:\n lines = ''.join(\n traceback.format_exception(\n exc_type,\n exc_value,\n tb)).strip()\n else:\n lines = traceback.format_exception_only(\n exc_type, exc_value)[-1].strip()\n\n if not self.mute:\n print(lines, \"(expected)\", file=sys.stderr)\n return True # Ignore it",
"_____no_output_____"
]
],
[
[
"Here's an example:",
"_____no_output_____"
]
],
[
[
"def fail_test():\n # Trigger an exception\n x = 1 / 0",
"_____no_output_____"
],
[
"with ExpectError():\n fail_test()",
"_____no_output_____"
],
[
"with ExpectError(print_traceback=False):\n fail_test()",
"_____no_output_____"
]
],
[
[
"## Catching Timeouts\n\nThe class `ExpectTimeout(seconds)` allows to express that some code may run for a long or infinite time; execution is thus interrupted after `seconds` seconds. A typical usage looks as follows:\n\n```Python\nfrom ExpectError import ExpectTimeout\n\nwith ExpectTimeout(2) as t:\n function_that_is_supposed_to_hang()\n```\n\nIf an exception occurs, it is printed on standard error (as with `ExpectError`); yet, execution continues.\n\nShould there be a need to cancel the timeout within the `with` block, `t.cancel()` will do the trick.\n\nThe implementation uses `sys.settrace()`, as this seems to be the most portable way to implement timeouts. It is not very efficient, though. Also, it only works on individual lines of Python code and will not interrupt a long-running system function.",
"_____no_output_____"
]
],
[
[
"import sys\nimport time",
"_____no_output_____"
],
[
"try:\n # Should be defined in Python 3\n x = TimeoutError\nexcept:\n # For Python 2\n class TimeoutError(Exception):\n def __init__(self, value=\"Timeout\"):\n self.value = value\n\n def __str__(self):\n return repr(self.value)\n",
"_____no_output_____"
],
[
"class ExpectTimeout(object):\n def __init__(self, seconds, print_traceback=True, mute=False):\n self.seconds_before_timeout = seconds\n self.original_trace_function = None\n self.end_time = None\n self.print_traceback = print_traceback\n self.mute = mute\n\n # Tracing function\n def check_time(self, frame, event, arg):\n if self.original_trace_function is not None:\n self.original_trace_function(frame, event, arg)\n\n current_time = time.time()\n if current_time >= self.end_time:\n raise TimeoutError\n\n return self.check_time\n\n # Begin of `with` block\n def __enter__(self):\n start_time = time.time()\n self.end_time = start_time + self.seconds_before_timeout\n\n self.original_trace_function = sys.gettrace()\n sys.settrace(self.check_time)\n return self\n\n # End of `with` block\n def __exit__(self, exc_type, exc_value, tb):\n self.cancel()\n\n if exc_type is None:\n return\n\n # An exception occurred\n if self.print_traceback:\n lines = ''.join(\n traceback.format_exception(\n exc_type,\n exc_value,\n tb)).strip()\n else:\n lines = traceback.format_exception_only(\n exc_type, exc_value)[-1].strip()\n\n if not self.mute:\n print(lines, \"(expected)\", file=sys.stderr)\n return True # Ignore it\n\n def cancel(self):\n sys.settrace(self.original_trace_function)",
"_____no_output_____"
]
],
[
[
"Here's an example:",
"_____no_output_____"
]
],
[
[
"def long_running_test():\n print(\"Start\")\n for i in range(10):\n time.sleep(1)\n print(i, \"seconds have passed\")\n print(\"End\")",
"_____no_output_____"
],
[
"with ExpectTimeout(5, print_traceback=False):\n long_running_test()",
"_____no_output_____"
]
],
[
[
"Note that it is possible to nest multiple timeouts.",
"_____no_output_____"
]
],
[
[
"with ExpectTimeout(5):\n with ExpectTimeout(3):\n long_running_test()\n long_running_test()",
"_____no_output_____"
]
],
[
[
"That's it, folks – enjoy!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
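The notebook above notes that the `sys.settrace()` approach is portable but inefficient and cannot interrupt a long-running system call. On Unix-like systems a signal-based timeout avoids the per-line tracing overhead; the sketch below uses invented names, is only a rough analogue, and does not reproduce `ExpectTimeout`'s full behaviour:

```python
# Illustrative Unix-only alternative (invented names): use SIGALRM instead of
# per-line tracing. Whole seconds only, main thread only, no nesting support.
import signal
import time

class AlarmTimeout:
    def __init__(self, seconds):
        self.seconds = seconds

    def _handler(self, signum, frame):
        raise TimeoutError(f"timed out after {self.seconds}s")

    def __enter__(self):
        self.old_handler = signal.signal(signal.SIGALRM, self._handler)
        signal.alarm(self.seconds)           # schedule the alarm
        return self

    def __exit__(self, exc_type, exc_value, tb):
        signal.alarm(0)                      # cancel any pending alarm
        signal.signal(signal.SIGALRM, self.old_handler)
        return exc_type is TimeoutError      # swallow only the timeout

with AlarmTimeout(1):
    time.sleep(5)                            # interrupted after ~1 second
print("continued after the timeout")
```

The trade-off is portability: `SIGALRM` is unavailable on Windows and only works in the main thread, which is presumably why the notebook sticks with tracing.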
eca2abe6c4adad01baae64de40156e111af258e2 | 76,648 | ipynb | Jupyter Notebook | class/demo01/lecture1.ipynb | AvijeetPrasad/jb_course | d52ce9e14781bcac3db9e736140fb8895bf9162b | [
"MIT"
] | 50 | 2020-09-01T06:35:34.000Z | 2022-03-23T09:53:39.000Z | class/demo01/lecture1.ipynb | AvijeetPrasad/jb_course | d52ce9e14781bcac3db9e736140fb8895bf9162b | [
"MIT"
] | 5 | 2020-09-01T04:58:39.000Z | 2021-01-10T04:59:23.000Z | class/demo01/lecture1.ipynb | AvijeetPrasad/jb_course | d52ce9e14781bcac3db9e736140fb8895bf9162b | [
"MIT"
] | 12 | 2020-09-01T06:26:34.000Z | 2022-01-08T20:38:10.000Z | 24.027586 | 443 | 0.413631 | [
[
[
"# Introduction to Python",
"_____no_output_____"
],
[
"In this class, we will watch the first of four lectures by Dr. Mike Gelbart, option co-director of the UBC-Vancouver MDS program.",
"_____no_output_____"
],
[
"<div class=\"container youtube\">\n<iframe class=\"responsive-iframe\" height=\"350px\" width=\"622px\" src=\"https://www.youtube-nocookie.com/embed/yBAYduexjuA\" frameborder=\"0\" allow=\"accelerometer; autoplay=\"0\"; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n</div>",
"_____no_output_____"
],
[
"## Attribution\n\n- The original version of these Python lectures were by [Patrick Walls](https://www.math.ubc.ca/~pwalls/).\n- These lectures were delivered by [Mike Gelbart](https://mikegelbart.com) and are [available publicly here](https://www.youtube.com/watch?v=yBAYduexjuA).",
"_____no_output_____"
],
[
"## About this course (5 min)\n\n### High-level overview:\n\n- The MDS program has a programming prerequisite.\n- Therefore, this course does not start from \"no programming knowledge\".\n - You should know what an `if` statement is.\n - You should know what a `for` loop is.\n - You should know what a function is.\n- However, not all of you have used Python/R.\n- So, this course is about _Python-specific_ and _R-specific_ syntax/knowledge.\n- We will cover things like loops, but just the syntax, not the concept of a loop.\n- Weeks 1&2: Python, lectures by Mike Gelbart\n- Weeks 3&4: R, lectures by Tiffany Timbers",
"_____no_output_____"
],
[
"## Lecture Outline:\n\n- Basic datatypes (20 min)\n- Lists and tuples (20 min)\n- Break (5 min)\n- String methods (5 min)\n- Dictionaries (10 min)\n- Conditionals (10 min)",
"_____no_output_____"
],
[
"## Basic datatypes (20 min)\n\n- A **value** is a piece of data that a computer program works with such as a number or text. \n- There are different **types** of values: `42` is an integer and `\"Hello!\"` is a string. \n- A **variable** is a name that refers to a value. \n - In mathematics and statistics, we usually use variables names like $x$ and $y$. \n - In Python, we can use any word as a variable name (as long as it starts with a letter and is not a [reserved word](https://docs.python.org/3.3/reference/lexical_analysis.html#keywords) in Python such as `for`, `while`, `class`, `lambda`, etc.). \n- And we use the **assignment operator** `=` to assign a value to a variable.\n\nSee the [Python 3 documentation](https://docs.python.org/3/library/stdtypes.html) for a summary of the standard built-in Python datatypes. See [Think Python (Chapter 2)](http://greenteapress.com/thinkpython/html/thinkpython003.html) for a discussion of variables, expressions and statements in Python.",
"_____no_output_____"
],
[
"### Common built-in Python data types\n\n| English name | Type name | Description | Example |\n| :--- | :--- | :--- | :--- |\n| integer | `int` | positive/negative whole numbers | `42` |\n| floating point number | `float` | real number in decimal form | `3.14159` |\n| boolean | `bool` | true or false | `True` |\n| string | `str` | text | `\"I Can Has Cheezburger?\"` |\n| list | `list` | a collection of objects - mutable & ordered | `['Ali','Xinyi','Miriam']` |\n| tuple | `tuple` | a collection of objects - immutable & ordered | `('Thursday',6,9,2018)` |\n| dictionary | `dict` | mapping of key-value pairs | `{'name':'DSCI','code':511,'credits':2}` |\n| none | `NoneType` | represents no value | `None` |",
"_____no_output_____"
],
[
"#### Numeric Types",
"_____no_output_____"
]
],
[
[
"x = 42",
"_____no_output_____"
],
[
"type(x)",
"_____no_output_____"
],
[
"print(x)",
"42\n"
],
[
"x # in Jupyter/IPython we don't need to explicitly print for the last line of a cell",
"_____no_output_____"
],
[
"pi = 3.14159",
"_____no_output_____"
],
[
"print(pi)",
"3.14159\n"
],
[
"type(pi)",
"_____no_output_____"
],
[
"λ = 2",
"_____no_output_____"
]
],
[
[
"#### Arithmetic Operators\n\nThe syntax for the arithmetic operators are:\n\n| Operator | Description |\n| :---: | :---: |\n| `+` | addition |\n| `-` | subtraction |\n| `*` | multiplication |\n| `/` | division |\n| `**` | exponentiation |\n| `//` | integer division |\n| `%` | modulo |\n\nLet's apply these operators to numeric types and observe the results.",
"_____no_output_____"
]
],
[
[
"1 + 2 + 3 + 4 + 5",
"_____no_output_____"
],
[
"0.1 + 0.2",
"_____no_output_____"
]
],
[
[
"```{tip}\nFrom Firas: This is floating point arithmetic. For an explanation of what's going on, [see this tutorial](https://docs.python.org/3/tutorial/floatingpoint.html).\n```",
"_____no_output_____"
]
],
[
[
"2 * 3.14159",
"_____no_output_____"
],
[
"2**10",
"_____no_output_____"
],
[
"type(2**10)",
"_____no_output_____"
],
[
"2.0**10",
"_____no_output_____"
],
[
"int_2 = 2",
"_____no_output_____"
],
[
"float_2 = 2.0",
"_____no_output_____"
],
[
"float_2_again = 2.",
"_____no_output_____"
],
[
"101 / 2",
"_____no_output_____"
],
[
"101 // 2 # \"integer division\" - always rounds down",
"_____no_output_____"
],
[
"101 % 2 # \"101 mod 2\", or the remainder when 101 is divided by 2",
"_____no_output_____"
]
],
[
[
"#### None\n\n- `NoneType` is its own type in Python.\n- It only has one possible value, `None`",
"_____no_output_____"
]
],
[
[
"x = None",
"_____no_output_____"
],
[
"print(x)",
"None\n"
],
[
"type(x)",
"_____no_output_____"
]
],
[
[
"You may have seen similar things in other languages, like `null` in Java, etc.",
"_____no_output_____"
],
[
"#### Strings\n\n- Text is stored as a type called a string. \n- We think of a string as a sequence of characters. \n- We write strings as characters enclosed with either:\n - single quotes, e.g., `'Hello'` \n - double quotes, e.g., `\"Goodbye\"`\n - triple single quotes, e.g., `'''Yesterday'''`\n - triple double quotes, e.g., `\"\"\"Tomorrow\"\"\"`",
"_____no_output_____"
]
],
[
[
"my_name = \"Mike Gelbart\"",
"_____no_output_____"
],
[
"print(my_name)",
"Mike Gelbart\n"
],
[
"type(my_name)",
"_____no_output_____"
],
[
"course = 'DSCI 511'",
"_____no_output_____"
],
[
"print(course)",
"DSCI 511\n"
],
[
"type(course)",
"_____no_output_____"
]
],
[
[
"If the string contains a quotation or apostrophe, we can use double quotes or triple quotes to define the string.",
"_____no_output_____"
]
],
[
[
"sentence = \"It's a rainy day.\"",
"_____no_output_____"
],
[
"print(sentence)",
"It's a rainy day.\n"
],
[
"type(sentence)",
"_____no_output_____"
],
[
"saying = '''They say: \n\"It's a rainy day!\"'''",
"_____no_output_____"
],
[
"print(saying)",
"They say: \n\"It's a rainy day!\"\n"
]
],
[
[
"#### Boolean\n\n- The Boolean (`bool`) type has two values: `True` and `False`. ",
"_____no_output_____"
]
],
[
[
"the_truth = True",
"_____no_output_____"
],
[
"print(the_truth)",
"True\n"
],
[
"type(the_truth)",
"_____no_output_____"
],
[
"lies = False",
"_____no_output_____"
],
[
"print(lies)",
"False\n"
],
[
"type(lies)",
"_____no_output_____"
]
],
[
[
"#### Comparison Operators\n\nCompare objects using comparison operators. The result is a Boolean value.\n\n| Operator | Description |\n| :---: | :--- |\n| `x == y ` | is `x` equal to `y`? |\n| `x != y` | is `x` not equal to `y`? |\n| `x > y` | is `x` greater than `y`? |\n| `x >= y` | is `x` greater than or equal to `y`? |\n| `x < y` | is `x` less than `y`? |\n| `x <= y` | is `x` less than or equal to `y`? |\n| `x is y` | is `x` the same object as `y`? |",
"_____no_output_____"
]
],
[
[
"2 < 3",
"_____no_output_____"
],
[
"\"Data Science\" != \"Deep Learning\"",
"_____no_output_____"
],
[
"2 == \"2\"",
"_____no_output_____"
],
[
"2 == 2.0",
"_____no_output_____"
]
],
[
[
"Note: we will discuss `is` next week.",
"_____no_output_____"
],
[
"Operators on Boolean values.\n\n| Operator | Description |\n| :---: | :--- |\n|`x and y`| are `x` and `y` both true? |\n|`x or y` | is at least one of `x` and `y` true? |\n| `not x` | is `x` false? | ",
"_____no_output_____"
]
],
[
[
"True and True",
"_____no_output_____"
],
[
"True and False",
"_____no_output_____"
],
[
"False or False",
"_____no_output_____"
],
[
"(\"Python 2\" != \"Python 3\") and (2 <= 3)",
"_____no_output_____"
],
[
"not True",
"_____no_output_____"
],
[
"not not True",
"_____no_output_____"
]
],
[
[
"#### Casting\n\n- Sometimes (but rarely) we need to explicitly **cast** a value from one type to another.\n- Python tries to do something reasonable, or throws an error if it has no ideas.",
"_____no_output_____"
]
],
[
[
"x = int(5.0)\nx",
"_____no_output_____"
],
[
"type(x)",
"_____no_output_____"
],
[
"x = str(5.0)\nx",
"_____no_output_____"
],
[
"type(x)",
"_____no_output_____"
],
[
"str(5.0) == 5.0",
"_____no_output_____"
],
[
"# list(5.0) # there is no reasonable thing to do here",
"_____no_output_____"
],
[
"int(5.3)",
"_____no_output_____"
]
],
[
[
"## Lists and Tuples (20 min)\n\n- Lists and tuples allow us to store multiple things (\"elements\") in a single object.\n- The elements are _ordered_.",
"_____no_output_____"
]
],
[
[
"my_list = [1, 2, \"THREE\", 4, 0.5]",
"_____no_output_____"
],
[
"print(my_list)",
"[1, 2, 'THREE', 4, 0.5]\n"
],
[
"type(my_list)",
"_____no_output_____"
]
],
[
[
"You can get the length of the list with `len`:",
"_____no_output_____"
]
],
[
[
"len(my_list)",
"_____no_output_____"
],
[
"today = (1, 2, \"THREE\", 4, 0.5)",
"_____no_output_____"
],
[
"print(today)",
"(1, 2, 'THREE', 4, 0.5)\n"
],
[
"type(today)",
"_____no_output_____"
],
[
"len(today)",
"_____no_output_____"
]
],
[
[
"### Indexing and Slicing Sequences\n\n- We can access values inside a list, tuple, or string using the backet syntax. \n- Python uses zero-based indexing, which means the first element of the list is in position 0, not position 1. \n- Sadly, R uses one-based indexing, so get ready to be confused.",
"_____no_output_____"
]
],
[
[
"my_list",
"_____no_output_____"
],
[
"my_list[0]",
"_____no_output_____"
],
[
"my_list[4]",
"_____no_output_____"
],
[
"my_list[5]",
"_____no_output_____"
],
[
"today[4]",
"_____no_output_____"
]
],
[
[
"We use negative indices to count backwards from the end of the list.",
"_____no_output_____"
]
],
[
[
"my_list",
"_____no_output_____"
],
[
"my_list[-1]",
"_____no_output_____"
]
],
[
[
"We use the colon `:` to access a subsequence. This is called \"slicing\".",
"_____no_output_____"
]
],
[
[
"my_list[1:4]",
"_____no_output_____"
]
],
[
[
"- Above: note that the start is inclusive and the end is exclusive.\n- So `my_list[1:3]` fetches elements 1 and 2, but not 3.\n- In other words, it gets the 2nd and 3rd elements in the list.",
"_____no_output_____"
],
[
"We can omit the start or end:",
"_____no_output_____"
]
],
[
[
"my_list[:3]",
"_____no_output_____"
],
[
"my_list[3:]",
"_____no_output_____"
],
[
"my_list[:] # *almost* same as my_list - more details next week",
"_____no_output_____"
]
],
[
[
"Strings behave the same as lists and tuples when it comes to indexing and slicing.",
"_____no_output_____"
]
],
[
[
"alphabet = \"abcdefghijklmnopqrstuvwxyz\"",
"_____no_output_____"
],
[
"alphabet[0]",
"_____no_output_____"
],
[
"alphabet[-1]",
"_____no_output_____"
],
[
"alphabet[-3]",
"_____no_output_____"
],
[
"alphabet[:5]",
"_____no_output_____"
],
[
"alphabet[12:20]",
"_____no_output_____"
]
],
[
[
"#### List Methods\n\n- A list is an object and it has methods for interacting with its data. \n- For example, `list.append(item)` appends an item to the end of the list. \n- See the documentation for more [list methods](https://docs.python.org/3/tutorial/datastructures.html#more-on-lists).",
"_____no_output_____"
]
],
[
[
"primes = [2,3,5,7,11]\nprimes",
"_____no_output_____"
],
[
"len(primes)",
"_____no_output_____"
],
[
"primes.append(13)",
"_____no_output_____"
],
[
"primes",
"_____no_output_____"
],
[
"len(primes)",
"_____no_output_____"
],
[
"max(primes)",
"_____no_output_____"
],
[
"min(primes)",
"_____no_output_____"
],
[
"sum(primes)",
"_____no_output_____"
],
[
"[1,2,3] + [\"Hello\", 7]",
"_____no_output_____"
]
],
[
[
"#### Sets\n\n- Another built-in Python data type is the `set`, which stores an _un-ordered_ list of _unique_ items.\n- More on sets in DSCI 512.",
"_____no_output_____"
]
],
[
[
"s = {2,3,5,11}\ns",
"_____no_output_____"
],
[
"{1,2,3} == {3,2,1}",
"_____no_output_____"
],
[
"[1,2,3] == [3,2,1]",
"_____no_output_____"
],
[
"s.add(2) # does nothing\ns",
"_____no_output_____"
],
[
"s[0]",
"_____no_output_____"
]
],
[
[
"Above: throws an error because elements are not ordered.",
"_____no_output_____"
],
[
"#### Mutable vs. Immutable Types\n\n- Strings and tuples are immutable types which means they cannot be modified. \n- Lists are mutable and we can assign new values for its various entries. \n- This is the main difference between lists and tuples.",
"_____no_output_____"
]
],
[
[
"names_list = [\"Indiana\",\"Fang\",\"Linsey\"]\nnames_list",
"_____no_output_____"
],
[
"names_list[0] = \"Cool guy\"\nnames_list",
"_____no_output_____"
],
[
"names_tuple = (\"Indiana\",\"Fang\",\"Linsey\")\nnames_tuple",
"_____no_output_____"
],
[
"names_tuple[0] = \"Not cool guy\"",
"_____no_output_____"
]
],
[
[
"Same goes for strings. Once defined we cannot modifiy the characters of the string.",
"_____no_output_____"
]
],
[
[
"my_name = \"Mike\"",
"_____no_output_____"
],
[
"my_name[-1] = 'q'",
"_____no_output_____"
],
[
"x = ([1,2,3],5)",
"_____no_output_____"
],
[
"x[1] = 7",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
],
[
"x[0][1] = 4",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
]
],
[
[
"## Break (5 min)",
"_____no_output_____"
],
[
"## String Methods (5 min)\n\n- There are various useful string methods in Python.\n- MDS-CL students will soon be the experts we can go to for help!",
"_____no_output_____"
]
],
[
[
"all_caps = \"HOW ARE YOU TODAY?\"\nprint(all_caps)",
"HOW ARE YOU TODAY?\n"
],
[
"new_str = all_caps.lower()\nnew_str",
"_____no_output_____"
]
],
[
[
"Note that the method lower doesn't change the original string but rather returns a new one.\n",
"_____no_output_____"
]
],
[
[
"all_caps",
"_____no_output_____"
]
],
[
[
"There are *many* string methods. Check out the [documentation](https://docs.python.org/3/library/stdtypes.html#string-methods).",
"_____no_output_____"
]
],
[
[
"all_caps.split()",
"_____no_output_____"
],
[
"all_caps.count(\"O\")",
"_____no_output_____"
]
],
[
[
"One can explicitly cast a string to a list:",
"_____no_output_____"
]
],
[
[
"caps_list = list(all_caps)\ncaps_list",
"_____no_output_____"
],
[
"len(all_caps)",
"_____no_output_____"
],
[
"len(caps_list)",
"_____no_output_____"
]
],
[
[
"### String formatting\n\n- Python has ways of creating strings by \"filling in the blanks\" and formatting them nicely. \n- There are a few ways of doing this. See [here](https://realpython.com/python-string-formatting/) and [here](https://stackoverflow.com/questions/5082452/string-formatting-vs-format) for some discussion.",
"_____no_output_____"
],
[
"Old formatting style (borrowed from the C programming language):",
"_____no_output_____"
]
],
[
[
"template = \"Hello, my name is %s. I am %.2f years old.\"",
"_____no_output_____"
],
[
"template % (\"Newborn Baby\", 4/12)",
"_____no_output_____"
]
],
[
[
"New formatting style (see [documentation](https://docs.python.org/3/library/stdtypes.html#str.format)):",
"_____no_output_____"
]
],
[
[
"template_new = \"Hello, my name is {}. I am {:.2f} years old.\"",
"_____no_output_____"
],
[
"template_new.format('Newborn Baby', 4/12)",
"_____no_output_____"
]
],
[
[
"Newer formatting style (see [here](https://realpython.com/python-f-strings/#f-strings-a-new-and-improved-way-to-format-strings-in-python)) - note the `f` before the start of the string:",
"_____no_output_____"
]
],
[
[
"name = \"Newborn Baby\"\nage = 4/12\ntemplate_new = f'Hello, my name is {name}. I am {age:.2f} years old.'\ntemplate_new",
"_____no_output_____"
]
],
[
[
"## Dictionaries (10 min)\n\nA dictionary is a mapping between key-values pairs.",
"_____no_output_____"
]
],
[
[
"house = {'bedrooms': 3, 'bathrooms': 2, 'city': 'Vancouver', 'price': 2499999, 'date_sold': (1,3,2015)}\n\ncondo = {'bedrooms' : 2, \n 'bathrooms': 1, \n 'city' : 'Burnaby', \n 'price' : 699999, \n 'date_sold': (27,8,2011)\n }",
"_____no_output_____"
]
],
[
[
"We can access a specific field of a dictionary with square brackets:",
"_____no_output_____"
]
],
[
[
"house['price']",
"_____no_output_____"
],
[
"condo['city']",
"_____no_output_____"
]
],
[
[
"We can also edit dictionaries (they are mutable):",
"_____no_output_____"
]
],
[
[
"condo['price'] = 5 # price already in the dict\ncondo",
"_____no_output_____"
],
[
"condo['flooring'] = \"wood\"",
"_____no_output_____"
],
[
"condo",
"_____no_output_____"
]
],
[
[
"We can delete fields entirely (though I rarely use this):",
"_____no_output_____"
]
],
[
[
"del condo[\"city\"]",
"_____no_output_____"
],
[
"condo",
"_____no_output_____"
],
[
"condo[5] = 443345",
"_____no_output_____"
],
[
"condo",
"_____no_output_____"
],
[
"condo[(1,2,3)] = 777\ncondo",
"_____no_output_____"
],
[
"condo[\"nothere\"]",
"_____no_output_____"
]
],
[
[
"A sometimes useful trick about default values:",
"_____no_output_____"
]
],
[
[
"condo[\"bedrooms\"]",
"_____no_output_____"
]
],
[
[
"is shorthand for",
"_____no_output_____"
]
],
[
[
"condo.get(\"bedrooms\")",
"_____no_output_____"
]
],
[
[
"With this syntax you can also use default values:",
"_____no_output_____"
]
],
[
[
"condo.get(\"bedrooms\", \"unknown\")",
"_____no_output_____"
],
[
"condo.get(\"fireplaces\", \"unknown\")",
"_____no_output_____"
]
],
[
[
"- A common operation is finding the maximum dictionary key by value.\n- There are a few ways to do this, see [this StackOverflow page](https://stackoverflow.com/questions/268272/getting-key-with-maximum-value-in-dictionary).\n- One way of doing it:",
"_____no_output_____"
]
],
[
[
"max(word_lengths, key=word_lengths.get)",
"_____no_output_____"
]
],
[
[
"We saw `word_lengths.get` above - it is saying that we should call this function on each key of the dict to decide how to sort.",
"_____no_output_____"
],
[
"### Empties",
"_____no_output_____"
]
],
[
[
"lst = list() # empty list\nlst",
"_____no_output_____"
],
[
"lst = [] # empty list\nlst",
"_____no_output_____"
],
[
"tup = tuple() # empty tuple\ntup",
"_____no_output_____"
],
[
"tup = () # empty tuple\ntup",
"_____no_output_____"
],
[
"dic = dict() # empty dict\ndic",
"_____no_output_____"
],
[
"dic = {} # empty dict\ndic",
"_____no_output_____"
],
[
"st = set() # emtpy set\nst",
"_____no_output_____"
],
[
"st = {} # NOT an empty set!\ntype(st)",
"_____no_output_____"
],
[
"st = {1}\ntype(st)",
"_____no_output_____"
]
],
[
[
"## Conditionals (10 min)",
"_____no_output_____"
],
[
"- [Conditional statements](https://docs.python.org/3/tutorial/controlflow.html) allow us to write programs where only certain blocks of code are executed depending on the state of the program. \n- Let's look at some examples and take note of the keywords, syntax and indentation. \n- Check out the [Python documentation](https://docs.python.org/3/tutorial/controlflow.html) and [Think Python (Chapter 5)](http://greenteapress.com/thinkpython/html/thinkpython006.html) for more information about conditional execution.",
"_____no_output_____"
]
],
[
[
"name = input(\"What's your name?\")\n\nif name.lower() == 'mike':\n print(\"That's my name too!\")\nelif name.lower() == 'santa':\n print(\"That's a funny name.\")\nelse:\n print(\"Hello {}! That's a cool name.\".format(name))\n\n print('Nice to meet you!')",
"What's your name?mike\nThat's my name too!\n"
],
[
"bool(None)",
"_____no_output_____"
]
],
[
[
"The main points to notice:\n\n* Use keywords `if`, `elif` and `else`\n* The colon `:` ends each conditional expression\n* Indentation (by 4 empty space) defines code blocks\n* In an `if` statement, the first block whose conditional statement returns `True` is executed and the program exits the `if` block\n* `if` statements don't necessarily need `elif` or `else`\n* `elif` lets us check several conditions\n* `else` lets us evaluate a default block if all other conditions are `False`\n* the end of the entire `if` statement is where the indentation returns to the same level as the first `if` keyword",
"_____no_output_____"
],
[
"If statements can also be **nested** inside of one another:",
"_____no_output_____"
]
],
[
[
"name = input(\"What's your name?\")\n\nif name.lower() == 'mike':\n print(\"That's my name too!\")\nelif name.lower() == 'santa':\n print(\"That's a funny name.\")\nelse:\n print(\"Hello {0}! That's a cool name.\".format(name))\n if name.lower().startswith(\"super\"):\n print(\"Do you have superpowers?\")\n\nprint('Nice to meet you!')",
"What's your name?supersam\nHello supersam! That's a cool name.\nDo you have superpowers?\nNice to meet you!\n"
]
],
[
[
"### Inline if/else",
"_____no_output_____"
]
],
[
[
"words = [\"the\", \"list\", \"of\", \"words\"]\n\nx = \"long list\" if len(words) > 10 else \"short list\"\nx",
"_____no_output_____"
],
[
"if len(words) > 10:\n x = \"long list\"\nelse:\n x = \"short list\"",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
]
],
[
[
"#### (optional) short-circruiting",
"_____no_output_____"
]
],
[
[
"BLAH # not defined",
"_____no_output_____"
],
[
"True or BLAH",
"_____no_output_____"
],
[
"True and BLAH",
"_____no_output_____"
],
[
"False and BLAH",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
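One cell in the lecture above evaluates `max(word_lengths, key=word_lengths.get)`, but the cell that builds `word_lengths` is not present in this record. A plausible reconstruction (an assumption, not the original cell) maps each word to its length and then picks the key with the largest value:

```python
# Assumed reconstruction of the missing word_lengths cell: map each word to
# its length, then use max(..., key=...) to find the key with the largest value.
words = ["the", "list", "of", "words"]
word_lengths = {word: len(word) for word in words}
print(word_lengths)                              # {'the': 3, 'list': 4, 'of': 2, 'words': 5}
print(max(word_lengths, key=word_lengths.get))   # 'words' (the longest word)
```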
eca2be7f2a94995e76872f40993dfcb742a0a548 | 8,392 | ipynb | Jupyter Notebook | assignments/assignment06/InteractEx05.ipynb | sthuggins/phys202-2015-work | 0930ee8bb059dd518134c63f6fa8a0da3a53e753 | [
"MIT"
] | null | null | null | assignments/assignment06/InteractEx05.ipynb | sthuggins/phys202-2015-work | 0930ee8bb059dd518134c63f6fa8a0da3a53e753 | [
"MIT"
] | null | null | null | assignments/assignment06/InteractEx05.ipynb | sthuggins/phys202-2015-work | 0930ee8bb059dd518134c63f6fa8a0da3a53e753 | [
"MIT"
] | null | null | null | 22.991781 | 324 | 0.50858 | [
[
[
"# Interact Exercise 5",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
],
[
"Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.",
"_____no_output_____"
]
],
[
[
"from IPython.display import display, SVG\nimport numpy as np\n%matplotlib inline\nfrom matplotlib import pyplot as plt\nfrom IPython.html.widgets import interact, interactive, fixed",
":0: FutureWarning: IPython widgets are experimental and may change in the future.\n"
]
],
[
[
"## Interact with SVG display",
"_____no_output_____"
],
[
"[SVG](http://en.wikipedia.org/wiki/Scalable_Vector_Graphics) is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:",
"_____no_output_____"
]
],
[
[
"s = \"\"\"<svg width=\"100\" height=\"100\">\n <circle cx=\"50\" cy=\"50\" r=\"20\" fill=\"aquamarine\" />\n</svg>\"\"\" ",
"_____no_output_____"
],
[
"SVG(s) ",
"_____no_output_____"
]
],
[
[
"Write a function named `draw_circle` that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the `IPython.display.SVG` object and `IPython.display.display` function.",
"_____no_output_____"
]
],
[
[
"from IPython.display import HTML",
"_____no_output_____"
],
[
"def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):\n \"\"\"Draw an SVG circle.\n \n Parameters\n ----------\n width : int\n The width of the svg drawing area in px.\n height : int\n The height of the svg drawing area in px.\n cx : int\n The x position of the center of the circle in px.\n cy : int\n The y position of the center of the circle in px.\n r : int\n The radius of the circle in px.\n fill : str\n The fill color of the circle.\"\"\"\n \n x = \"\"\"<svg width=\"%d\" height=\"%d\">\n <circle cx=\"%d\" cy=\"%d\" r=\"%d\" fill=\"%s\" />\n </svg>\n \"\"\" % (width, height, cx, cy, r, fill)\n display(SVG(x))",
"_____no_output_____"
],
[
"draw_circle(cx=10, cy=10, r=10, fill='blue')",
"_____no_output_____"
],
[
"assert True # leave this to grade the draw_circle function",
"_____no_output_____"
]
],
[
[
"Use `interactive` to build a user interface for exploing the `draw_circle` function:\n\n* `width`: a fixed value of 300px\n* `height`: a fixed value of 300px\n* `cx`/`cy`: a slider in the range [0,300]\n* `r`: a slider in the range [0,50]\n* `fill`: a text area in which you can type a color's name\n\nSave the return value of `interactive` to a variable named `w`.",
"_____no_output_____"
]
],
[
[
"w = interactive(draw_circle, width=fixed(300),height=fixed(300),cx=(0,300), cy=(0,300), r=(0,50), fill=\"red\")\n",
"_____no_output_____"
],
[
"c = w.children\nassert c[0].min==0 and c[0].max==300\nassert c[1].min==0 and c[1].max==300\nassert c[2].min==0 and c[2].max==50\nassert c[3].value=='red'",
"_____no_output_____"
]
],
[
[
"Use the `display` function to show the widgets created by `interactive`:",
"_____no_output_____"
]
],
[
[
"display(w)",
"_____no_output_____"
],
[
"assert True # leave this to grade the display of the widget",
"_____no_output_____"
]
],
[
[
"Play with the sliders to change the circles parameters interactively.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
eca2ca87cf3b65b4c0ca331b672ef50c314a0de7 | 3,101 | ipynb | Jupyter Notebook | input_files/fvcom27_write_riv.ipynb | bedaro/ssm-analysis | 09880dbfa5733d6301b84accc8f42a5ee320d698 | [
"MIT"
] | null | null | null | input_files/fvcom27_write_riv.ipynb | bedaro/ssm-analysis | 09880dbfa5733d6301b84accc8f42a5ee320d698 | [
"MIT"
] | null | null | null | input_files/fvcom27_write_riv.ipynb | bedaro/ssm-analysis | 09880dbfa5733d6301b84accc8f42a5ee320d698 | [
"MIT"
] | null | null | null | 26.732759 | 188 | 0.507256 | [
[
[
"in_file = \"data/fvcom_riv_2014.nc\"\ncomment = \"Auto-generated from BDR's ssm-analysis notebooks\"\nout_file = \"data/2014/ssm_riv.dat\"\n\nimport numpy as np\nfrom netCDF4 import Dataset",
"_____no_output_____"
],
[
"cdf = Dataset(in_file)\ncdf",
"_____no_output_____"
],
[
"siglayers = cdf.dimensions['siglay'].size\nnodes = cdf.dimensions['node'].size\ntimes = cdf.dimensions['time'].size\n\nwith open(out_file,\"w\") as f:\n print(\"{0} {1} ! {2}\".format(cdf.inflow_type, cdf.point_st_type, comment), file=f)\n print(nodes, file=f)\n for n in range(nodes):\n print(\"{:7d}\".format(cdf['node'][n]), file=f)\n\n # VQDIST\n nodes_digits = int(np.ceil(np.log10(nodes+0.1)))\n siglay_formatstr = \"{:%dd} \" % nodes_digits + \" \".join([\"{:.4f}\" for l in range(siglayers)])\n\n for n in range(nodes):\n print(siglay_formatstr.format(*[n+1] + list(cdf['vqdist'][n,:])), file=f)\n \n # Nqtime\n print(\"{:8d}\".format(times), file=f)\n \n formatstr = \"\".join([\" {:9.3e}\" for l in range(nodes)])\n for t in range(times):\n print(\"{:9.2f}\".format(cdf['time'][t]), file=f)\n for v in ('discharge','temp','salt'):\n print(formatstr.format(*cdf[v][t,:]), file=f)",
"_____no_output_____"
],
[
"cdf.close()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
eca2cd7252602a72f2ad3a1b89c4a32b35133885 | 27,594 | ipynb | Jupyter Notebook | lectures/ch11-data-structures.ipynb | CMSC6950/CMSC6950.github.io | 69fbcc24430431986fd983861c4f2eb3638119a7 | [
"CC-BY-4.0"
] | null | null | null | lectures/ch11-data-structures.ipynb | CMSC6950/CMSC6950.github.io | 69fbcc24430431986fd983861c4f2eb3638119a7 | [
"CC-BY-4.0"
] | null | null | null | lectures/ch11-data-structures.ipynb | CMSC6950/CMSC6950.github.io | 69fbcc24430431986fd983861c4f2eb3638119a7 | [
"CC-BY-4.0"
] | 1 | 2020-05-23T00:59:08.000Z | 2020-05-23T00:59:08.000Z | 22.655172 | 315 | 0.385337 | [
[
[
"# Data Structures\n\nIn this course we will focus on the use of **Data Frames** for data wrangling, analysis and visualization.\n\nThe books chapter 11 also gives a good introduction to Hash Tables, B-Trees and K-d trees and anyone interested in these, is encouraged to read these subsections.",
"_____no_output_____"
],
[
"## Pandas Data Frames and Series\n\n### Series\n\nThe Pandas Series class is effectively a 1-dimentsional NumPy array with an associated index.",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"Series objects can be created from lists or NumPy arrays:",
"_____no_output_____"
]
],
[
[
"pd.Series([42, 43, 44], dtype='f8')",
"_____no_output_____"
]
],
[
[
"An list of values can be supplied as an index. If no external index is given, an integer index is generated.",
"_____no_output_____"
]
],
[
[
"s = pd.Series([42, 43, 44], \n index=[\"electron\", \n \"proton\", \n \"neutron\"])",
"_____no_output_____"
],
[
"s",
"_____no_output_____"
]
],
[
[
"Values can be accessed by their index:",
"_____no_output_____"
]
],
[
[
"s['electron']",
"_____no_output_____"
],
[
"# inclusive bounds\ns['electron':'proton']",
"_____no_output_____"
],
[
"# integer indexing still OK\ns[1:]",
"_____no_output_____"
]
],
[
[
"Alternatively one can create a Series from a dict. In this case the dict's keys are used as index for the Series.",
"_____no_output_____"
]
],
[
[
"t = pd.Series({'electron': 6, \n 'neutron': 28, \n 'proton': 496, \n 'neutrino': 8128})",
"_____no_output_____"
],
[
"t",
"_____no_output_____"
]
],
[
[
"Arithmetic with two Series objects works element-by-element. The elements are automatically aligned by their index. \nIndices that are only present in one Series, will get a NaN value in the resulting Series.",
"_____no_output_____"
]
],
[
[
"s + t",
"_____no_output_____"
]
],
[
[
"### Data Frames\n\nData Frames are reprenent a tabular (spreadsheet-like) data structure. They basically consist of columns of Series-like objects that have both a row- and column index.\n\nIn Chapter 07 (Analysis and Visualization) we have created a DataFrame by loading a CSV file with `pd.read_csv()` and Pandas can also read from HDF and Excel files. Data Frames can also be created from (nested) Python lists, NumPy arrays. Another way to create a DataFrame is to pass a dict of Series objects:",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame({'S': s, 'T': t})",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
]
],
[
[
"Data frames support NumPy like indexing by row: `df[ from : to : step ]`",
"_____no_output_____"
]
],
[
[
"df[::2]",
"_____no_output_____"
]
],
[
[
"Expand a dataframe with a new index and value in column 'S':",
"_____no_output_____"
]
],
[
[
"dg = df.append(pd.DataFrame({'S': [-8128]}, index=['antineutrino']))\ndg",
"_____no_output_____"
],
[
"dh = dg.drop('neutron')\ndh",
"_____no_output_____"
]
],
[
[
"Transpose DataFrame (works in NumPy arrays as well):",
"_____no_output_____"
]
],
[
[
"df.T",
"_____no_output_____"
]
],
[
[
"Arithmetic can be applied to the whole data frame:",
"_____no_output_____"
]
],
[
[
"df < 42",
"_____no_output_____"
],
[
"# accessing a single column \n# will return a series\ndf['T']",
"_____no_output_____"
],
[
"# setting a name to a series\n# or expression will add a \n# column to the frame\ndf['small'] = df['T'] < 100\ndf",
"_____no_output_____"
],
[
"# deleting a column will\n# remove it from the frame\ndel df['small']\ndf",
"_____no_output_____"
]
],
[
[
"----\n\n----",
"_____no_output_____"
],
[
"## Hash Tables\n\nHash Tables provide key-value mapping. Python dictionaries are in fact hash tables.\n\n### How does a Hash Table work?\n\n| i | key | hash(key)%8 | Value |\n|----|:---:|:-----------:|:----------|\n| 0 | | | |\n| 1 | Kr | 1 | \"Krypton\" |\n| 2 | Ne | 2 | \"Neon\" |\n| 3 | | | |\n| 4 | Xe | 4 | \"Xenon\" |\n| 5 | | | |\n| 6 | | | |\n| 7 | He | 7 | \"Helium\" |\n\n\nPseudo code to retrieve a value from a hash table:\n```\ntable['key'] ->\n h = hash(key)\n i = h % size(table)\n return table[i.value]\n```\n\n### Resizing\n\n| i | key | hash(key)%12 | Value |\n|----|:---:|:------------:|:----------|\n| 0 | | | |\n| 1 | | | |\n| 2 | | | |\n| 3 | | | |\n| 4 | Xe | 4 | \"Xenon\" |\n| 5 | Kr | 5 | \"Krypton\" |\n| 6 | | | |\n| 7 | | | |\n| 8 | | | |\n| 9 | | | |\n| 10 | Ne | 10 | \"Neon\" |\n| 11 | He | 11 | \"Helium\" |\n\n\n### Collisions\n\nCollisions will happen!",
"_____no_output_____"
]
],
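[
[
"The lookup pseudocode above can be written out as a small Python sketch. This is only a toy illustration of the idea (a fixed-size table that ignores collisions and resizing, and not how Python's `dict` is actually implemented):\n\n```python\nclass SimpleHashTable:\n    \"\"\"Toy fixed-size hash table: index = hash(key) % size.\"\"\"\n\n    def __init__(self, size=8):\n        self.size = size\n        self.slots = [None] * size      # each slot holds (key, value) or None\n\n    def put(self, key, value):\n        i = hash(key) % self.size       # map the key to a slot index\n        self.slots[i] = (key, value)    # a colliding key would overwrite here\n\n    def get(self, key):\n        i = hash(key) % self.size\n        entry = self.slots[i]\n        if entry is not None and entry[0] == key:\n            return entry[1]\n        raise KeyError(key)\n\nt = SimpleHashTable()\nt.put('He', 'Helium')\nt.get('He')   # returns 'Helium'\n```",
"_____no_output_____"
]
],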
[
[
"hash('') == hash(0) == hash(0.0) == hash(False) == 0",
"_____no_output_____"
]
],
[
[
"```python\nelements = [ \"He\", \"Ne\", \"Ar\", \"Kr\", \"Xe\"]\nfor elem in elements:\n i = hash(elem) % 8\n print(\"%i - %s\"%(i, elem) )\n```\n\n```\n7 - He\n2 - Ne\n2 - Ar <---\n1 - Kr\n4 - Xe\n```\n\nThere are different strategies for dealing with hash collisions, e.g. appending a certain value to the key and calculating a new index, Open Addressing and Chaining. All these will slow down using the hash table.\n\nThe probablility for hash-collisions occuring rises with the number of entries in a hash table of a certain size.",
"_____no_output_____"
],
[
"----\n\n----",
"_____no_output_____"
],
[
"## B-Trees\n",
"_____no_output_____"
]
],
[
[
"from blist import sorteddict",
"_____no_output_____"
],
[
"b = sorteddict(first=\"Albert\", \n last=\"Einstein\",\n birthday=[1879, 3, 14])\nb",
"_____no_output_____"
],
[
"b['died'] = [1955, 4, 18]\nb",
"_____no_output_____"
],
[
"list(b.keys())",
"_____no_output_____"
]
],
[
[
"## K-D Trees",
"_____no_output_____"
]
],
[
[
"class Node(object):\n \n def __init__(self, point, left=None, right=None):\n self.point = point\n self.left = left\n self.right = right\n \n def __repr__(self):\n isleaf = self.left is None and self.right is None\n s = repr(self.point)\n if not isleaf:\n s = \"[\" + s + \":\"\n if self.left is not None:\n s += \"\\n left = \" + \"\\n \".join(repr(self.left).split('\\n'))\n if self.right is not None:\n s += \"\\n right = \" + \"\\n \".join(repr(self.right).split('\\n'))\n if not isleaf:\n s += \"\\n ]\"\n return s\n\n\ndef kdtree(points, depth=0):\n if len(points) == 0:\n return None\n k = len(points[0])\n a = depth % k\n points = sorted(points, key=lambda x: x[a])\n i = int(len(points) / 2) # middle index, rounded down\n node_left = kdtree(points[:i], depth + 1)\n node_right = kdtree(points[i+1:], depth + 1)\n node = Node(points[i], node_left, node_right)\n return node",
"_____no_output_____"
],
[
"points = [(1, 2), (3, 2), \n (5, 5), (2, 1), \n (4, 3), (1, 5)]\nroot = kdtree(points)\nprint(root)",
"[(3, 2):\n left = [(1, 2):\n left = (2, 1)\n right = (1, 5)\n ]\n right = [(5, 5):\n left = (4, 3)\n ]\n ]\n"
],
[
"from scipy.spatial import KDTree\ntree = KDTree(points)",
"_____no_output_____"
],
[
"tree.data",
"_____no_output_____"
],
[
"# query() defaults to only the closest point\ndist, idx = tree.query([(4.5, 1.25)])",
"_____no_output_____"
],
[
"dist",
"_____no_output_____"
],
[
"idx ",
"_____no_output_____"
],
[
"# fancy index by idx to get the point\ntree.data[idx]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca2d75f644b41adc69e557755ce4a1c97b3481d | 31,543 | ipynb | Jupyter Notebook | Supervised learning/Regression demo.ipynb | prashdash112/Scratch-my-ML | 1dea9398374f2cd70970730f8cbfcc3dfe5677a6 | [
"Apache-2.0"
] | null | null | null | Supervised learning/Regression demo.ipynb | prashdash112/Scratch-my-ML | 1dea9398374f2cd70970730f8cbfcc3dfe5677a6 | [
"Apache-2.0"
] | null | null | null | Supervised learning/Regression demo.ipynb | prashdash112/Scratch-my-ML | 1dea9398374f2cd70970730f8cbfcc3dfe5677a6 | [
"Apache-2.0"
] | null | null | null | 186.64497 | 17,288 | 0.914339 | [
[
[
"import numpy as np \nimport matplotlib.pyplot as plt \nimport seaborn as sns\nimport pandas as pd",
"_____no_output_____"
],
[
"x=2*np.random.rand(100,1)\ny=4+3*x+np.random.randn(100,1)",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,5))\nplt.plot(x,y,'.')",
"_____no_output_____"
],
[
"X_b = np.c_[np.ones((100, 1)), x] # add x0 = 1 to each instance",
"_____no_output_____"
],
[
"theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)\ntheta_best",
"_____no_output_____"
],
[
"X_new = np.array([[0], [2]])\nX_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance\ny_predict = X_new_b.dot(theta_best)\ny_predict",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,6))\nplt.plot(X_new, y_predict, \"r-\")\nplt.plot(x, y, \"b.\")\nplt.axis([0, 2, 0, 15])\nplt.show()",
"_____no_output_____"
]
],
[
[
"## The End",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
eca2ecccc93bc769ce78c1fd73588b12b9d84f60 | 19,506 | ipynb | Jupyter Notebook | .ipynb_checkpoints/appending-solutions-checkpoint.ipynb | Siddharth1698/Data-Analyst-Nanodegree | ad09ee76788338e94b97c32a3e3beb9ec43d3c5f | [
"MIT"
] | 1 | 2021-08-03T20:57:56.000Z | 2021-08-03T20:57:56.000Z | .ipynb_checkpoints/appending-solutions-checkpoint.ipynb | Siddharth1698/Data-Analyst-Nanodegree | ad09ee76788338e94b97c32a3e3beb9ec43d3c5f | [
"MIT"
] | null | null | null | .ipynb_checkpoints/appending-solutions-checkpoint.ipynb | Siddharth1698/Data-Analyst-Nanodegree | ad09ee76788338e94b97c32a3e3beb9ec43d3c5f | [
"MIT"
] | null | null | null | 32.241322 | 505 | 0.348354 | [
[
[
"# Appending Data\nFirst, import the necessary packages and load `winequality-red.csv` and `winequality-white.csv`.",
"_____no_output_____"
]
],
[
[
"# import numpy and pandas\nimport numpy as np\nimport pandas as pd\n\n# load red and white wine datasets\nred_df = pd.read_csv('winequality-red.csv', sep=';')\nwhite_df = pd.read_csv('winequality-white.csv', sep=';')",
"_____no_output_____"
],
[
"red_df.rename(columns={'total_sulfur-dioxide':'total_sulfur_dioxide'}, inplace=True)",
"_____no_output_____"
]
],
[
[
"## Create Color Columns\nCreate two arrays as long as the number of rows in the red and white dataframes that repeat the value “red” or “white.” NumPy offers really easy way to do this. Here’s the documentation for [NumPy’s repeat](https://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html) function. Take a look and try it yourself.",
"_____no_output_____"
]
],
[
[
"# create color array for red dataframe\ncolor_red = np.repeat('red', red_df.shape[0])\n\n# create color array for white dataframe\ncolor_white = np.repeat('white', white_df.shape[0])",
"_____no_output_____"
]
],
[
[
"Add arrays to the red and white dataframes. Do this by setting a new column called 'color' to the appropriate array. The cell below does this for the red dataframe.",
"_____no_output_____"
]
],
[
[
"red_df['color'] = color_red\nred_df.head()",
"_____no_output_____"
]
],
[
[
"Do the same for the white dataframe and use `head()` to confirm the change.",
"_____no_output_____"
]
],
[
[
"white_df['color'] = color_white\nwhite_df.head()",
"_____no_output_____"
]
],
[
[
"## Combine DataFrames with Append\nCheck the documentation for [Pandas' append](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.append.html) function and see if you can use this to figure out how to combine the dataframes. (Bonus: Why aren't we using the [merge](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html) method to combine the dataframes?) If you don’t get it, I’ll show you how afterwards. Make sure to save your work in this notebook! You'll come back to this later.",
"_____no_output_____"
]
],
[
[
"# append dataframes\nwine_df = red_df.append(white_df) \n\n# view dataframe to check for success\nwine_df.head()",
"_____no_output_____"
]
],
[
[
"## Save Combined Dataset\nSave your newly combined dataframe as `winequality_edited.csv`. Remember, set `index=False` to avoid saving with an unnamed column!",
"_____no_output_____"
]
],
[
[
"wine_df.to_csv('winequality_edited.csv', index=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
eca2f0554408616c7877d4be5bf88cc377eb633f | 134,555 | ipynb | Jupyter Notebook | Principal Component Analysis/Principal Component Analysis.ipynb | Pearl6193/Machine-Learning | 668e4e95541c394ae78b288046a0289b2b2f5ae1 | [
"MIT"
] | null | null | null | Principal Component Analysis/Principal Component Analysis.ipynb | Pearl6193/Machine-Learning | 668e4e95541c394ae78b288046a0289b2b2f5ae1 | [
"MIT"
] | null | null | null | Principal Component Analysis/Principal Component Analysis.ipynb | Pearl6193/Machine-Learning | 668e4e95541c394ae78b288046a0289b2b2f5ae1 | [
"MIT"
] | null | null | null | 180.127175 | 73,960 | 0.870997 | [
[
[
"import matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\n%matplotlib inline",
"_____no_output_____"
],
[
"from sklearn.datasets import load_breast_cancer",
"_____no_output_____"
],
[
"cancer = load_breast_cancer()",
"_____no_output_____"
],
[
"cancer.keys()",
"_____no_output_____"
],
[
"print(cancer[\"DESCR\"])",
".. _breast_cancer_dataset:\n\nBreast cancer wisconsin (diagnostic) dataset\n--------------------------------------------\n\n**Data Set Characteristics:**\n\n :Number of Instances: 569\n\n :Number of Attributes: 30 numeric, predictive attributes and the class\n\n :Attribute Information:\n - radius (mean of distances from center to points on the perimeter)\n - texture (standard deviation of gray-scale values)\n - perimeter\n - area\n - smoothness (local variation in radius lengths)\n - compactness (perimeter^2 / area - 1.0)\n - concavity (severity of concave portions of the contour)\n - concave points (number of concave portions of the contour)\n - symmetry\n - fractal dimension (\"coastline approximation\" - 1)\n\n The mean, standard error, and \"worst\" or largest (mean of the three\n worst/largest values) of these features were computed for each image,\n resulting in 30 features. For instance, field 0 is Mean Radius, field\n 10 is Radius SE, field 20 is Worst Radius.\n\n - class:\n - WDBC-Malignant\n - WDBC-Benign\n\n :Summary Statistics:\n\n ===================================== ====== ======\n Min Max\n ===================================== ====== ======\n radius (mean): 6.981 28.11\n texture (mean): 9.71 39.28\n perimeter (mean): 43.79 188.5\n area (mean): 143.5 2501.0\n smoothness (mean): 0.053 0.163\n compactness (mean): 0.019 0.345\n concavity (mean): 0.0 0.427\n concave points (mean): 0.0 0.201\n symmetry (mean): 0.106 0.304\n fractal dimension (mean): 0.05 0.097\n radius (standard error): 0.112 2.873\n texture (standard error): 0.36 4.885\n perimeter (standard error): 0.757 21.98\n area (standard error): 6.802 542.2\n smoothness (standard error): 0.002 0.031\n compactness (standard error): 0.002 0.135\n concavity (standard error): 0.0 0.396\n concave points (standard error): 0.0 0.053\n symmetry (standard error): 0.008 0.079\n fractal dimension (standard error): 0.001 0.03\n radius (worst): 7.93 36.04\n texture (worst): 12.02 49.54\n perimeter (worst): 50.41 251.2\n area (worst): 185.2 4254.0\n smoothness (worst): 0.071 0.223\n compactness (worst): 0.027 1.058\n concavity (worst): 0.0 1.252\n concave points (worst): 0.0 0.291\n symmetry (worst): 0.156 0.664\n fractal dimension (worst): 0.055 0.208\n ===================================== ====== ======\n\n :Missing Attribute Values: None\n\n :Class Distribution: 212 - Malignant, 357 - Benign\n\n :Creator: Dr. William H. Wolberg, W. Nick Street, Olvi L. Mangasarian\n\n :Donor: Nick Street\n\n :Date: November, 1995\n\nThis is a copy of UCI ML Breast Cancer Wisconsin (Diagnostic) datasets.\nhttps://goo.gl/U2Uwz2\n\nFeatures are computed from a digitized image of a fine needle\naspirate (FNA) of a breast mass. They describe\ncharacteristics of the cell nuclei present in the image.\n\nSeparating plane described above was obtained using\nMultisurface Method-Tree (MSM-T) [K. P. Bennett, \"Decision Tree\nConstruction Via Linear Programming.\" Proceedings of the 4th\nMidwest Artificial Intelligence and Cognitive Science Society,\npp. 97-101, 1992], a classification method which uses linear\nprogramming to construct a decision tree. Relevant features\nwere selected using an exhaustive search in the space of 1-4\nfeatures and 1-3 separating planes.\n\nThe actual linear program used to obtain the separating plane\nin the 3-dimensional space is that described in:\n[K. P. Bennett and O. L. 
Mangasarian: \"Robust Linear\nProgramming Discrimination of Two Linearly Inseparable Sets\",\nOptimization Methods and Software 1, 1992, 23-34].\n\nThis database is also available through the UW CS ftp server:\n\nftp ftp.cs.wisc.edu\ncd math-prog/cpo-dataset/machine-learn/WDBC/\n\n.. topic:: References\n\n - W.N. Street, W.H. Wolberg and O.L. Mangasarian. Nuclear feature extraction \n for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on \n Electronic Imaging: Science and Technology, volume 1905, pages 861-870,\n San Jose, CA, 1993.\n - O.L. Mangasarian, W.N. Street and W.H. Wolberg. Breast cancer diagnosis and \n prognosis via linear programming. Operations Research, 43(4), pages 570-577, \n July-August 1995.\n - W.H. Wolberg, W.N. Street, and O.L. Mangasarian. Machine learning techniques\n to diagnose breast cancer from fine-needle aspirates. Cancer Letters 77 (1994) \n 163-171.\n"
],
[
"cancer[\"feature_names\"]",
"_____no_output_____"
],
[
"df = pd.DataFrame(data = cancer[\"data\"],columns = cancer[\"feature_names\"])",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"from sklearn.preprocessing import StandardScaler",
"_____no_output_____"
],
[
"scaler = StandardScaler()",
"_____no_output_____"
],
[
"scaler.fit(df)",
"_____no_output_____"
],
[
"scaled_data = scaler.transform(df)",
"_____no_output_____"
],
[
"from sklearn.decomposition import PCA",
"_____no_output_____"
],
[
"pca = PCA(n_components=2)",
"_____no_output_____"
],
[
"pca.fit(scaled_data)",
"_____no_output_____"
],
[
"x_pca = pca.transform(scaled_data)",
"_____no_output_____"
],
[
"scaled_data.shape",
"_____no_output_____"
],
[
"x_pca.shape",
"_____no_output_____"
],
[
"x_pca",
"_____no_output_____"
],
[
"plt.figure(figsize = (12,6))\nplt.scatter(x_pca[:,0],x_pca[:,1],c=cancer[\"target\"],cmap = \"plasma\")\nplt.xlabel(\"First Principal Component\")\nplt.ylabel(\"Second Principal Component\")",
"_____no_output_____"
],
[
"pca.components_",
"_____no_output_____"
],
[
"df_comp = pd.DataFrame(pca.components_,columns=cancer['feature_names'])",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,6))\nsns.heatmap(df_comp,cmap='plasma',)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca2f99de1f3ad5e7501d3acaedb6df6ffcdd8c2 | 152,059 | ipynb | Jupyter Notebook | module4-classification-metrics/Jen_Banks__LS_DS_224_assignment.ipynb | JenBanks8585/DS-Unit-2-Kaggle-Challenge | 2ee78868d7ebf2cfddf3fb0979d10199a814d752 | [
"MIT"
] | null | null | null | module4-classification-metrics/Jen_Banks__LS_DS_224_assignment.ipynb | JenBanks8585/DS-Unit-2-Kaggle-Challenge | 2ee78868d7ebf2cfddf3fb0979d10199a814d752 | [
"MIT"
] | null | null | null | module4-classification-metrics/Jen_Banks__LS_DS_224_assignment.ipynb | JenBanks8585/DS-Unit-2-Kaggle-Challenge | 2ee78868d7ebf2cfddf3fb0979d10199a814d752 | [
"MIT"
] | null | null | null | 83.548901 | 22,394 | 0.754451 | [
[
[
"<a href=\"https://colab.research.google.com/github/JenBanks8585/DS-Unit-2-Kaggle-Challenge/blob/master/module4-classification-metrics/Jen_Banks__LS_DS_224_assignment.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Lambda School Data Science\n\n*Unit 2, Sprint 2, Module 4*\n\n---",
"_____no_output_____"
],
[
"# Classification Metrics\n\n## Assignment\n- [ ] If you haven't yet, [review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2), then submit your dataset.\n- [ ] Plot a confusion matrix for your Tanzania Waterpumps model.\n- [ ] Continue to participate in our Kaggle challenge. Every student should have made at least one submission that scores at least 70% accuracy (well above the majority class baseline).\n- [ ] Submit your final predictions to our Kaggle competition. Optionally, go to **My Submissions**, and _\"you may select up to 1 submission to be used to count towards your final leaderboard score.\"_\n- [ ] Commit your notebook to your fork of the GitHub repo.\n- [ ] Read [Maximizing Scarce Maintenance Resources with Data: Applying predictive modeling, precision at k, and clustering to optimize impact](https://towardsdatascience.com/maximizing-scarce-maintenance-resources-with-data-8f3491133050), by Lambda DS3 student Michael Brady. His blog post extends the Tanzania Waterpumps scenario, far beyond what's in the lecture notebook.\n\n\n## Stretch Goals\n\n### Reading\n\n- [Attacking discrimination with smarter machine learning](https://research.google.com/bigpicture/attacking-discrimination-in-ml/), by Google Research, with interactive visualizations. _\"A threshold classifier essentially makes a yes/no decision, putting things in one category or another. We look at how these classifiers work, ways they can potentially be unfair, and how you might turn an unfair classifier into a fairer one. As an illustrative example, we focus on loan granting scenarios where a bank may grant or deny a loan based on a single, automatically computed number such as a credit score.\"_\n- [Notebook about how to calculate expected value from a confusion matrix by treating it as a cost-benefit matrix](https://github.com/podopie/DAT18NYC/blob/master/classes/13-expected_value_cost_benefit_analysis.ipynb)\n- [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415)\n\n\n### Doing\n- [ ] Share visualizations in our Slack channel!\n- [ ] RandomizedSearchCV / GridSearchCV, for model selection. (See module 3 assignment notebook)\n- [ ] Stacking Ensemble. (See module 3 assignment notebook)\n- [ ] More Categorical Encoding. (See module 2 assignment notebook)",
"_____no_output_____"
]
],
[
[
"%%capture\nimport sys\n\n# If you're on Colab:\nif 'google.colab' in sys.modules:\n DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'\n !pip install category_encoders==2.*\n\n# If you're working locally:\nelse:\n DATA_PATH = '../data/'",
"_____no_output_____"
],
[
"#imports\n\n%matplotlib inline\nimport category_encoders as ce\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.ensemble import RandomForestClassifier",
"_____no_output_____"
],
[
"# Merge train_features.csv & train_labels.csv\ntrain = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), \n pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))\n\n# Read test_features.csv & sample_submission.csv\ntest = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')\nsample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')\n\n# Split train into train & val. Make val the same size as test.\ntarget = 'status_group'\ntrain, val = train_test_split(train, test_size=len(test), \n stratify=train[target], random_state=42)",
"_____no_output_____"
],
[
"# Data Wranggling\n\ndef wrangle(X):\n \"\"\"Wrangle train, validate, and test sets in the same way\"\"\"\n \n # Prevent SettingWithCopyWarning\n X = X.copy()\n \n # About 6.1% of 'funder' cell are missing, fill with mode\n X['funder']=X['funder'].fillna(X.funder.mode()[0], inplace=False)\n\n # About 6.2% of 'installer' cell are missing, fill with mode\n X['installer']=X['installer'].fillna(X['installer'].mode()[0], inplace=False)\n\n # About 3% of the time, latitude has small values near zero,\n # outside Tanzania, so we'll treat these values like zero.\n X['latitude'] = X['latitude'].replace(-2e-08, 0)\n\n # About 5.7% of 'public_meeting' cell are missing, fill with mode\n X['public_meeting']=X['public_meeting'].fillna(X['public_meeting'].mode()[0],\n inplace=False)\n\n # About 6.5% of 'public_meeting' cell are missing, fill with mode\n X['scheme_management']=X['scheme_management'].fillna(X['scheme_management'].mode()[0],\n inplace=False)\n \n # About 5.2% of 'public_meeting' cell are missing, fill with mode\n X['permit']=X['permit'].fillna(X['permit'].mode()[0], inplace=False)\n \n # When columns have zeros and shouldn't, they are like null values.\n # So we will replace the zeros with nulls, and impute missing values later.\n # Also create a \"missing indicator\" column, because the fact that\n # values are missing may be a predictive signal.\n cols_with_zeros = ['longitude', 'latitude', 'construction_year', \n 'gps_height', 'population']\n for col in cols_with_zeros:\n X[col] = X[col].replace(0, np.nan)\n X[col+'_MISSING'] = X[col].isnull()\n \n # Drop duplicate columns\n duplicates = ['quantity_group', 'payment_type']\n X = X.drop(columns=duplicates)\n \n # Drop recorded_by (never varies) and id (always varies, random)\n unusable_variance = ['recorded_by', 'id']\n X = X.drop(columns=unusable_variance)\n \n # Convert date_recorded to datetime\n X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)\n \n # Extract components from date_recorded, then drop the original column\n X['year_recorded'] = X['date_recorded'].dt.year\n X['month_recorded'] = X['date_recorded'].dt.month\n X['day_recorded'] = X['date_recorded'].dt.day\n X = X.drop(columns='date_recorded')\n \n # Engineer feature: how many years from construction_year to date_recorded\n X['years'] = X['year_recorded'] - X['construction_year']\n X['years_MISSING'] = X['years'].isnull()\n \n # return the wrangled dataframe\n return X\n\ntrain = wrangle(train)\nval = wrangle(val)\ntest = wrangle(test)",
"_____no_output_____"
],
[
"# Arrange data into X features matrix and y target vector\nX_train = train.drop(columns=target)\ny_train = train[target]\nX_val = val.drop(columns=target)\ny_val = val[target]\nX_test = test\n\n# Make pipeline!\npipeline = make_pipeline(\n ce.OrdinalEncoder(), \n SimpleImputer(strategy='mean'), \n RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)\n)\n\n# Fit on train, score on val\npipeline.fit(X_train, y_train)\ny_pred = pipeline.predict(X_val)\nprint('Validation Accuracy', accuracy_score(y_val, y_pred))",
"Validation Accuracy 0.8105585736174955\n"
],
[
"from sklearn.metrics import plot_confusion_matrix\n\nplot_confusion_matrix(pipeline, \n X_val,y_val,\n values_format='.0f',\n xticks_rotation='vertical',\n cmap='Blues');",
"_____no_output_____"
],
[
"# Number of correct predictions\n\n6952+331+4355",
"_____no_output_____"
],
[
"#Total predictions\n\nlen(y_val)",
"_____no_output_____"
],
[
"#Accuracy\n\n11638/len(y_val)",
"_____no_output_____"
],
[
"# Classification report\n\nfrom sklearn.metrics import classification_report\n\nprint(classification_report(y_val, y_pred))",
" precision recall f1-score support\n\n functional 0.81 0.89 0.85 7798\nfunctional needs repair 0.57 0.32 0.41 1043\n non functional 0.84 0.79 0.81 5517\n\n accuracy 0.81 14358\n macro avg 0.74 0.67 0.69 14358\n weighted avg 0.80 0.81 0.80 14358\n\n"
],
[
"!pip install scikit-plot",
"Requirement already satisfied: scikit-plot in /usr/local/lib/python3.6/dist-packages (0.3.7)\nRequirement already satisfied: scikit-learn>=0.18 in /usr/local/lib/python3.6/dist-packages (from scikit-plot) (0.22.2.post1)\nRequirement already satisfied: scipy>=0.9 in /usr/local/lib/python3.6/dist-packages (from scikit-plot) (1.4.1)\nRequirement already satisfied: joblib>=0.10 in /usr/local/lib/python3.6/dist-packages (from scikit-plot) (0.14.1)\nRequirement already satisfied: matplotlib>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from scikit-plot) (3.2.1)\nRequirement already satisfied: numpy>=1.11.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.18->scikit-plot) (1.18.2)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.4.0->scikit-plot) (2.4.7)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.4.0->scikit-plot) (2.8.1)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.4.0->scikit-plot) (1.2.0)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.4.0->scikit-plot) (0.10.0)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.1->matplotlib>=1.4.0->scikit-plot) (1.12.0)\n"
],
[
"from scikitplot.metrics import plot_confusion_matrix\n\nplot_confusion_matrix(y_val, y_pred,\n figsize=(8,6),\n title=f'Confusion Matrix:N={len(y_val)}',\n normalize=False\n);",
"_____no_output_____"
]
],
[
[
"##Precision Example Computation",
"_____no_output_____"
]
],
[
[
"#How many correct predictions of non functional?\n4335\n\n#How many total predictions of non functional??\ntot_non_funct= 4335 + 167 + 670\n\n# Precision for non functional\n\n4335/tot_non_funct",
"_____no_output_____"
]
],
[
[
"##Recall Example Computation",
"_____no_output_____"
]
],
[
[
"# How many actual non functional?\n\ntot_tru_nf= 1091+71+4335\n\n#Recall for non functional\n4335/tot_tru_nf",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"#Try anothe pipeline\n# Using Bagging ensemble, no replacement\nfrom sklearn.ensemble import RandomForestClassifier, BaggingClassifier\nfrom sklearn.tree import DecisionTreeClassifier\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import validation_curve\n\npipeline_bag1=make_pipeline(\n ce.OrdinalEncoder(),\n SimpleImputer(), \n BaggingClassifier(DecisionTreeClassifier(),\n n_estimators=100,\n max_features=0.6, \n max_samples=0.4, \n bootstrap=False, \n bootstrap_features= False)\n)\n\nfeat =[ 0.3, .5, 0.8]\ntrain_scores, val_scores=validation_curve(\n pipeline_bag1, X_train,y_train,\n param_name='baggingclassifier__max_features',\n param_range=feat, scoring='accuracy',\n cv= 3,\n n_jobs=-1\n)\n\npipeline_bag1.fit(X_train, y_train)\nprint(pipeline_bag1.score(X_train, y_train))\nprint(pipeline_bag1.score(X_val, y_val))",
"0.9671861817858888\n0.8108371639504109\n"
],
[
"#Confusion Matrix\nfrom sklearn.metrics import plot_confusion_matrix\n\nplot_confusion_matrix(pipeline_bag1, \n X_val,y_val,\n values_format='.2f',\n xticks_rotation='vertical',\n cmap='Blues',\n normalize= 'true'\n);\n",
"_____no_output_____"
],
[
"# Classification Report\nfrom sklearn.metrics import classification_report\n\nprint(classification_report(y_val, y_pred))",
" precision recall f1-score support\n\n functional 0.81 0.89 0.85 7798\nfunctional needs repair 0.57 0.32 0.41 1043\n non functional 0.84 0.79 0.81 5517\n\n accuracy 0.81 14358\n macro avg 0.74 0.67 0.69 14358\n weighted avg 0.80 0.81 0.80 14358\n\n"
],
[
"# Checking the percentages of each predictions\n\ny_train.value_counts(normalize=True)",
"_____no_output_____"
],
[
"#Grouping 'funtional but needs repair' and 'non functional' together\n\ny_train=y_train !='functional'\ny_val=y_val !='functional'\ny_train.value_counts(normalize=True)",
"_____no_output_____"
],
[
"len(val)==len(test)",
"_____no_output_____"
],
[
"# Retraining the dataset using the new grouping\n\npipeline_bag1.fit(X_train, y_train)\ny_pred=pipeline_bag1.predict(X_val)",
"_____no_output_____"
],
[
"# Determining the probablities for TRUE(needs repair or nonfunctional)\n\ny_pred_prob_pipeline_bag1= pipeline_bag1.predict_proba(X_val)[:,1]\ny_pred_prob_pipeline_bag1",
"_____no_output_____"
],
[
"import seaborn as sns\nfrom ipywidgets import interact, fixed\n\ny_pred_prob_pipeline_bag1=pipeline_bag1.predict_proba(X_val)[:,1]\n\ndef set_threshold(y_true, y_pred_prob_pipeline_bag1, threshold=0.5):\n y_pred=y_pred_prob_pipeline_bag1 > threshold\n\n ax= sns.distplot(y_pred_prob_pipeline_bag1);\n ax.axvline(threshold, color='red');\n plt.show();\n\n print (classification_report(y_true, y_pred))\n print(pd.Series(y_pred).value_counts())\n\ninteract(set_threshold,\n y_true=fixed(y_val),\n y_pred_prob_pipeline_bag1=fixed(y_pred_prob_pipeline_bag1),\n threshold=(0, 1, 0.02))",
"_____no_output_____"
],
[
"# Identify the 2000 waterpumps with highest predicted probabilities\n\nresults = pd.DataFrame({'y_val':y_val, 'y_pred_prob': y_pred_prob_pipeline_bag1})\nresults",
"_____no_output_____"
],
[
"top2000=results.sort_values(by='y_pred_prob', ascending=False)[:2000]\ntop2000",
"_____no_output_____"
],
[
"# How many of our recommendations were relevant?\n\ntrips = 2000\nprint(f'Baseline: {trips*.46} waterpump repairs in {trips} trips')\n\nrelevant_recommendations = top2000['y_val'].sum()\nprint(f'With model:Predict {relevant_recommendations} waterpump repairs in {trips} trips')",
"Baseline: 920.0 waterpump repairs in 2000 trips\nWith model:Predict 1970 waterpump repairs in 2000 trips\n"
],
[
"# Computing precision on the top 2000 waterpumps\n\nprecision_at_k2000= relevant_recommendations/trips\nprecision_at_k2000",
"_____no_output_____"
],
[
"#ROC Curve\n\nfrom sklearn.metrics import roc_curve\n\nfpr, tpr, thresholds = roc_curve(y_val, y_pred_prob_pipeline_bag1)\n\nplt.scatter(fpr, tpr)\nplt.title(\"ROC Curve\")\nplt.xlabel('False Positive Rate')\nplt.ylabel(\"True Positive Rate\");",
"_____no_output_____"
],
[
"# Area under the curve\n\nfrom sklearn.metrics import roc_auc_score\nroc_auc_score(y_val, y_pred_prob_pipeline_bag1)",
"_____no_output_____"
],
[
"#Tabulating the results\n\npd.DataFrame({'False Positive Rate':fpr,\n 'True Positive Rate': tpr,\n 'Threshold':thresholds}\n)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca30f0134f0a69e8bfabb1a8328c6923c582b46 | 184,970 | ipynb | Jupyter Notebook | #08. Hyperparameter Tuning with Cross Validation/08session.ipynb | apoboldon/machine-learning-program | 4758e3b8061e0f963161eb57a45ff382a5b15da3 | [
"MIT"
] | null | null | null | #08. Hyperparameter Tuning with Cross Validation/08session.ipynb | apoboldon/machine-learning-program | 4758e3b8061e0f963161eb57a45ff382a5b15da3 | [
"MIT"
] | null | null | null | #08. Hyperparameter Tuning with Cross Validation/08session.ipynb | apoboldon/machine-learning-program | 4758e3b8061e0f963161eb57a45ff382a5b15da3 | [
"MIT"
] | null | null | null | 116.700315 | 63,244 | 0.853663 | [
[
[
"<font size=\"+5\">#08. Hyperparameter Tuning with Cross Validation</font>",
"_____no_output_____"
],
[
"- Book + Private Lessons [Here ↗](https://sotastica.com/reservar)\n- Subscribe to my [Blog ↗](https://blog.pythonassembly.com/)\n- Let's keep in touch on [LinkedIn ↗](www.linkedin.com/in/jsulopz) 😄",
"_____no_output_____"
],
[
"# Load the Data",
"_____no_output_____"
],
[
"> - The goal of this dataset is\n> - To predict if **bank's customers** (rows) `default` next month\n> - Based on their **socio-demographical characteristics** (columns)",
"_____no_output_____"
]
],
[
[
"import pandas as pd\npd.set_option(\"display.max_columns\", None)\n\nurl = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00350/default%20of%20credit%20card%20clients.xls'\ndf = pd.read_excel(io=url, header=1, index_col=0)\ndf.sample(10)",
"_____no_output_____"
],
[
"y=df['default payment next month']",
"_____no_output_____"
],
[
"X=df.drop(columns='default payment next month')",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
">>> X_train, X_test, y_train, y_test = train_test_split(\n... X, y, test_size=0.33, random_state=42)",
"_____no_output_____"
]
],
[
[
"# `DecisionTreeClassifier()` with Default Hyperparameters",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier\nmodel=DecisionTreeClassifier()",
"_____no_output_____"
],
[
"model.fit(X_train,y_train)",
"_____no_output_____"
]
],
[
[
"## Accuracy",
"_____no_output_____"
],
[
"> In `train` data",
"_____no_output_____"
]
],
[
[
"model.score(X_train,y_train)",
"_____no_output_____"
]
],
[
[
"> In `test` data",
"_____no_output_____"
]
],
[
[
"model.score(X_test, y_test)",
"_____no_output_____"
]
],
[
[
"## Model Visualization",
"_____no_output_____"
],
[
"> - `plot_tree()`",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import plot_tree",
"_____no_output_____"
],
[
"plot_tree(decision_tree=model, max_depth=10, feature_names=X.columns, filled=True);",
"_____no_output_____"
]
],
[
[
"# `DecisionTreeClassifier()` with Custom Hyperparameters",
"_____no_output_____"
]
],
[
[
"%%HTML\n<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/7VeUPuFGJHk\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>",
"_____no_output_____"
]
],
[
[
"> - The `model` has this hyperparameters ↓",
"_____no_output_____"
]
],
[
[
"model = DecisionTreeClassifier()",
"_____no_output_____"
],
[
"model.get_params()",
"_____no_output_____"
]
],
[
[
"## 1st Configuration",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier\nmodel=DecisionTreeClassifier(max_depth=3)",
"_____no_output_____"
],
[
"model.fit(X_train,y_train)",
"_____no_output_____"
]
],
[
[
"## Accuracy",
"_____no_output_____"
],
[
"> In `train` data",
"_____no_output_____"
]
],
[
[
"model.score(X_train,y_train)",
"_____no_output_____"
]
],
[
[
"> In `test` data",
"_____no_output_____"
]
],
[
[
"model.score(X_test, y_test)",
"_____no_output_____"
]
],
[
[
"## Model Visualization",
"_____no_output_____"
],
[
"> - `plot_tree()`",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import plot_tree",
"_____no_output_____"
],
[
"plot_tree(decision_tree=model, max_depth=10, feature_names=X.columns, filled=True);",
"_____no_output_____"
]
],
[
[
"## 2nd Configuration",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier\nmodel=DecisionTreeClassifier(max_depth=5)",
"_____no_output_____"
],
[
"model.fit(X_train,y_train)",
"_____no_output_____"
]
],
[
[
"## Accuracy",
"_____no_output_____"
],
[
"> In `train` data",
"_____no_output_____"
]
],
[
[
"model.score(X_train,y_train)",
"_____no_output_____"
]
],
[
[
"> In `test` data",
"_____no_output_____"
]
],
[
[
"model.score(X_test, y_test)",
"_____no_output_____"
]
],
[
[
"## Model Visualization",
"_____no_output_____"
],
[
"> - `plot_tree()`",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import plot_tree",
"_____no_output_____"
],
[
"plot_tree(decision_tree=model, max_depth=10, feature_names=X.columns, filled=True);",
"_____no_output_____"
]
],
[
[
"## 3rd Configuration",
"_____no_output_____"
],
[
"from sklearn.tree import DecisionTreeClassifier\nmodel=DecisionTreeClassifier(min_samples_leaf=200)",
"_____no_output_____"
]
],
[
[
"model.fit(X_train,y_train)",
"_____no_output_____"
]
],
[
[
"## Accuracy",
"_____no_output_____"
],
[
"> In `train` data",
"_____no_output_____"
]
],
[
[
"model.score(X_train,y_train)",
"_____no_output_____"
]
],
[
[
"> In `test` data",
"_____no_output_____"
]
],
[
[
"model.score(X_test, y_test)",
"_____no_output_____"
]
],
[
[
"## Model Visualization",
"_____no_output_____"
],
[
"> - `plot_tree()`",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import plot_tree",
"_____no_output_____"
],
[
"plot_tree(decision_tree=model, max_depth=10, feature_names=X.columns, filled=True);",
"_____no_output_____"
]
],
[
[
"## 4th Configuration",
"_____no_output_____"
],
[
"## 5th Configuration",
"_____no_output_____"
],
[
"# `GridSearchCV()` to find Best Hyperparameters",
"_____no_output_____"
],
[
"> - How many scores for each fold?",
"_____no_output_____"
],
[
"<img src=\"src/grid_search_cross_validation.png\" style=\"margin-top: 100px\"/>",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import GridSearchCV",
"_____no_output_____"
],
[
"dt = DecisionTreeClassifier()",
"_____no_output_____"
],
[
"dt.get_params()",
"_____no_output_____"
],
[
"cv = GridSearchCV(estimator=dt, param_grid={'max_depth': [4,5,6,7,8,9,10]}, cv=5, verbose=1)",
"_____no_output_____"
],
[
"cv.fit(X_train, y_train)",
"Fitting 5 folds for each of 7 candidates, totalling 35 fits\n"
],
[
"cv.best_estimator_",
"_____no_output_____"
],
[
"cv.score(X_test, y_test)",
"_____no_output_____"
]
],
[
[
"# Other Models",
"_____no_output_____"
],
[
"## Support Vector Machines `SVC()`",
"_____no_output_____"
]
],
[
[
"%%HTML\n<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/efR1C6CvhmE\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>",
"_____no_output_____"
],
[
"from sklearn.svm import SVC",
"_____no_output_____"
],
[
"sv = SVC()",
"_____no_output_____"
],
[
"sv.get_params()",
"_____no_output_____"
],
[
"cv = GridSearchCV(estimator=sv,param_grid={'C': [1, 10, 100], 'gamma': [1, 0.1, 0.001], 'kernel': ['rbf']}, verbose=2)",
"_____no_output_____"
],
[
"sv.get_params().keys()",
"_____no_output_____"
],
[
"cv.fit(X_train,y_train)",
"Fitting 5 folds for each of 9 candidates, totalling 45 fits\n"
],
[
"cv.best_estimator_\n",
"_____no_output_____"
]
],
[
[
"## `KNeighborsClassifier()`",
"_____no_output_____"
]
],
[
[
"%%HTML\n<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/HVXime0nQeI\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>",
"_____no_output_____"
],
[
"from sklearn.model_selection import GridSearchCV",
"_____no_output_____"
],
[
"from sklearn.neighbors import KNeighborsClassifier\nnb=KNeighborsClassifier()",
"_____no_output_____"
],
[
"nb.get_params()",
"_____no_output_____"
]
],
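[
[
"> - As a sketch, the same grid-search pattern used above for the decision tree and the SVC can be applied to `KNeighborsClassifier`; the `n_neighbors` values below are only an illustrative guess, not a recommended grid.",
"_____no_output_____"
],
[
"knn = KNeighborsClassifier()\n\n# Same pattern as before: cross-validated grid search,\n# here over an illustrative range of n_neighbors values\ncv = GridSearchCV(estimator=knn,\n                  param_grid={'n_neighbors': [3, 5, 11, 21]},\n                  cv=5, verbose=1)\n\ncv.fit(X_train, y_train)\nprint(cv.best_estimator_)\nprint(cv.score(X_test, y_test))",
"_____no_output_____"
]
],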
[
[
"# Best Model with Best Hyperparameters",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
eca3119e258f84a1a5fa26f04d17b163bd076699 | 120,070 | ipynb | Jupyter Notebook | Fig2.ipynb | yoojinhwang/Ecoacoustic_Complexity_Index | 1a08249e383eef4356b0175cc12a35a56720de0f | [
"MIT"
] | null | null | null | Fig2.ipynb | yoojinhwang/Ecoacoustic_Complexity_Index | 1a08249e383eef4356b0175cc12a35a56720de0f | [
"MIT"
] | null | null | null | Fig2.ipynb | yoojinhwang/Ecoacoustic_Complexity_Index | 1a08249e383eef4356b0175cc12a35a56720de0f | [
"MIT"
] | 1 | 2020-06-25T19:50:48.000Z | 2020-06-25T19:50:48.000Z | 690.057471 | 115,272 | 0.945965 | [
[
[
"import pandas as pd\nimport numpy as np\nfrom infromation_theory_utils import Dissimilarity_JSD\nfrom scipy.linalg import svd, toeplitz\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Baseline Dataset Plot",
"_____no_output_____"
]
],
[
[
"lag = 512 # [32, 64, 128, 256, 512] \n\nbase = pd.read_pickle('../pkl_datasets/baseline_dataset_ACF_' + str(lag) + '.gzip')\ncotas = pd.read_csv('./boundary_files/Cotas_HxC_bins_' + str(int(lag)) + '.csv')\nnoise = pd.read_csv('./coloredNoises/coloredNoises_' + str(int(lag)) + '.csv')",
"_____no_output_____"
],
[
"Dissimilarity_matrix = np.zeros((7,7))\n\nfor index1, row1 in base.iterrows():\n lab1 = '$s_{'+(row1['ID'].split('.'))[-2].split('0')[-1]+'}$'\n r1 = row1['ACF_512']\n for index2, row2 in base.iterrows():\n lab2 = '$s_{'+(row2['ID'].split('.'))[-2].split('0')[-1]+'}$'\n r2 = row2['ACF_512']\n \n Sxx1 = toeplitz(r1)\n _, s1, _ = svd(Sxx1)\n \n Sxx2 = toeplitz(r2)\n _, s2, _ = svd(Sxx2)\n \n D = Dissimilarity_JSD(s1, s2)\n Dissimilarity_matrix[index1, index2] = D",
"_____no_output_____"
],
[
"plt.figure(figsize=(24,10))\nplt.rc('font', size=22)\nplt.rc('axes', titlesize=22)\n\nplt.subplot(1,2,1)\nfor index, row in base.iterrows():\n lab = '$s_{'+(row['ID'].split('.'))[-2].split('0')[-1]+'}$'\n plt.scatter(row['H'], row['C'], marker='.', s=300, label = lab)\n\n\nplt.plot(cotas['Entropy'],cotas['Complexity'], '--k', label = 'HxC boundaries')\nplt.plot(noise['Entropy'],noise['Complexity'], '--b', label = 'Colored noises')\nplt.xlim([0, 1])\nplt.ylim([0, np.max(cotas['Complexity'])+0.01])\nplt.ylabel('Complexity [C]')\nplt.xlabel('Entropy [H]')\nplt.legend(loc = 'upper left', frameon=False)\nplt.title('a)')\n\nplt.subplot(1,2,2)\nplt.rc('font', size=20)\nplt.rc('axes', titlesize=20)\n \ncolumns_labels = ['$s_{18}$', '$s_{17}$', '$s_{15}$', '$s_{13}$', '$s_{14}$', '$s_{16}$', '$s_{19}$']\n\nplt.imshow(np.tril(Dissimilarity_matrix, -1), cmap=plt.cm.Blues, interpolation='none', vmin=0, vmax=.6)\ncbar = plt.colorbar(fraction=.05)\ncbar.ax.set_ylabel('Jensen-Shannon Divergence')\n\nax = plt.gca()\nax.set_xticks(np.arange(7))\nax.set_yticks(np.arange(7))\nax.set_xticklabels(columns_labels)\nax.set_yticklabels(columns_labels)\n\nfor edge, spine in ax.spines.items():\n spine.set_visible(False)\n\nax.set_xticks(np.arange(7+1)-.5, minor=True)\nax.set_yticks(np.arange(7+1)-.5, minor=True)\nax.grid(which=\"minor\", color=\"w\", linestyle='-', linewidth=3)\nax.tick_params(which=\"minor\", bottom=False, left=False)\nplt.xlabel('Dissimilarity Matrix')\nplt.title('b)')\n\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
eca3251f5c59e6676f826dee471bdd314c1285e7 | 20,659 | ipynb | Jupyter Notebook | BT3103_HTML_Prototyping_in_notebooks.ipynb | triciachia98/bt3103-projects | 8ed8235ea99660f63237d9f1ebcd80bd3182d8aa | [
"MIT"
] | null | null | null | BT3103_HTML_Prototyping_in_notebooks.ipynb | triciachia98/bt3103-projects | 8ed8235ea99660f63237d9f1ebcd80bd3182d8aa | [
"MIT"
] | null | null | null | BT3103_HTML_Prototyping_in_notebooks.ipynb | triciachia98/bt3103-projects | 8ed8235ea99660f63237d9f1ebcd80bd3182d8aa | [
"MIT"
] | null | null | null | 33.107372 | 158 | 0.394066 | [
[
[
"[View in Colaboratory](https://colab.research.google.com/github/triciachia98/bt3103-projects/blob/master/BT3103_HTML_Prototyping_in_notebooks.ipynb)",
"_____no_output_____"
]
],
[
[
"from IPython.core.display import display, HTML\ndisplay(HTML('<h3>You can render HTML in a Jupyter notebook</h3>'))",
"_____no_output_____"
],
[
"# This can be an easy way to develop simple form for your AWS lambda functions. \n\ntheHTML = \"\"\"\n<div>\n Enter your name:\n <input type='text'></input>\n <button>Submit</button>\n</div>\n\"\"\"\ndisplay(HTML(theHTML))",
"_____no_output_____"
],
[
"# We can configure forms to take actions when buttons are clicked. \n\nbuttonHTML = \"\"\"\n<script>\n \nvar doSomething = function(message){\n console.log(\"Logging the message\",message);\n}\n\n</script>\n<div>\n <button onclick=\"doSomething('Hello')\">Click to log a message.</button> \n</div>\n\"\"\"\ndisplay(HTML(buttonHTML))",
"_____no_output_____"
],
[
"# Prototyping a vue application in a Jupyter notebook.\n# Examples: https://vuejs.org/v2/guide/index.html\n\nvueApp = \"\"\"\n<html>\n\n<head>\n\t<title>Plain HTML Sandbox</title>\n</head>\n\n<body>\n <div id=\"app\">\n\t Counter {{counter}}\n\t <button v-on:click=\"increment()\">Increment</button>\n\t <button v-on:click=\"reset()\">Reset</button>\n\n </div>\n\n <script src=\"https://unpkg.com/vue\"></script>\n\n <script>\n\n var app = new Vue({\n el:\"#app\",\n data: {\n counter: 0, \n databaseUrl: 'https://boesch-4.firebaseio.com/temp2.json',\n },\n\t mounted: function () { //Will start at zero if you don't update on init\n this.update();\n },\n methods:{\n increment: function(){\n this.counter++\n this.setRemoteCounter(this.counter)\n },\n reset: function(){\n // Post to the counter to set it to zero. \n this.counter = 0\n\t\t this.setRemoteCounter(this.counter) \n },\n\t\t\t\t setRemoteCounter: async function(newVal){ \n // Post to the counter to set it to zero. \n let res = await fetch(this.databaseUrl, { \n method:'put', \n body: JSON.stringify({'counter':newVal})\n\t\t\t\t }) \n\n let theJson = await res.json()\n console.log(theJson)\n\n },\n update: async function() {\n let res = await fetch(this.databaseUrl) \n let theJson = await res.json()\n console.log(\"Updated remote counter\",theJson)\n\t\t if(theJson && theJson.counter) {\n console.log(\"here\")\n this.counter = theJson.counter\n }\n\t\t\t \n }\n }\n })\n\n </script>\n</body>\n\n</html>\n\"\"\"\ndisplay(HTML(vueApp))",
"_____no_output_____"
],
[
"# Graphs. \n\ngraphs = \"\"\"\n<html>\n\n<head>\n\t<title>Plain HTML Sandbox</title>\n\t<meta charset=\"UTF-8\" />\n</head>\n\n<body>\n\n<script src=\"https://www.gstatic.com/firebasejs/3.7.4/firebase.js\"></script>\n<script src=\"https://unpkg.com/vue\"></script>\n<script src=\"https://unpkg.com/vuex\"></script>\n<script src=\"https://unpkg.com/vuexfire\"></script>\n<script src=\"https://unpkg.com/[email protected]/dist/Chart.bundle.js\"></script>\n<script src=\"https://unpkg.com/[email protected]\"></script>\n<script type=\"text/javascript\" src=\"https://www.gstatic.com/charts/loader.js\"></script>\n\n<div id=\"app\">\n <h2> Charts based on <a href=\"https://github.com/ankane/vue-chartkick\">vue-chartkick</a> </h2>\n\t\n Line chart\n\n<line-chart :data=\"{'2017-01-01': 11, '2017-01-02': 6}\"></line-chart>\nPie chart\n\n<pie-chart :data=\"[['Blueberry', 44], ['Strawberry', 23]]\"></pie-chart>\nColumn chart\n\n<column-chart :data=\"[['Sun', 32], ['Mon', 46], ['Tue', 28]]\"></column-chart>\nBar chart\n\n<bar-chart :data=\"[['Work', 32], ['Play', 1492]]\"></bar-chart>\nArea chart\n\n<area-chart :data=\"{'2017-01-01': 11, '2017-01-02': 6}\"></area-chart>\nScatter chart\n\n<scatter-chart :data=\"[[174.0, 80.0], [176.5, 82.3]]\" xtitle=\"Size\" ytitle=\"Population\"></scatter-chart>\n\nGeo chart - Google Charts\n\n<geo-chart :data=\"[['United States', 44], ['Germany', 23], ['Brazil', 22]]\"></geo-chart>\nTimeline - Google Charts\n\n<timeline :data=\"[['Washington', '1789-04-29', '1797-03-03'], ['Adams', '1797-03-03', '1801-03-03']]\"></timeline>\n\n<line-chart :data=\"[[new Date(), 5], [1368174456, 4], ['2017-01-01 00:00:00 UTC', 7]]\"></line-chart>\n \n<pie-chart :data=\"{'Blueberry': 44, 'Strawberry': 23}\"></pie-chart>\n\n<line-chart :data=\"chartData\"></line-chart>\n</div>\n\n<script>\n\tvar app = new Vue({\n el: \"#app\",\n data: {\n chartData: [[\"Jan\", 4], [\"Feb\", 2], [\"Mar\", 10], [\"Apr\", 5], [\"May\", 3]]\n }\n })\n\n</script>\n\n</body>\n\n</html>\n\n\"\"\"\ndisplay(HTML(graphs))",
"_____no_output_____"
],
[
"# You can use python to acccess your firebase data\n\n\nimport json\nimport urllib.request\n\nurl = \"https://bt3103-rocks.firebaseio.com/.json\"\n\nresponse = urllib.request.urlopen(url)\ndata = response.read() # a `bytes` object\ntext = data.decode('utf-8') # a `str`; this step can't be used if data is binary\nprint(\"The text\", text)\nmyDictionary = json.loads(text)\nprint(\"The dictionary\", myDictionary)",
"The text {\"hello\":\"class\"}\nThe dictionary {'hello': 'class'}\n"
],
[
"# Posting data\n\nimport requests\nimport json\nimport time\n\n\n# Replace this with your firebase project url\nfirebaseProject = \"https://bt3103-rocks.firebaseio.com\"\nurl = firebaseProject+\"/hello.json\"\n\n# Get data using python requests\nresp = requests.get(url=url)\nprint(\"Data before put ->\", resp.text)\n\nnewData = \"class\"\nresp = requests.put(url=url, \n data=json.dumps(newData))\n\nprint(\"Response text from put ->\", resp.text)\n\n# Get data using python requests\nresp = requests.get(url=url)\nprint(\"Data after put ->\", resp.text)",
"Data before put -> \"class\"\nResponse text from put -> {\n \"error\" : \"Permission denied\"\n}\n\nData after put -> \"class\"\n"
]
],
[
[
"Create a python function to increment your Firebase counter. ",
"_____no_output_____"
]
],
[
[
"def incrementFirebaseCounter():\n pass\n\n\n\n\nincrementFirebaseCounter()",
"_____no_output_____"
]
]
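A possible solution sketch for the exercise above (not part of the original notebook). It reuses the `requests` get/put pattern shown earlier; the database URL is only a placeholder, and the write will fail with "Permission denied" if the Firebase rules forbid unauthenticated writes, as seen in the earlier output.

~~~python
import json
import requests

def incrementFirebaseCounter(url="https://bt3103-rocks.firebaseio.com/counter.json"):
    # Read the current value (Firebase returns null -> None if the key is unset),
    # add one, and write the new value back with an HTTP PUT.
    current = requests.get(url=url).json() or 0
    resp = requests.put(url=url, data=json.dumps(current + 1))
    return resp.json()

incrementFirebaseCounter()
~~~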
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
eca33d779618acf4a5dc7d563cd6f22926b60de4 | 16,298 | ipynb | Jupyter Notebook | Tutorials/05_Moller-Plesset/5b_density-fitted-mp2.ipynb | mrnicegyu11/psi4numpy | bf83ce90c27290edac3942ad36596031c22b75ed | [
"BSD-3-Clause"
] | null | null | null | Tutorials/05_Moller-Plesset/5b_density-fitted-mp2.ipynb | mrnicegyu11/psi4numpy | bf83ce90c27290edac3942ad36596031c22b75ed | [
"BSD-3-Clause"
] | null | null | null | Tutorials/05_Moller-Plesset/5b_density-fitted-mp2.ipynb | mrnicegyu11/psi4numpy | bf83ce90c27290edac3942ad36596031c22b75ed | [
"BSD-3-Clause"
] | null | null | null | 49.090361 | 1,025 | 0.619217 | [
[
[
"\"\"\"Tutorial: Describing the implementation of density-fitted MP2 from an RHF reference\"\"\"\n\n__author__ = \"Dominic A. Sirianni\"\n__credit__ = [\"Dominic A. Sirianni\", \"Daniel G. A. Smith\"]\n\n__copyright__ = \"(c) 2014-2018, The Psi4NumPy Developers\"\n__license__ = \"BSD-3-Clause\"\n__date__ = \"2017-05-24\"",
"_____no_output_____"
]
],
[
[
"# Density Fitted MP2\n\nAs we saw in tutorial (5a), the single most expensive step for a conventional MP2 program using full ERIs is the integral transformation from the atomic orbital (AO) to molecular orbital (MO) basis, scaling as ${\\cal O}(N^5)$. The scaling of this step may be reduced to ${\\cal O}(N^4)$ if we employ density fitting, as the three-index density fitted tensors may be transformed individually into the MO basis before being recombined to form the full four-index tensors in the MO basis needed by the MP2 energy expression. This tutorial will discuss the specific challenges encountered when applying density fitting to an MP2 program.\n\n### Implementation\nThe first part of our DF-MP2 program will look exactly the same as the conventional MP2 program that we wrote in (5a), with the exception that we must specify the `scf_type df` and omit the option `mp2_type conv` within the `psi4.set_options()` block, to ensure that we are employing density fitting in the Hartree-Fock reference. Below, implement the following:\n\n- Import Psi4 and NumPy, and set memory & output file\n- Define our molecule and Psi4 options\n- Compute the RHF reference wavefucntion and energy\n- Obtain the number of occupied and virtual MOs, and total number of MOs\n- Get the orbital energies and coefficient matrix; partition into occupied & virtual blocks",
"_____no_output_____"
]
],
[
[
"# ==> Import statements & Global Options <==\nimport psi4\nimport numpy as np\n\npsi4.set_memory(int(2e9))\nnumpy_memory = 2\npsi4.core.set_output_file('output.dat', False)",
"_____no_output_____"
],
[
"# ==> Options Definitions & SCF E, Wfn <==\nmol = psi4.geometry(\"\"\"\nO\nH 1 1.1\nH 1 1.1 2 104\nsymmetry c1\n\"\"\")\n\n\npsi4.set_options({'basis': 'aug-cc-pvdz',\n 'scf_type': 'df',\n 'e_convergence': 1e-8,\n 'd_convergence': 1e-8})\n\n# Get the SCF wavefunction & energies\nscf_e, scf_wfn = psi4.energy('scf', return_wfn=True)\n\n# Number of Occupied orbitals & MOs\nndocc = scf_wfn.nalpha()\nnmo = scf_wfn.nmo()\nnvirt = nmo - ndocc\n\n# Get orbital energies, cast into NumPy array, and separate occupied & virtual\neps = np.asarray(scf_wfn.epsilon_a())\ne_ij = eps[:ndocc]\ne_ab = eps[ndocc:]\n\n# Get MO coefficients from SCF wavefunction\nC = np.asarray(scf_wfn.Ca())\nCocc = C[:, :ndocc]\nCvirt = C[:, ndocc:]",
"_____no_output_____"
]
],
[
[
"From the conventional MP2 program, we know that the next step is to obtain the ERIs and transform them into the MO basis using the orbital coefficient matrix, **C**. In order to do this using density-fitted integrals, we must first build and transform the DF-ERI's similar to that in the density-fitted HF chapter. However, we use an auxiliary basis set that better reproduces the valence electrons important for correlation compared to the JKFIT auxiliary basis of Hartree-Fock. We instead use the RIFIT auxiliary basis.",
"_____no_output_____"
]
],
[
[
"# ==> Density Fitted ERIs <==\n# Build auxiliary basis set\naux = psi4.core.BasisSet.build(mol, \"DF_BASIS_SCF\", \"\", \"RIFIT\", \"aug-cc-pVDZ\")\n\n# Build instance of Mints object\norb = scf_wfn.basisset()\nmints = psi4.core.MintsHelper(orb)\n\n# Build a zero basis\nzero_bas = psi4.core.BasisSet.zero_ao_basis_set()\n\n# Raw 3-index\nPpq = np.squeeze(mints.ao_eri(zero_bas, aux, orb, orb))\n\n# Build and invert the Coulomb metric\nmetric = mints.ao_eri(zero_bas, aux, zero_bas, aux)\nmetric.power(-0.5, 1.e-14)\nmetric = np.squeeze(metric)\n\nQpq = np.einsum(\"QP,Ppq->Qpq\", metric, Ppq, optimize=True)",
"_____no_output_____"
]
],
[
[
"Now that we have our three-index integrals, we are able to transform them into the MO basis. To do this, we could simply use `np.einsum()` to carry out the transformation in a single step:\n~~~python\n# Transform Qpq -> Qmo @ O(N^5)\nQmo = np.einsum('pi,Qpq,qj->Qij', C, Qpq, C, optimize=True)\n~~~\nThis simple transformation works, but it doesn't reduce the caling of the transformation. This approach saves over the conventional one only because a single ${\\cal O}(N^5)$ transformation would need to be done, instead of four. We can, however, borrow the idea from conventional MP2 to carry out the transformation in more than one step, saving the intermediates along the way. Using this approach, we are able to transform the `Qpq` tensors into the MO basis in two successive ${\\cal O}(N^4)$ steps. In the cell below, transform the `Qpq` tensors with this reduced scaling algorithm, and save the occupied-virtual slice of the full `Qmo` tensor:",
"_____no_output_____"
]
],
[
[
"# ==> Transform Qpq -> Qmo @ O(N^4) <==\nQmo = np.einsum('pi,Qpq->Qiq', C, Qpq, optimize=True)\nQmo = np.einsum('Qiq,qj->Qij', Qmo, C, optimize=True)\n\n# Get Occupied-Virtual Block\nQmo = Qmo[:, :ndocc, ndocc:]",
"_____no_output_____"
]
],
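As an optional sanity check (not part of the original tutorial; the array names below are made up), the factored two-step transformation can be compared against the direct single-step `einsum` on small random arrays:

~~~python
import numpy as np

# Random stand-ins for the three-index DF tensor and the MO coefficients.
naux, nbf = 12, 9
Q_rand = np.random.rand(naux, nbf, nbf)
C_rand = np.random.rand(nbf, nbf)

direct = np.einsum('pi,Qpq,qj->Qij', C_rand, Q_rand, C_rand, optimize=True)
half = np.einsum('pi,Qpq->Qiq', C_rand, Q_rand, optimize=True)
two_step = np.einsum('Qiq,qj->Qij', half, C_rand, optimize=True)

print(np.allclose(direct, two_step))  # should print True
~~~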
[
[
"We are now ready to compute the DF-MP2 correlation energy $E_0^{(2)}$. One approach for doing this would clearly be to form the four-index OVOV $(ia\\mid jb)$ ERI tensor directly [an ${\\cal O}(N^5)$ contraction], and proceed exactly as we did for conventional MP2. This would, however, result in needing to store this entire tensor in memory, which would be prohibitive for large systems/basis sets and would only result in minimal savings. A more clever (and much less memory-intensive) algorithm can be found by considering the MP2 correlation energy expressions,\n\n\\begin{equation}\nE_{\\rm 0,\\,SS}^{(2)} = \\sum_{ij}\\sum_{ab}\\frac{(ia\\mid jb)[(ia\\mid jb) - (ib\\mid ja)]}{\\epsilon_i - \\epsilon_a + \\epsilon_j - \\epsilon_b},\\,{\\rm and}\n\\end{equation}\n\\begin{equation}\nE_{\\rm 0,\\,OS}^{(2)} = \\sum_{ij}\\sum_{ab}\\frac{(ia\\mid jb)(ia\\mid jb)}{\\epsilon_i - \\epsilon_a + \\epsilon_j - \\epsilon_b},\n\\end{equation}\n\nfor particular values of the occupied orbital indices $i$ and $j$:\n\n\\begin{equation}\nE_{\\rm 0,\\,SS}^{(2)}(i, j) = \\sum_{ab}\\frac{I_{ab}[I_{ab} - I_{ba}]}{\\epsilon_i + \\epsilon_j - \\boldsymbol{\\epsilon}_{ab}}\n\\end{equation}\n\\begin{equation}\nE_{\\rm 0,\\,OS}^{(2)}(i, j) = \\sum_{ab}\\frac{I_{ab}I_{ab}}{\\epsilon_i + \\epsilon_j - \\boldsymbol{\\epsilon}_{ab}},\n\\end{equation}\n\nfor virtual-virtual blocks of the full ERI tensors $I_{ab}$ and a matrix $\\boldsymbol{\\epsilon}_{ab}$ containing all possible combinations of the virtual orbital energies $\\epsilon_a$ and $\\epsilon_b$. These expressions are advantageous because they only call for two-index contractions between the virtual-virtual blocks of the OVOV ERI tensor, and the storage of only the VV-block of this tensor in memory. Furthermore, the formation of the $I_{ab}$ tensor is also ameliorated, since only the auxiliary-virtual blocks of the three-index `Qmo` tensor must be contracted, which can be done on-the-fly as opposed to beforehand (requiring no storage in memory). In practice, these expressions can be used within explicit loops over occupied indices $i$ and $j$; therefore the overall scaling of this step is still ${\\cal O}(N^5)$ (formation of $I_{ab}$ is ${\\cal O}(N^3)$ inside two loops), however the the drastically reduced memory requirements result in this method a significant win over conventional MP2.\n\nOne potentially mysterious quantity in the frozen-index expressions given above is the virtual-virtual orbital eigenvalue tensor, **$\\epsilon$**. To build this array, we can again borrow an idea from our implementation of conventional MP2: reshaping and broadcasting. In the cell below, use these techniques to build the VV $\\boldsymbol{\\epsilon}_{ab}$ tensor.\n\nHint: In the frozen-index expressions above, $\\boldsymbol{\\epsilon}_{ab}$ is *subtracted* from the occupied orbital energies $\\epsilon_i$ and $\\epsilon_j$. Therefore, the virtual orbital energies should be added together to have the correct sign!",
"_____no_output_____"
]
],
[
[
"# ==> Build VV Epsilon Tensor <==\ne_vv = e_ab.reshape(-1, 1) + e_ab",
"_____no_output_____"
]
],
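A tiny illustration of the reshape-and-broadcast trick used above, with three made-up virtual orbital energies:

~~~python
import numpy as np

e_demo = np.array([0.1, 0.3, 0.7])
# A (3, 1) column plus a (3,) row broadcasts to the full (nvirt, nvirt) sum matrix.
print(e_demo.reshape(-1, 1) + e_demo)
~~~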
[
[
"In addition to the memory savings incurred by generating VV-blocks of our ERI tensors on-the-fly, we can exploit the permutational symmetry of these tensors [Sherrill:ERI] to drastically reduce the number of loops (and therefore Qv,Qv contractions!) which are needed to compute the MP2 correlation energy. To see the relevant symmetry, recall that a spin-free four index ERI over spatial orbitals (written in chemists' notation) is given by\n\n$$(i\\,a\\mid j\\,b) = \\int{\\rm d}^3{\\bf r}_1{\\rm d}^3{\\bf r}_2\\phi_i^*({\\bf x}_1)\\phi_a({\\bf x}_1)\\frac{1}{r_{12}}\\phi_j^*({\\bf x}_2)\\phi_b({\\bf x}_2)$$\n\nFor real orbitals, it is easy to see that $(i\\,a\\mid j\\,b) = (j\\,b\\mid i\\,a)$; therefore, it is unnecessary to iterate over all combinations of $i$ and $j$, since the value of the contractions containing either $(i\\,a\\mid j\\,b)$ or $(j\\,b\\mid i\\,a)$ will be identical. Therefore, it suffices to iterate over all $i$ and only $j\\geq i$. Then, the \"diagonal elements\" ($i = j$) will contribute once to each of the same-spin and opposite-spin correlation energies, and the \"off-diagonal\" elements ($i\\neq j$) will contribute twice to each correlation energy due to symmetry. This corresponds to placing either a 1 or a 2 in the numerator of the energy denominator, i.e., \n\n\\begin{equation}\nE_{denom} = \\frac{\\alpha}{\\epsilon_i + \\epsilon_j - \\boldsymbol{\\epsilon}_{ab}};\\;\\;\\;\\alpha = \\begin{cases}1;\\; i=j\\\\2;\\;i\\neq j\\end{cases},\n\\end{equation}\n\nbefore contracting this tensor with $I_{ab}$ and $I_{ba}$ to compute the correlation energy. In the cell below, compute the same-spin and opposite-spin DF-MP2 correlation energies using the frozen-index expressions 3 and 4 above, exploiting the permutational symmetry of the full $(ia\\mid jb)$ ERIs. Then, using the correlation energies, compute the total MP2 energy using the DF-RHF energy we computed above.",
"_____no_output_____"
]
],
[
[
"mp2_os_corr = 0.0\nmp2_ss_corr = 0.0\nfor i in range(ndocc):\n # Get epsilon_i from e_ij\n e_i = e_ij[i]\n \n # Get 2d array Qa for i from Qov\n i_Qa = Qmo[:, i, :]\n \n for j in range(i, ndocc):\n # Get epsilon_j from e_ij\n e_j = e_ij[j]\n \n # Get 2d array Qb for j from Qov\n j_Qb = Qmo[:, j, :]\n \n # Compute 2d ERI array for fixed i,j from Qa & Qb\n ij_Iab = np.einsum('Qa,Qb->ab', i_Qa, j_Qb, optimize=True)\n\n # Compute energy denominator\n if i == j:\n e_denom = 1.0 / (e_i + e_j - e_vv)\n else:\n e_denom = 2.0 / (e_i + e_j - e_vv)\n\n # Compute SS & OS MP2 Correlation\n mp2_os_corr += np.einsum('ab,ab,ab->', ij_Iab, ij_Iab, e_denom, optimize=True)\n mp2_ss_corr += np.einsum('ab,ab,ab->', ij_Iab, ij_Iab - ij_Iab.T, e_denom, optimize=True)\n\n# Compute MP2 correlation & total MP2 Energy\nmp2_corr = mp2_os_corr + mp2_ss_corr\nMP2_E = scf_e + mp2_corr",
"_____no_output_____"
],
[
"# ==> Compare to Psi4 <==\npsi4.compare_values(psi4.energy('mp2'), MP2_E, 8, 'MP2 Energy')",
" MP2 Energy........................................................PASSED\n"
]
],
[
[
"## References\n\n1. Original paper: \"Note on an Approximation Treatment for Many-Electron Systems\"\n\t> [[Moller:1934:618](https://journals.aps.org/pr/abstract/10.1103/PhysRev.46.618)] C. Møller and M. S. Plesset, *Phys. Rev.* **46**, 618 (1934)\n2. The Laplace-transformation in MP theory: \"Minimax approximation for the decomposition of energy denominators in Laplace-transformed Møller–Plesset perturbation theories\"\n > [[Takasuka:2008:044112](http://aip.scitation.org/doi/10.1063/1.2958921)] A. Takatsuka, T. Siichiro, and W. Hackbusch, *J. Phys. Chem.*, **129**, 044112 (2008)\n3. Equations taken from:\n\t> [[Szabo:1996](https://books.google.com/books?id=KQ3DAgAAQBAJ&printsec=frontcover&dq=szabo+%26+ostlund&hl=en&sa=X&ved=0ahUKEwiYhv6A8YjUAhXLSCYKHdH5AJ4Q6AEIJjAA#v=onepage&q=szabo%20%26%20ostlund&f=false)] A. Szabo and N. S. Ostlund, *Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory*. Courier Corporation, 1996.\n4. Algorithms taken from:\n\t> [Crawford:prog] T. D. Crawford, \"The Second-Order Møller–Plesset Perturbation Theory (MP2) Energy.\" Accessed via the web at http://github.com/CrawfordGroup/ProgrammingProjects.\n5. ERI Permutational Symmetries\n\t> [Sherrill:ERI] C. David Sherrill, \"Permutational Symmetries of One- and Two-Electron Integrals.\" Accessed via the web at http://vergil.chemistry.gatech.edu/notes/permsymm/permsymm.pdf.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
eca345493080d3c88597f3b3ecba3fa97e37556c | 212,857 | ipynb | Jupyter Notebook | diabetes_analysis.ipynb | mayoor/stats-ml-exps | 5ac26d948cf43fd5b3e349dd58125b0ca80f0eaf | [
"Apache-2.0"
] | null | null | null | diabetes_analysis.ipynb | mayoor/stats-ml-exps | 5ac26d948cf43fd5b3e349dd58125b0ca80f0eaf | [
"Apache-2.0"
] | null | null | null | diabetes_analysis.ipynb | mayoor/stats-ml-exps | 5ac26d948cf43fd5b3e349dd58125b0ca80f0eaf | [
"Apache-2.0"
] | null | null | null | 103.883358 | 36,304 | 0.828157 | [
[
[
"## Experiments with Diabetes Data",
"_____no_output_____"
]
],
[
[
"# pip install scipy",
"_____no_output_____"
],
[
"import pandas as pd\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.stats as stats \n%matplotlib inline",
"_____no_output_____"
],
[
"df = pd.read_csv('diabetes.csv')",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.describe()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\nn, bins, patches = ax.hist(df['BMI'], 50, density=True)\n\nsigma = np.std(df['BMI'])\nmu = np.mean(df['BMI'])\n# add a 'best fit' line\ny = ((1 / (np.sqrt(2 * np.pi) * sigma)) *\n np.exp(-0.5 * (1 / sigma * (bins - mu))**2))\nax.plot(bins, y, '--')",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\nn, bins, patches = ax.hist(df[df['Outcome']>0.8]['BMI'], 50, density=True)\ndiabetes_recs = df[df['Outcome']>0.8]\nsigma_d = np.std(df[df['Outcome']>0.8]['BMI'])\nmu_d = np.mean(df[df['Outcome']>0.8]['BMI'])\n# add a 'best fit' line\nyb = ((1 / (np.sqrt(2 * np.pi) * sigma_d)) *\n np.exp(-0.5 * (1 / sigma_d * (bins - mu_d))**2))\nax.plot(bins, yb, '--')",
"_____no_output_____"
],
[
"print(sigma_d, sigma, mu, mu_d)",
"7.249404266473001 7.87902573154013 31.992578124999998 35.14253731343284\n"
],
[
"7.879/np.sqrt(len(diabetes_recs))",
"_____no_output_____"
],
[
"df[df['Outcome']==1].mean()",
"_____no_output_____"
],
[
"bmi_outcome = df[['BMI','Age','Outcome']]",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\nax.scatter(bmi_outcome['Age'],bmi_outcome['BMI'],color=['red' if o > 0.8 else 'blue' for o in bmi_outcome['Outcome']])\na = ax.set_xlabel(\"AGE\")\nb = ax.set_ylabel(\"BMI\")",
"_____no_output_____"
],
[
"bmi_outcome[bmi_outcome['Outcome'] > 0.8].describe()",
"_____no_output_____"
],
[
"pedigree_outcome = df[['DiabetesPedigreeFunction','Outcome','BMI']]",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\nax.scatter(pedigree_outcome['DiabetesPedigreeFunction'],pedigree_outcome['BMI'],color=['red' if o > 0.8 else 'blue' for o in pedigree_outcome['Outcome']])\na = ax.set_xlabel(\"DiabetesPedigreeFunction\")\nb = ax.set_ylabel(\"BMI\")",
"_____no_output_____"
],
[
"fit_alpha, fit_loc, fit_beta=stats.gamma.fit(pedigree_outcome['DiabetesPedigreeFunction'])",
"_____no_output_____"
],
[
"print (f\"alpha: {fit_alpha}, loc: {fit_loc}, beta: {fit_beta}\")",
"alpha: 1.5869013338294626, loc: 0.07670028444557661, beta: 0.24902575812405062\n"
],
[
"fig, ax = plt.subplots()\nnbins = 25\nn, bins, patches = ax.hist(pedigree_outcome['DiabetesPedigreeFunction'], bins=nbins)\nx = np.linspace(0,2.5,nbins+1)\ny1 = stats.gamma.pdf(x, a=fit_alpha, scale=fit_beta) * 100\nax.plot(bins,y1,'-')\n",
"_____no_output_____"
],
[
"from sklearn.linear_model import LinearRegression\nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"lmodel = LinearRegression()",
"_____no_output_____"
],
[
"trainx, testx, trainy, testy = train_test_split(df[[col for col in df.columns if col != 'Outcome']].values, df['Outcome'], test_size=0.3)",
"_____no_output_____"
],
[
"lmodel.fit(trainx, trainy)",
"_____no_output_____"
],
[
"prediction = lmodel.predict(testx)",
"_____no_output_____"
],
[
"prediction[[prediction > 0.5]] = 1",
"<ipython-input-24-d9296ea19e92>:1: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n prediction[[prediction > 0.5]] = 1\n"
],
[
"prediction[[prediction < 1]] = 0",
"<ipython-input-25-f5453e55cdce>:1: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n prediction[[prediction < 1]] = 0\n"
],
[
"from sklearn.metrics import classification_report",
"_____no_output_____"
],
[
"print(classification_report(testy, prediction))",
" precision recall f1-score support\n\n 0 0.76 0.85 0.81 150\n 1 0.65 0.51 0.57 81\n\n accuracy 0.73 231\n macro avg 0.71 0.68 0.69 231\nweighted avg 0.72 0.73 0.72 231\n\n"
],
[
"from sklearn.linear_model import LogisticRegression\nlogmodel = LogisticRegression(max_iter=1000, C=10)",
"_____no_output_____"
],
[
"logmodel.fit(trainx, trainy)",
"_____no_output_____"
],
[
"prediction = logmodel.predict(testx)",
"_____no_output_____"
],
[
"print(classification_report(testy, prediction))",
" precision recall f1-score support\n\n 0 0.77 0.85 0.81 150\n 1 0.66 0.53 0.59 81\n\n accuracy 0.74 231\n macro avg 0.72 0.69 0.70 231\nweighted avg 0.73 0.74 0.73 231\n\n"
],
[
"fig,ax = plt.subplots()\nax.barh(width=lmodel.coef_, y=[col for col in df.columns if col != 'Outcome'])",
"_____no_output_____"
],
[
"lmodel.coef_",
"_____no_output_____"
],
[
"from sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.impute import KNNImputer\nfrom sklearn.impute import SimpleImputer",
"_____no_output_____"
],
[
"df[df['BloodPressure']==0].count()",
"_____no_output_____"
],
[
"linpipeline = Pipeline([('imputer', KNNImputer()),\n ('scaler', StandardScaler()),\n ('model', LinearRegression())\n ])",
"_____no_output_____"
],
[
"linpipeline.fit(trainx, trainy)",
"_____no_output_____"
],
[
"preds = linpipeline.predict(testx)",
"_____no_output_____"
],
[
"preds[np.argwhere(preds>=0.5).flatten()] = 1",
"_____no_output_____"
],
[
"preds[np.argwhere(preds<1).flatten()]=0",
"_____no_output_____"
],
[
"print(classification_report(testy, preds))",
" precision recall f1-score support\n\n 0 0.76 0.85 0.81 150\n 1 0.65 0.51 0.57 81\n\n accuracy 0.73 231\n macro avg 0.71 0.68 0.69 231\nweighted avg 0.72 0.73 0.72 231\n\n"
],
[
"fig, ax = plt.subplots()\nax.barh(width=linpipeline['model'].coef_, y = [col for col in df.columns if col != 'Outcome'])",
"_____no_output_____"
],
[
"dfnew = df[df.columns]",
"_____no_output_____"
],
[
"dfnew['BMI'] = dfnew['BMI'].replace({0:np.nan})",
"_____no_output_____"
],
[
"dfnew['BloodPressure'] = dfnew['BloodPressure'].replace({0:np.nan})",
"_____no_output_____"
],
[
"dfnew['SkinThickness'] = dfnew['SkinThickness'].replace({0:np.nan})",
"_____no_output_____"
],
[
"trainxn, testxn, trainyn, testyn = train_test_split(dfnew[[col for col in dfnew.columns if col != 'Outcome']].values, dfnew['Outcome'], test_size=0.3)",
"_____no_output_____"
],
[
"linpipeline2 = Pipeline([('imputer', KNNImputer()),\n ('scaler', StandardScaler()),\n ('model', LinearRegression())\n ])",
"_____no_output_____"
],
[
"linpipeline2.fit(trainxn, trainyn)",
"_____no_output_____"
],
[
"predsn = linpipeline2.predict(testxn)\npredsn[np.argwhere(predsn>=0.65).flatten()] = 1\npredsn[np.argwhere(predsn<1).flatten()]=0",
"_____no_output_____"
],
[
"print(classification_report(testyn, predsn))",
" precision recall f1-score support\n\n 0 0.75 0.97 0.85 152\n 1 0.88 0.37 0.52 79\n\n accuracy 0.77 231\n macro avg 0.81 0.67 0.68 231\nweighted avg 0.79 0.77 0.73 231\n\n"
],
[
"fig, ax = plt.subplots()\nax.barh(width=linpipeline['model'].coef_, y = [col for col in df.columns if col != 'Outcome'])",
"_____no_output_____"
],
[
"plt.bar(height=df['Outcome'].value_counts().values,x=[0,1])",
"_____no_output_____"
],
[
"from sklearn.tree import DecisionTreeClassifier",
"_____no_output_____"
],
[
"treepipeline = Pipeline([('imputer', KNNImputer()),\n ('scaler', StandardScaler()),\n ('model', DecisionTreeClassifier())\n ])",
"_____no_output_____"
],
[
"treepipeline.fit(trainxn, trainyn)\ndt = DecisionTreeClassifier()\ndt.fit(trainx, trainy)",
"_____no_output_____"
],
[
"print(classification_report(testyn, treepipeline.predict(testxn)))",
" precision recall f1-score support\n\n 0 0.75 0.70 0.73 152\n 1 0.49 0.56 0.52 79\n\n accuracy 0.65 231\n macro avg 0.62 0.63 0.63 231\nweighted avg 0.66 0.65 0.66 231\n\n"
],
[
"print(classification_report(testy, dt.predict(testx)))",
" precision recall f1-score support\n\n 0 0.74 0.78 0.76 150\n 1 0.55 0.49 0.52 81\n\n accuracy 0.68 231\n macro avg 0.64 0.64 0.64 231\nweighted avg 0.67 0.68 0.68 231\n\n"
],
[
"from sklearn.ensemble import AdaBoostClassifier",
"_____no_output_____"
],
[
"boostpipeline = Pipeline([('imputer', KNNImputer()),\n ('scaler', StandardScaler()),\n ('model', AdaBoostClassifier())\n ])",
"_____no_output_____"
],
[
"boostpipeline.fit(trainxn, trainyn)\nprint(classification_report(testyn, boostpipeline.predict(testxn)))",
" precision recall f1-score support\n\n 0 0.78 0.83 0.80 152\n 1 0.62 0.54 0.58 79\n\n accuracy 0.73 231\n macro avg 0.70 0.69 0.69 231\nweighted avg 0.72 0.73 0.73 231\n\n"
],
[
"plt.bar(height=df['Pregnancies'].value_counts(), x=df['Pregnancies'].value_counts().index.values)",
"_____no_output_____"
],
[
"df_new = df[[col for col in df.columns if col != 'Pregnancies' and col != 'Insulin']]",
"_____no_output_____"
],
[
"trainxn, testxn, trainyn, testyn = train_test_split(df_new[[col for col in df_new.columns if col != 'Outcome']].values, df_new['Outcome'], test_size=0.3)",
"_____no_output_____"
],
[
"linpipeline_nop = Pipeline([('imputer', KNNImputer()),\n ('scaler', StandardScaler()),\n ('model', LinearRegression())\n ])",
"_____no_output_____"
],
[
"linpipeline_nop.fit(trainxn, trainyn)",
"_____no_output_____"
],
[
"predsnp = linpipeline_nop.predict(testxn)\npredsnp[np.argwhere(predsnp>=0.40).flatten()] = 1\npredsnp[np.argwhere(predsnp<1).flatten()]=0",
"_____no_output_____"
],
[
"print(classification_report(testyn, predsnp))",
" precision recall f1-score support\n\n 0 0.82 0.88 0.85 145\n 1 0.77 0.66 0.71 86\n\n accuracy 0.80 231\n macro avg 0.79 0.77 0.78 231\nweighted avg 0.80 0.80 0.80 231\n\n"
],
[
"adabmodel = Pipeline([('imputer', KNNImputer()),\n ('scaler', StandardScaler()),\n ('model', AdaBoostClassifier())\n ])",
"_____no_output_____"
],
[
"adabmodel.fit(trainxn, trainyn)\nprint(classification_report(testyn, adabmodel.predict(testxn)))",
" precision recall f1-score support\n\n 0 0.79 0.89 0.84 145\n 1 0.76 0.60 0.68 86\n\n accuracy 0.78 231\n macro avg 0.78 0.75 0.76 231\nweighted avg 0.78 0.78 0.78 231\n\n"
],
[
"plt.barh(y=[col for col in df_new.columns if col != 'Outcome'], width=adabmodel['model'].feature_importances_)",
"_____no_output_____"
],
[
"df_new.corr()",
"_____no_output_____"
],
[
"from sklearn.decomposition import PCA\nfrom sklearn.preprocessing import MinMaxScaler",
"_____no_output_____"
],
[
"pl = Pipeline([('scaler', StandardScaler()),\n# ('Imputer', KNNImputer()),\n ('dimreduction', PCA(n_components=6)),\n ('linreg', LinearRegression())])",
"_____no_output_____"
],
[
"pl.fit(trainxn, trainyn)\npredsnp1 = pl.predict(testxn)\npredsnp1[np.argwhere(predsnp1>=0.45).flatten()] = 1\npredsnp1[np.argwhere(predsnp1<1).flatten()]=0",
"_____no_output_____"
],
[
"print(classification_report(testyn, predsnp1))",
" precision recall f1-score support\n\n 0 0.62 0.61 0.62 145\n 1 0.36 0.36 0.36 86\n\n accuracy 0.52 231\n macro avg 0.49 0.49 0.49 231\nweighted avg 0.52 0.52 0.52 231\n\n"
],
[
"from sklearn.feature_selection import RFECV\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn import metrics",
"_____no_output_____"
],
[
"rfecv = RFECV(estimator=LogisticRegression(), step=1, cv=StratifiedKFold(10),\n scoring=metrics.make_scorer(metrics.f1_score))\nrfecv.fit(trainxn, trainyn)",
"_____no_output_____"
],
[
"predsnp2 = rfecv.predict(testxn)\npredsnp2[np.argwhere(predsnp2>=0.45).flatten()] = 1\npredsnp2[np.argwhere(predsnp2<1).flatten()]=0",
"_____no_output_____"
],
[
"print(classification_report(testyn, predsnp2))",
" precision recall f1-score support\n\n 0 0.76 0.93 0.84 145\n 1 0.81 0.51 0.63 86\n\n accuracy 0.77 231\n macro avg 0.79 0.72 0.73 231\nweighted avg 0.78 0.77 0.76 231\n\n"
],
[
"rfecv.get_support()",
"_____no_output_____"
],
[
"imp_features = np.array([col for col in df_new.columns if col != 'Outcome'])[rfecv.get_support()].tolist()\nprint (f\" Important Features are: {imp_features}\")",
" Important Features are: ['Glucose', 'BMI', 'DiabetesPedigreeFunction', 'Age']\n"
],
[
"dff = df[imp_features+['Outcome']]\ntrainx_trim, testx_trim, trainy_trim, testy_trim = train_test_split(dff[[col for col in dff.columns if col!='Outcome']].values, dff['Outcome'], test_size=0.3)",
"_____no_output_____"
],
[
"adabmodel = Pipeline([('imputer', KNNImputer()),\n ('scaler', StandardScaler()),\n ('model', AdaBoostClassifier(base_estimator=LogisticRegression(), n_estimators=25, learning_rate=0.5))\n ])\nadabmodel.fit(trainx_trim, trainy_trim)\nprint(classification_report(testy_trim, adabmodel.predict(testx_trim)))",
" precision recall f1-score support\n\n 0 0.78 0.89 0.83 152\n 1 0.71 0.53 0.61 79\n\n accuracy 0.77 231\n macro avg 0.75 0.71 0.72 231\nweighted avg 0.76 0.77 0.76 231\n\n"
],
[
"linpipeline_trim = Pipeline([('imputer', KNNImputer()),\n ('scaler', StandardScaler()),\n ('model', LinearRegression())\n ])\nlinpipeline_trim.fit(trainx_trim, trainy_trim)\npredsnp3 = linpipeline_trim.predict(testx_trim)\npredsnp3[np.argwhere(predsnp3>=0.45).flatten()] = 1\npredsnp3[np.argwhere(predsnp3<1).flatten()]=0",
"_____no_output_____"
],
[
"print(classification_report(testy_trim, predsnp3))",
" precision recall f1-score support\n\n 0 0.81 0.83 0.82 152\n 1 0.66 0.63 0.65 79\n\n accuracy 0.76 231\n macro avg 0.74 0.73 0.73 231\nweighted avg 0.76 0.76 0.76 231\n\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca3654a0ec694a684ad918864ceb52fe6178a3e | 25,507 | ipynb | Jupyter Notebook | Emissions_CW.ipynb | Vizzuality/gfw_nbks | 70b166e7aaf26c697d9171bb98565f76f17b965a | [
"MIT"
] | 1 | 2017-12-22T18:03:35.000Z | 2017-12-22T18:03:35.000Z | Emissions_CW.ipynb | Vizzuality/gfw_nbks | 70b166e7aaf26c697d9171bb98565f76f17b965a | [
"MIT"
] | null | null | null | Emissions_CW.ipynb | Vizzuality/gfw_nbks | 70b166e7aaf26c697d9171bb98565f76f17b965a | [
"MIT"
] | null | null | null | 98.482625 | 17,632 | 0.839495 | [
[
[
"## Emissions widget:\n\n* dyanamic sentence\n\n* stacked chart of historical LULC and agriculture emissions\n\nWe will need to use the Climate Watch API and grab data:\n\n- Total including LUCF (for DS)\n- Agriculture\n- Land-use change and Forestry\n\nhttps://www.climatewatchdata.org/countries/BRA?calculation=ABSOLUTE_VALUE&filter=312%2C314&source=25",
"_____no_output_____"
]
],
[
[
"#Import Global Metadata etc\n\n%run '0.Importable_Globals.ipynb'",
"_____no_output_____"
],
[
"# For FAO data, only country (admin0) level data are avaiable\n\nadm0 = 'BRA' ",
"_____no_output_____"
],
[
"url = f\"https://www.climatewatchdata.org/api/v1/emissions?gas=107&location={adm0}&source=25\"\n\nr = requests.get(url)\nr.url",
"_____no_output_____"
],
[
"# keys = ['Total including LUCF','Land-Use Change and Forestry','Agriculture']",
"_____no_output_____"
],
[
"def extract_emissions(response, key):\n \"\"\"After a response from the Climate Watch API extract two lists of\n years and values for a specified emission type.\n e.g. ['Total including LUCF','Land-Use Change and Forestry','Agriculture']:\n Returns year, values in absolute emissions, and values relative to total annual emissions\n \"\"\"\n years = []\n values = []\n total_emissions = []\n for item in r.json():\n if item.get('sector') == 'Total including LUCF':\n for row in (item.get('emissions')):\n total_emissions.append(row.get('value'))\n np.array(total_emissions)\n for k in [key]:\n for item in response.json():\n if item.get('sector') == k:\n for row in (item.get('emissions')):\n years.append(row.get('year'))\n values.append(row.get('value'))\n return np.array(years), np.array(values), np.array(total_emissions)",
"_____no_output_____"
],
[
"extract_emissions(r, 'Agriculture')",
"_____no_output_____"
],
[
"years, agriculture_values, total_emissions = extract_emissions(r, 'Agriculture')\n_, lucf_values, _ = extract_emissions(r, 'Land-Use Change and Forestry')\n\nwidth = 0.35\nfig, ax = plt.subplots()\nax.fill_between(years, 0, agriculture_values, color='darkblue', alpha=0.5)\nax.fill_between(years, agriculture_values, agriculture_values + lucf_values, color='lightblue', alpha=0.5)\n\n# add some text for labels, title and axes ticks\nax.set_ylabel(f'Emissions (tCO2e)')\nax.set_xlabel('Year')\nax.set_title(f\"Historical emissions for {iso_to_countries[adm0]}\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"On hover you can show the absolute and relative emissions. Relative emissions are calculated as a % of the total emissions, e.g:",
"_____no_output_____"
]
],
[
[
"relative_agriculture_emissions = (agriculture_values / total_emissions) * 100\nprint(relative_agriculture_emissions)",
"[ 20.89880789 21.31417274 21.53370348 21.51452447 21.76950478\n 21.75744554 20.73746778 20.81585142 21.03886939 21.06791913\n 21.27820433 19.25067116 20.11753352 20.95425744 21.44168663\n 21.46427829 30.24797958 29.72646501 29.33255573 30.16287807\n 29.73509963 35.26227491 34.03070024 33.15889468 32.56050677]\n"
],
[
"emission_fraction = ((sum(agriculture_values) + sum(lucf_values))/sum(total_emissions)) * 100\n\n(\n f\"In {iso_to_countries[adm0]}, \"\n f\"land-use change and forestry combined with agriculture \"\n f\"contributed {lucf_values.sum() + agriculture_values.sum():3,.0f} tCO₂e of emissions \"\n f\"emissions from {years.min()}–{years.max()}, \"\n f\"{emission_fraction:2.0f}% of {iso_to_countries[adm0]}'s \"\n f\"total over this period.\"\n)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
eca36e107b25a33df5a5ecee80f23c45351edb3a | 25,364 | ipynb | Jupyter Notebook | notebooks/summarize-evaluation-icassp-clips.ipynb | BirdVox/bv_context_adaptation | 9bd446326ac927d72f7c333eac07ee0490fc3127 | [
"MIT"
] | 5 | 2018-10-17T21:17:26.000Z | 2019-06-14T01:48:29.000Z | notebooks/summarize-evaluation-icassp-clips.ipynb | BirdVox/bv_context_adaptation | 9bd446326ac927d72f7c333eac07ee0490fc3127 | [
"MIT"
] | null | null | null | notebooks/summarize-evaluation-icassp-clips.ipynb | BirdVox/bv_context_adaptation | 9bd446326ac927d72f7c333eac07ee0490fc3127 | [
"MIT"
] | 3 | 2018-12-22T00:04:43.000Z | 2021-06-09T20:02:28.000Z | 41.242276 | 89 | 0.400725 | [
[
[
"import csv\nimport datetime\nimport h5py\nimport itertools\nimport numpy as np\nimport os\nimport pandas as pd\nimport sklearn.metrics\nimport sys\nimport time\n\nsys.path.append(\"../src\")\nimport localmodule\n\n\nfor aug_kind_str in [\"none\", \"all\"]:\n\n # Define constants.\n data_dir = localmodule.get_data_dir()\n dataset_name = localmodule.get_dataset_name()\n folds = localmodule.fold_units()\n models_dir = localmodule.get_models_dir()\n units = localmodule.get_units()\n n_units = len(units)\n model_name = \"icassp-convnet\"\n if not aug_kind_str == \"none\":\n model_name = \"_\".join(\n [model_name, \"aug-\" + aug_kind_str])\n model_dir = os.path.join(models_dir, model_name)\n\n # Initialize dict of best trials.\n adhoc_best_trials_dict = {}\n \n # Initialize \"best-5\" matrices.\n adhoc_val_best5_tps = np.zeros((n_units, 5))\n adhoc_val_best5_fps = np.zeros((n_units, 5))\n adhoc_val_best5_tns = np.zeros((n_units, 5))\n adhoc_val_best5_fns = np.zeros((n_units, 5))\n\n adhoc_test_best5_tps = np.zeros((n_units, 5))\n adhoc_test_best5_fps = np.zeros((n_units, 5))\n adhoc_test_best5_tns = np.zeros((n_units, 5))\n adhoc_test_best5_fns = np.zeros((n_units, 5))\n\n\n # Define list of trial combinations.\n list_of_trials = [list(range(5))] * n_units\n combinations = list(itertools.product(*list_of_trials))\n n_combinations = len(combinations)\n\n\n # Loop over recording units.\n for test_unit_id, test_unit_str in enumerate(units):\n\n # Define directory for test unit.\n unit_dir = os.path.join(model_dir, test_unit_str)\n\n # Create CSV file for metrics.\n metrics_name = \"_\".join([\n dataset_name,\n model_name,\n test_unit_str,\n \"clip-metrics\"\n ])\n metrics_path = os.path.join(unit_dir, metrics_name + \".csv\")\n metrics_df = pd.read_csv(metrics_path)\n\n # Find 5 trials maximizing validation accuracy.\n adhoc_val_accs = np.array( \\\n metrics_df[\"Ad hoc validation accuracy (%)\"])\n adhoc_val_best5_trials = np.argsort(adhoc_val_accs)[::-1][:5]\n \n \n # Store best trial in dictionary.\n adhoc_val_best1_trial = adhoc_val_best5_trials[0]\n adhoc_best_trials_dict[test_unit_str] = adhoc_val_best1_trial\n\n # Read best 5 validation TP, FP, TN, and FN.\n adhoc_val_tps = np.array(metrics_df[\"Ad hoc validation TP\"])\n adhoc_val_best5_tps[test_unit_id, :] = \\\n adhoc_val_tps[adhoc_val_best5_trials]\n adhoc_val_fps = np.array(metrics_df[\"Ad hoc validation FP\"])\n adhoc_val_best5_fps[test_unit_id, :] = \\\n adhoc_val_fps[adhoc_val_best5_trials]\n adhoc_val_tns = np.array(metrics_df[\"Ad hoc validation TN\"])\n adhoc_val_best5_tns[test_unit_id, :] = \\\n adhoc_val_tns[adhoc_val_best5_trials]\n adhoc_val_fns = np.array(metrics_df[\"Ad hoc validation FN\"])\n adhoc_val_best5_fns[test_unit_id, :] = \\\n adhoc_val_fns[adhoc_val_best5_trials]\n\n # Read cv-best 5 test TP, FP, TN, and FN.\n adhoc_test_tps = np.array(metrics_df[\"Ad hoc test TP\"])\n adhoc_test_best5_tps[test_unit_id, :] = \\\n adhoc_test_tps[adhoc_val_best5_trials]\n adhoc_test_fps = np.array(metrics_df[\"Ad hoc test FP\"])\n adhoc_test_best5_fps[test_unit_id, :] = \\\n adhoc_test_fps[adhoc_val_best5_trials]\n adhoc_test_tns = np.array(metrics_df[\"Ad hoc test TN\"])\n adhoc_test_best5_tns[test_unit_id, :] = \\\n adhoc_test_tns[adhoc_val_best5_trials]\n adhoc_test_fns = np.array(metrics_df[\"Ad hoc test FN\"])\n adhoc_test_best5_fns[test_unit_id, :] = \\\n adhoc_test_fns[adhoc_val_best5_trials]\n\n\n # Get mean and standard deviation of validation accuracies.\n comb_best5_adhoc_val_tps = []\n comb_best5_adhoc_val_fps = []\n 
comb_best5_adhoc_val_tns = []\n    comb_best5_adhoc_val_fns = []\n    for comb_id in range(n_combinations):\n        comb = list(combinations[comb_id])\n        comb_best5_adhoc_val_tps.append(np.sum(np.array(\n            [adhoc_val_best5_tps[t, comb[t]] for t in range(n_units)])))\n        comb_best5_adhoc_val_fps.append(np.sum(np.array(\n            [adhoc_val_best5_fps[t, comb[t]] for t in range(n_units)])))\n        comb_best5_adhoc_val_tns.append(np.sum(np.array(\n            [adhoc_val_best5_tns[t, comb[t]] for t in range(n_units)])))\n        comb_best5_adhoc_val_fns.append(np.sum(np.array(\n            [adhoc_val_best5_fns[t, comb[t]] for t in range(n_units)])))\n    comb_best5_adhoc_val_tps = np.array(comb_best5_adhoc_val_tps)\n    comb_best5_adhoc_val_fps = np.array(comb_best5_adhoc_val_fps)\n    comb_best5_adhoc_val_tns = np.array(comb_best5_adhoc_val_tns)\n    comb_best5_adhoc_val_fns = np.array(comb_best5_adhoc_val_fns)\n    comb_best5_adhoc_val_accs =\\\n        (comb_best5_adhoc_val_tps+comb_best5_adhoc_val_tns)/\\\n        (comb_best5_adhoc_val_tps+comb_best5_adhoc_val_tns+\\\n        comb_best5_adhoc_val_fps+comb_best5_adhoc_val_fns)\n    mean_best5_adhoc_val_accs = np.mean(comb_best5_adhoc_val_accs)\n    std_best5_adhoc_val_accs = np.std(comb_best5_adhoc_val_accs)\n    oracle_adhoc_val_accs = np.max(comb_best5_adhoc_val_accs)\n\n\n    # Get mean and standard deviation of test accuracies.\n    comb_best5_adhoc_test_tps = []\n    comb_best5_adhoc_test_fps = []\n    comb_best5_adhoc_test_tns = []\n    comb_best5_adhoc_test_fns = []\n    for comb_id in range(n_combinations):\n        comb = list(combinations[comb_id])\n        comb_best5_adhoc_test_tps.append(np.sum(np.array(\n            [adhoc_test_best5_tps[t, comb[t]] for t in range(n_units)])))\n        comb_best5_adhoc_test_fps.append(np.sum(np.array(\n            [adhoc_test_best5_fps[t, comb[t]] for t in range(n_units)])))\n        comb_best5_adhoc_test_tns.append(np.sum(np.array(\n            [adhoc_test_best5_tns[t, comb[t]] for t in range(n_units)])))\n        comb_best5_adhoc_test_fns.append(np.sum(np.array(\n            [adhoc_test_best5_fns[t, comb[t]] for t in range(n_units)])))\n    comb_best5_adhoc_test_tps = np.array(comb_best5_adhoc_test_tps)\n    comb_best5_adhoc_test_fps = np.array(comb_best5_adhoc_test_fps)\n    comb_best5_adhoc_test_tns = np.array(comb_best5_adhoc_test_tns)\n    comb_best5_adhoc_test_fns = np.array(comb_best5_adhoc_test_fns)\n    comb_best5_adhoc_test_accs =\\\n        (comb_best5_adhoc_test_tps+comb_best5_adhoc_test_tns)/\\\n        (comb_best5_adhoc_test_tps+comb_best5_adhoc_test_tns+\\\n        comb_best5_adhoc_test_fps+comb_best5_adhoc_test_fns)\n    mean_best5_adhoc_test_accs = np.mean(comb_best5_adhoc_test_accs)\n    std_best5_adhoc_test_accs = np.std(comb_best5_adhoc_test_accs)\n    oracle_adhoc_test_accs = np.max(comb_best5_adhoc_test_accs)\n    \n    print(\"Augmentation: {}\".format(aug_kind_str))\n    print(adhoc_best_trials_dict)\n    print(\"Validation accuracy: {:5.3f}% ± {:5.3f} (best 5)\".format(\n        mean_best5_adhoc_val_accs * 100, std_best5_adhoc_val_accs * 100))\n    print(\"Validation accuracy: {:5.3f}% (oracle)\".format(\n        oracle_adhoc_val_accs * 100))\n    print(\"Test accuracy: {:5.3f}% ± {:5.3f} (best 5)\".format(\n        mean_best5_adhoc_test_accs * 100, std_best5_adhoc_test_accs * 100))\n    print(\"Test accuracy: {:5.3f}% (oracle)\".format(\n        oracle_adhoc_test_accs * 100))\n    print(\"\")",
"Augmentation: none\n{'unit01': 5, 'unit02': 5, 'unit03': 3, 'unit05': 5, 'unit07': 5, 'unit10': 8}\nValidation accuracy: 85.894% ± 7.108 (best 5)\nValidation accuracy: 95.000% (oracle)\nTest accuracy: 83.252% ± 5.294 (best 5)\nTest accuracy: 92.821% (oracle)\n\nAugmentation: all\n{'unit01': 5, 'unit02': 7, 'unit03': 9, 'unit05': 7, 'unit07': 7, 'unit10': 8}\nValidation accuracy: 94.928% ± 0.391 (best 5)\nValidation accuracy: 96.091% (oracle)\nTest accuracy: 94.852% ± 0.983 (best 5)\nTest accuracy: 96.383% (oracle)\n\n"
],
[
"metrics_df",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
eca3743d3d1ca2590eac14003d6552207a943454 | 122,924 | ipynb | Jupyter Notebook | docs/scales/scale_x_date.ipynb | bpopeters/ggpy | 6c73cbd33f2ce894b6797dca6cb83cae01cfd1b6 | [
"BSD-2-Clause"
] | 1,133 | 2017-01-10T16:58:15.000Z | 2022-03-31T14:40:29.000Z | docs/scales/scale_x_date.ipynb | bpopeters/ggpy | 6c73cbd33f2ce894b6797dca6cb83cae01cfd1b6 | [
"BSD-2-Clause"
] | 287 | 2015-01-02T18:54:17.000Z | 2017-01-10T14:48:14.000Z | docs/scales/scale_x_date.ipynb | bpopeters/ggpy | 6c73cbd33f2ce894b6797dca6cb83cae01cfd1b6 | [
"BSD-2-Clause"
] | 295 | 2017-01-16T19:16:49.000Z | 2022-02-18T14:10:58.000Z | 668.065217 | 68,692 | 0.922318 | [
[
[
"%matplotlib inline\nfrom ggplot import *",
"_____no_output_____"
]
],
[
[
"### `scale_x_date`\n`scale_x_date` scale a continuous x-axis. Its parameters are:\n\n- `name` - axis label\n- `breaks` - x tick breaks\n- `labels` - x tick labels\n\n- `date_format` - date string formatter",
"_____no_output_____"
]
],
[
[
"ggplot(meat, aes('date','beef')) + \\\n geom_line() + \\\n scale_x_date(breaks=date_breaks('10 years'),\n labels=date_format('%B %-d, %Y'))",
"_____no_output_____"
],
[
"ggplot(meat, aes(x='date', y='beef')) + \\\n stat_smooth(method='loewss', span=0.2, se=False) + \\\n scale_x_date(\"Date\", breaks=date_breaks('10 years'), labels=date_format('%B %-d, %Y'))",
"_____no_output_____"
],
[
"ggplot(meat, aes(x='date', ymin='beef - 1000', ymax='beef + 1000')) + \\\n geom_area() + \\\n scale_x_date(labels=date_format(\"%m/%Y\"))",
"_____no_output_____"
],
[
"ggplot(pageviews, aes(x='date_hour', y='pageviews')) + \\\n geom_point() + \\\n scale_x_date(breaks='1 month')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
eca38cde19625ac2dbf862be50e046f401a61874 | 25,232 | ipynb | Jupyter Notebook | doc/tutorial/color_palettes.ipynb | ajalexei/seaborn | 6b381dbf68e8508cb3c86c88aa79dd6a3f19e0db | [
"BSD-3-Clause"
] | 2 | 2019-01-13T19:21:05.000Z | 2021-04-01T05:12:15.000Z | doc/tutorial/color_palettes.ipynb | ajalexei/seaborn | 6b381dbf68e8508cb3c86c88aa79dd6a3f19e0db | [
"BSD-3-Clause"
] | null | null | null | doc/tutorial/color_palettes.ipynb | ajalexei/seaborn | 6b381dbf68e8508cb3c86c88aa79dd6a3f19e0db | [
"BSD-3-Clause"
] | null | null | null | 35.638418 | 768 | 0.628607 | [
[
[
".. _palette_tutorial:\n\n.. currentmodule:: seaborn",
"_____no_output_____"
]
],
[
[
"# Choosing color palettes",
"_____no_output_____"
]
],
[
[
"Color is more important than other aspects of figure style because color can reveal patterns in the data if used effectively or hide those patterns if used poorly. There are a number of great resources to learn about good techniques for using color in visualizations, I am partial to this `series of blog posts <https://earthobservatory.nasa.gov/blogs/elegantfigures/2013/08/05/subtleties-of-color-part-1-of-6/>`_ from Rob Simmon and this `more technical paper <https://cfwebprod.sandia.gov/cfdocs/CompResearch/docs/ColorMapsExpanded.pdf>`_. The matplotlib docs also now have a `nice tutorial <https://matplotlib.org/users/colormaps.html>`_ that illustrates some of the perceptual properties of the built in colormaps.\n\nSeaborn makes it easy to select and use color palettes that are suited to the kind of data you are working with and the goals you have in visualizing it.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline",
"_____no_output_____"
],
[
"import numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"sns.set(rc={\"figure.figsize\": (6, 6)})\nnp.random.seed(sum(map(ord, \"palettes\")))",
"_____no_output_____"
]
],
[
[
"Building color palettes\n-----------------------\n\nThe most important function for working with discrete color palettes is :func:`color_palette`. This function provides an interface to many (though not all) of the possible ways you can generate colors in seaborn, and it's used internally by any function that has a ``palette`` argument (and in some cases for a ``color`` argument when multiple colors are needed).\n\n:func:`color_palette` will accept the name of any seaborn palette or matplotlib colormap (except ``jet``, which you should never use). It can also take a list of colors specified in any valid matplotlib format (RGB tuples, hex color codes, or HTML color names). The return value is always a list of RGB tuples.\n\nFinally, calling :func:`color_palette` with no arguments will return the current default color cycle.\n\nA corresponding function, :func:`set_palette`, takes the same arguments and will set the default color cycle for all plots. You can also use :func:`color_palette` in a ``with`` statement to temporarily change the default palette (see :ref:`below <palette_contexts>`).\n\nIt is generally not possible to know what kind of color palette or colormap is best for a set of data without knowing about the characteristics of the data. Following that, we'll break up the different ways to use :func:`color_palette` and other seaborn palette functions by the three general kinds of color palettes: *qualitative*, *sequential*, and *diverging*.",
"_____no_output_____"
],
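As a small illustration of the ``with`` usage mentioned above (the palette name and plot below are arbitrary), the object returned by :func:`color_palette` can be used as a context manager:

~~~python
# Temporarily switch the default palette for one figure, then revert.
with sns.color_palette("husl", 8):
    plt.plot([0, 1], [0, 1])
~~~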
[
".. _qualitative_palettes:\n\nQualitative color palettes\n--------------------------\n\nQualitative (or categorical) palettes are best when you want to distinguish discrete chunks of data that do not have an inherent ordering.\n\nWhen importing seaborn, the default color cycle is changed to a set of six colors that evoke the standard matplotlib color cycle while aiming to be a bit more pleasing to look at.",
"_____no_output_____"
]
],
[
[
"current_palette = sns.color_palette()\nsns.palplot(current_palette)",
"_____no_output_____"
]
],
[
[
"There are six variations of the default theme, called ``deep``, ``muted``, ``pastel``, ``bright``, ``dark``, and ``colorblind``.\n\nUsing circular color systems\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWhen you have more than six categories to distinguish, the easiest thing is to draw evenly-spaced colors in a circular color space (such that the hue changes which keeping the brightness and saturation constant). This is what most seaborn functions default to when they need to use more colors than are currently set in the default color cycle.\n\nThe most common way to do this uses the ``hls`` color space, which is a simple transformation of RGB values.",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.color_palette(\"hls\", 8))",
"_____no_output_____"
]
],
[
[
"There is also the :func:`hls_palette` function that lets you control the lightness and saturation of the colors.",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.hls_palette(8, l=.3, s=.8))",
"_____no_output_____"
]
],
[
[
"However, because of the way the human visual system works, colors that are even \"intensity\" in terms of their RGB levels won't necessarily look equally intense. `We perceive <https://en.wikipedia.org/wiki/Color_vision>`_ yellows and greens as relatively brighter and blues as relatively darker, which can be a problem when aiming for uniformity with the ``hls`` system.\n\nTo remedy this, seaborn provides an interface to the `husl <http://www.hsluv.org/>`_ system (since renamed to HSLuv), which also makes it easy to select evenly spaced hues while keeping the apparent brightness and saturation much more uniform.",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.color_palette(\"husl\", 8))",
"_____no_output_____"
]
],
[
[
"There is similarly a function called :func:`husl_palette` that provides a more flexible interface to this system.\n\nUsing categorical Color Brewer palettes\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nAnother source of visually pleasing categorical palettes comes from the `Color Brewer <http://colorbrewer2.org/>`_ tool (which also has sequential and diverging palettes, as we'll see below). These also exist as matplotlib colormaps, but they are not handled properly. In seaborn, when you ask for a qualitative Color Brewer palette, you'll always get the discrete colors, but this means that at a certain point they will begin to cycle.\n\nA nice feature of the Color Brewer website is that it provides some guidance on which palettes are color blind safe. There is a variety of `kinds <https://en.wikipedia.org/wiki/Color_blindness>`_ of color blindness, but the most common variant leads to difficulty distinguishing reds and greens. It's generally a good idea to avoid using red and green for plot elements that need to be discriminated based on color.",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.color_palette(\"Paired\"))",
"_____no_output_____"
],
[
"sns.palplot(sns.color_palette(\"Set2\", 10))",
"_____no_output_____"
]
],
[
[
"To help you choose palettes from the Color Brewer library, there is the :func:`choose_colorbrewer_palette` function. This function, which must be used in a Jupyter notebook, will launch an interactive widget that lets you browse the various options and tweak their parameters.\n\nOf course, you might just want to use a set of colors you particularly like together. Because :func:`color_palette` accepts a list of colors, this is easy to do.",
"_____no_output_____"
]
],
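Before that, for reference, a minimal call to the chooser mentioned above is sketched here; it must be run in a Jupyter notebook with ipywidgets available, and "qualitative" is just one of the accepted ``data_type`` values.

~~~python
# Launch the interactive Color Brewer chooser; data_type can be
# "qualitative", "sequential", or "diverging". Returns the chosen palette.
pal = sns.choose_colorbrewer_palette("qualitative")
~~~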
[
[
"flatui = [\"#9b59b6\", \"#3498db\", \"#95a5a6\", \"#e74c3c\", \"#34495e\", \"#2ecc71\"]\nsns.palplot(sns.color_palette(flatui))",
"_____no_output_____"
]
],
[
[
".. _using_xkcd_palettes:\n \nUsing named colors from the xkcd color survey\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nA while back, `xkcd <https://xkcd.com/>`_ ran a `crowdsourced effort <https://blog.xkcd.com/2010/05/03/color-survey-results/>`_ to name random RGB colors. This produced a set of `954 named colors <https://xkcd.com/color/rgb/>`_, which you can now reference in seaborn using the ``xkcd_rgb`` dictionary:",
"_____no_output_____"
]
],
[
[
"plt.plot([0, 1], [0, 1], sns.xkcd_rgb[\"pale red\"], lw=3)\nplt.plot([0, 1], [0, 2], sns.xkcd_rgb[\"medium green\"], lw=3)\nplt.plot([0, 1], [0, 3], sns.xkcd_rgb[\"denim blue\"], lw=3);",
"_____no_output_____"
]
],
[
[
"In addition to pulling out single colors from the ``xkcd_rgb`` dictionary, you can also pass a list of names to the :func:`xkcd_palette` function.",
"_____no_output_____"
]
],
[
[
"colors = [\"windows blue\", \"amber\", \"greyish\", \"faded green\", \"dusty purple\"]\nsns.palplot(sns.xkcd_palette(colors))",
"_____no_output_____"
]
],
[
[
".. _sequential_palettes:\n\nSequential color palettes\n-------------------------\n\nThe second major class of color palettes is called \"sequential\". This kind of color mapping is appropriate when data range from relatively low or uninteresting values to relatively high or interesting values. Although there are cases where you will want discrete colors in a sequential palette, it's more common to use them as a colormap in functions like :func:`kdeplot` and :func:`heatmap` (along with similar matplotlib functions).\n\nIt's common to see colormaps like ``jet`` (or other rainbow palettes) used in this case, because the range of hues gives the impression of providing additional information about the data. However, colormaps with large hue shifts tend to introduce discontinuities that don't exist in the data, and our visual system isn't able to naturally map the rainbow to quantitative distinctions like \"high\" or \"low\". The result is that these visualizations end up being more like a puzzle, and they obscure patterns in the data rather than revealing them. The jet palette is because the brightest colors, yellow and cyan, are used for intermediate data values. This has the effect of emphasizing uninteresting (and arbitrary) values while deemphasizing the extremes.\n\nFor sequential data, it's better to use palettes that have at most a relatively subtle shift in hue accompanied by a large shift in brightness and saturation. This approach will naturally draw the eye to the relatively important parts of the data.\n\nThe Color Brewer library has a great set of these palettes. They're named after the dominant color (or colors) in the palette.",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.color_palette(\"Blues\"))",
"_____no_output_____"
]
],
[
[
"Like in matplotlib, if you want the lightness ramp to be reversed, you can add a ``_r`` suffix to the palette name.",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.color_palette(\"BuGn_r\"))",
"_____no_output_____"
]
],
[
[
"Seaborn also adds a trick that allows you to create \"dark\" palettes, which do not have as wide a dynamic range. This can be useful if you want to map lines or points sequentially, as brighter-colored lines might otherwise be hard to distinguish.",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.color_palette(\"GnBu_d\"))",
"_____no_output_____"
]
],
[
[
"Remember that you may want to use the :func:`choose_colorbrewer_palette` function to play with the various options, and you can set the ``as_cmap`` argument to ``True`` if you want the return value to be a colormap object that you can pass to seaborn or matplotlib functions.",
"_____no_output_____"
],
[
".. _cubehelix_palettes:\n\nSequential \"cubehelix\" palettes\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe `cubehelix <https://www.mrao.cam.ac.uk/~dag/CUBEHELIX/>`_ color palette system makes sequential palettes with a linear increase or decrease in brightness and some variation in hue. This means that the information in your colormap will be preserved when converted to black and white (for printing) or when viewed by a colorblind individual.\n\nMatplotlib has the default cubehelix version built into it:",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.color_palette(\"cubehelix\", 8))",
"_____no_output_____"
]
],
[
[
"Seaborn adds an interface to the cubehelix *system* so that you can make a variety of palettes that all have a well-behaved linear brightness ramp.\n\nThe default palette returned by the seaborn :func:`cubehelix_palette` function is a bit different from the matplotlib default in that it does not rotate as far around the hue wheel or cover as wide a range of intensities. It also reverses the order so that more important values are darker:",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.cubehelix_palette(8))",
"_____no_output_____"
]
],
[
[
"Other arguments to :func:`cubehelix_palette` control how the palette looks. The two main things you'll change are the ``start`` (a value between 0 and 3) and ``rot``, or number of rotations (an arbitrary value, but probably within -1 and 1),",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.cubehelix_palette(8, start=.5, rot=-.75))",
"_____no_output_____"
]
],
[
[
"You can also control how dark and light the endpoints are and even reverse the ramp:",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.cubehelix_palette(8, start=2, rot=0, dark=0, light=.95, reverse=True))",
"_____no_output_____"
]
],
[
[
"By default you just get a list of colors, like any other seaborn palette, but you can also return the palette as a colormap object that can be passed to seaborn or matplotlib functions using ``as_cmap=True``.",
"_____no_output_____"
]
],
[
[
"x, y = np.random.multivariate_normal([0, 0], [[1, -.5], [-.5, 1]], size=300).T\ncmap = sns.cubehelix_palette(light=1, as_cmap=True)\nsns.kdeplot(x, y, cmap=cmap, shade=True);",
"_____no_output_____"
]
],
[
[
"To help select good palettes or colormaps using this system, you can use the :func:`choose_cubehelix_palette` function in a notebook to launch an interactive app that will let you play with the different parameters. Pass ``as_cmap=True`` if you want the function to return a colormap (rather than a list) for use in function like ``hexbin``.",
"_____no_output_____"
],
[
"Custom sequential palettes\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFor a simpler interface to custom sequential palettes, you can use :func:`light_palette` or :func:`dark_palette`, which are both seeded with a single color and produce a palette that ramps either from light or dark desaturated values to that color. These functions are also accompanied by the :func:`choose_light_palette` and :func:`choose_dark_palette` functions that launch interactive widgets to create these palettes.",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.light_palette(\"green\"))",
"_____no_output_____"
],
[
"sns.palplot(sns.dark_palette(\"purple\"))",
"_____no_output_____"
]
],
[
[
"These palettes can also be reversed.",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.light_palette(\"navy\", reverse=True))",
"_____no_output_____"
]
],
[
[
"They can also be used to create colormap objects rather than lists of colors.",
"_____no_output_____"
]
],
[
[
"pal = sns.dark_palette(\"palegreen\", as_cmap=True)\nsns.kdeplot(x, y, cmap=pal);",
"_____no_output_____"
]
],
[
[
"By default, the input can be any valid matplotlib color. Alternate interpretations are controlled by the ``input`` argument. Currently you can provide tuples in ``hls`` or ``husl`` space along with the default ``rgb``, and you can also seed the palette with any valid ``xkcd`` color.",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.light_palette((210, 90, 60), input=\"husl\"))",
"_____no_output_____"
],
[
"sns.palplot(sns.dark_palette(\"muted purple\", input=\"xkcd\"))",
"_____no_output_____"
]
],
[
[
"Note that the default input space for the interactive palette widgets is ``husl``, which is different from the default for the function itself, but much more useful in this context.",
"_____no_output_____"
],
[
".. _diverging_palettes:\n\nDiverging color palettes\n------------------------\n\nThe third class of color palettes is called \"diverging\". These are used for data where both large low and high values are interesting. There is also usually a well-defined midpoint in the data. For instance, if you are plotting changes in temperature from some baseline timepoint, it is best to use a diverging colormap to show areas with relative decreases and areas with relative increases.\n\nThe rules for choosing good diverging palettes are similar to good sequential palettes, except now you want to have two relatively subtle hue shifts from distinct starting hues that meet in an under-emphasized color at the midpoint. It's also important that the starting values are of similar brightness and saturation.\n\nIt's also important to emphasize here that using red and green should be avoided, as a substantial population of potential viewers will be `unable to distinguish them <https://en.wikipedia.org/wiki/Color_blindness>`_.\n\nIt should not surprise you that the Color Brewer library comes with a set of well-chosen diverging colormaps.",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.color_palette(\"BrBG\", 7))",
"_____no_output_____"
],
[
"sns.palplot(sns.color_palette(\"RdBu_r\", 7))",
"_____no_output_____"
]
],
[
[
"Another good choice that is built into matplotlib is the ``coolwarm`` palette. Note that this colormap has less contrast between the middle values and the extremes.",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.color_palette(\"coolwarm\", 7))",
"_____no_output_____"
]
],
[
[
"Custom diverging palettes\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\nYou can also use the seaborn function :func:`diverging_palette` to create a custom colormap for diverging data. (Naturally there is also a companion interactive widget, :func:`choose_diverging_palette`). This function makes diverging palettes using the ``husl`` color system. You pass it two hues (in degrees) and, optionally, the lightness and saturation values for the extremes. Using ``husl`` means that the extreme values, and the resulting ramps to the midpoint, will be well-balanced.",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.diverging_palette(220, 20, n=7))",
"_____no_output_____"
],
[
"sns.palplot(sns.diverging_palette(145, 280, s=85, l=25, n=7))",
"_____no_output_____"
]
],
[
[
"The ``sep`` argument controls the width of the separation between the two ramps in the middle region of the palette.",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.diverging_palette(10, 220, sep=80, n=7))",
"_____no_output_____"
]
],
[
[
"It's also possible to make a palette with the midpoint is dark rather than light.",
"_____no_output_____"
]
],
[
[
"sns.palplot(sns.diverging_palette(255, 133, l=60, n=7, center=\"dark\"))",
"_____no_output_____"
]
],
[
[
".. _palette_contexts:\n\nSetting the default color palette\n---------------------------------\n\nThe :func:`color_palette` function has a companion called :func:`set_palette`. The relationship between them is similar to the pairs covered in the :ref:`aesthetics tutorial <aesthetics_tutorial>`. :func:`set_palette` accepts the same arguments as :func:`color_palette`, but it changes the default matplotlib parameters so that the palette is used for all plots.",
"_____no_output_____"
]
],
[
[
"def sinplot(flip=1):\n x = np.linspace(0, 14, 100)\n for i in range(1, 7):\n plt.plot(x, np.sin(x + i * .5) * (7 - i) * flip)",
"_____no_output_____"
],
[
"sns.set_palette(\"husl\")\nsinplot()",
"_____no_output_____"
]
],
[
[
"The :func:`color_palette` function can also be used in a ``with`` statement to temporarily change the color palette.",
"_____no_output_____"
]
],
[
[
"with sns.color_palette(\"PuBuGn_d\"):\n sinplot()",
"_____no_output_____"
]
]
] | [
"raw",
"markdown",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code"
] | [
[
"raw"
],
[
"markdown"
],
[
"raw"
],
[
"code",
"code",
"code"
],
[
"raw",
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code",
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw",
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw",
"raw"
],
[
"code",
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code",
"code"
],
[
"raw",
"raw"
],
[
"code",
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code",
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code",
"code"
],
[
"raw"
],
[
"code"
]
] |
eca3962e361d4af1c65effb995f283e41dde2e61 | 39,552 | ipynb | Jupyter Notebook | ssq_dream.ipynb | muxuezi/lottery | 219e9686eb67eaddb53330f34cd2630d3d1e83ed | [
"MIT"
] | 1 | 2020-05-24T12:41:49.000Z | 2020-05-24T12:41:49.000Z | ssq_dream.ipynb | muxuezi/lottery | 219e9686eb67eaddb53330f34cd2630d3d1e83ed | [
"MIT"
] | null | null | null | ssq_dream.ipynb | muxuezi/lottery | 219e9686eb67eaddb53330f34cd2630d3d1e83ed | [
"MIT"
] | 1 | 2020-05-24T12:41:42.000Z | 2020-05-24T12:41:42.000Z | 30.708075 | 105 | 0.320161 | [
[
[
"# %load ../ssq_dream.py\n\nimport numpy as np\nimport pandas as pd\nimport random\nfrom glob import glob\nfrom tqdm import tqdm\nfrom itertools import combinations\nfrom nltk.util import ngrams\n\ndf = pd.read_csv(\"ssq.csv\")\nprint(df.tail().loc[:, \"id\":\"blue\"])",
" id date red1 red2 red3 red4 red5 red6 blue\n2507 2019149 2019-12-26 1 6 27 28 31 33 7\n2508 2019150 2019-12-29 2 9 14 22 27 29 2\n2509 2019151 2019-12-31 2 6 9 18 24 26 14\n2510 2020001 2020-01-02 2 15 23 26 29 30 2\n2511 2020002 2020-01-05 4 9 14 15 16 29 11\n"
],
[
"# 确定red_tag\n\n# \n\nbonus = {\n (0, 1): 5,\n (1, 1): 5,\n (2, 1): 5,\n (3, 1): 10,\n (4, 0): 10,\n (4, 1): 200,\n (5, 0): 200,\n (5, 1): 3000,\n (6, 0): 5000000,\n (6, 1): 10000000,\n}\n\n# 检查列表\n\ntopic, n = \"ssq\", 6\n\nhot_water = sorted(glob(f\"gamble/{topic}/*-{topic}.p\"))[-1]\ndate = hot_water[-16:-6]\ndf_dream = pd.read_csv(hot_water)\n\nlast_shit = df.loc[df.date <= date, \"date\"].max()\ndf_last = df.loc[df.date == last_shit, df_dream.columns]\nlist_last = df_last.values[0].tolist()\n\n\ndef getd(v):\n return len(set(v[:n]) & set(list_last[:n])), len(set(v[n:]) & set(list_last[n:]))\n\n\ndf_dream[\"check\"] = df_dream.apply(getd, axis=1)\ndf_dream[\"bonus\"] = df_dream.check.apply(lambda _: bonus.get(_, 0))\nprint(df_dream)\nprint(\"bonus\", df_dream[\"bonus\"].sum())",
" red1 red2 red3 red4 red5 red6 blue check bonus\n1045 1 7 10 14 21 25 12 (1, 0) 0\n30 11 17 20 22 28 32 1 (0, 0) 0\n85 5 12 16 18 26 30 10 (1, 0) 0\n50 4 13 15 17 24 27 14 (2, 0) 0\n143 6 8 19 25 29 32 7 (1, 0) 0\nbonus 0\n"
],
[
"blue = df.blue.iloc[-1]\ntd = pd.datetime.today().date()\nyear = str(td.year)\nyall = df.date.str.slice(0, 4)\nynum = yall.nunique() - 1\nidx = yall == year\n\n# other years blue average\nkey = pd.Series(ngrams(df.loc[~idx, \"blue\"], 2)).value_counts().to_frame(\"key\")\ndfkey = key.loc[[_ for _ in key.index if _[0] == blue]] / ynum\n# this year blue\nkey2 = pd.Series(ngrams(df.loc[idx, \"blue\"], 2)).value_counts().to_frame(\"key2\")\ndfkey2 = key2.loc[[_ for _ in key2.index if _[0] == blue]]\n\ndfkey = dfkey.join(dfkey2, how=\"left\").fillna(0)\ndfkey[\"key3\"] = dfkey[\"key2\"] - dfkey[\"key\"]\ndfkey.sort_values(by=\"key3\", inplace=True)\nblued = [b for _, b in dfkey.head().index]",
"_____no_output_____"
],
[
"key['bluea'] = [_[0] for _ in key.index]\nkey['blueb'] = [_[1] for _ in key.index]",
"_____no_output_____"
],
[
"# # 输出\ncolumns = [\"red1\", \"red2\", \"red3\", \"red4\", \"red5\", \"red6\", \"blue\"]\nidxa = (df.red1 <= 11) & (df.red6 >= 18)\nt = len(df)\nr = len(df[idxa]) / t * 100\nbset = df.head(-1).loc[idxa, \"red1\":\"red6\"].apply(set, axis=1)\ntail = set(df.iloc[-1][\"red1\":\"red6\"])\n\n# 剔除与最后一期交集>3\nstar = bset.apply(lambda _: len(_ & tail))\nbar = bset.loc[star[star < 3].index]\nia = np.random.choice(bar.index)\nidx, obj = [ia], bar[ia]\nfor i in range(4):\n ib = np.random.choice(bar.apply(lambda _: len(_ & obj)).nsmallest(10).index)\n idx.append(ib)\n obj |= bar[ib]\n\nds = df.loc[idx, columns]\nobj = set(np.reshape(ds.loc[:, \"red1\":\"red6\"].values, (1, 30))[0])\ndr = sorted(set(range(1, 34)) - obj)\nprint(f\"God bless! total {t} red1<=9 and red6>=23: {len(df[idxa])} {r:.1f}%\")\nprint(f\"ETL {len(bar)} unique {len(obj)} drop {dr}\\n\")\n\nds.blue = blued\nds.to_csv(f\"gamble/ssq/{td}-ssq.p\")\nprint(ds)",
"God bless! total 2512 red1<=9 and red6>=23: 2326 92.6%\nETL 2199 unique 27 drop [2, 3, 9, 23, 31, 33]\n\n red1 red2 red3 red4 red5 red6 blue\n1045 1 7 10 14 21 25 12\n30 11 17 20 22 28 32 1\n85 5 12 16 18 26 30 10\n50 4 13 15 17 24 27 14\n143 6 8 19 25 29 32 7\n"
]
],
[
[
"# 写入数据库app.db",
"_____no_output_____"
]
],
[
[
"from sqlalchemy import create_engine\nengine = create_engine('sqlite:///app.db', echo=False)",
"_____no_output_____"
],
[
"ds.assign(date=td, bonous=0).to_sql('lotteries', con=engine, if_exists='replace', index_label='id')",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"# 最大遗漏\nb = df[[\"blue\"]].reset_index()\n\nb['z'] = b.groupby(\"blue\").index.diff()",
"_____no_output_____"
],
[
"b.tail()",
"_____no_output_____"
],
[
"b.groupby(\"blue\").z.describe(percentiles=np.arange(0,1,0.05))",
"_____no_output_____"
],
[
"y=b.query('blue==16 and z>46')",
"_____no_output_____"
],
[
"y=b.query('z>70')\ny",
"_____no_output_____"
],
[
"df.loc[y.index-1]",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca3b896b9da4d6b466b8639b567ad6aae203759 | 34,787 | ipynb | Jupyter Notebook | Cartpole_reinforment_learning.ipynb | ak-cell/Cartpole-Reinforcement-Learning | f4795bb9cf6c73cf146685c3c401a8cf0023fc2d | [
"MIT"
] | 1 | 2019-11-20T05:53:21.000Z | 2019-11-20T05:53:21.000Z | Cartpole_reinforment_learning.ipynb | ak-cell/Cartpole-Reinforcement-Learning | f4795bb9cf6c73cf146685c3c401a8cf0023fc2d | [
"MIT"
] | null | null | null | Cartpole_reinforment_learning.ipynb | ak-cell/Cartpole-Reinforcement-Learning | f4795bb9cf6c73cf146685c3c401a8cf0023fc2d | [
"MIT"
] | null | null | null | 120.788194 | 296 | 0.660362 | [
[
[
"import numpy as np\nimport gym\nfrom keras.models import Sequential\nfrom keras.layers import Dense,Activation,Flatten\nfrom keras.optimizers import Adam\n\nfrom rl.agents.dqn import DQNAgent\nfrom rl.policy import EpsGreedyQPolicy\nfrom rl.memory import SequentialMemory",
"Using TensorFlow backend.\n"
],
[
"#Setting variable\nENV_NAME='CartPole-v0'\n#Get the environment and extract the number of actions\n#in the cartpole problem\nenv=gym.make(ENV_NAME)\nnp.random.seed(123)\nenv.seed(123)\nnb_actions=env.action_space.n",
"_____no_output_____"
],
[
"#We will build a simple single hidden layer neural network model\nmodel=Sequential()\nmodel.add(Flatten(input_shape=(1,)+env.observation_space.shape))\nmodel.add(Dense(16))\nmodel.add(Activation('relu'))\nmodel.add(Dense(nb_actions))\nmodel.add(Activation('linear'))\nprint(model.summary())",
"WARNING:tensorflow:From C:\\Users\\aksha\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nflatten_1 (Flatten) (None, 4) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 16) 80 \n_________________________________________________________________\nactivation_1 (Activation) (None, 16) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 2) 34 \n_________________________________________________________________\nactivation_2 (Activation) (None, 2) 0 \n=================================================================\nTotal params: 114\nTrainable params: 114\nNon-trainable params: 0\n_________________________________________________________________\nNone\n"
],
[
"#Now configure and compile our agent.WE will set set our policy as Epsilon Greedy and our memory is \n#Sequential because we want to store the result of actions we performed and the rewards we get for \n#each action\npolicy=EpsGreedyQPolicy()\nmemory=SequentialMemory(limit=50000,window_length=1)\ndqn=DQNAgent(model=model,nb_actions=nb_actions,memory=memory,nb_steps_warmup=10,policy=policy)\ndqn.compile(Adam(lr=1e-3),metrics=['mae'])\n#Okay,now its time to learn something! We visualize the training gere for show,\n#but this slows down training quite a lot\ndqn.fit(env,nb_steps=5000,visualize=True,verbose=2)\n",
"Training for 5000 steps ...\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
eca3b941b1785a2dd6bd80b13e90857f018406e8 | 89,470 | ipynb | Jupyter Notebook | content/ch-demos/coin-game.ipynb | vabarbosa/qiskit-textbook | 3fa4f33e0c3b14be2a88ea7f58e99074f31b21da | [
"Apache-2.0"
] | null | null | null | content/ch-demos/coin-game.ipynb | vabarbosa/qiskit-textbook | 3fa4f33e0c3b14be2a88ea7f58e99074f31b21da | [
"Apache-2.0"
] | null | null | null | content/ch-demos/coin-game.ipynb | vabarbosa/qiskit-textbook | 3fa4f33e0c3b14be2a88ea7f58e99074f31b21da | [
"Apache-2.0"
] | null | null | null | 49.132345 | 11,112 | 0.722141 | [
[
[
"# **Quantum Coin Game**\n\n---\n\n",
"_____no_output_____"
],
[
"### **Table of Contents**\n---\n\n1. [Introduction](#introduction)<br>\n 1.1 [What is Quantum Coin Game](#quantum_definition)<br>\n 1.2 [Concept](#concept)<br>\n 1.3 [Idea](#idea)<br>\n 1.4 [Rules of the Game](#rules)<br>\n2. [Play It](#play_it)<br>\n3. [Analogy](#analogy)<br>\n4. [Approach](#approach)<br>\n5. [Optimal Strategy](#optimal)<br>\n 5.1 [Play it with Qiskit](#quantum_play)<br>\n 5.2 [Measurement](#quantum_measurement)<br>\n 5.3 [QASM Simulator](#quantum_qasm)<br>\n 5.4 [Who Wins?](#quantum_wins)<br>\n 5.5 [Running on Quantum Computer](#real_qc)<br> \n6. [Conclusion](#conclusion)<br>\n7. [References](#references)<br>\n8. [Quick Exercise](#quick_exercise)<br>\n9. [Version Information](#version_information)",
"_____no_output_____"
],
[
"### **Introduction** <a id=\"introduction\"></a>\n\n---\n\n\n### What is Quantum Coin Game ? <a id=\"quantum_definition\"></a>\n Quantum Coin Game is one of the fundamental concept of quantum computing, which uses simple implementation of quantum gates or more precisely uses the wierdness of quantum mechanics, to win about 97% of the time, when played against an opponent. Flipping of coin and say heads or tails.\n\n### Where the concept came from ? <a id=\"concept\"></a>\n The concept of Quantum Coin Game came from the idea of classical coin game which can only show heads and tails. But since the game utilizes the concepts of quantum mechanics, it would be interesting to see what could be the outcome of the whole experiment.\n\n### What is the main idea of this game ? <a id=\"idea\"></a>\n The main concept of this game is how the quantum computer uses the power of quantum superposition, which tells an object can exists in 2 different states at the same time, to win absolutely everytime.\n\n **NOTE**: To learn more about quantum superposition, link to \"[Qiskit Textbook](https://qiskit.org/textbook/ch-states/representing-qubit-states.html)\" superposition page.\n\n### What are the rules of this game ? <a id=\"rules\"></a>\n 1. Quantum Computer plays a move but it is not revealed to the Opponent(Human).\n 2. Opponent(Human) plays a move and it is also not revealed to the Quantum Computer.\n 3. Finally Quantum Computer plays a move.\n 4. Results are shown. If its heads, then Quantum Computer wins. Else, Opponent(Human) wins.\n \n**NOTE**: \"Playing a move\" refers to \"Flipping the coin\" and we consider the coin as fair coin.\n\n**NOTE**: Refer to [Shohini's Ted Talk](#conclusion)\n\n---\n\n\n",
"_____no_output_____"
],
[
"#### **Play it** <a id=\"play_it\"></a>",
"_____no_output_____"
]
],
[
[
"# Importing all the necessary library\nfrom qiskit import QuantumCircuit, Aer, IBMQ, QuantumRegister, ClassicalRegister, execute\nfrom qiskit.tools.jupyter import *\nfrom qiskit.visualization import *\nimport qiskit.tools.jupyter\nimport ipywidgets as widgets\n\n# Layout\nbutton_p = widgets.Button(\n description='Play')\ngate_p = widgets.Dropdown(\n options=[('Identity', 'i'), ('Bit Flip', 'x')],\n description='Choice: ',\n disabled=False,\n)\nout_p = widgets.Output()\ndef on_button_clicked(b):\n with out_p:\n \n # Initial Circuit\n circuit_p = QuantumRegister(1, 'circuit')\n measure_p = ClassicalRegister(1, 'result')\n qc_p = QuantumCircuit(circuit_p, measure_p)\n \n # Turn 1\n qc_p.h(circuit_p[0])\n \n # Turn 2\n if gate_p.value == 'i':\n qc_p.i(circuit_p[0])\n if gate_p.value == 'x':\n qc_p.x(circuit_p[0])\n \n # Turn 3\n qc_p.h(circuit_p[0])\n \n # Measure \n qc_p.measure(circuit_p, measure_p)\n \n # QASM\n backend_p = Aer.get_backend('aer_simulator')\n job_p = execute(qc_p, backend_p, shots=8192)\n res_p = job_p.result().get_counts()\n \n # Result\n if len(res_p) == 1 and list(res_p.keys())[0] == '0':\n print(\"You Lose to Quantum. Quantum Computer Wins\")\n if len(res_p) == 1 and list(res_p.keys())[0] == '1':\n print(\"You Win against Quantum Computer\")\n if len(res_p) == 2:\n print(\"Either Quantum or You Wins\")\n\nbutton_p.on_click(on_button_clicked)\nwidgets.VBox([gate_p, button_p, out_p])",
"_____no_output_____"
]
],
[
[
"### **Analogy** <a id=\"analogy\"></a>\n\n---\n\nNow that we know what is a quantum coin game, what is it based on and most importantly what are the rules of this game, lets convert the concept of this game in quantum computing terminology.\n\n* The 'coin' in flipping a coin we referring here is a 'single qubit gate'.\n$$\n |\\psi\\rangle=\\begin{bmatrix}\\alpha \\\\ \\beta\\end{bmatrix}\n$$\n\n where $\\alpha, \\beta \\in \\mathbb{C}$ and $|\\alpha|^2 + |\\beta|^2 = 1$\n\n\n* \"Flipping\" the coin is application of the bit-flip operator\n\n$$\n X = \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix}\n$$\n\n* The \"heads\" state is defined as \n$$\n|0\\rangle = \\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}\n$$ and \"tails\" as \n$$\n|1\\rangle = \\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix}\n$$\n\n* The quantum computer \"plays\" by applying the Hadamard $H$ operator \n$$\nH = \\frac{1}{\\sqrt{2}} \\begin{bmatrix} 1 & 1 \\\\ 1 & -1 \\end{bmatrix}\n$$\n",
"_____no_output_____"
],
[
"### **Approach** <a id=\"approach\"></a>\n\n---\nLets see how to approach the game in quantum computing terminology-\n\n* The coin is initialized to the $|0\\rangle$ \"heads\" state.\n\n* The computer plays, applying the Hadamard $H$ operator to the coin (operators are applied using matrix multiplication). \n$$\nH|0\\rangle = \\frac{1}{\\sqrt2}(|0\\rangle + |1\\rangle)\n$$\nThe coin enters the \n$$\nH|0\\rangle = |+\\rangle = \\frac{1}{\\sqrt{2}} \\begin{bmatrix} 1 \\\\ 1 \\end{bmatrix}\n$$\nstate.\n\n\n* The human plays, choosing whether to flip the coin (apply the $X$ operator) or do nothing (apply the $I$ operator). However, since the $X$ operator just flips the state vector upside down, $X$ has no effect. Same goes for $I$.\n$$\nX|+\\rangle=|+\\rangle \n$$\n$$\nI|+\\rangle=|+\\rangle \n$$\nNo matter what, the state is $|+\\rangle$ after the human plays.\n\n* The computer plays, applying the Hadamard $H$ operator again, taking the coin to the $|0⟩$ \"heads\" state.\n$$\nH|+\\rangle = |0\\rangle\n$$",
"_____no_output_____"
]
],
[
[
"# Importing all the necessary library\n\nfrom qiskit import QuantumCircuit, Aer, IBMQ, QuantumRegister, ClassicalRegister, execute\nfrom qiskit.tools.jupyter import *\nfrom qiskit.visualization import *\nimport qiskit.tools.jupyter\nimport ipywidgets as widgets",
"_____no_output_____"
],
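  [
   "# A minimal numpy sketch (not part of the original notebook) to check the linear algebra\n# from the Analogy/Approach sections above: H|0> gives |+>, the human's X or I leaves |+>\n# unchanged, and a final H returns the coin to |0>, so the quantum computer always wins.\nimport numpy as np\n\nH = np.array([[1, 1], [1, -1]]) / np.sqrt(2)\nX = np.array([[0, 1], [1, 0]])\nI = np.eye(2)\nzero = np.array([1.0, 0.0])          # |0> = heads\n\nplus = H @ zero                      # after the computer's first move\nprint(plus)                          # [0.7071, 0.7071] = |+>\nprint(np.allclose(X @ plus, plus))   # True: X does nothing to |+>\nprint(np.allclose(I @ plus, plus))   # True: I does nothing either\nprint(H @ plus)                      # [1, 0] = |0>, i.e. heads, so the computer wins",
   "_____no_output_____"
  ],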
[
"# Building the initial circuit\n\ndef initial_circuit():\n circuit = QuantumRegister(1, 'circuit')\n measure = ClassicalRegister(1, 'result')\n qc = QuantumCircuit(circuit, measure)\n qc.draw('mpl')\n return qc, circuit, measure",
"_____no_output_____"
],
[
"# Widget Initialization\n\ngate = widgets.Dropdown(\n options=[('Identity', 'i'), ('Bit Flip', 'x')],\n description='Choice: ',\n disabled=False,\n)",
"_____no_output_____"
]
],
[
[
"### **Optimal Strategy** <a id=\"optimal\"></a>",
"_____no_output_____"
],
[
"Using the above approach the possibility table reduces to-\n\n\nStart State | Quantum | Classical | Quantum | Result | Who wins?\n-------------|----------|------------|----------|-----------------|-----------\n$|0\\rangle$ | $H$ | $I$ | $H$ | $|0\\rangle$ | Quantum\n$|0\\rangle$ | $H$ | $X$ | $H$ | $|0\\rangle$ | Quantum\n\nNow lets look at the possibilities-\n\n\n1. Quantum Computer Wins ( $|0\\rangle$ ):\n\n$$\n\\frac{2}{2} = 100 \\%\n$$\n\n2. Classical Human Wins ( $|1\\rangle$ ):\n\n$$\n \\frac{0}{2} = 0 \\%\n$$\n\n3. Either Quantum Computer or Classical Human Wins ( $|0\\rangle + |1\\rangle$ ):\n\n$$\n \\frac{0}{2} = 0 \\%\n$$",
"_____no_output_____"
],
[
"This table shows the quantum computer wins $100\\%$ of the time. But in Shohini's talk it is $~97\\%$, due to errors.",
"_____no_output_____"
],
[
"### **Lets play this version using Qiskit** <a id=\"quantum_play\"></a>",
"_____no_output_____"
],
[
"#### Building the initial circuit",
"_____no_output_____"
]
],
[
[
"qc, circuit, measure = initial_circuit()",
"_____no_output_____"
]
],
[
[
"#### **Turn 1. Quantum Computer**",
"_____no_output_____"
]
],
[
[
"# Use H Gate\n\nqc.h(circuit[0])\nqc.draw('mpl')",
"_____no_output_____"
]
],
[
[
"#### **Turn 2. Classical Human**",
"_____no_output_____"
]
],
[
[
"if gate.value == 'i':\n qc.i(circuit[0])\nif gate.value == 'x':\n qc.x(circuit[0])\n\nqc.draw('mpl')",
"_____no_output_____"
]
],
[
[
"#### **Turn 3. Quantum Computer**",
"_____no_output_____"
],
[
"Quantum Computer uses Hadamard $H$ on its first turn",
"_____no_output_____"
]
],
[
[
"# Used H Gate\n\nqc.h(circuit[0])\nqc.draw('mpl')",
"_____no_output_____"
]
],
[
[
"#### **Measurement** <a id=\"quantum_measurement\"></a>",
"_____no_output_____"
]
],
[
[
"qc.measure(circuit, measure)\nqc.draw('mpl')",
"_____no_output_____"
]
],
[
[
"#### **QASM_Simulator** <a id=\"quantum_qasm\"></a>",
"_____no_output_____"
]
],
[
[
"backend = Aer.get_backend('aer_simulator')\njob = execute(qc, backend, shots=8192)\nres = job.result().get_counts()\nprint(res)\nplot_histogram(res)",
"{'0': 8192}\n"
]
],
[
[
"#### **Lets see who wins** <a id=\"quantum_wins\"></a>",
"_____no_output_____"
]
],
[
[
"if len(res) == 1 and list(res.keys())[0] == '0':\n print(\"Quantum Computer Wins\")\nif len(res) == 1 and list(res.keys())[0] == '1':\n print(\"Classical Human Wins\")\nif len(res) == 2:\n print(\"Either Quantum Computer or Classical Human Wins\")",
"Quantum Computer Wins\n"
]
],
[
[
"#### **Running on Quantum Computer** <a id=\"real_qc\"></a>",
"_____no_output_____"
]
],
[
[
"provider = IBMQ.load_account()\nbackend_real = provider.get_backend('ibmq_manila')\njob_real = execute(qc, backend_real, shots=8192)\nres_real = job_real.result().get_counts()\nprint(res_real)\nplot_histogram(res_real)",
"{'0': 8134, '1': 58}\n"
]
],
[
[
"Unlike the perfect simulation, the real quantum computer only wins ~$99\\ \\%$ of the time, the $1\\ \\%$ in which it loses is due to errors. Quantum computers have improved a bit since [Shohini's talk](#conclusion) where the error is closer to $3\\ \\%$.",
"_____no_output_____"
],
[
"### **Conclusion** <a id=\"conclusion\"></a>",
"_____no_output_____"
],
[
"This simple and yet fun little game shows the basic quantum states $|0\\rangle$, $|1\\rangle$, $|+\\rangle$ and $|−\\rangle$, plus the common ways of moving between them with the $X$, $H$, $I$, $Z$ gates. ",
"_____no_output_____"
],
[
"### **References** <a id=\"references\"></a>",
"_____no_output_____"
],
[
"This notebook is inspired from:\n\n * [1]. [Ted talk by Sohini Ghosh](https://www.ted.com/talks/shohini_ghose_a_beginner_s_guide_to_quantum_computing#t-208006). \n\n * [2]. Quantum Coin Flipping from [Wikipedia](https://en.wikipedia.org/wiki/Quantum_coin_flipping)",
"_____no_output_____"
],
[
"#### **Quick Exercise** <a id=\"quick_exercise\"></a>",
"_____no_output_____"
],
[
"The rules of the game we learned so far are the main rules of the game.\nBut, think of other variations of the game as well, tweak the game a little could result in significant change in answer. Such as-\n\n1. What if, instead of quantum computer taking first turn, the classical human take the first turn ?\n2. What if, instead of representing head as $|0\\rangle$, the tail is represented as $|0\\rangle$ ?\n3. What if, instead of using fair coin, we used unfair coin ?\n4. What if, instead of playing against a classical human, the quantum computer plays with another quantum computer ?\n5. What if, instead of having 3 turns, there are $n$ number of turns ?\n6. What if, instead of using all gates, restrict the use of some gates ?\n\nand many more variations are possible.",
"_____no_output_____"
],
[
"### **Version Information** <a id=\"version_information\"></a>",
"_____no_output_____"
]
],
[
[
"%qiskit_version_table",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
eca3c8b9031fb044e975e6fde465432e86cbfa7e | 12,162 | ipynb | Jupyter Notebook | Python Absolute Beginner/Module_3_Practice_2_IntroPy.ipynb | Blade24-byte/pythonteachingcode | 3fa7f7e5b459873ad6c0b921c0760d11e97db054 | [
"MIT"
] | null | null | null | Python Absolute Beginner/Module_3_Practice_2_IntroPy.ipynb | Blade24-byte/pythonteachingcode | 3fa7f7e5b459873ad6c0b921c0760d11e97db054 | [
"MIT"
] | null | null | null | Python Absolute Beginner/Module_3_Practice_2_IntroPy.ipynb | Blade24-byte/pythonteachingcode | 3fa7f7e5b459873ad6c0b921c0760d11e97db054 | [
"MIT"
] | null | null | null | 24.719512 | 166 | 0.481418 | [
[
[
"# 1-5 Intro Python Practice \n## conditionals, type, and mathematics extended \n \n<font size=\"5\" color=\"#00A0B2\" face=\"verdana\"> <B>Student will be able to</B></font> \n- code more than two choices using **`elif`** \n- gather numeric input using type casting \n- perform subtraction, multiplication and division operations in code \n",
"_____no_output_____"
],
[
"# \n<font size=\"6\" color=\"#B24C00\" face=\"verdana\"> <B>Tasks</B></font>",
"_____no_output_____"
],
[
"### Rainbow colors\nask for input of a favorite rainbow color first letter: ROYGBIV \n\nUsing `if`, `elif`, and `else`: \n- print the color matching the letter \n - R = Red \n - O = Orange \n - Y = Yellow \n - G = Green\n - B = Blue\n - I = Indigo\n - V = Violet\n - else print \"no match\"\n",
"_____no_output_____"
]
],
[
[
"# [ ] complete rainbow colors\nrbow_color = input(\"Input your favorite rainbow color: \").upper()\n\nif rbow_color == \"R\":\n print(\"Your color is: Red\")\nelif rbow_color == \"O\":\n print(\"Your color is: Orange\")\nelif rbow_color == \"Y\":\n print(\"Your color is: Yellow\")\nelif rbow_color == \"G\":\n print(\"Your color is: Green\")\nelif rbow_color == \"B\":\n print(\"Your color is: Blue\")\nelif rbow_color == \"I\":\n print(\"Your color is: Indigo\")\nelif rbow_color == \"V\":\n print(\"Your color is: Violet\")\nelse:\n print(\"No Match\")\n\n",
"Input your favorite rainbow color: r\nYour color is: Red\n"
],
[
"# [ ] make the code above into a function rainbow_color() that has a string parameter, \n# get input and call the function and return the matching color as a string or \"no match\" message.\n# Call the function and print the return string.\ndef rainbow_color(rbow_color):\n if rbow_color == \"R\":\n return \"Red\"\n elif rbow_color == \"O\":\n return \"Orange\"\n elif rbow_color == \"Y\":\n return \"Yellow\"\n elif rbow_color == \"G\":\n return \"Green\"\n elif rbow_color == \"B\":\n return \"Blue\"\n elif rbow_color == \"I\":\n return \"Indigo\"\n elif rbow_color == \"V\":\n return \"Violet\"\n else:\n return \"No Match\"\n\n \nrbow_color = input(\"Input your fave rainbow color: \").upper()\nprint(\"Your color is:\",rainbow_color(rbow_color))\n",
"Input your fave rainbow color: r\nYour color is: Red\n"
]
],
[
[
"# \n**Create function age_20() that adds or subtracts 20 from your age for a return value based on current age** (use `if`) \n- call the funtion with user input and then use the return value in a sentence \nexample `age_20(25)` returns **5**: \n> \"5 years old, 20 years difference from now\"",
"_____no_output_____"
]
],
[
[
"# [ ] complete age_20()\ndef age_20(age):\n if int(age) >= 20:\n return int(age)-20\n else:\n return 20-int(age)\n \nyour_age = input(\"Enter your age: \")\n\nif your_age.isdigit():\n print(str(age_20(your_age)),\"years old, 20 years diff from now\")\nelse:\n print(\"Error: Invalid age\")\n",
"Enter your age: 20\n0 years old, 20 years diff from now\n"
]
],
[
[
"**create a function rainbow_or_age that takes a string argument**\n- if argument is a digit return the value of calling age_20() with the str value cast as **`int`** \n- if argument is an alphabetical character return the value of calling rainbow_color() with the str\n- if neither return FALSE",
"_____no_output_____"
]
],
[
[
"# [ ] create rainbow_or_age()\ndef rainbow_or_age(roa):\n if roa.isdigit():\n return age_20(roa)\n elif roa.isalpha():\n return rainbow_color(roa.upper())\n else:\n return False\n \nrainbow_or_age(input(\"Enter your color or age: \"))\n",
"Enter your color or age: 34\n"
],
[
"# [ ] add 2 numbers from input using a cast to integer and display the answer \n\nnum1 = input(\"Enter #1: \")\nnum2 = input(\"Enter #2: \")\n\nif num1.isdigit() and num2.isdigit():\n sum12 = int(num1) + int(num2)\n print(\"The Sum is\",sum12)\nelse:\n print(\"Invalid input(s)\")",
"Enter #1: 1\nEnter #2: 2\nThe Sum is 3\n"
],
[
"# [ ] Multiply 2 numbers from input using cast and save the answer as part of a string \"the answer is...\"\n# display the string using print\n\nnum1 = input(\"Enter #1: \")\nnum2 = input(\"Enter #2: \")\n\nif num1.isdigit() and num2.isdigit():\n prod12 = int(num1) * int(num2)\n print(\"The Product is\",prod12)\nelse:\n print(\"Invalid input(s)\")",
"Enter #1: 1\nEnter #2: 2\nThe Product is 2\n"
],
[
"# [ ] get input of 2 numbers and display the average: (num1 + num2) divided by 2\n\nnum1 = input(\"Enter #1: \")\nnum2 = input(\"Enter #2: \")\n\nif num1.isdigit() and num2.isdigit():\n ave12 = (int(num1) + int(num2))/2\n print(\"The Average is\",ave12)\nelse:\n print(\"Invalid input(s)\")",
"Enter #1: 1\nEnter #2: 2\nThe Average is 1.5\n"
],
[
"# [ ] get input of 2 numbers and subtract the largest from the smallest (use an if statement to see which is larger)\n# show the answer\n\nnum3 = input(\"Enter #1: \")\nnum4 = input(\"Enter #2: \")\n\nif num3.isdigit() and num4.isdigit():\n int_num3 = int(num3)\n int_num4 = int(num4)\n if int_num3 >= int_num4:\n diff12 = int_num3 - int_num4\n else:\n diff12 = int_num4 - int_num3\n print(\"The Difference is\",diff12)\nelse:\n print(\"Invalid input(s)\")",
"Enter #1: 1\nEnter #2: 2\nThe Difference is 1\n"
],
[
"# [ ] Divide a larger number by a smaller number and print the integer part of the result\n# don't divide by zero! if a zero is input make the result zero\n# [ ] cast the answer to an integer to cut off the decimals and print the result\n\nnum3 = input(\"Enter #1: \")\nnum4 = input(\"Enter #2: \")\n\nif num3.isdigit() and num4.isdigit():\n int_num3 = int(num3)\n int_num4 = int(num4)\n if int_num3 == 0 or int_num4 == 0:\n quot12 = 0\n elif int_num3 >= int_num4:\n quot12 = int_num3 / int_num4\n else:\n quot12 = int_num4 / int_num3\n print(\"The Quotient is\",int(quot12))\nelse:\n print(\"Invalid input(s)\")\n",
"Enter #1: 10\nEnter #2: 5\nThe Quotient is 2\n"
],
[
"id =3556\nif id > 2999:\n print(id, \"is a new student\")",
"3556 is a new student\n"
],
[
"calc = 5 + 15 / 5 + 3 * 2 - 1\nprint(calc)",
"13.0\n"
],
[
"name = \"Jin Xu\"\nif type(name) == type(\"Hello\") :\n print(name, \"is a string\")\nelse:\n print(name, \"is not a string entry\")",
"Jin Xu is a string\n"
],
[
"x = 3 + 9 * 2\nprint(x)",
"21\n"
],
[
"hot_plate = True\nif hot_plate:\n print(\"Be careful, hot plate!\")\nelse:\n print(\"The plate is ready\")",
"Be careful, hot plate!\n"
]
],
[
[
"[Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) © 2017 Microsoft",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
eca3d3f1d1a8b51fa32e0e8869452d384c86bea6 | 176,225 | ipynb | Jupyter Notebook | OU_process.ipynb | hstrey/Circuit-Discovery | ea16927a5c15ef9658bbc39f89cb74af62021244 | [
"MIT"
] | null | null | null | OU_process.ipynb | hstrey/Circuit-Discovery | ea16927a5c15ef9658bbc39f89cb74af62021244 | [
"MIT"
] | null | null | null | OU_process.ipynb | hstrey/Circuit-Discovery | ea16927a5c15ef9658bbc39f89cb74af62021244 | [
"MIT"
] | null | null | null | 537.271341 | 72,988 | 0.940814 | [
[
[
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport langevin\n\nSMALL_SIZE = 16\nMEDIUM_SIZE = 18\nBIGGER_SIZE = 20\n\nplt.rc('font', size=SMALL_SIZE) # controls default text sizes\nplt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title\nplt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels\nplt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels\nplt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels\nplt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize\nplt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title\n\nSEED = 35010732 # from random.org\nnp.random.seed(SEED)\n\nprint(plt.style.available)\nplt.style.use('seaborn-white')",
"['seaborn-dark', 'seaborn-darkgrid', 'seaborn-ticks', 'fivethirtyeight', 'seaborn-whitegrid', 'classic', '_classic_test', 'fast', 'seaborn-talk', 'seaborn-dark-palette', 'seaborn-bright', 'seaborn-pastel', 'grayscale', 'seaborn-notebook', 'ggplot', 'seaborn-colorblind', 'seaborn-muted', 'seaborn', 'Solarize_Light2', 'seaborn-paper', 'bmh', 'tableau-colorblind10', 'seaborn-white', 'dark_background', 'seaborn-poster', 'seaborn-deep']\n"
],
[
"# function to calculate A and B from the dataset\ndef OUanalytic1(data):\n N = data.size\n data1sq = data[0]**2\n dataNsq = data[-1]**2\n datasq = np.sum(data[1:-1]**2)\n datacorr = np.sum(data[0:-1]*data[1:])\n coef = [(N-1)*datasq,\n (2.0-N)*datacorr,\n -data1sq-(N+1)*datasq-dataNsq,\n N*datacorr]\n B=np.roots(coef)[-1]\n Q=(data1sq+dataNsq)/(1-B**2)\n Q=Q+datasq*(1+B**2)/(1-B**2)\n Q=Q-datacorr*2*B/(1-B**2)\n A = Q/N\n P2A = -N/2/A**2\n Btmp = (N-1)*(1+B**2)/(1-B**2)**2\n tmp = (2+6*B**2)*(data1sq+dataNsq) + (4+12*B**2)*datasq - (12*B+4*B**3)*datacorr\n P2B = Btmp - tmp/A/2/(1-B**2)**3\n PAB = (N-1)*B/A/(1-B**2)\n dA = np.sqrt(-P2B/(P2A*P2B-PAB**2))\n dB = np.sqrt(-P2A/(P2A*P2B-PAB**2))\n return A,dA,B,dB\n\ndef OUresult1(data,deltat):\n A, dA, B ,dB = OUanalytic1(data)\n tau = -deltat/np.log(B)\n dtau = deltat*dB/B/np.log(B)**2\n return A,dA,tau,dtau",
"_____no_output_____"
],
[
"# function to calculate A and B from the dataset\ndef OUanalytic2(data):\n N = data.size\n data1sq = data[0]**2\n dataNsq = data[-1]**2\n datasq = np.sum(data[1:-1]**2)\n datacorr = np.sum(data[0:-1]*data[1:])\n coef = [(N-1)*datasq,\n (2.0-N)*datacorr,\n -data1sq-(N+1)*datasq-dataNsq,\n N*datacorr]\n B=np.roots(coef)[-1]\n Q=(data1sq+dataNsq)/(1-B**2)\n Q=Q+datasq*(1+B**2)/(1-B**2)\n Q=Q-datacorr*2*B/(1-B**2)\n A = Q/N\n P2A = -N/A**2/2\n Btmp = B**2*(1+2*N)\n tmp = (1+Btmp)*(data1sq+dataNsq) + (2*Btmp + N + 1 -B**4*(N-1))*datasq - 2*B*(1+B**2+2*N)*datacorr\n P2B = -tmp/((1-B**2)**2*(data1sq+dataNsq + (1+B**2)*datasq - 2*B*datacorr))\n PAB = (N-1)*B/A/(1-B**2)\n dA = np.sqrt(-P2B/(P2A*P2B-PAB**2))\n dB = np.sqrt(-P2A/(P2A*P2B-PAB**2))\n return A,dA,B,dB\n\ndef OUresult2(data,deltat):\n A, dA, B ,dB = OUanalytic2(data)\n tau = -deltat/np.log(B)\n dtau = deltat*dB/B/np.log(B)**2\n return A,dA,tau,dtau",
"_____no_output_____"
],
[
"AA,DD = 1.0,1.0\ndt = 0.01\ntau_real = AA/DD\nN=10000 # length of data set\ndata = langevin.time_series(A=AA, D=DD, delta_t=dt, N=N)\ntime = np.arange(N)*dt",
"_____no_output_____"
],
[
"plt.plot(time,data,\"k\")\nplt.axhline(y=0, color='k', linestyle='-.')\nplt.xlabel(\"time in sec\")\nplt.ylabel(\"activation\")\nplt.ylim((-10,10))\nplt.title(\"Original Data\")\nplt.text(-3, 9, 'A',\n horizontalalignment='left',\n verticalalignment='top',fontsize=18)\nplt.savefig(\"OUdata.png\",format='png',dpi=300,bbox_inches='tight',facecolor=\"white\",backgroundcolor=\"white\")\nplt.savefig(\"OUdata.pdf\",format='pdf',dpi=300,bbox_inches='tight',facecolor=\"white\",backgroundcolor=\"white\")",
"_____no_output_____"
],
[
"fitA,fitdA,fitTau,fitdTau = OUresult1(data,0.01)",
"_____no_output_____"
],
[
"print(fitA,fitdA,fitTau,fitdTau)",
"0.9518073684707051 0.13152308936215348 0.9521980515207534 0.13228364525348418\n"
],
[
"fitD = fitA/fitTau\nfitdD = np.sqrt(fitdA**2/fitTau**2+fitA**2*fitdTau**2/fitTau**4)",
"_____no_output_____"
],
[
"print(fitA,fitdA,fitD,fitdD)",
"0.9518073684707051 0.13152308936215348 0.9995897040017837 0.19586452713587033\n"
],
[
"for i in range(5):\n data = langevin.time_series(A=fitA, D=fitD, delta_t=dt, N=N)\n plt.plot(time,data+4*i)\n plt.axhline(y=4*i, color='k', linestyle='-.')\nplt.yticks([])\nplt.xlabel(\"time in sec\")\nplt.ylabel(\"activation\")\nplt.title(\"Sample of fitted solutions\")\nplt.text(-3,19, 'B',\n horizontalalignment='left',\n verticalalignment='top',fontsize=18)\nplt.savefig(\"OUresult.png\",format='png',dpi=300,bbox_inches='tight',facecolor=\"white\",backgroundcolor=\"white\")\nplt.savefig(\"OUresult.pdf\",format='pdf',dpi=300,bbox_inches='tight',facecolor=\"white\",backgroundcolor=\"white\")",
"_____no_output_____"
],
[
"cf, caxs = plt.subplots(nrows=1,ncols=2,figsize=(11,3.5),sharex=True)\nax = caxs[0]\nax.plot(time,data,\"k\")\nax.axhline(y=0, color='k', linestyle='-.')\nax.set_xlabel(\"time in sec\")\nax.set_ylabel(\"activation\")\nax.set_ylim((-10,10))\nax.set_title(\"Original Data\")\nax.text(-3, 9.4, 'A',\n horizontalalignment='left',\n verticalalignment='top',fontsize=18)\n\nax = caxs[1]\nfor i in range(5):\n data = langevin.time_series(A=fitA, D=fitD, delta_t=dt, N=N)\n ax.plot(time,data+4*i)\n ax.axhline(y=4*i, color='k', linestyle='-.')\nax.set_yticks([])\nax.set_xlabel(\"time in sec\")\nax.set_title(\"Sample of fitted solutions\")\nax.text(-3,19.5, 'B',\n horizontalalignment='left',\n verticalalignment='top',fontsize=18)\n\ncf.savefig('OU_final.png',format='png',dpi=300,bbox_inches='tight',facecolor=\"white\",backgroundcolor=\"white\")\ncf.savefig('OU_final.pdf',format='pdf',dpi=300,bbox_inches='tight',facecolor=\"white\",backgroundcolor=\"white\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca3e8959a0453c98b89a8bbcfbcb43924d129f9 | 39,984 | ipynb | Jupyter Notebook | 1_Neural Networks and Deep Learning/Week2/1_Python Basics With Numpy/Python_Basics_With_Numpy.ipynb | zyong812/deeplearning.ai | 5b1c29e8f84d0733ce6c49c14fb8e4a1bc736e55 | [
"MIT"
] | null | null | null | 1_Neural Networks and Deep Learning/Week2/1_Python Basics With Numpy/Python_Basics_With_Numpy.ipynb | zyong812/deeplearning.ai | 5b1c29e8f84d0733ce6c49c14fb8e4a1bc736e55 | [
"MIT"
] | null | null | null | 1_Neural Networks and Deep Learning/Week2/1_Python Basics With Numpy/Python_Basics_With_Numpy.ipynb | zyong812/deeplearning.ai | 5b1c29e8f84d0733ce6c49c14fb8e4a1bc736e55 | [
"MIT"
] | null | null | null | 32.881579 | 414 | 0.505277 | [
[
[
"# Python Basics with Numpy (optional assignment)\n\nWelcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need. \n\n**Instructions:**\n- You will be using Python 3.\n- Avoid using for-loops and while-loops, unless you are explicitly told to do so.\n- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.\n- After coding your function, run the cell right below it to check if your result is correct.\n\n**After this assignment you will:**\n- Be able to use iPython Notebooks\n- Be able to use numpy functions and numpy matrix/vector operations\n- Understand the concept of \"broadcasting\"\n- Be able to vectorize code\n\nLet's get started!",
"_____no_output_____"
],
[
"## About iPython Notebooks ##\n\niPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing \"SHIFT\"+\"ENTER\" or by clicking on \"Run Cell\" (denoted by a play symbol) in the upper bar of the notebook. \n\nWe will often specify \"(≈ X lines of code)\" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.\n\n**Exercise**: Set test to `\"Hello World\"` in the cell below to print \"Hello World\" and run the two cells below.",
"_____no_output_____"
]
],
[
[
"### START CODE HERE ### (≈ 1 line of code)\ntest = \"Hello Word\"\n### END CODE HERE ###",
"_____no_output_____"
],
[
"print (\"test: \" + test)",
"test: Hello Word\n"
]
],
[
[
"**Expected output**:\ntest: Hello World",
"_____no_output_____"
],
[
"<font color='blue'>\n**What you need to remember**:\n- Run your cells using SHIFT+ENTER (or \"Run cell\")\n- Write code in the designated areas using Python 3 only\n- Do not modify the code outside of the designated areas",
"_____no_output_____"
],
[
"## 1 - Building basic functions with numpy ##\n\nNumpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.\n\n### 1.1 - sigmoid function, np.exp() ###\n\nBefore using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().\n\n**Exercise**: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.\n\n**Reminder**:\n$sigmoid(x) = \\frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.\n\n<img src=\"images/Sigmoid.png\" style=\"width:500px;height:228px;\">\n\nTo refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: basic_sigmoid\n\nimport math\nimport numpy as np\n\ndef basic_sigmoid(x):\n \"\"\"\n Compute sigmoid of x.\n\n Arguments:\n x -- A scalar\n\n Return:\n s -- sigmoid(x)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n s = 1/(1+math.exp(-x))\n ### END CODE HERE ###\n \n return s",
"_____no_output_____"
],
[
"basic_sigmoid(3)",
"_____no_output_____"
]
],
[
[
"**Expected Output**: \n<table style = \"width:40%\">\n <tr>\n <td>** basic_sigmoid(3) **</td> \n <td>0.9525741268224334 </td> \n </tr>\n\n</table>",
"_____no_output_____"
],
[
"Actually, we rarely use the \"math\" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful. ",
"_____no_output_____"
]
],
[
[
"### One reason why we use \"numpy\" instead of \"math\" in Deep Learning ###\nx = [1, 2, 3]\n\n# basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.",
"_____no_output_____"
]
],
[
[
"In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\n# example of np.exp\n# x = np.array([1, 2, 3])\nx = [1, 2, 3]\nprint(np.exp(x)) # result is (exp(1), exp(2), exp(3))",
"[ 2.71828183 7.3890561 20.08553692]\n"
]
],
[
[
"Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \\frac{1}{x}$ will output s as a vector of the same size as x.",
"_____no_output_____"
]
],
[
[
"# example of vector operation\nx = np.array([1, 2, 3])\nprint (x + 3)",
"[4 5 6]\n"
]
],
[
[
"Any time you need more info on a numpy function, we encourage you to look at [the official documentation](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.exp.html). \n\nYou can also create a new cell in the notebook and write `np.exp?` (for example) to get quick access to the documentation.\n\n**Exercise**: Implement the sigmoid function using numpy. \n\n**Instructions**: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.\n$$ \\text{For } x \\in \\mathbb{R}^n \\text{, } sigmoid(x) = sigmoid\\begin{pmatrix}\n x_1 \\\\\n x_2 \\\\\n ... \\\\\n x_n \\\\\n\\end{pmatrix} = \\begin{pmatrix}\n \\frac{1}{1+e^{-x_1}} \\\\\n \\frac{1}{1+e^{-x_2}} \\\\\n ... \\\\\n \\frac{1}{1+e^{-x_n}} \\\\\n\\end{pmatrix}\\tag{1} $$",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: sigmoid\n\nimport numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()\n\ndef sigmoid(x):\n \"\"\"\n Compute the sigmoid of x\n\n Arguments:\n x -- A scalar or numpy array of any size\n\n Return:\n s -- sigmoid(x)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n s = 1/(1+np.exp(-x))\n ### END CODE HERE ###\n \n return s",
"_____no_output_____"
]
],
[
[
"**Expected Output**: \n<table>\n <tr> \n <td> **sigmoid([1,2,3])**</td> \n <td> array([ 0.73105858, 0.88079708, 0.95257413]) </td> \n </tr>\n</table> \n",
"_____no_output_____"
]
],
[
[
"x = np.array([1, 2, 3])\nsigmoid(x)",
"_____no_output_____"
]
],
[
[
"### 1.2 - Sigmoid gradient\n\nAs you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.\n\n**Exercise**: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid\\_derivative(x) = \\sigma'(x) = \\sigma(x) (1 - \\sigma(x))\\tag{2}$$\nYou often code this function in two steps:\n1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.\n2. Compute $\\sigma'(x) = s(1-s)$",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: sigmoid_derivative\n\ndef sigmoid_derivative(x):\n \"\"\"\n Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.\n You can store the output of the sigmoid function into variables and then use it to calculate the gradient.\n \n Arguments:\n x -- A scalar or numpy array\n\n Return:\n ds -- Your computed gradient.\n \"\"\"\n \n ### START CODE HERE ### (≈ 2 lines of code)\n s = sigmoid(x)\n ds = s*(1-s)\n ### END CODE HERE ###\n \n return ds",
"_____no_output_____"
],
[
"x = np.array([1, 2, 3])\nprint (\"sigmoid_derivative(x) = \" + str(sigmoid_derivative(x)))",
"sigmoid_derivative(x) = [0.19661193 0.10499359 0.04517666]\n"
]
],
[
[
"**Expected Output**: \n\n\n<table>\n <tr> \n <td> **sigmoid_derivative([1,2,3])**</td> \n <td> [ 0.19661193 0.10499359 0.04517666] </td> \n </tr>\n</table> \n\n",
"_____no_output_____"
],
[
"### 1.3 - Reshaping arrays ###\n\nTwo common numpy functions used in deep learning are [np.shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) and [np.reshape()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html). \n- X.shape is used to get the shape (dimension) of a matrix/vector X. \n- X.reshape(...) is used to reshape X into some other dimension. \n\nFor example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you \"unroll\", or reshape, the 3D array into a 1D vector.\n\n<img src=\"images/image2vector_kiank.png\" style=\"width:500px;height:300;\">\n\n**Exercise**: Implement `image2vector()` that takes an input of shape (length, height, 3) and returns a vector of shape (length\\*height\\*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:\n``` python\nv = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c\n```\n- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with `image.shape[0]`, etc. ",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: image2vector\ndef image2vector(image):\n \"\"\"\n Argument:\n image -- a numpy array of shape (length, height, depth)\n \n Returns:\n v -- a vector of shape (length*height*depth, 1)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n v = image.reshape(image.shape[0]*image.shape[1]*image.shape[2],1)\n ### END CODE HERE ###\n \n return v",
"_____no_output_____"
],
[
"# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values\nimage = np.array([[[ 0.67826139, 0.29380381],\n [ 0.90714982, 0.52835647],\n [ 0.4215251 , 0.45017551]],\n\n [[ 0.92814219, 0.96677647],\n [ 0.85304703, 0.52351845],\n [ 0.19981397, 0.27417313]],\n\n [[ 0.60659855, 0.00533165],\n [ 0.10820313, 0.49978937],\n [ 0.34144279, 0.94630077]]])\n\nprint(image.shape)\nprint (\"image2vector(image) = \" + str(image2vector(image)))",
"(3, 3, 2)\nimage2vector(image) = [[0.67826139]\n [0.29380381]\n [0.90714982]\n [0.52835647]\n [0.4215251 ]\n [0.45017551]\n [0.92814219]\n [0.96677647]\n [0.85304703]\n [0.52351845]\n [0.19981397]\n [0.27417313]\n [0.60659855]\n [0.00533165]\n [0.10820313]\n [0.49978937]\n [0.34144279]\n [0.94630077]]\n"
],
[
"np.array([[2,3,1], [2,45,6]])",
"_____no_output_____"
],
[
"print(image[0, :, :])",
"[[0.67826139 0.29380381]\n [0.90714982 0.52835647]\n [0.4215251 0.45017551]]\n"
]
],
[
[
"**Expected Output**: \n\n\n<table style=\"width:100%\">\n <tr> \n <td> **image2vector(image)** </td> \n <td> [[ 0.67826139]\n [ 0.29380381]\n [ 0.90714982]\n [ 0.52835647]\n [ 0.4215251 ]\n [ 0.45017551]\n [ 0.92814219]\n [ 0.96677647]\n [ 0.85304703]\n [ 0.52351845]\n [ 0.19981397]\n [ 0.27417313]\n [ 0.60659855]\n [ 0.00533165]\n [ 0.10820313]\n [ 0.49978937]\n [ 0.34144279]\n [ 0.94630077]]</td> \n </tr>\n \n \n</table>",
"_____no_output_____"
],
[
"### 1.4 - Normalizing rows\n\nAnother common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \\frac{x}{\\| x\\|} $ (dividing each row vector of x by its norm).\n\nFor example, if $$x = \n\\begin{bmatrix}\n 0 & 3 & 4 \\\\\n 2 & 6 & 4 \\\\\n\\end{bmatrix}\\tag{3}$$ then $$\\| x\\| = np.linalg.norm(x, axis = 1, keepdims = True) = \\begin{bmatrix}\n 5 \\\\\n \\sqrt{56} \\\\\n\\end{bmatrix}\\tag{4} $$and $$ x\\_normalized = \\frac{x}{\\| x\\|} = \\begin{bmatrix}\n 0 & \\frac{3}{5} & \\frac{4}{5} \\\\\n \\frac{2}{\\sqrt{56}} & \\frac{6}{\\sqrt{56}} & \\frac{4}{\\sqrt{56}} \\\\\n\\end{bmatrix}\\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.\n\n\n**Exercise**: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: normalizeRows\n\ndef normalizeRows(x):\n \"\"\"\n Implement a function that normalizes each row of the matrix x (to have unit length).\n \n Argument:\n x -- A numpy matrix of shape (n, m)\n \n Returns:\n x -- The normalized (by row) numpy matrix. You are allowed to modify x.\n \"\"\"\n \n ### START CODE HERE ### (≈ 2 lines of code)\n # Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)\n x_norm = np.linalg.norm(x,axis=1,keepdims=True)\n \n # Divide x by its norm.\n x = x/x_norm\n ### END CODE HERE ###\n\n return x",
"_____no_output_____"
],
[
"x = np.array([\n [0, 3, 4],\n [1, 6, 4]])\nprint(\"normalizeRows(x) = \" + str(normalizeRows(x)))",
"normalizeRows(x) = [[0. 0.6 0.8 ]\n [0.13736056 0.82416338 0.54944226]]\n"
],
[
"np.dot(x, np.array([2,3,4]).reshape(3,1)).reshape(1,2)",
"_____no_output_____"
]
],
[
[
"**Expected Output**: \n\n<table style=\"width:60%\">\n\n <tr> \n <td> **normalizeRows(x)** </td> \n <td> [[ 0. 0.6 0.8 ]\n [ 0.13736056 0.82416338 0.54944226]]</td> \n </tr>\n \n \n</table>",
"_____no_output_____"
],
[
"**Note**:\nIn normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now! ",
"_____no_output_____"
],
[
"### 1.5 - Broadcasting and the softmax function ####\nA very important concept to understand in numpy is \"broadcasting\". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official [broadcasting documentation](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).",
"_____no_output_____"
],
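[
"# Added illustrative cell (not part of the original assignment): a minimal\n# broadcasting sketch. It assumes numpy is already imported as np, as in the\n# cells above. A (2, 1) column and a (1, 3) row combine into a (2, 3) array\n# because numpy stretches the size-1 axes.\ncol = np.array([[1], [2]])        # shape (2, 1)\nrow = np.array([[10, 20, 30]])    # shape (1, 3)\nprint((col + row).shape)          # (2, 3)\nprint(col + row)",
"_____no_output_____"
],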
[
"**Exercise**: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.\n\n**Instructions**:\n- $ \\text{for } x \\in \\mathbb{R}^{1\\times n} \\text{, } softmax(x) = softmax(\\begin{bmatrix}\n x_1 &&\n x_2 &&\n ... &&\n x_n \n\\end{bmatrix}) = \\begin{bmatrix}\n \\frac{e^{x_1}}{\\sum_{j}e^{x_j}} &&\n \\frac{e^{x_2}}{\\sum_{j}e^{x_j}} &&\n ... &&\n \\frac{e^{x_n}}{\\sum_{j}e^{x_j}} \n\\end{bmatrix} $ \n\n- $\\text{for a matrix } x \\in \\mathbb{R}^{m \\times n} \\text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\\begin{bmatrix}\n x_{11} & x_{12} & x_{13} & \\dots & x_{1n} \\\\\n x_{21} & x_{22} & x_{23} & \\dots & x_{2n} \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n x_{m1} & x_{m2} & x_{m3} & \\dots & x_{mn}\n\\end{bmatrix} = \\begin{bmatrix}\n \\frac{e^{x_{11}}}{\\sum_{j}e^{x_{1j}}} & \\frac{e^{x_{12}}}{\\sum_{j}e^{x_{1j}}} & \\frac{e^{x_{13}}}{\\sum_{j}e^{x_{1j}}} & \\dots & \\frac{e^{x_{1n}}}{\\sum_{j}e^{x_{1j}}} \\\\\n \\frac{e^{x_{21}}}{\\sum_{j}e^{x_{2j}}} & \\frac{e^{x_{22}}}{\\sum_{j}e^{x_{2j}}} & \\frac{e^{x_{23}}}{\\sum_{j}e^{x_{2j}}} & \\dots & \\frac{e^{x_{2n}}}{\\sum_{j}e^{x_{2j}}} \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n \\frac{e^{x_{m1}}}{\\sum_{j}e^{x_{mj}}} & \\frac{e^{x_{m2}}}{\\sum_{j}e^{x_{mj}}} & \\frac{e^{x_{m3}}}{\\sum_{j}e^{x_{mj}}} & \\dots & \\frac{e^{x_{mn}}}{\\sum_{j}e^{x_{mj}}}\n\\end{bmatrix} = \\begin{pmatrix}\n softmax\\text{(first row of x)} \\\\\n softmax\\text{(second row of x)} \\\\\n ... \\\\\n softmax\\text{(last row of x)} \\\\\n\\end{pmatrix} $$",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: softmax\n\ndef softmax(x):\n \"\"\"Calculates the softmax for each row of the input x.\n\n Your code should work for a row vector and also for matrices of shape (n, m).\n\n Argument:\n x -- A numpy matrix of shape (n,m)\n\n Returns:\n s -- A numpy matrix equal to the softmax of x, of shape (n,m)\n \"\"\"\n \n ### START CODE HERE ### (≈ 3 lines of code)\n # Apply exp() element-wise to x. Use np.exp(...).\n x_exp = np.exp(x)\n\n # Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).\n x_sum = np.sum(x_exp,axis =1 ,keepdims=True)\n \n # Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.\n s = x_exp/x_sum\n\n ### END CODE HERE ###\n \n return s",
"_____no_output_____"
],
[
"x = np.array([\n [9, 2, 5, 0, 0],\n [7, 5, 0, 0 ,0]])\n\nx_exp = np.exp(x)\nx_sum = np.sum(x_exp, axis=1,keepdims=True)\nx_exp/x_sum",
"_____no_output_____"
]
],
[
[
"**Expected Output**:\n\n<table style=\"width:60%\">\n\n <tr> \n <td> **softmax(x)** </td> \n <td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04\n 1.21052389e-04]\n [ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04\n 8.01252314e-04]]</td> \n </tr>\n</table>\n",
"_____no_output_____"
],
[
"**Note**:\n- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). **x_exp/x_sum** works due to python broadcasting.\n\nCongratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.",
"_____no_output_____"
],
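[
"# Added illustrative cell (not from the original assignment): print the shapes\n# mentioned in the note above. Assumes x_exp and x_sum still exist from the\n# softmax scratch cell (the variable s is local to softmax, so only these two\n# are shown here).\nprint(x_exp.shape)            # (2, 5)\nprint(x_sum.shape)            # (2, 1)\nprint((x_exp / x_sum).shape)  # (2, 5), thanks to broadcasting",
"_____no_output_____"
],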
[
"<font color='blue'>\n**What you need to remember:**\n- np.exp(x) works for any np.array x and applies the exponential function to every coordinate\n- the sigmoid function and its gradient\n- image2vector is commonly used in deep learning\n- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs. \n- numpy has efficient built-in functions\n- broadcasting is extremely useful",
"_____no_output_____"
],
[
"## 2) Vectorization",
"_____no_output_____"
],
[
"\nIn deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.",
"_____no_output_____"
]
],
[
[
"import time\n\nx1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]\nx2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]\n\n### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###\ntic = time.process_time()\ndot = 0\nfor i in range(len(x1)):\n dot+= x1[i]*x2[i]\ntoc = time.process_time()\nprint (\"dot = \" + str(dot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### CLASSIC OUTER PRODUCT IMPLEMENTATION ###\ntic = time.process_time()\nouter = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros\nfor i in range(len(x1)):\n for j in range(len(x2)):\n outer[i,j] = x1[i]*x2[j]\ntoc = time.process_time()\nprint (\"outer = \" + str(outer) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### CLASSIC ELEMENTWISE IMPLEMENTATION ###\ntic = time.process_time()\nmul = np.zeros(len(x1))\nfor i in range(len(x1)):\n mul[i] = x1[i]*x2[i]\ntoc = time.process_time()\nprint (\"elementwise multiplication = \" + str(mul) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###\nW = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array\ntic = time.process_time()\ngdot = np.zeros(W.shape[0])\nfor i in range(W.shape[0]):\n for j in range(len(x1)):\n gdot[i] += W[i,j]*x1[j]\ntoc = time.process_time()\nprint (\"gdot = \" + str(gdot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")",
"dot = 278\n ----- Computation time = 0.10699999999985721ms\nouter = [[81. 18. 18. 81. 0. 81. 18. 45. 0. 0. 81. 18. 45. 0. 0.]\n [18. 4. 4. 18. 0. 18. 4. 10. 0. 0. 18. 4. 10. 0. 0.]\n [45. 10. 10. 45. 0. 45. 10. 25. 0. 0. 45. 10. 25. 0. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [63. 14. 14. 63. 0. 63. 14. 35. 0. 0. 63. 14. 35. 0. 0.]\n [45. 10. 10. 45. 0. 45. 10. 25. 0. 0. 45. 10. 25. 0. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [81. 18. 18. 81. 0. 81. 18. 45. 0. 0. 81. 18. 45. 0. 0.]\n [18. 4. 4. 18. 0. 18. 4. 10. 0. 0. 18. 4. 10. 0. 0.]\n [45. 10. 10. 45. 0. 45. 10. 25. 0. 0. 45. 10. 25. 0. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]\n ----- Computation time = 0.2670000000000172ms\nelementwise multiplication = [81. 4. 10. 0. 0. 63. 10. 0. 0. 0. 81. 4. 25. 0. 0.]\n ----- Computation time = 0.12999999999996348ms\ngdot = [22.14698904 28.70299442 16.59200655]\n ----- Computation time = 0.172000000000061ms\n"
],
[
"W.shape",
"_____no_output_____"
],
[
"x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]\nx2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]\n\n### VECTORIZED DOT PRODUCT OF VECTORS ###\ntic = time.process_time()\ndot = np.dot(x1,x2)\ntoc = time.process_time()\nprint (\"dot = \" + str(dot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### VECTORIZED OUTER PRODUCT ###\ntic = time.process_time()\nouter = np.outer(x1,x2)\ntoc = time.process_time()\nprint (\"outer = \" + str(outer) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### VECTORIZED ELEMENTWISE MULTIPLICATION ###\ntic = time.process_time()\nmul = np.multiply(x1,x2)\ntoc = time.process_time()\nprint (\"elementwise multiplication = \" + str(mul) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### VECTORIZED GENERAL DOT PRODUCT ###\ntic = time.process_time()\ndot = np.dot(W,x1)\ntoc = time.process_time()\nprint (\"gdot = \" + str(dot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")",
"dot = 278\n ----- Computation time = 0.3959999999998409ms\nouter = [[81 18 18 81 0 81 18 45 0 0 81 18 45 0 0]\n [18 4 4 18 0 18 4 10 0 0 18 4 10 0 0]\n [45 10 10 45 0 45 10 25 0 0 45 10 25 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n [63 14 14 63 0 63 14 35 0 0 63 14 35 0 0]\n [45 10 10 45 0 45 10 25 0 0 45 10 25 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n [81 18 18 81 0 81 18 45 0 0 81 18 45 0 0]\n [18 4 4 18 0 18 4 10 0 0 18 4 10 0 0]\n [45 10 10 45 0 45 10 25 0 0 45 10 25 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]\n ----- Computation time = 0.18099999999954264ms\nelementwise multiplication = [81 4 10 0 0 63 10 0 0 0 81 4 25 0 0]\n ----- Computation time = 0.140000000000029ms\ngdot = [22.14698904 28.70299442 16.59200655]\n ----- Computation time = 0.14099999999972468ms\n"
]
],
[
[
"As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger. \n\n**Note** that `np.dot()` performs a matrix-matrix or matrix-vector multiplication. This is different from `np.multiply()` and the `*` operator (which is equivalent to `.*` in Matlab/Octave), which performs an element-wise multiplication.",
"_____no_output_____"
],
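[
"# Added illustrative cell (not from the original assignment): contrast np.dot\n# (matrix product) with the elementwise * operator on the same small inputs.\nA = np.array([[1, 2], [3, 4]])\nB = np.array([[10, 20], [30, 40]])\nprint(np.dot(A, B))   # matrix product\nprint(A * B)          # elementwise product",
"_____no_output_____"
],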
[
"### 2.1 Implement the L1 and L2 loss functions\n\n**Exercise**: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.\n\n**Reminder**:\n- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \\hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.\n- L1 loss is defined as:\n$$\\begin{align*} & L_1(\\hat{y}, y) = \\sum_{i=0}^m|y^{(i)} - \\hat{y}^{(i)}| \\end{align*}\\tag{6}$$",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: L1\n\ndef L1(yhat, y):\n \"\"\"\n Arguments:\n yhat -- vector of size m (predicted labels)\n y -- vector of size m (true labels)\n \n Returns:\n loss -- the value of the L1 loss function defined above\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n loss = np.sum(np.abs(yhat - y))\n ### END CODE HERE ###\n \n return loss",
"_____no_output_____"
],
[
"yhat = np.array([.9, 0.2, 0.1, .4, .9])\ny = np.array([1, 0, 0, 1, 1])\nprint(\"L1 = \" + str(L1(yhat,y)))",
"L1 = 1.1\n"
]
],
[
[
"**Expected Output**:\n\n<table style=\"width:20%\">\n\n <tr> \n <td> **L1** </td> \n <td> 1.1 </td> \n </tr>\n</table>\n",
"_____no_output_____"
],
[
"**Exercise**: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then `np.dot(x,x)` = $\\sum_{j=0}^n x_j^{2}$. \n\n- L2 loss is defined as $$\\begin{align*} & L_2(\\hat{y},y) = \\sum_{i=0}^m(y^{(i)} - \\hat{y}^{(i)})^2 \\end{align*}\\tag{7}$$",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: L2\n\ndef L2(yhat, y):\n \"\"\"\n Arguments:\n yhat -- vector of size m (predicted labels)\n y -- vector of size m (true labels)\n \n Returns:\n loss -- the value of the L2 loss function defined above\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n loss = np.sum((yhat-y) ** 2 )\n# loss = np.sum(yhat-y)\n ### END CODE HERE ###\n \n return loss",
"_____no_output_____"
],
[
"yhat = np.array([.9, 0.2, 0.1, .4, .9])\ny = np.array([1, 0, 0, 1, 1])\nprint(\"L2 = \" + str(L2(yhat,y)))",
"L2 = 0.43\n"
]
],
[
[
"**Expected Output**: \n<table style=\"width:20%\">\n <tr> \n <td> **L2** </td> \n <td> 0.43 </td> \n </tr>\n</table>",
"_____no_output_____"
],
[
"Congratulations on completing this assignment. We hope that this little warm-up exercise helps you in the future assignments, which will be more exciting and interesting!",
"_____no_output_____"
],
[
"<font color='blue'>\n**What to remember:**\n- Vectorization is very important in deep learning. It provides computational efficiency and clarity.\n- You have reviewed the L1 and L2 loss.\n- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc...",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
eca3eb81cfe85322f6fee2af5f8fedc5bae1e04a | 8,885 | ipynb | Jupyter Notebook | Lesson04/.ipynb_checkpoints/Exercise 15-checkpoint.ipynb | TrainingByPackt/Machine-Learning-Fundamentals-eLearning | 7bc7a48eda279b46f15da550a440de9c686318fc | [
"MIT"
] | 1 | 2020-02-10T08:56:10.000Z | 2020-02-10T08:56:10.000Z | Lesson04/.ipynb_checkpoints/Exercise 15-checkpoint.ipynb | TrainingByPackt/Machine-Learning-Fundamentals-eLearning | 7bc7a48eda279b46f15da550a440de9c686318fc | [
"MIT"
] | null | null | null | Lesson04/.ipynb_checkpoints/Exercise 15-checkpoint.ipynb | TrainingByPackt/Machine-Learning-Fundamentals-eLearning | 7bc7a48eda279b46f15da550a440de9c686318fc | [
"MIT"
] | 3 | 2020-02-22T07:23:17.000Z | 2020-11-11T16:58:27.000Z | 78.628319 | 1,385 | 0.675858 | [
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"data = pd.read_csv(\"../datasets/fertility_Diagnosis.txt\", header=None)",
"_____no_output_____"
],
[
"X = data.iloc[:,:9]\nY = data.iloc[:,9]",
"_____no_output_____"
],
[
"from sklearn.naive_bayes import GaussianNB\nmodel = GaussianNB()\nmodel.fit(X,Y)",
"_____no_output_____"
],
[
"pred = model.predict([[-0.33,0.69,0,1,1,0,0.8,0,0.88]])\nprint(pred)",
"['N']\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
eca3f1453909e123d2c24c6d7853962e05af91d3 | 272,730 | ipynb | Jupyter Notebook | official TF tutorials/official-tutorial_09_corpus-creation_share-features.ipynb | oliverglanz/Text-Fabric | 4b94e4f8bf13c1df94dfb77d9ec6e81ac6d220fa | [
"MIT"
] | null | null | null | official TF tutorials/official-tutorial_09_corpus-creation_share-features.ipynb | oliverglanz/Text-Fabric | 4b94e4f8bf13c1df94dfb77d9ec6e81ac6d220fa | [
"MIT"
] | null | null | null | official TF tutorials/official-tutorial_09_corpus-creation_share-features.ipynb | oliverglanz/Text-Fabric | 4b94e4f8bf13c1df94dfb77d9ec6e81ac6d220fa | [
"MIT"
] | null | null | null | 59.940659 | 10,478 | 0.555777 | [
[
[
"<img align=\"right\" src=\"images/tf-small.png\" width=\"128\"/>\n<img align=\"right\" src=\"images/etcbc.png\"/>\n<img align=\"right\" src=\"images/dans-small.png\"/>\n\n# Sharing data features\n\n## Explore additional data\nThe ETCBC has a few other repositories with data that work in conjunction with the BHSA data.\nOne of them you have already seen: \n[phono](https://github.com/ETCBC/phono),\nfor phonetic transcriptions.\nThere is also\n[parallels](https://github.com/ETCBC/parallels)\nfor detecting parallel passages,\nand\n[valence](https://github.com/ETCBC/valence)\nfor studying patterns around verbs that determine their meanings.\n\n## Make your own data\nIf you study the additional data, you can observe how that data is created and also\nhow it is turned into a text-fabric data module.\nThe last step is incredibly easy. You can write out every Python dictionary where the keys are numbers\nand the values string or numbers as a Text-Fabric feature.\nWhen you are creating data, you have already constructed those dictionaries, so writing\nthem out is just one method call.\nSee for example how the\n[flowchart](https://nbviewer.jupyter.org/github/etcbc/valence/blob/master/programs/flowchart.ipynb#Add-sense-feature-to-valence-module)\nnotebook in valence writes out verb sense data.\n\n## Share your new data\nYou can then easily share your new features on GitHub, so that your colleagues everywhere \ncan try it out for themselves.\n\nHere is how you draw in other data, for example\n\n* [etcbc/valence/tf](https://github.com/etcbc/valence) :\n the results of the *verbal valence* work of Janet Dyk in the SYNVAR project;\n* [etcbc/lingo/heads/tf](https://github.com/etcbc/lingo/tree/master/heads) :\n head words for phrases, work done by Cody Kingham;\n* [ch-jensen/Semantic-mapping-of-participants/actor/tf](https://github.com/ch-jensen/Semantic-mapping-of-participants) :\n participant analysis in progress by Christian Høygaard-Jensen;\n* [cmerwich/bh-reference-system/tf](https://github.com/cmerwich/bh-reference-system):\n participant analysis in progress by Christiaan Erwich;\n* or whatever you have in the making!\n\nYou can add such data on the fly, by passing a `mod={org}/{repo}/{path}` parameter,\nor a bunch of them separated by commas.\n\nIf the data is there, it will be auto-downloaded and stored on your machine.\n\nLet's do it.",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
]
],
[
[
"# Incantation\n\nThe ins and outs of installing Text-Fabric, getting the corpus, and initializing a notebook are\nexplained in the [start tutorial](start.ipynb).",
"_____no_output_____"
]
],
[
[
"from tf.app import use",
"_____no_output_____"
],
[
"A = use(\n 'bhsa',\n mod=(\n 'etcbc/valence/tf,'\n 'etcbc/lingo/heads/tf,'\n 'ch-jensen/Semantic-mapping-of-participants/actor/tf'\n ),\n hoist=globals(),\n)",
"Using TF-app in /Users/dirk/github/annotation/app-bhsa/code:\n\trepo clone offline under ~/github (local github)\n\tconnecting to online GitHub repo etcbc/bhsa ... connected\nUsing data in /Users/dirk/text-fabric-data/etcbc/bhsa/tf/c:\n\trv1.6 (latest release)\n\tconnecting to online GitHub repo etcbc/phono ... connected\nUsing data in /Users/dirk/text-fabric-data/etcbc/phono/tf/c:\n\tr1.2 (latest release)\n\tconnecting to online GitHub repo etcbc/parallels ... connected\nUsing data in /Users/dirk/text-fabric-data/etcbc/parallels/tf/c:\n\tr1.2 (latest release)\n\tconnecting to online GitHub repo etcbc/valence ... connected\nUsing data in /Users/dirk/text-fabric-data/etcbc/valence/tf/c:\n\tr1.1 (latest release)\n\tconnecting to online GitHub repo etcbc/lingo ... connected\nUsing data in /Users/dirk/text-fabric-data/etcbc/lingo/heads/tf/c:\n\tr0.1 (latest release)\n\tconnecting to online GitHub repo ch-jensen/Semantic-mapping-of-participants ... connected\n\tdownloading https://github.com/ch-jensen/participants/releases/download/1.3/actor-tf-c.zip ... \n\tunzipping ... \n\tsaving data\n\tcould not save data to /Users/dirk/text-fabric-data/ch-jensen/Semantic-mapping-of-participants/actor/tf/c\n\tWill try something else\n\tactor/tf/c/actor.tf...downloaded\n\tactor/tf/c/coref.tf...downloaded\n\tactor/tf/c/prs_actor.tf...downloaded\n\tOK\nUsing data in /Users/dirk/text-fabric-data/ch-jensen/Semantic-mapping-of-participants/actor/tf/c:\n\tr1.3=#1c17398f92c0836c06de5e1798687c3fa18133cf (latest release)\n"
]
],
[
[
"You see that the features from the *etcbc/valence/tf* and *etcbc/lingo/heads/tf* modules have been added to the mix.\n\nIf you want to check for data updates, you can add a `check=True` argument.\n\nNote that edge features are in **_bold italic_**.",
"_____no_output_____"
],
[
"## sense from valence\n\nLet's find out about *sense*.",
"_____no_output_____"
]
],
[
[
"F.sense.freqList()",
"_____no_output_____"
]
],
[
[
"Which nodes have a sense feature?",
"_____no_output_____"
]
],
[
[
"{F.otype.v(n) for n in N() if F.sense.v(n)}",
"_____no_output_____"
],
[
"results = A.search('''\nword sense\n''')",
" 0.32s 47381 results\n"
]
],
[
[
"Let's show some of the rarer sense values:",
"_____no_output_____"
]
],
[
[
"results = A.search('''\nword sense=k.\n''')",
" 0.39s 54 results\n"
],
[
"A.table(results, end=5)",
"_____no_output_____"
]
],
[
[
"If we do a pretty display, the `sense` feature shows up.",
"_____no_output_____"
]
],
[
[
"A.show(results, start=1, end=1, withNodes=True)",
"_____no_output_____"
]
],
[
[
"## actor from semantic\n\nLet's find out about *actor*.",
"_____no_output_____"
]
],
[
[
"fl = F.actor.freqList()\nlen(fl)",
"_____no_output_____"
],
[
"fl[0:10]",
"_____no_output_____"
]
],
[
[
"Which nodes have an actor feature?",
"_____no_output_____"
]
],
[
[
"{F.otype.v(n) for n in N() if F.actor.v(n)}",
"_____no_output_____"
],
[
"results = A.search('''\nphrase_atom actor\n''')",
" 0.18s 2073 results\n"
]
],
[
[
"Let's show some of the rarer actor values:",
"_____no_output_____"
]
],
[
[
"results = A.search('''\nphrase_atom actor=KHN\n''')",
" 0.27s 30 results\n"
],
[
"A.table(results)",
"_____no_output_____"
],
[
"A.show(results, start=1, end=1)",
"_____no_output_____"
]
],
[
[
"# heads from lingo\n\nNow, `heads` is an edge feature, we cannot directly make it visible in pretty displays, but we can use it in queries.\n\nWe also want to make the feature `sense` visible, so we mention the feature in the query, without restricting the results.",
"_____no_output_____"
]
],
[
[
"results = A.search('''\nbook book=Genesis\n chapter chapter=1\n clause\n phrase\n -heads> word sense*\n'''\n)",
" 0.57s 402 results\n"
]
],
[
[
"We make the feature `sense` visible:",
"_____no_output_____"
]
],
[
[
"A.show(results, start=1, end=3, withNodes=True)",
"_____no_output_____"
]
],
[
[
"Note how the words that are **_heads_** of their phrases are highlighted within their phrases.",
"_____no_output_____"
],
[
"# All together!\n\nHere is a query that shows results with all features.",
"_____no_output_____"
]
],
[
[
"results = A.search('''\nbook book=Leviticus\n phrase sense*\n phrase_atom actor=KHN\n -heads> word\n''')",
" 0.74s 30 results\n"
],
[
"A.displaySetup(condensed=True, condenseType='verse')\nA.show(results, start=8, end=8)\nA.displaySetup()",
"_____no_output_____"
]
],
[
[
"# Features from custom locations",
"_____no_output_____"
],
[
"If you want to load your features from your own local github repositories, instead of from the data\nthat TF has downloaded for you into `~/text-fabric-data`, you can do so by passing the checkout parameter `checkout='clone'`.",
"_____no_output_____"
]
],
[
[
"A = use('bhsa', checkout='clone', hoist=globals())",
"Using TF-app in /Users/dirk/github/annotation/app-bhsa/code:\n\trepo clone offline under ~/github (local github)\nUsing data in /Users/dirk/github/etcbc/bhsa/tf/c:\n\trepo clone offline under ~/github (local github)\nUsing data in /Users/dirk/github/etcbc/phono/tf/c:\n\trepo clone offline under ~/github (local github)\nUsing data in /Users/dirk/github/etcbc/parallels/tf/c:\n\trepo clone offline under ~/github (local github)\n"
]
],
[
[
"Hover over the features to see where they come from, and you'll see they come from your local github repo.",
"_____no_output_____"
],
[
"You may load extra features by specifying locations and modules manually.\n\nHere we get the valence features, but not as a module, but in a custom way.",
"_____no_output_____"
]
],
[
[
"A = use('bhsa', locations='~/text-fabric-data/etcbc/valence/tf', modules='c', hoist=globals())",
"Using TF-app in /Users/dirk/github/annotation/app-bhsa/code:\n\trepo clone offline under ~/github (local github)\n\tconnecting to online GitHub repo etcbc/bhsa ... connected\nUsing data in /Users/dirk/text-fabric-data/etcbc/bhsa/tf/c:\n\trv1.6 (latest release)\n\tconnecting to online GitHub repo etcbc/phono ... connected\nUsing data in /Users/dirk/text-fabric-data/etcbc/phono/tf/c:\n\tr1.2 (latest release)\n\tconnecting to online GitHub repo etcbc/parallels ... connected\nUsing data in /Users/dirk/text-fabric-data/etcbc/parallels/tf/c:\n\tr1.2 (latest release)\n"
]
],
[
[
"Still, all features of the main corpus and the standard modules have been loaded.\n\nUsing `locations` and `modules` is useful if you want to load extra features from custom locations on your computer.",
"_____no_output_____"
],
[
"# Less features\n\nIf you want to load less features,\nyou can set up TF in the traditional way first, and then wrap the app API around it.\n\nHere we load just the minimal set of features to get going.",
"_____no_output_____"
]
],
[
[
"from tf.fabric import Fabric",
"_____no_output_____"
],
[
"TF = Fabric(locations='~/github/etcbc/bhsa/tf', modules='c')",
"This is Text-Fabric 7.5.4\nApi reference : https://annotation.github.io/text-fabric/Api/Fabric/\n\n114 features found and 0 ignored\n"
],
[
"api = TF.load('pdp vs vt gn nu ps lex')",
" 0.00s loading features ...\n | 0.10s B lex from /Users/dirk/github/etcbc/bhsa/tf/c\n | 0.11s B pdp from /Users/dirk/github/etcbc/bhsa/tf/c\n | 0.11s B vs from /Users/dirk/github/etcbc/bhsa/tf/c\n | 0.11s B vt from /Users/dirk/github/etcbc/bhsa/tf/c\n | 0.08s B gn from /Users/dirk/github/etcbc/bhsa/tf/c\n | 0.10s B nu from /Users/dirk/github/etcbc/bhsa/tf/c\n | 0.10s B ps from /Users/dirk/github/etcbc/bhsa/tf/c\n 4.58s All features loaded/computed - for details use loadLog()\n"
]
],
[
[
"And finally we wrap the app around it:",
"_____no_output_____"
]
],
[
[
"A = use('bhsa', api=api, hoist=globals())",
"Using TF-app in /Users/dirk/github/annotation/app-bhsa/code:\n\trepo clone offline under ~/github (local github)\n"
]
],
[
[
"This loads much quicker.",
"_____no_output_____"
],
[
"A small test: what are the verbal stems?",
"_____no_output_____"
]
],
[
[
"F.vs.freqList()",
"_____no_output_____"
]
],
[
[
"# Next steps\n\n* **[display](display.ipynb)** become an expert in creating pretty displays of your text structures\n* **[search](search.ipynb)** turbo charge your hand-coding with search templates\n* **[exportExcel](exportExcel.ipynb)** make tailor-made spreadsheets out of your results\n* **[export](export.ipynb)** export your dataset as an Emdros database\n\nBack to [start](start.ipynb)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
eca406263b471075b2c6df5186a4f7a872852b9c | 75,442 | ipynb | Jupyter Notebook | tests/.ipynb_checkpoints/SARSA-checkpoint.ipynb | Mufabo/py_inforce | 91b82da54322060e09ecad6b8d8f8f51de76e2d1 | [
"MIT"
] | 1 | 2021-12-12T23:47:41.000Z | 2021-12-12T23:47:41.000Z | tests/.ipynb_checkpoints/SARSA-checkpoint.ipynb | Mufabo/py_inforce | 91b82da54322060e09ecad6b8d8f8f51de76e2d1 | [
"MIT"
] | null | null | null | tests/.ipynb_checkpoints/SARSA-checkpoint.ipynb | Mufabo/py_inforce | 91b82da54322060e09ecad6b8d8f8f51de76e2d1 | [
"MIT"
] | null | null | null | 192.946292 | 62,764 | 0.895748 | [
[
[
"# SARSA\n## Linear Approximation and Deep version",
"_____no_output_____"
],
[
"Estimates Q via TD learning. \nWorks only with discrete action spaces.",
"_____no_output_____"
],
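[
"For reference (added note, not in the original notebook): the TD update implemented in the code below is the standard SARSA rule\n\n$$Q(s, a) \\leftarrow Q(s, a) + \\alpha \\left[ r + \\gamma Q(s', a') - Q(s, a) \\right]$$\n\nwhere $\\alpha$ is the learning rate and $\\gamma$ the discount factor.",
"_____no_output_____"
],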
[
"## SARSA for Tabular Applications\n### Sarsa on FrozenLake",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"import gym\nimport numpy as np\n\ndef SARSA(env, alpha = 0.85, num_tau = 5000, DISC_FACTOR = .9): \n Q = np.zeros([env.nS, env.nA]) \n for i in range(num_tau): \n epsilon = 0.5\n done = False\n s = env.reset()\n a = env.action_space.sample() if np.random.rand() < epsilon else np.argmax(Q[s])\n while not done: \n s_prime, r, done, _ = env.step(a)\n a_prime = env.action_space.sample() if np.random.rand() < epsilon else np.argmax(Q[s_prime]) \n Q[s, a] += alpha * (r + DISC_FACTOR * Q[s_prime, a_prime] - Q[s, a]) \n #epsilon = epsilon * 0.99999999999999999 \n s = s_prime \n a = a_prime\n return Q\n",
"_____no_output_____"
],
[
"env = gym.make('FrozenLake-v0', is_slippery = False)\nq = SARSA(env, num_tau = 10000)\nstate = env.reset()\n\ndone = False\nret = 0\nwhile not done:\n a = np.argmax(q[state, :])\n\n state, rew, done, _ = env.step(a)\n ret += rew\nret",
"_____no_output_____"
],
[
"env = gym.make('FrozenLake-v0', is_slippery = True)\nq = SARSA(env, num_tau = 10000)\nret = 0\n\nfor i in range(100):\n    # reset the environment at the start of every evaluation episode\n    state = env.reset()\n    done = False\n\n    while not done:\n        a = np.argmax(q[state, :])\n\n        state, rew, done, _ = env.step(a)\n        ret += rew\nret/100",
"_____no_output_____"
],
[
"\"\"\"\nSFFF\nFHFH\nFFFH\nHFFG\n\nLEFT = 0\nDOWN = 1\nRIGHT = 2\nUP = 3\n\n\"\"\"\nfor i in range(16):\n print(np.argmax(q[i]))",
"0\n2\n3\n3\n1\n0\n1\n0\n2\n2\n1\n0\n0\n2\n2\n0\n"
],
[
"import gym\nimport numpy as np\n\nenv = gym.make('Pendulum-v0')\n\nnp.argmax(env.action_space.sample())\nlen(env.observation_space.high)\nenv.action_space.high",
"_____no_output_____"
]
],
[
[
"## SARSA with Deep Learning\n### SARSA on CartPole",
"_____no_output_____"
]
],
[
[
"import torch\nimport numpy as np\n\ndef deep_SARSA(env, Q, epsilon = 0.5, num_samples = 10000, alpha=0.8, DISC_FACTOR=0.9, loss = torch.nn.MSELoss()):\n \"\"\"\n Online(no memory) SARSA that trains a neural network to do Q-estimation for an openai gym environment.\n \n Args:\n env (gym environment)\n q_estimator (state -> 1d array): Model that estimates q values for states of env\n num_samples (int): number of sarsa samples\n \n Returns\n \"\"\"\n optimizer = torch.optim.Adam(Q.parameters(), lr=alpha)\n \n for i in range(num_samples):\n s = env.reset()\n a = torch.tensor(env.action_space.sample()) if torch.rand(1) < epsilon else torch.argmax(Q.forward(torch.tensor(s, dtype=torch.float32)))\n \n done = False\n \n while not done:\n s_prime, r, done, _ = env.step(a.numpy())\n a_prime = torch.tensor(env.action_space.sample()) if torch.rand(1) < epsilon else torch.argmax(Q.forward(torch.tensor(s_prime, dtype=torch.float32)))\n \n optimizer.zero_grad()\n \n target = r + DISC_FACTOR * Q.forward(torch.tensor(s_prime, dtype=torch.float32))[a_prime]\n prediction = Q.forward(torch.tensor(s, dtype=torch.float32))[a]\n loss(target, prediction).backward()\n optimizer.step()\n \n i += 1\n \n if done or i == num_samples:\n break\n \n a = a_prime\n s = s_prime\n\nimport py_inforce as pin\nimport gym\n\nenv = gym.make('CartPole-v0')\nin_dim = env.observation_space.shape[0] # 4\nout_dim = env.action_space.n # 2\nQ = pin.MLP([in_dim, 128, 128, out_dim], torch.nn.ReLU)\n\ndeep_SARSA(env, Q)",
"_____no_output_____"
],
[
"torch.tensor(env.observation_space.sample(), dtype=torch.float32)",
"_____no_output_____"
],
[
"env.observation_space.sample()",
"_____no_output_____"
],
[
"torch.rand(1).numpy()",
"_____no_output_____"
],
[
"torch.tensor(env.action_space.sample()).numpy()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
eca40689d7b3b0afdcc90cc9652d776bfbfb304e | 33,118 | ipynb | Jupyter Notebook | Pandas/Assignment_02/Pandas - Assignment 02.ipynb | vishwesh5/simplilearn-AI-masters | 04a8297fee01f7d55e5036a763de44fd290ab581 | [
"MIT"
] | 1 | 2021-08-19T16:03:57.000Z | 2021-08-19T16:03:57.000Z | Pandas/Assignment_02/Pandas - Assignment 02.ipynb | vishwesh5/simplilearn-AI-masters | 04a8297fee01f7d55e5036a763de44fd290ab581 | [
"MIT"
] | null | null | null | Pandas/Assignment_02/Pandas - Assignment 02.ipynb | vishwesh5/simplilearn-AI-masters | 04a8297fee01f7d55e5036a763de44fd290ab581 | [
"MIT"
] | null | null | null | 32.184645 | 224 | 0.374207 | [
[
[
"<img src=\"http://cfs22.simplicdn.net/ice9/new_logo.svgz \"/>\n\n# Assignment 02: Evaluate the FDNY Dataset\n\n*The comments/sections provided are your cues to perform the assignment. You don't need to limit yourself to the number of rows/cells provided. You can add additional rows in each section to add more lines of code.*\n\n*If at any point in time you need help on solving this assignment, view our demo video to understand the different steps of the code.*\n\n**Happy coding!**\n\n* * *",
"_____no_output_____"
],
[
"#### 1: View and import the dataset",
"_____no_output_____"
]
],
[
[
"#Import the required libraries\nimport pandas as pd",
"_____no_output_____"
],
[
"#Import the Fire Department of New York City (FDNY) file\nfname = \"../FDNY/FDNY.csv\"\ndata = pd.read_csv(fname)",
"_____no_output_____"
]
],
[
[
"#### 2: Analyze the dataset",
"_____no_output_____"
]
],
[
[
"#View the content of the data\ndata.describe()",
"_____no_output_____"
],
[
"#View the first five records\ndata.head()",
"_____no_output_____"
],
[
"#Skip the duplicate header row\ndata = pd.read_csv(fname,skiprows=1)",
"_____no_output_____"
],
[
"#Verify if the dataset is fixed\ndata.head()",
"_____no_output_____"
],
[
"#View the data statistics (Hint: use describe() method)\ndata.describe()",
"_____no_output_____"
],
[
"#View the attributes of the dataset (Hint: view the column names)\ndata.columns",
"_____no_output_____"
],
[
"#View the index of the dataset\ndata.index",
"_____no_output_____"
]
],
[
[
"#### 3: Find the total number of fire department facilities in New York city",
"_____no_output_____"
]
],
[
[
"#Count number of records for each attribute\ndata.count()",
"_____no_output_____"
],
[
"#view the datatypes of all three attributes\ndata.dtypes",
"_____no_output_____"
]
],
[
[
"#### 4: Find the total number of fire department facilities in each borough",
"_____no_output_____"
]
],
[
[
"#Select FDNY information boroughwise\ndata_boroughwise = data.groupby('Borough')",
"_____no_output_____"
],
[
"#View FDNY informationn for each borough\ndata_boroughwise.size()",
"_____no_output_____"
]
],
[
[
"#### 5: Find the total number of fire department facilities in Manhattan",
"_____no_output_____"
]
],
[
[
"#Select FDNY information for Manhattan\ndata_manhattan = data_boroughwise.get_group('Manhattan')",
"_____no_output_____"
],
[
"#View FDNY information for Manhattan\ndata_manhattan",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
eca40a79f721cde830214c13b80db633adb91191 | 2,294 | ipynb | Jupyter Notebook | visualizations/bokeh/notebooks/glyphs/.ipynb_checkpoints/quad-checkpoint.ipynb | martinpeck/apryor6.github.io | 52a227cc65cf04fb7ca3162d405349b41c311a42 | [
"MIT"
] | 1 | 2016-11-06T23:46:58.000Z | 2016-11-06T23:46:58.000Z | visualizations/bokeh/notebooks/glyphs/.ipynb_checkpoints/quad-checkpoint.ipynb | martinpeck/apryor6.github.io | 52a227cc65cf04fb7ca3162d405349b41c311a42 | [
"MIT"
] | 1 | 2019-09-06T22:32:16.000Z | 2019-09-06T22:32:16.000Z | visualizations/bokeh/notebooks/glyphs/.ipynb_checkpoints/quad-checkpoint.ipynb | martinpeck/apryor6.github.io | 52a227cc65cf04fb7ca3162d405349b41c311a42 | [
"MIT"
] | 2 | 2019-09-06T13:54:42.000Z | 2020-03-11T09:33:59.000Z | 32.771429 | 219 | 0.581081 | [
[
[
"# Bokeh Quad Glyph",
"_____no_output_____"
]
],
[
[
"from bokeh.plotting import figure, output_file, show\nfrom bokeh.models import Range1d\nfrom bokeh.io import export_png\n\nfill_colors = ['#80b1d3','#8dd3c7','#ffffb3','#bebada']\nline_colors = ['#fb8072','#80b1d3','#b3de69','#bc80bd']\noutput_file(\"../../figures/quad.html\")\n\np = figure(plot_width=400, plot_height=400)\np.quad(left=0,right=0.5,bottom=0,top=0.5, fill_alpha=1,fill_color=fill_colors[0],\n line_alpha=1, line_color=line_colors[0], line_dash='solid', line_width=5)\np.quad(left=0.8,right=1.5,bottom=0.8,top=1.2, fill_alpha=1,fill_color=fill_colors[1],\n line_alpha=1, line_color=line_colors[1], line_dash='solid', line_width=5)\np.quad(left=0.75,right=1.5,bottom=0,top=0.5, fill_alpha=1,fill_color=fill_colors[2],\n line_alpha=1, line_color=line_colors[2], line_dash='solid', line_width=5)\np.quad(left=0,right=0.5,bottom=0.75,top=1.5, fill_alpha=1,fill_color=fill_colors[3],\n line_alpha=1, line_color=line_colors[3], line_dash='solid', line_width=5)\np.x_range = Range1d(-0.25,1.5, bounds=(-1,2))\np.y_range = Range1d(-0.25,1.5, bounds=(-1,2))\nshow(p)\nexport_png(p, filename=\"../../figures/quad.png\");",
"WARNING:bokeh.io:The webdriver raised a TimeoutException while waiting for a 'bokeh:idle' event to signify that the layout has rendered. Something may have gone wrong.\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
eca42e2c076f1e589cdd19481a64adfc472eb1b4 | 230,784 | ipynb | Jupyter Notebook | 08_Complex_Fourier_Transform/.ipynb_checkpoints/Complex IDFT-checkpoint.ipynb | mriosrivas/DSP_Student_2021 | 7d978d5a538e2eb198dfbe073b4d8dcbf1aa756f | [
"MIT"
] | 2 | 2022-01-25T04:58:58.000Z | 2022-03-24T23:00:13.000Z | 08_Complex_Fourier_Transform/.ipynb_checkpoints/Complex IDFT-checkpoint.ipynb | mriosrivas/DSP_Student_2021 | 7d978d5a538e2eb198dfbe073b4d8dcbf1aa756f | [
"MIT"
] | 1 | 2021-11-25T00:39:40.000Z | 2021-11-25T00:39:40.000Z | 08_Complex_Fourier_Transform/.ipynb_checkpoints/Complex IDFT-checkpoint.ipynb | mriosrivas/DSP_Student_2021 | 7d978d5a538e2eb198dfbe073b4d8dcbf1aa756f | [
"MIT"
] | null | null | null | 1,451.471698 | 102,604 | 0.961141 | [
[
[
"# Complex IDFT\n\nThe Complex Inverse Discrete Fourier Transform is defined as:\n\n$$x[n] = \\sum\\limits^{N-1}_{k=0}{X[k]e^{j\\frac{2\\pi k n}{N}}} $$\n\nWhere $x[n]$ has $N$ points ($n = 0, 1, \\ldots, N-1$).",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt ",
"_____no_output_____"
],
[
"plt.rcParams[\"figure.figsize\"] = (25,10)\nX = np.loadtxt(fname = \"dft.dat\", dtype=complex).flatten()\nplt.plot(np.absolute(X));",
"_____no_output_____"
],
[
"N = len(X)\nx = np.zeros(N, dtype=complex)\nfor n in range(N):\n for k in range(N):\n x[n] = x[n] + X[k]*np.exp(1j*2*np.pi*k*n/N)\n",
"_____no_output_____"
],
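[
"# Optional added sketch (not part of the original notebook): the same synthesis\n# sum written as one matrix-vector product instead of two nested loops.\nk = np.arange(N)\nn = np.arange(N).reshape(-1, 1)\nx_vectorized = np.exp(1j * 2 * np.pi * k * n / N) @ X\n# x_vectorized should match the loop result x up to floating-point error",
"_____no_output_____"
],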
[
"plt.rcParams[\"figure.figsize\"] = (15,5)\n\nsamples = np.arange(len(x))\nnormalized_frequency = samples/len(x)\n\nplt.subplot(1, 1, 1)\nplt.plot(normalized_frequency, x, '.-')\nplt.title('Synthesis', size = 25)\nplt.xlabel('sample')\n\n\nplt.show()",
"_____no_output_____"
],
[
"# save the synthesized signal x (not the input spectrum X)\nnp.savetxt('idft.dat', x)",
"_____no_output_____"
],
[
"ground_truth = np.loadtxt(fname = \"signal.dat\").flatten()\nplt.plot(ground_truth);\nplt.plot(x);",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
eca42f8acab9758aacf4d7a7580a8518cc05b713 | 15,669 | ipynb | Jupyter Notebook | docs/source/recipes/custom_parser.ipynb | vinayya/fiftyone | cadb54ba38e0db59abb6f9fb7ee630a41a517bef | [
"Apache-2.0"
] | 1 | 2020-08-26T20:41:10.000Z | 2020-08-26T20:41:10.000Z | docs/source/recipes/custom_parser.ipynb | vinayya/fiftyone | cadb54ba38e0db59abb6f9fb7ee630a41a517bef | [
"Apache-2.0"
] | null | null | null | docs/source/recipes/custom_parser.ipynb | vinayya/fiftyone | cadb54ba38e0db59abb6f9fb7ee630a41a517bef | [
"Apache-2.0"
] | null | null | null | 37.218527 | 880 | 0.59289 | [
[
[
"# Writing Custom Sample Parsers\n\nThis recipe demonstrates how to write a [custom SampleParser](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/samples.html#writing-a-custom-sampleparser) and use it to add samples in your custom format to a FiftyOne Dataset.",
"_____no_output_____"
],
[
"## Requirements\n\nIn this receipe we'll use the [TorchVision Datasets](https://pytorch.org/docs/stable/torchvision/datasets.html) library to download the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) to use as sample data to feed our custom parser.\n\nYou can install the necessary packages, if necessary, as follows:",
"_____no_output_____"
]
],
[
[
"# Modify as necessary (e.g., GPU install). See https://pytorch.org for options\n!pip install torch\n!pip install torchvision",
"_____no_output_____"
]
],
[
[
"## Writing a SampleParser\n\nFiftyOne provides a [SampleParser](https://voxel51.com/docs/fiftyone/api/fiftyone.utils.data.html#fiftyone.utils.data.parsers.SampleParser) interface that defines how it parses provided samples when methods such as [Dataset.add_labeled_images()](https://voxel51.com/docs/fiftyone/api/fiftyone.core.html#fiftyone.core.dataset.Dataset.add_labeled_images) and [Dataset.ingest_labeled_images()](https://voxel51.com/docs/fiftyone/api/fiftyone.core.html#fiftyone.core.dataset.Dataset.ingest_labeled_images) are used.\n\n`SampleParser` itself is an abstract interface; the concrete interface that you should implement is determined by the type of samples that you are importing. See [writing a custom SampleParser](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/samples.html#writing-a-custom-sampleparser) for full details.\n\nIn this recipe, we'll write a custom [LabeledImageSampleParser](https://voxel51.com/docs/fiftyone/api/fiftyone.utils.data.html#fiftyone.utils.data.parsers.LabeledImageSampleParser) that can parse labeled images from a [PyTorch Dataset](https://pytorch.org/docs/stable/data.html).",
"_____no_output_____"
],
[
"Here's the complete definition of the `SampleParser`:",
"_____no_output_____"
]
],
[
[
"import fiftyone as fo\nimport fiftyone.utils.data as foud\n\n\nclass PyTorchClassificationDatasetSampleParser(foud.LabeledImageSampleParser):\n \"\"\"Parser for image classification samples loaded from a PyTorch dataset.\n\n This parser can parse samples from a ``torch.utils.data.DataLoader`` that\n emits ``(img_tensor, target)`` tuples, where::\n\n - `img_tensor`: is a PyTorch Tensor containing the image\n - `target`: the integer index of the target class\n\n Args:\n classes: the list of class label strings\n \"\"\"\n\n def __init__(self, classes):\n self.classes = classes\n\n @property\n def has_image_path(self):\n \"\"\"Whether this parser produces paths to images on disk for samples\n that it parses.\n \"\"\"\n return False\n\n @property\n def has_image_metadata(self):\n \"\"\"Whether this parser produces\n :class:`fiftyone.core.metadata.ImageMetadata` instances for samples\n that it parses.\n \"\"\"\n return False\n\n @property\n def label_cls(self):\n \"\"\"The :class:`fiftyone.core.labels.Label` class returned by this\n parser.\n \"\"\"\n return fo.Classification\n\n def get_image(self):\n \"\"\"Returns the image from the current sample.\n\n Returns:\n a numpy image\n \"\"\"\n img_tensor = self.current_sample[0]\n return img_tensor.cpu().numpy()\n\n def get_label(self):\n \"\"\"Returns the label for the current sample.\n\n Returns:\n a :class:`fiftyone.core.labels.Label` instance\n \"\"\"\n target = self.current_sample[1]\n return self.classes[int(target)]",
"_____no_output_____"
]
],
[
[
"Note that `PyTorchClassificationDatasetSampleParser` specifies `has_image_path == False` and `has_image_metadata == False`, because the PyTorch dataset directly provides the in-memory image, not its path on disk.",
"_____no_output_____"
],
[
"## Ingesting samples into a dataset",
"_____no_output_____"
],
[
"In order to use `PyTorchClassificationDatasetSampleParser`, we need a PyTorch Dataset from which to feed it samples.\n\nLet's use the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) from the [TorchVision Datasets](https://pytorch.org/docs/stable/torchvision/datasets.html) library:",
"_____no_output_____"
]
],
[
[
"import torch\nimport torchvision\n\n\n# Downloads the test split of the CIFAR-10 dataset and prepares it for loading\n# in a DataLoader\ndataset = torchvision.datasets.CIFAR10(\n \"/tmp/fiftyone/custom-parser/pytorch\",\n train=False,\n download=True,\n transform=torchvision.transforms.ToTensor(),\n)\nclasses = dataset.classes\ndata_loader = torch.utils.data.DataLoader(dataset, batch_size=1)",
"Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to /tmp/fiftyone/custom-parser/pytorch/cifar-10-python.tar.gz\n"
]
],
[
[
"Now we can load the samples into the dataset. Since our custom sample parser declares `has_image_path == False`, we must use the [Dataset.ingest_labeled_images()](https://voxel51.com/docs/fiftyone/api/fiftyone.core.html#fiftyone.core.dataset.Dataset.ingest_labeled_images) method to load the samples into a FiftyOne dataset, which will write the individual images to disk as they are ingested so that FiftyOne can access them.",
"_____no_output_____"
]
],
[
[
"dataset = fo.Dataset(\"cifar10-samples\")\n\nsample_parser = PyTorchClassificationDatasetSampleParser(classes)\n\n# The directory to use to store the individual images on disk\ndataset_dir = \"/tmp/fiftyone/custom-parser/fiftyone\"\n\n# Ingest the samples from the data loader\ndataset.ingest_labeled_images(data_loader, sample_parser, dataset_dir=dataset_dir)\n\nprint(\"Loaded %d samples\" % len(dataset))",
" 100% |███| 10000/10000 [6.7s elapsed, 0s remaining, 1.5K samples/s] \nLoaded 10000 samples\n"
]
],
[
[
"Let's inspect the contents of the dataset to verify that the samples were loaded as expected:",
"_____no_output_____"
]
],
[
[
"# Print summary information about the dataset\nprint(dataset)",
"Name: cifar10-samples\nPersistent: False\nNum samples: 10000\nTags: []\nSample fields:\n filepath: fiftyone.core.fields.StringField\n tags: fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)\n metadata: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.Metadata)\n ground_truth: fiftyone.core.fields.StringField\n"
],
[
"# Print a few samples from the dataset\nprint(dataset.head())",
"<Sample: {\n 'dataset_name': 'cifar10-samples',\n 'id': '5f15aeab6d4e59654468a14e',\n 'filepath': '/tmp/fiftyone/custom-parser/fiftyone/000001.jpg',\n 'tags': BaseList([]),\n 'metadata': None,\n 'ground_truth': 'cat',\n}>\n<Sample: {\n 'dataset_name': 'cifar10-samples',\n 'id': '5f15aeab6d4e59654468a14f',\n 'filepath': '/tmp/fiftyone/custom-parser/fiftyone/000002.jpg',\n 'tags': BaseList([]),\n 'metadata': None,\n 'ground_truth': 'ship',\n}>\n<Sample: {\n 'dataset_name': 'cifar10-samples',\n 'id': '5f15aeab6d4e59654468a150',\n 'filepath': '/tmp/fiftyone/custom-parser/fiftyone/000003.jpg',\n 'tags': BaseList([]),\n 'metadata': None,\n 'ground_truth': 'ship',\n}>\n"
]
],
[
[
"We can also verify that the ingested images were written to disk as expected:",
"_____no_output_____"
]
],
[
[
"!ls -lah /tmp/fiftyone/custom-parser/fiftyone | head -n 10",
"total 0\r\ndrwxr-xr-x 10002 voxel51 wheel 313K Jul 20 10:34 .\r\ndrwxr-xr-x 4 voxel51 wheel 128B Jul 20 10:34 ..\r\n-rw-r--r-- 1 voxel51 wheel 0B Jul 20 10:34 000001.jpg\r\n-rw-r--r-- 1 voxel51 wheel 0B Jul 20 10:34 000002.jpg\r\n-rw-r--r-- 1 voxel51 wheel 0B Jul 20 10:34 000003.jpg\r\n-rw-r--r-- 1 voxel51 wheel 0B Jul 20 10:34 000004.jpg\r\n-rw-r--r-- 1 voxel51 wheel 0B Jul 20 10:34 000005.jpg\r\n-rw-r--r-- 1 voxel51 wheel 0B Jul 20 10:34 000006.jpg\r\n-rw-r--r-- 1 voxel51 wheel 0B Jul 20 10:34 000007.jpg\r\n"
]
],
[
[
"## Adding samples to a dataset",
"_____no_output_____"
],
[
"If our `LabeledImageSampleParser` declared `has_image_path == True`, then we could use [Dataset.add_labeled_images()](https://voxel51.com/docs/fiftyone/api/fiftyone.core.html#fiftyone.core.dataset.Dataset.add_labeled_images) to add samples to FiftyOne datasets without creating a copy of the source images on disk.\n\nHowever, our sample parser does not provide image paths, so an informative error message is raised if we try to use it in an unsupported way:",
"_____no_output_____"
]
],
[
[
"dataset = fo.Dataset()\n\nsample_parser = PyTorchClassificationDatasetSampleParser(classes)\n\n# Won't work because our SampleParser does not provide paths to its source images on disk\ndataset.add_labeled_images(data_loader, sample_parser)",
"_____no_output_____"
]
],
[
[
"## Cleanup\n\nYou can cleanup the files generated by this recipe by running:",
"_____no_output_____"
]
],
[
[
"!rm -rf /tmp/fiftyone",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |