repo_name (string, 6-77 chars) | path (string, 8-215 chars) | license (15 classes) | cells (sequence) | types (sequence) |
---|---|---|---|---|
GoogleCloudPlatform/bigquery-notebooks | notebooks/official/notebook_template.ipynb | apache-2.0 | [
"# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"<table class=\"bq-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"#\"><img src=\"./images/bigquery_32px.png\" />View on BigQuery Docs</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/GoogleCloudPlatform/bigquery-notebooks/blob/main/notebooks/official/notebook_template.ipynb\"><img src=\"./images/colab_32px.png\" />Run in Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/GoogleCloudPlatform/bigquery-notebooks/blob/main/notebooks/official/notebook_template.ipynb.ipynb\"><img src=\"./images/github_32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nOverview\n{TODO: Include a paragraph or two explaining what this example demonstrates, who should be interested in it, and what you need to know before you get started.}\nDataset\n{TODO: Include a paragraph with Dataset information and where to obtain it.} \n{TODO: Make sure the dataset is accessible to the public. Googlers: Add your dataset to the public samples bucket within gs://cloud-samples-data/ai-platform-unified, if it doesn't already exist there.}\nObjective\nIn this notebook, you will learn how to {TODO: Complete the sentence explaining briefly what you will learn from the notebook, such as\ntraining, hyperparameter tuning, or serving}:\n* {TODO: Add high level bullets for the steps of what you will perform in the notebook}\n\nCosts\n{TODO: Update the list of billable products that your tutorial uses.}\nThis tutorial uses billable components of Google Cloud:\n\nBigQuery\nCloud Storage\n\n{TODO: Include links to pricing documentation for each product you listed above.}\nLearn about BigQuery\npricing, BigQuery ML pricing, and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or Google Cloud Notebooks, your environment already meets\nall the requirements to run this notebook. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements.\nYou need the following:\n\nThe Google Cloud SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Google Cloud guide to Setting up a Python development\nenvironment and the Jupyter\ninstallation guide provide detailed instructions\nfor meeting these requirements. The following steps provide a condensed set of\ninstructions:\n\n\nInstall and initialize the Cloud SDK.\n\n\nInstall Python 3.\n\n\nInstall\n virtualenv\n and create a virtual environment that uses Python 3. Activate the virtual environment.\n\n\nTo install Jupyter, run pip3 install jupyter on the\ncommand-line in a terminal shell.\n\n\nTo launch Jupyter, run jupyter notebook on the command-line in a terminal shell.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nInstall additional packages\nInstall additional package dependencies not installed in your notebook environment, such as {XGBoost, AdaNet, or TensorFlow Hub TODO: Replace with relevant packages for the tutorial}. Use the latest major GA version of each package.",
"import os\n\n# The Google Cloud Notebook product has specific requirements\nIS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n\n# Google Cloud Notebook requires dependencies to be installed with '--user'\nUSER_FLAG = \"\"\nif IS_GOOGLE_CLOUD_NOTEBOOK:\n USER_FLAG = \"--user\"\n\n! pip3 install {USER_FLAG} --upgrade tensorflow",
"Restart the kernel\nAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages.",
"# Automatically restart kernel after installs\nimport os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)",
"Before you begin\nSelect a GPU runtime\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select \"Runtime --> Change runtime type > GPU\"\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the BigQuery API. {TODO: Update the APIs needed for your tutorial. Edit the API names, and update the link to append the API IDs, separating each one with a comma. For example, container.googleapis.com,cloudbuild.googleapis.com}\n\n\nIf you are running this notebook locally, you will need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.\nSet your project ID\nIf you don't know your project ID, you may be able to get your project ID using gcloud.",
"PROJECT_ID = \"\"\n\n# Get your Google Cloud project ID from gcloud\nif not os.getenv(\"IS_TESTING\"):\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID: \", PROJECT_ID)",
"Otherwise, set your project ID here.",
"if PROJECT_ID == \"\" or PROJECT_ID is None:\n PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}",
"Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.",
"from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")",
"Authenticate your Google Cloud account\nIf you are using Google Cloud Notebooks, your environment is already\nauthenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions\nwhen prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\n\n\nIn the Cloud Console, go to the Create service account key\n page.\n\n\nClick Create service account.\n\n\nIn the Service account name field, enter a name, and\n click Create.\n\n\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex AI\"\ninto the filter box, and select\n Vertex AI Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\n\n\nClick Create. A JSON file that contains your key downloads to your\nlocal environment.\n\n\nEnter the path to your service account key as the\nGOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.",
"import os\nimport sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# The Google Cloud Notebook product has specific requirements\nIS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n\n# If on Google Cloud Notebooks, then don't execute this code\nif not IS_GOOGLE_CLOUD_NOTEBOOK:\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''",
"Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\n{TODO: Adjust wording in the first paragraph to fit your use case - explain how your tutorial uses the Cloud Storage bucket. The example below shows how Vertex AI uses the bucket for training.}\nWhen you submit a training job using the Cloud SDK, you upload a Python package\ncontaining your training code to a Cloud Storage bucket. Vertex AI runs\nthe code from this package. In this tutorial, Vertex AI also saves the\ntrained model that results from your job in the same bucket. Using this model artifact, you can then\ncreate Vertex AI model and endpoint resources in order to serve\nonline predictions.\nSet the name of your Cloud Storage bucket below. It must be unique across all\nCloud Storage buckets.\nYou may also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Make sure to choose a region where Vertex AI services are\navailable. You may\nnot use a Multi-Regional Storage bucket for training with Vertex AI.",
"BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\nREGION = \"[your-region]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP",
"Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.",
"! gsutil mb -l $REGION $BUCKET_NAME",
"Finally, validate access to your Cloud Storage bucket by examining its contents:",
"! gsutil ls -al $BUCKET_NAME",
"Import libraries and define constants\n{TODO: Put all your imports and installs up into a setup section.}",
"import os\nimport sys\n\nimport numpy as np\nimport tensorflow as tf",
"General style examples\nNotebook heading\n\nInclude the collapsed license at the top (this uses Colab's \"Form\" mode to hide the cells).\nOnly include a single H1 title.\nInclude the button-bar immediately under the H1.\nCheck that the Colab and GitHub links at the top are correct.\n\nNotebook sections\n\nUse H2 (##) and H3 (###) titles for notebook section headings.\nUse sentence case to capitalize titles and headings. (\"Train the model\" instead of \"Train the Model\")\nInclude a brief text explanation before any code cells.\nUse short titles/headings: \"Download the data\", \"Build the model\", \"Train the model\".\n\nWriting style\n\nUse present tense. (\"You receive a response\" instead of \"You will receive a response\")\nUse active voice. (\"The service processes the request\" instead of \"The request is processed by the service\")\nUse second person and an imperative style. \nCorrect examples: \"Update the field\", \"You must update the field\"\nIncorrect examples: \"Let's update the field\", \"We'll update the field\", \"The user should update the field\"\n\n\nGooglers: Please follow our branding guidelines.\n\nCode\n\nPut all your installs and imports in a setup section.\nSave the notebook with the Table of Contents open.\nWrite Python 3 compatible code.\nFollow the Google Python Style guide and write readable code.\nKeep cells small (max ~20 lines).\n\nTensorFlow code style\nUse the highest level API that gets the job done (unless the goal is to demonstrate the low level API). For example, when using Tensorflow:\n\n\nUse TF.keras.Sequential > keras functional api > keras model subclassing > ...\n\n\nUse model.fit > model.train_on_batch > manual GradientTapes.\n\n\nUse eager-style code.\n\n\nUse tensorflow_datasets and tf.data where possible.\n\n\nNotebook code style examples\n\n\nNotebooks are for people. Write code optimized for clarity.\n\n\nDemonstrate small parts before combining them into something more complex. Like below:",
"# Build the model\nmodel = tf.keras.Sequential(\n [\n tf.keras.layers.Dense(10, activation=\"relu\", input_shape=(None, 5)),\n tf.keras.layers.Dense(3),\n ]\n)\n\n# Run the model on a single batch of data, and inspect the output.\nresult = model(tf.constant(np.random.randn(10, 5), dtype=tf.float32)).numpy()\n\nprint(\"min:\", result.min())\nprint(\"max:\", result.max())\nprint(\"mean:\", result.mean())\nprint(\"shape:\", result.shape)\n\n# Compile the model for training\nmodel.compile(\n optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.categorical_crossentropy\n)",
"Keep examples quick. Use small datasets, or small slices of datasets. You don't need to train to convergence, train until it's obvious it's making progress.\n\n\nFor a large example, don't try to fit all the code in the notebook. Add python files to tensorflow examples, and in the notebook run: \n! pip3 install git+https://github.com/tensorflow/examples\n\n\nCleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n{TODO: Include commands to delete individual resources below}",
"# Delete endpoint resource\n! gcloud ai endpoints delete $ENDPOINT_NAME --quiet --region $REGION_NAME\n\n# Delete model resource\n! gcloud ai models delete $MODEL_NAME --quiet\n\n# Delete Cloud Storage objects that were created\n! gsutil -m rm -r $JOB_DIR"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
briennakh/BIOF509 | Wk04/Wk04-GUI.ipynb | mit | [
"Graphical User Interfaces\nObject oriented programming and particularly inheritance is commonly used for creating GUIs. There are a large number of different frameworks supporting building GUIs. The following are particularly relevant:\n\nTkInter - This is the official/default GUI framework\nguidata - A GUI framework for dataset display and editing\nVTK - A GUI framework for data visualization\npyqtgraph - A GUI framework for data visualization, easily installed with conda install pyqtgraph\nmatplotlib - As well as creating plots matplotlib can support interaction\n\nTkInter\nTkInter is widely used with plenty of documentation available but may prove somewhat limited for more data intensive applications.\n\nDocumentation from the standard library\nFurther documentation from python.org\nTkDocs\n\nLet's look at a simple example from the documentation",
"import tkinter as tk\n\nclass Application(tk.Frame):\n def __init__(self, master=None):\n tk.Frame.__init__(self, master)\n self.pack()\n self.createWidgets()\n\n def createWidgets(self):\n self.hi_there = tk.Button(self)\n self.hi_there[\"text\"] = \"Hello World\\n(click me)\"\n self.hi_there[\"command\"] = self.say_hi\n self.hi_there.pack(side=\"top\")\n\n self.QUIT = tk.Button(self, text=\"QUIT\", fg=\"red\",\n command=root.destroy)\n self.QUIT.pack(side=\"bottom\")\n\n def say_hi(self):\n print(\"hi there, everyone!\")\n\nroot = tk.Tk()\napp = Application(master=root)\napp.mainloop()",
"Although this works, it is visually unappealing. We can improve on this using styles and themes.",
"import tkinter as tk\nfrom tkinter import ttk\n\n\nclass Application(ttk.Frame):\n def __init__(self, master=None):\n super().__init__(master, padding=\"3 3 12 12\")\n self.grid(column=0, row=0, )\n self.createWidgets()\n self.master.title('Test')\n\n def createWidgets(self):\n self.hi_there = ttk.Button(self)\n self.hi_there[\"text\"] = \"Hello World\\n(click me)\"\n self.hi_there[\"command\"] = self.say_hi\n\n self.QUIT = ttk.Button(self, text=\"QUIT\", style='Alert.TButton', command=root.destroy)\n\n for child in self.winfo_children(): \n child.grid_configure(padx=10, pady=10)\n\n def say_hi(self):\n print(\"hi there, everyone!\")\n\n \n\nroot = tk.Tk()\napp = Application(master=root)\ns = ttk.Style()\ns.configure('TButton', font='helvetica 24')\ns.configure('Alert.TButton', foreground='red')\nroot.mainloop()",
"As our applications get more complicated we must give greater thought to the layout. The following example comes from the TkDocs site.",
"from tkinter import *\nfrom tkinter import ttk\n\ndef calculate(*args):\n try:\n value = float(feet.get())\n meters.set((0.3048 * value * 10000.0 + 0.5)/10000.0)\n except ValueError:\n pass\n \nroot = Tk()\nroot.title(\"Feet to Meters\")\n\nmainframe = ttk.Frame(root, padding=\"3 3 12 12\")\nmainframe.grid(column=0, row=0, sticky=(N, W, E, S))\nmainframe.columnconfigure(0, weight=1)\nmainframe.rowconfigure(0, weight=1)\n\nfeet = StringVar()\nmeters = StringVar()\n\nfeet_entry = ttk.Entry(mainframe, width=7, textvariable=feet)\nfeet_entry.grid(column=2, row=1, sticky=(W, E))\n\nttk.Label(mainframe, textvariable=meters).grid(column=2, row=2, sticky=(W, E))\nttk.Button(mainframe, text=\"Calculate\", command=calculate).grid(column=3, row=3, sticky=W)\n\nttk.Label(mainframe, text=\"feet\").grid(column=3, row=1, sticky=W)\nttk.Label(mainframe, text=\"is equivalent to\").grid(column=1, row=2, sticky=E)\nttk.Label(mainframe, text=\"meters\").grid(column=3, row=2, sticky=W)\n\nfor child in mainframe.winfo_children(): child.grid_configure(padx=5, pady=5)\n\nfeet_entry.focus()\nroot.bind('<Return>', calculate)\n\nroot.mainloop()",
"Matplotlib\nFor simple programs, displaying data and taking basic input, often a command line application will be much faster to implement than a GUI. The times when I have moved away from the command line it has been to interact with image data and plots. Here, matplotlib often works very well. Either it can be embedded in a larger application or it can be used directly.\nThere are a number of examples on the matplotlib site.\nHere is one stripped down example of one recent GUI I have used.",
"\"\"\"\nDo a mouseclick somewhere, move the mouse to some destination, release\nthe button. This class gives click- and release-events and also draws\na line or a box from the click-point to the actual mouseposition\n(within the same axes) until the button is released. Within the\nmethod 'self.ignore()' it is checked wether the button from eventpress\nand eventrelease are the same.\n\n\"\"\"\nfrom matplotlib.widgets import RectangleSelector\nimport matplotlib.pyplot as plt\nimport matplotlib.cbook as cbook\n\n\ndef line_select_callback(eclick, erelease):\n 'eclick and erelease are the press and release events'\n x1, y1 = eclick.xdata, eclick.ydata\n x2, y2 = erelease.xdata, erelease.ydata\n print (\"(%3.2f, %3.2f) --> (%3.2f, %3.2f)\" % (x1, y1, x2, y2))\n print (\" The button you used were: %s %s\" % (eclick.button, erelease.button))\n\n \ndef toggle_selector(event):\n print (' Key pressed.')\n if event.key in ['Q', 'q'] and toggle_selector.RS.active:\n print (' RectangleSelector deactivated.')\n toggle_selector.RS.set_active(False)\n if event.key in ['A', 'a'] and not toggle_selector.RS.active:\n print (' RectangleSelector activated.')\n toggle_selector.RS.set_active(True)\n\n\n \nimage_file = cbook.get_sample_data('grace_hopper.png')\nimage = plt.imread(image_file)\nfig, current_ax = plt.subplots()\nplt.imshow(image)\ntoggle_selector.RS = RectangleSelector(current_ax, \n line_select_callback,\n drawtype='box', useblit=True,\n button=[1,3], # don't use middle button\n minspanx=5, minspany=5,\n spancoords='pixels')\nplt.connect('key_press_event', toggle_selector)\nplt.show()\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
steelcolosus/static.tumbling.neural.network | Static_tumbling_nn.ipynb | mit | [
"Static tumbling neural network\nImports",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\nimport pandas as pd\nimport numpy as np\nimport tensorflow as tf\nimport tflearn\nfrom tflearn.data_utils import to_categorical\nimport matplotlib.pyplot as plt\nfrom itertools import product",
"Load and prepare the data\nImport data from static tumbling csv file",
"static_tumbling = pd.read_csv('static-tumbling.csv')",
"Separate the data into features and targets",
"elements, score = static_tumbling['elements'], static_tumbling['score']",
"Generate global vocabulary",
"#Main vocabulary, based on the data set elements\nmain_vocab = set()\nfor line in elements:\n for element in line.split(\" \"):\n main_vocab.add(element)\n \nmain_vocab = list(main_vocab)\n\n#Expanded vocabulary based on 49 permutations of the posible transitions\n\nvocab = list(main_vocab)\n\nfor roll in product(main_vocab, repeat = 2 ):\n vocab.append(\"{} {}\".format(roll[0],roll[1]))\n \n",
"Create dictionary to map each element to an index",
"word2idx = {word: i for i, word in enumerate(vocab)}\n\nword2idx",
"Text to vector fucntion\nIt will convert the elements to a vector of words",
"def text_to_vector(text):\n word_vector = np.zeros(len(vocab), dtype=np.int_)\n text_vector = text.split(' ')\n \n #basic vocab matching\n for element in text_vector:\n idx = word2idx.get(element, None)\n if idx is None:\n continue\n else:\n word_vector[idx] += 1\n \n #Check for transition order\n for x in range(len(text_vector) -1 ):\n pair = \"{} {}\".format(text_vector[x],text_vector[x+1])\n idx2 = word2idx.get(pair, None)\n if idx2 is None:\n continue\n else:\n word_vector[idx2]+=1\n \n return np.array(word_vector)\n\ntext_to_vector(\"flick flick flick mortal\")",
"Convert all static tumbling passes to vectors",
"word_vectors = np.zeros((len(elements), len(vocab)), dtype=np.int_)\nfor ii, text in enumerate(elements):\n word_vectors[ii] = text_to_vector(text)\n\nword_vectors",
"Train, validation, Tests sets\nNow that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data.",
"Y = (score).astype(np.float_)\nrecords = len(score)\n\nshuffle = np.arange(records)\nnp.random.shuffle(shuffle)\ntest_fraction = 0.9\n\n#Y values are one dimentional array of shape (1, N) in order to get the dot product we need it in the form\n# (N, 1) so that's why i'm using `Y.values[train_split,None]`\ntrain_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]\ntrainX, trainY = word_vectors[train_split,:], Y.values[train_split,None]\ntestX, testY = word_vectors[test_split,:], Y.values[test_split,None]\n\ntrainX",
"Building the network",
"# Network building\ndef build_model():\n # This resets all parameters and variables, leave this here\n tf.reset_default_graph()\n #Input\n net = tflearn.input_data([None, 56])\n #Hidden\n net = tflearn.fully_connected(net, 350, activation='sigmoid')\n net = tflearn.fully_connected(net, 150, activation='sigmoid')\n net = tflearn.fully_connected(net, 25, activation='sigmoid')\n #output layer as a linear activation function\n net = tflearn.fully_connected(net, 1, activation='linear')\n net = tflearn.regression(net, optimizer='sgd', loss='mean_square',metric='R2', learning_rate=0.01)\n model = tflearn.DNN(net)\n return model",
"Initializing the model",
"model = build_model()",
"Training the network",
"# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=2000)",
"I know total loss is still to high, but not that bad for the first round of hyper parameters, still room for total loss improvement \nSaving de model",
"# Load model\n# model.load('Checkpoints/model-with-transitions-with-3-layers.tfl')\n\n# Manually save model\nmodel.save(\"Checkpoints/model-with-transitions-with-3-layers.tfl\")",
"Testing",
"# Helper function that uses our model to get the score for the static tumbling pass\ndef test_score(sentence):\n score = model.predict([text_to_vector(sentence.lower())])\n print('Gym pass: {}'.format(sentence))\n print('Score: {}'.format(score))\n print()\n return score\n\n# Helper function that uses our model to compare static tumbling passes\ndef test_compare(pass1, pass2):\n score1 = test_score(pass1)\n score2 = test_score(pass2)\n if score1 > score2:\n print('Gym pass 1: {}'.format(pass1))\n elif score2 > score1:\n print('Gym pass 2: {}'.format(pass2))\n else:\n print('same difficulty')\n ",
"Now we check the accuracy of the mode, this test checks which static tumbling line is more difficult, the second one is not even in the data we trianed the neural network.\nFirst we compare to static tumblin pass that has the same elements but different transition cost or effort, \nacording to flick mortal and mortal flick it's harder to execute mortal flick",
"element1 = \"flick mortal\"\nelement2 = \"mortal flick\"\n\ntest_compare(element1,element2)",
"Now test the model with data that wasn't in the data set\nin this complex example the second element is a lot harder to execute",
"test_element1 = \"flick flick flick flick flick mortal giro giro giro2\"\ntest_element2 = \"mortal flick giro flick giro mortal giro2 giro2 giro2\"\n\ntest_compare(test_element1,test_element2)",
"Test data validation\nNow the test values we separeted from the begining are going to be compared with the actual values to check model accuracy",
"fig, ax = plt.subplots(figsize=(15,6))\npredictions = model.predict(testX)\nax.plot(predictions,label='Prediction')\nax.plot(testY, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
keras-team/keras-io | examples/generative/ipynb/gan_ada.ipynb | apache-2.0 | [
"Data-efficient GANs with Adaptive Discriminator Augmentation\nAuthor: András Béres<br>\nDate created: 2021/10/28<br>\nLast modified: 2021/10/28<br>\nDescription: Generating images from limited data using the Caltech Birds dataset.\nIntroduction\nGANs\nGenerative Adversarial Networks (GANs) are a popular\nclass of generative deep learning models, commonly used for image generation. They\nconsist of a pair of dueling neural networks, called the discriminator and the generator.\nThe discriminator's task is to distinguish real images from generated (fake) ones, while\nthe generator network tries to fool the discriminator by generating more and more\nrealistic images. If the generator is however too easy or too hard to fool, it might fail\nto provide useful learning signal for the generator, therefore training GANs is usually\nconsidered a difficult task.\nData augmentation for GANS\nData augmentation, a popular technique in deep learning, is the process of randomly\napplying semantics-preserving transformations to the input data to generate multiple\nrealistic versions of it, thereby effectively multiplying the amount of training data\navailable. The simplest example is left-right flipping an image, which preserves its\ncontents while generating a second unique training sample. Data augmentation is commonly\nused in supervised learning to prevent overfitting and enhance generalization.\nThe authors of StyleGAN2-ADA show that discriminator\noverfitting can be an issue in GANs, especially when only low amounts of training data is\navailable. They propose Adaptive Discriminator Augmentation to mitigate this issue.\nApplying data augmentation to GANs however is not straightforward. Since the generator is\nupdated using the discriminator's gradients, if the generated images are augmented, the\naugmentation pipeline has to be differentiable and also has to be GPU-compatible for\ncomputational efficiency. Luckily, the\nKeras image augmentation layers\nfulfill both these requirements, and are therefore very well suited for this task.\nInvertible data augmentation\nA possible difficulty when using data augmentation in generative models is the issue of\n\"leaky augmentations\" (section 2.2), namely when the\nmodel generates images that are already augmented. This would mean that it was not able\nto separate the augmentation from the underlying data distribution, which can be caused\nby using non-invertible data transformations. For example, if either 0, 90, 180 or 270\ndegree rotations are performed with equal probability, the original orientation of the\nimages is impossible to infer, and this information is destroyed.\nA simple trick to make data augmentations invertible is to only apply them with some\nprobability. That way the original version of the images will be more common, and the\ndata distribution can be infered. By properly choosing this probability, one can\neffectively regularize the discriminator without making the augmentations leaky.\nSetup",
"import matplotlib.pyplot as plt\nimport tensorflow as tf\nimport tensorflow_datasets as tfds\n\nfrom tensorflow import keras\nfrom tensorflow.keras import layers",
"Hyperparameterers",
"# data\nnum_epochs = 10 # train for 400 epochs for good results\nimage_size = 64\n# resolution of Kernel Inception Distance measurement, see related section\nkid_image_size = 75\npadding = 0.25\ndataset_name = \"caltech_birds2011\"\n\n# adaptive discriminator augmentation\nmax_translation = 0.125\nmax_rotation = 0.125\nmax_zoom = 0.25\ntarget_accuracy = 0.85\nintegration_steps = 1000\n\n# architecture\nnoise_size = 64\ndepth = 4\nwidth = 128\nleaky_relu_slope = 0.2\ndropout_rate = 0.4\n\n# optimization\nbatch_size = 128\nlearning_rate = 2e-4\nbeta_1 = 0.5 # not using the default value of 0.9 is important\nema = 0.99",
"Data pipeline\nIn this example, we will use the\nCaltech Birds (2011) dataset for\ngenerating images of birds, which is a diverse natural dataset containing less then 6000\nimages for training. When working with such low amounts of data, one has to take extra\ncare to retain as high data quality as possible. In this example, we use the provided\nbounding boxes of the birds to cut them out with square crops while preserving their\naspect ratios when possible.",
"\ndef round_to_int(float_value):\n return tf.cast(tf.math.round(float_value), dtype=tf.int32)\n\n\ndef preprocess_image(data):\n # unnormalize bounding box coordinates\n height = tf.cast(tf.shape(data[\"image\"])[0], dtype=tf.float32)\n width = tf.cast(tf.shape(data[\"image\"])[1], dtype=tf.float32)\n bounding_box = data[\"bbox\"] * tf.stack([height, width, height, width])\n\n # calculate center and length of longer side, add padding\n target_center_y = 0.5 * (bounding_box[0] + bounding_box[2])\n target_center_x = 0.5 * (bounding_box[1] + bounding_box[3])\n target_size = tf.maximum(\n (1.0 + padding) * (bounding_box[2] - bounding_box[0]),\n (1.0 + padding) * (bounding_box[3] - bounding_box[1]),\n )\n\n # modify crop size to fit into image\n target_height = tf.reduce_min(\n [target_size, 2.0 * target_center_y, 2.0 * (height - target_center_y)]\n )\n target_width = tf.reduce_min(\n [target_size, 2.0 * target_center_x, 2.0 * (width - target_center_x)]\n )\n\n # crop image\n image = tf.image.crop_to_bounding_box(\n data[\"image\"],\n offset_height=round_to_int(target_center_y - 0.5 * target_height),\n offset_width=round_to_int(target_center_x - 0.5 * target_width),\n target_height=round_to_int(target_height),\n target_width=round_to_int(target_width),\n )\n\n # resize and clip\n # for image downsampling, area interpolation is the preferred method\n image = tf.image.resize(\n image, size=[image_size, image_size], method=tf.image.ResizeMethod.AREA\n )\n return tf.clip_by_value(image / 255.0, 0.0, 1.0)\n\n\ndef prepare_dataset(split):\n # the validation dataset is shuffled as well, because data order matters\n # for the KID calculation\n return (\n tfds.load(dataset_name, split=split, shuffle_files=True)\n .map(preprocess_image, num_parallel_calls=tf.data.AUTOTUNE)\n .cache()\n .shuffle(10 * batch_size)\n .batch(batch_size, drop_remainder=True)\n .prefetch(buffer_size=tf.data.AUTOTUNE)\n )\n\n\ntrain_dataset = prepare_dataset(\"train\")\nval_dataset = prepare_dataset(\"test\")",
"After preprocessing the training images look like the following:\n\nKernel inception distance\nKernel Inception Distance (KID) was proposed as a\nreplacement for the popular\nFrechet Inception Distance (FID)\nmetric for measuring image generation quality.\nBoth metrics measure the difference in the generated and training distributions in the\nrepresentation space of an InceptionV3\nnetwork pretrained on\nImageNet.\nAccording to the paper, KID was proposed because FID has no unbiased estimator, its\nexpected value is higher when it is measured on fewer images. KID is more suitable for\nsmall datasets because its expected value does not depend on the number of samples it is\nmeasured on. In my experience it is also computationally lighter, numerically more\nstable, and simpler to implement because it can be estimated in a per-batch manner.\nIn this example, the images are evaluated at the minimal possible resolution of the\nInception network (75x75 instead of 299x299), and the metric is only measured on the\nvalidation set for computational efficiency.",
"\nclass KID(keras.metrics.Metric):\n def __init__(self, name=\"kid\", **kwargs):\n super().__init__(name=name, **kwargs)\n\n # KID is estimated per batch and is averaged across batches\n self.kid_tracker = keras.metrics.Mean()\n\n # a pretrained InceptionV3 is used without its classification layer\n # transform the pixel values to the 0-255 range, then use the same\n # preprocessing as during pretraining\n self.encoder = keras.Sequential(\n [\n layers.InputLayer(input_shape=(image_size, image_size, 3)),\n layers.Rescaling(255.0),\n layers.Resizing(height=kid_image_size, width=kid_image_size),\n layers.Lambda(keras.applications.inception_v3.preprocess_input),\n keras.applications.InceptionV3(\n include_top=False,\n input_shape=(kid_image_size, kid_image_size, 3),\n weights=\"imagenet\",\n ),\n layers.GlobalAveragePooling2D(),\n ],\n name=\"inception_encoder\",\n )\n\n def polynomial_kernel(self, features_1, features_2):\n feature_dimensions = tf.cast(tf.shape(features_1)[1], dtype=tf.float32)\n return (features_1 @ tf.transpose(features_2) / feature_dimensions + 1.0) ** 3.0\n\n def update_state(self, real_images, generated_images, sample_weight=None):\n real_features = self.encoder(real_images, training=False)\n generated_features = self.encoder(generated_images, training=False)\n\n # compute polynomial kernels using the two sets of features\n kernel_real = self.polynomial_kernel(real_features, real_features)\n kernel_generated = self.polynomial_kernel(\n generated_features, generated_features\n )\n kernel_cross = self.polynomial_kernel(real_features, generated_features)\n\n # estimate the squared maximum mean discrepancy using the average kernel values\n batch_size = tf.shape(real_features)[0]\n batch_size_f = tf.cast(batch_size, dtype=tf.float32)\n mean_kernel_real = tf.reduce_sum(kernel_real * (1.0 - tf.eye(batch_size))) / (\n batch_size_f * (batch_size_f - 1.0)\n )\n mean_kernel_generated = tf.reduce_sum(\n kernel_generated * (1.0 - tf.eye(batch_size))\n ) / (batch_size_f * (batch_size_f - 1.0))\n mean_kernel_cross = tf.reduce_mean(kernel_cross)\n kid = mean_kernel_real + mean_kernel_generated - 2.0 * mean_kernel_cross\n\n # update the average KID estimate\n self.kid_tracker.update_state(kid)\n\n def result(self):\n return self.kid_tracker.result()\n\n def reset_state(self):\n self.kid_tracker.reset_state()\n",
"Adaptive discriminator augmentation\nThe authors of StyleGAN2-ADA propose to change the\naugmentation probability adaptively during training. Though it is explained differently\nin the paper, they use integral control on the augmentation\nprobability to keep the discriminator's accuracy on real images close to a target value.\nNote, that their controlled variable is actually the average sign of the discriminator\nlogits (r_t in the paper), which corresponds to 2 * accuracy - 1.\nThis method requires two hyperparameters:\n\ntarget_accuracy: the target value for the discriminator's accuracy on real images. I\nrecommend selecting its value from the 80-90% range.\nintegration_steps:\nthe number of update steps required for an accuracy error of 100% to transform into an\naugmentation probability increase of 100%. To give an intuition, this defines how slowly\nthe augmentation probability is changed. I recommend setting this to a relatively high\nvalue (1000 in this case) so that the augmentation strength is only adjusted slowly.\n\nThe main motivation for this procedure is that the optimal value of the target accuracy\nis similar across different dataset sizes (see figure 4 and 5 in the paper),\nso it does not have to be retuned, because the\nprocess automatically applies stronger data augmentation when it is needed.",
"# \"hard sigmoid\", useful for binary accuracy calculation from logits\ndef step(values):\n # negative values -> 0.0, positive values -> 1.0\n return 0.5 * (1.0 + tf.sign(values))\n\n\n# augments images with a probability that is dynamically updated during training\nclass AdaptiveAugmenter(keras.Model):\n def __init__(self):\n super().__init__()\n\n # stores the current probability of an image being augmented\n self.probability = tf.Variable(0.0)\n\n # the corresponding augmentation names from the paper are shown above each layer\n # the authors show (see figure 4), that the blitting and geometric augmentations\n # are the most helpful in the low-data regime\n self.augmenter = keras.Sequential(\n [\n layers.InputLayer(input_shape=(image_size, image_size, 3)),\n # blitting/x-flip:\n layers.RandomFlip(\"horizontal\"),\n # blitting/integer translation:\n layers.RandomTranslation(\n height_factor=max_translation,\n width_factor=max_translation,\n interpolation=\"nearest\",\n ),\n # geometric/rotation:\n layers.RandomRotation(factor=max_rotation),\n # geometric/isotropic and anisotropic scaling:\n layers.RandomZoom(\n height_factor=(-max_zoom, 0.0), width_factor=(-max_zoom, 0.0)\n ),\n ],\n name=\"adaptive_augmenter\",\n )\n\n def call(self, images, training):\n if training:\n augmented_images = self.augmenter(images, training)\n\n # during training either the original or the augmented images are selected\n # based on self.probability\n augmentation_values = tf.random.uniform(\n shape=(batch_size, 1, 1, 1), minval=0.0, maxval=1.0\n )\n augmentation_bools = tf.math.less(augmentation_values, self.probability)\n\n images = tf.where(augmentation_bools, augmented_images, images)\n return images\n\n def update(self, real_logits):\n current_accuracy = tf.reduce_mean(step(real_logits))\n\n # the augmentation probability is updated based on the dicriminator's\n # accuracy on real images\n accuracy_error = current_accuracy - target_accuracy\n self.probability.assign(\n tf.clip_by_value(\n self.probability + accuracy_error / integration_steps, 0.0, 1.0\n )\n )\n",
"Network architecture\nHere we specify the architecture of the two networks:\n\ngenerator: maps a random vector to an image, which should be as realistic as possible\ndiscriminator: maps an image to a scalar score, which should be high for real and low\nfor generated images\n\nGANs tend to be sensitive to the network architecture, I implemented a DCGAN architecture\nin this example, because it is relatively stable during training while being simple to\nimplement. We use a constant number of filters throughout the network, use a sigmoid\ninstead of tanh in the last layer of the generator, and use default initialization\ninstead of random normal as further simplifications.\nAs a good practice, we disable the learnable scale parameter in the batch normalization\nlayers, because on one hand the following relu + convolutional layers make it redundant\n(as noted in the\ndocumentation).\nBut also because it should be disabled based on theory when using spectral normalization\n(section 4.1), which is not used here, but is common\nin GANs. We also disable the bias in the fully connected and convolutional layers, because\nthe following batch normalization makes it redundant.",
"# DCGAN generator\ndef get_generator():\n noise_input = keras.Input(shape=(noise_size,))\n x = layers.Dense(4 * 4 * width, use_bias=False)(noise_input)\n x = layers.BatchNormalization(scale=False)(x)\n x = layers.ReLU()(x)\n x = layers.Reshape(target_shape=(4, 4, width))(x)\n for _ in range(depth - 1):\n x = layers.Conv2DTranspose(\n width, kernel_size=4, strides=2, padding=\"same\", use_bias=False,\n )(x)\n x = layers.BatchNormalization(scale=False)(x)\n x = layers.ReLU()(x)\n image_output = layers.Conv2DTranspose(\n 3, kernel_size=4, strides=2, padding=\"same\", activation=\"sigmoid\",\n )(x)\n\n return keras.Model(noise_input, image_output, name=\"generator\")\n\n\n# DCGAN discriminator\ndef get_discriminator():\n image_input = keras.Input(shape=(image_size, image_size, 3))\n x = image_input\n for _ in range(depth):\n x = layers.Conv2D(\n width, kernel_size=4, strides=2, padding=\"same\", use_bias=False,\n )(x)\n x = layers.BatchNormalization(scale=False)(x)\n x = layers.LeakyReLU(alpha=leaky_relu_slope)(x)\n x = layers.Flatten()(x)\n x = layers.Dropout(dropout_rate)(x)\n output_score = layers.Dense(1)(x)\n\n return keras.Model(image_input, output_score, name=\"discriminator\")\n",
"GAN model",
"\nclass GAN_ADA(keras.Model):\n def __init__(self):\n super().__init__()\n\n self.augmenter = AdaptiveAugmenter()\n self.generator = get_generator()\n self.ema_generator = keras.models.clone_model(self.generator)\n self.discriminator = get_discriminator()\n\n self.generator.summary()\n self.discriminator.summary()\n\n def compile(self, generator_optimizer, discriminator_optimizer, **kwargs):\n super().compile(**kwargs)\n\n # separate optimizers for the two networks\n self.generator_optimizer = generator_optimizer\n self.discriminator_optimizer = discriminator_optimizer\n\n self.generator_loss_tracker = keras.metrics.Mean(name=\"g_loss\")\n self.discriminator_loss_tracker = keras.metrics.Mean(name=\"d_loss\")\n self.real_accuracy = keras.metrics.BinaryAccuracy(name=\"real_acc\")\n self.generated_accuracy = keras.metrics.BinaryAccuracy(name=\"gen_acc\")\n self.augmentation_probability_tracker = keras.metrics.Mean(name=\"aug_p\")\n self.kid = KID()\n\n @property\n def metrics(self):\n return [\n self.generator_loss_tracker,\n self.discriminator_loss_tracker,\n self.real_accuracy,\n self.generated_accuracy,\n self.augmentation_probability_tracker,\n self.kid,\n ]\n\n def generate(self, batch_size, training):\n latent_samples = tf.random.normal(shape=(batch_size, noise_size))\n # use ema_generator during inference\n if training:\n generated_images = self.generator(latent_samples, training)\n else:\n generated_images = self.ema_generator(latent_samples, training)\n return generated_images\n\n def adversarial_loss(self, real_logits, generated_logits):\n # this is usually called the non-saturating GAN loss\n\n real_labels = tf.ones(shape=(batch_size, 1))\n generated_labels = tf.zeros(shape=(batch_size, 1))\n\n # the generator tries to produce images that the discriminator considers as real\n generator_loss = keras.losses.binary_crossentropy(\n real_labels, generated_logits, from_logits=True\n )\n # the discriminator tries to determine if images are real or generated\n discriminator_loss = keras.losses.binary_crossentropy(\n tf.concat([real_labels, generated_labels], axis=0),\n tf.concat([real_logits, generated_logits], axis=0),\n from_logits=True,\n )\n\n return tf.reduce_mean(generator_loss), tf.reduce_mean(discriminator_loss)\n\n def train_step(self, real_images):\n real_images = self.augmenter(real_images, training=True)\n\n # use persistent gradient tape because gradients will be calculated twice\n with tf.GradientTape(persistent=True) as tape:\n generated_images = self.generate(batch_size, training=True)\n # gradient is calculated through the image augmentation\n generated_images = self.augmenter(generated_images, training=True)\n\n # separate forward passes for the real and generated images, meaning\n # that batch normalization is applied separately\n real_logits = self.discriminator(real_images, training=True)\n generated_logits = self.discriminator(generated_images, training=True)\n\n generator_loss, discriminator_loss = self.adversarial_loss(\n real_logits, generated_logits\n )\n\n # calculate gradients and update weights\n generator_gradients = tape.gradient(\n generator_loss, self.generator.trainable_weights\n )\n discriminator_gradients = tape.gradient(\n discriminator_loss, self.discriminator.trainable_weights\n )\n self.generator_optimizer.apply_gradients(\n zip(generator_gradients, self.generator.trainable_weights)\n )\n self.discriminator_optimizer.apply_gradients(\n zip(discriminator_gradients, self.discriminator.trainable_weights)\n )\n\n # update the augmentation probability 
based on the discriminator's performance\n self.augmenter.update(real_logits)\n\n self.generator_loss_tracker.update_state(generator_loss)\n self.discriminator_loss_tracker.update_state(discriminator_loss)\n self.real_accuracy.update_state(1.0, step(real_logits))\n self.generated_accuracy.update_state(0.0, step(generated_logits))\n self.augmentation_probability_tracker.update_state(self.augmenter.probability)\n\n # track the exponential moving average of the generator's weights to decrease\n # variance in the generation quality\n for weight, ema_weight in zip(\n self.generator.weights, self.ema_generator.weights\n ):\n ema_weight.assign(ema * ema_weight + (1 - ema) * weight)\n\n # KID is not measured during the training phase for computational efficiency\n return {m.name: m.result() for m in self.metrics[:-1]}\n\n def test_step(self, real_images):\n generated_images = self.generate(batch_size, training=False)\n\n self.kid.update_state(real_images, generated_images)\n\n # only KID is measured during the evaluation phase for computational efficiency\n return {self.kid.name: self.kid.result()}\n\n def plot_images(self, epoch=None, logs=None, num_rows=3, num_cols=6, interval=5):\n # plot random generated images for visual evaluation of generation quality\n if epoch is None or (epoch + 1) % interval == 0:\n num_images = num_rows * num_cols\n generated_images = self.generate(num_images, training=False)\n\n plt.figure(figsize=(num_cols * 2.0, num_rows * 2.0))\n for row in range(num_rows):\n for col in range(num_cols):\n index = row * num_cols + col\n plt.subplot(num_rows, num_cols, index + 1)\n plt.imshow(generated_images[index])\n plt.axis(\"off\")\n plt.tight_layout()\n plt.show()\n plt.close()\n",
"Training\nOne can should see from the metrics during training, that if the real accuracy\n(discriminator's accuracy on real images) is below the target accuracy, the augmentation\nprobability is increased, and vice versa. In my experience, during a healthy GAN\ntraining, the discriminator accuracy should stay in the 80-95% range. Below that, the\ndiscriminator is too weak, above that it is too strong.\nNote that we track the exponential moving average of the generator's weights, and use that\nfor image generation and KID evaluation.",
"# create and compile the model\nmodel = GAN_ADA()\nmodel.compile(\n generator_optimizer=keras.optimizers.Adam(learning_rate, beta_1),\n discriminator_optimizer=keras.optimizers.Adam(learning_rate, beta_1),\n)\n\n# save the best model based on the validation KID metric\ncheckpoint_path = \"gan_model\"\ncheckpoint_callback = tf.keras.callbacks.ModelCheckpoint(\n filepath=checkpoint_path,\n save_weights_only=True,\n monitor=\"val_kid\",\n mode=\"min\",\n save_best_only=True,\n)\n\n# run training and plot generated images periodically\nmodel.fit(\n train_dataset,\n epochs=num_epochs,\n validation_data=val_dataset,\n callbacks=[\n keras.callbacks.LambdaCallback(on_epoch_end=model.plot_images),\n checkpoint_callback,\n ],\n)",
"Inference",
"# load the best model and generate images\nmodel.load_weights(checkpoint_path)\nmodel.plot_images()",
"Results\nBy running the training for 400 epochs (which takes 2-3 hours in a Colab notebook), one\ncan get high quality image generations using this code example.\nThe evolution of a random batch of images over a 400 epoch training (ema=0.999 for\nanimation smoothness):\n\nLatent-space interpolation between a batch of selected images:\n\nI also recommend trying out training on other datasets, such as\nCelebA for example. In my\nexperience good results can be achieved without changing any hyperparameters (though\ndiscriminator augmentation might not be necessary).\nGAN tips and tricks\nMy goal with this example was to find a good tradeoff between ease of implementation and\ngeneration quality for GANs. During preparation I have run numerous ablations using\nthis repository.\nIn this section I list the lessons learned and my recommendations in my subjective order\nof importance.\nI recommend checking out the DCGAN paper, this\nNeurIPS talk, and this\nlarge scale GAN study for others' takes on this subject.\nArchitectural tips\n\nresolution: Training GANs at higher resolutions tends to get more difficult, I\nrecommend experimenting at 32x32 or 64x64 resolutions initially.\ninitialization: If you see strong colorful patterns early on in the training, the\ninitalization might be the issue. Set the kernel_initializer parameters of layers to\nrandom normal, and\ndecrease the standard deviation (recommended value: 0.02, following DCGAN) until the\nissue disappears.\nupsampling: There are two main methods for upsampling in the generator.\nTransposed convolution\nis faster, but can lead to\ncheckerboard artifacts, which can be reduced by using\na kernel size that is divisible with the stride (recommended kernel size is 4 for a stride of 2).\nUpsampling +\nstandard convolution can have slightly\nlower quality, but checkerboard artifacts are not an issue. I recommend using nearest-neighbor\ninterpolation over bilinear for it.\nbatch normalization in discriminator: Sometimes has a high impact, I recommend\ntrying out both ways.\nspectral normalization:\nA popular technique for training GANs, can help with stability. I recommend\ndisabling batch normalization's learnable scale parameters along with it.\nresidual connections:\nWhile residual discriminators behave similarly, residual generators are more difficult to\ntrain in my experience. They are however necessary for training large and deep\narchitectures. I recommend starting with non-resiudal architectures.\ndropout: Using dropout before the last layer of the discriminator improves\ngeneration quality in my experience. Recommended dropout rate is below 0.5.\nleaky ReLU: Use leaky\nReLU activations in the discriminator to make its gradients less sparse. Recommended\nslope/alpha is 0.2 following DCGAN.\n\nAlgorithmic tips\n\nloss functions: Numerous losses have been proposed over the years for training\nGANs, promising improved performance and stability. I have implemented 5 of them in\nthis repository, and my experience is in\nline with this GAN study: no loss seems to\nconsistently outperform the default non-saturating GAN loss. I recommend using that as a\ndefault.\nAdam's beta_1 parameter: The beta_1 parameter in Adam can be interpreted as the\nmomentum of mean gradient estimation. Using 0.5 or even 0.0 instead of the default 0.9\nvalue was proposed in DCGAN and is important. 
This example would not work using its\ndefault value.\nseparate batch normalization for generated and real images: The forward pass of the\ndiscriminator should be separate for the generated and real images. Doing otherwise can\nlead to artifacts (45 degree stripes in my case) and decreased performance.\nexponential moving average of generator's weights: This helps to reduce the\nvariance of the KID measurement, and helps in averaging out the rapid color palette\nchanges during training.\ndifferent learning rate for generator and discriminator:\nIf one has the resources, it can help\nto tune the learning rates of the two networks separately. A similar idea is to update\neither network's (usually the discriminator's) weights multiple times for each of the\nother network's updates. I recommend using the same learning rate of 2e-4 (Adam),\nfollowing DCGAN for both networks, and only updating both of them once as a default.\nlabel noise: One-sided label smoothing (using\nless than 1.0 for real labels), or adding noise to the labels can regularize the\ndiscriminator not to get overconfident, however in my case they did not improve\nperformance.\nadaptive data augmentation: Since it adds another dynamic component to the training\nprocess, disable it as a default, and only enable it when the other components already\nwork well.\n\nRelated works\nOther GAN-related Keras code examples:\n\nDCGAN + CelebA\nWGAN + FashionMNIST\nWGAN + Molecules\nConditionalGAN + MNIST\nCycleGAN + Horse2Zebra\nStyleGAN\n\nModern GAN architecture-lines:\n\nSAGAN, BigGAN\nProgressiveGAN,\nStyleGAN,\nStyleGAN2,\nStyleGAN2-ADA,\nAliasFreeGAN\n\nConcurrent papers on discriminator data augmentation:\n1, 2, 3\nRecent literature overview on GANs: talk"
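Two of the architectural tips above (small-stddev random normal initialization, and a transposed-convolution kernel size divisible by the stride) are easy to miss without code. The following is a standalone sketch of those two tips only; it is not part of this example's own architecture, which keeps the default initializers:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Random normal initialization with stddev 0.02 (following DCGAN), and a
# kernel size (4) divisible by the stride (2) to reduce checkerboard artifacts.
init = keras.initializers.RandomNormal(stddev=0.02)

noise_input = keras.Input(shape=(64,))
x = layers.Dense(4 * 4 * 128, use_bias=False, kernel_initializer=init)(noise_input)
x = layers.Reshape((4, 4, 128))(x)
x = layers.Conv2DTranspose(
    128, kernel_size=4, strides=2, padding="same",
    use_bias=False, kernel_initializer=init,
)(x)
demo_generator = keras.Model(noise_input, x, name="init_demo_generator")
```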
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
peteWT/fcat_biomass | wood_fates.ipynb | mit | [
"import utils as ut\nfrom pint import UnitRegistry\nimport pandas as pd\nimport seaborn as sns\nfrom tabulate import tabulate\nfrom numpy import average as avg\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom functools import partial\n\n%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nsqdb = ut.sqlitedb('fcat_biomass')\n\nwood_dens = ut.gData('138FWlGeW57MKdcz2UkWxtWV4o50SZO8sduB1R6JOFp8', 1297253755)\n\nsathre4 = ut.gData('13UQtRfNBSJ81PXxbYSnB2LrjHePNcvhJhrsxRBjHpoY',546564075, hrow =3)\nsathre4.to_sql('so4',sqdb['cx'], if_exists = 'replace')",
"Wood products DF with and without logging slash utilization\nUsing studies from Sathre and O'Connor in the US.",
"HWu = pd.read_sql('''SELECT *\n FROM so4\n WHERE harvestslash = \"X\"\n AND processingresidues = \"X\" \n AND \"post-usewoodproduct\" = \"X\"\n AND stumps is null''', sqdb['cx'],index_col = 'index')\nHWo = pd.read_sql('''SELECT *\n FROM so4\n WHERE harvestslash is null\n AND processingresidues = \"X\" \n AND stumps is null\n AND \"post-usewoodproduct\" = \"X\"''', sqdb['cx'], index_col = 'index')\n\n\n#HWo\nprint tabulate(HWo[['reference','df']], headers = ['index','reference','displacement factor'],tablefmt=\"pipe\")\n\n#HWu\nprint tabulate(HWu[['reference','df']], headers = ['index','reference','displacement factor'],tablefmt=\"pipe\")\n\nconstants = {'me' : {'value':0.5,\n 'desc': 'Mill Efficiency'},\n 'DFu' : {'value': np.average(HWu.df),\n 'desc': 'Displacement factor with logging residual utilization',\n 'source': '''\\cite{Sathre2010}'''},\n 'DFo' : {'value': np.average(HWo.df),\n 'desc': 'Displacement factor without logging residual utilization'},\n 'wDens' : {'value': sum(wood_dens.pct/100 * wood_dens.density_lbscuft),\n 'units' : 'lbs/cuft',\n 'desc': 'average harvested wood density weighted by species harvested',\n 'source': '\\cite{Mciver2012}'}\n }\n\nconstants['wDens']['value']",
"Timber Products Output\nThe TPO estimates logging redisues produced from commercial timber harvesting operations. The follwoing is in million cubic feet (MCF)",
"tpoData = ut.gData('1GDdquzrCoq2cxVN2fbCpP4gwi2yrMnONNrWbfhZKZu4', 872275354, hrow=1)\ntpoData.to_sql('tpo', sqdb['cx'], if_exists = 'replace')\nprint tabulate(tpoData, headers = ['Ownership','Roundwood Products','Logging Residues', 'Year'],tablefmt=\"pipe\")",
"Board of Equalization Data\nThe board of equalization",
"pd.read_csv('boe_hh.csv').to_sql('boe',sqdb['cx'], if_exists = 'replace')\n\npd.read_sql('select * from boe', sqdb['cx'])",
"McIver and Morgan annual in cubic\nFigure 2 from morgan and mciver presents total roundwood harvest from 1947 through 2012 in MMBF. To convert MMBF to MCF we use a sawlog conversion of 5.44. This is an approximation as the actual sawlog conversion varies with the log size on average over time has changed.",
"mm_histHarvest = ut.gData('13UQtRfNBSJ81PXxbYSnB2LrjHePNcvhJhrsxRBjHpoY', 2081880100).fillna(value=0)\n\nmm_histHarvest.to_sql('mm_hist', sqdb['cx'], if_exists = 'replace')\n\nmm_histHarvest",
"Bioenergy consumption\nTo apply the apropriate DF for harvested wood we need to know what fraction of the logging residues were utilized as bioenergy feedstock. McIver and Morgan (Table 6) reports bioenergy consumption from 2000 forward. For years previous, we use the average bioenergy consumption from 2000 -- 2012.",
"bioEnergy = ut.gData('138FWlGeW57MKdcz2UkWxtWV4o50SZO8sduB1R6JOFp8', 529610043)\nbioEnergy.set_index('producttype').transpose().to_sql('mciver_bio', sqdb['cx'], if_exists = 'replace')\nbio_pct = pd.read_sql('select \"index\" as year,\"Bioenergy\"/100 as biopct from mciver_bio where \"Bioenergy\" is not null', sqdb['cx'])\nbio_dict = bio_pct.set_index('year').to_dict('index')\nprint tabulate(bio_pct, headers = ['year', 'bioenergy % of harvest'],tablefmt=\"pipe\")\n\ndef bioPct(year):\n# if year < 1980:\n# return 0\n if year in bio_dict.keys():\n return bio_dict[year]['biopct']\n else:\n return np.average(bio_pct.biopct)",
"Logging residuals\nThe BOE data does not specifically estimate logging residuals, it simply reports harvested roundwood. To accurately ascribe fate to roundwood harvested, an estimation of logging residuals must be made\nCalculating emissions reductions\nThe following functions calculate the displaced emissions resulting from wood harvested with and without logging residue utilization. They return estimates in metric tons of CO2 equivalents.",
"def WPu (rw_harvest, lr, year, mill_efficiency = constants['me']['value'], wdens = constants['wDens']['value'], df = constants['DFu']['value']):\n '''\n Calculates the emissions reduction resulting from harvested wood with with utilization of loggin residuals for bioenergy\n '''\n # establish the aproporiate bioenergy consumption, if no data on bioenergy consumption exists, use average from 2000-2012\n if year in bio_dict.keys():\n bioe_pct = bio_dict[year]['biopct']\n else:\n bioe_pct = np.average(bio_pct.biopct)\n #Calculate total volume used in bioenergy\n bioevol = bioe_pct * rw_harvest\n #Establish utilization ratio for bioenergy\n lrUsed = bioevol/lr\n #Calcuate roundwood harvest volume fromwhich loggin residues were utilized\n HWu = lrUsed * rw_harvest\n #Calculate volume of final wood product produced using mill efficiency. S&O use volume of wood product not sawlogs for DF\n WPu = HWu * mill_efficiency * wdens * 1000000 / 2204.62 * 0.5 * df\n #Per comment from Roger Sathre,needs to be reduced by 50% before applying the DF as the DF is meant for tC not tWood..\n #This is in MT\n \n return WPu\n \n\ndef WPo (rw_harvest, lr, year, mill_efficiency = constants['me']['value'], wdens = constants['wDens']['value'], df = constants['DFo']['value']):\n '''\n Calculates the emissions reduction resulting from harvested wood without utilization of \n logging residuals for bioenergy\n '''\n # establish the aproporiate bioenergy consumption lever for a given year, if no data on bioenergy consumption exists, use average from 2000-2012\n if year in bio_dict.keys():\n bioe_pct = bio_dict[year]['biopct']\n else:\n bioe_pct = np.average(bio_pct.biopct)\n #Calculate total volume used in bioenergy\n bioevol = bioe_pct * rw_harvest\n #Establish utilization ratio for bioenergy\n lrUsed = bioevol/lr\n #Calcuate roundwood harvest volume fromwhich loggin residues were utilized\n HWo = (1-lrUsed) * rw_harvest\n #Calculate volume of final wood product produced using mill efficiency. S&O use volume of wood product not sawlogs for DF\n WPo = HWo * mill_efficiency * wdens * 1000000 / 2204.62 * 0.5 * df\n #This is in MT\n \n return WPo",
"Emissions reduction from harvested wood with LR utilized\nEmissions reductions resulting from harvested roundwood with logging residue utilized in bioenergy",
"erWPu = []\nfor row in tpoData.index:\n rw,lr,yr = tpoData.iloc[row][['roundwoodproducts','loggingresidues', 'year']].tolist()\n erWPu.append(WPu(rw,lr,yr))\ntpoData['erWPu'] = erWPu\n\nerWPo = []\nfor row in tpoData.index:\n rw,lr,yr = tpoData.iloc[row][['roundwoodproducts','loggingresidues', 'year']].tolist()\n erWPo.append(WPo(rw,lr,yr))\ntpoData['erWPo'] = erWPo",
"Emissions reduction from harvested wood without LR utilization\nEmissions reductions resulting from harvested roundwood without logging residue utilized in bioenergy. Though wood with LR utilization rate has a higher displacement factor, the majority of loggin residues wer not utilized.",
"tpoData['erTotal'] = tpoData.erWPo+tpoData.erWPu\ntpoData.to_sql('tpo_emreduc', sqdb['cx'], if_exists='replace')\ntpoData['bioe_pct'] = tpoData.year.apply(bioPct)\ntpoData['bioe_t'] = tpoData.bioe_pct * tpoData.loggingresidues * 1e6* constants['wDens']['value']/2204.62",
"Using M&M Historical data",
"erWPo = []\nfor row in mm_histHarvest.index:\n r = mm_histHarvest.iloc[row]\n yr = r['year'] ## year\n rw = (r.state+r.private+r.tribal+r.blm+r.nat_forest)/5.44 \n qry = 'select avg(loggingresidues/roundwoodproducts) lr from tpo where year = {}'.format(yr)\n if yr in tpoData.year.tolist():\n lr = pd.read_sql(qry, sqdb['cx'])*rw\n else:\n lr = pd.read_sql('select avg(loggingresidues/roundwoodproducts) lr from tpo', sqdb['cx'])*rw\n erWPo.append(WPo(rw,lr,yr).lr[0])\nmm_histHarvest['erWPo'] = erWPo\n\nerWPu = []\nlrVect = []\ntHarv = []\nfor row in mm_histHarvest.index:\n r = mm_histHarvest.iloc[row]\n yr = r['year'] ## year\n rw = (r.state+r.private+r.tribal+r.blm+r.nat_forest)/5.44 \n qry = 'select avg(loggingresidues/roundwoodproducts) lr from tpo where year = {}'.format(yr)\n if yr in tpoData.year.tolist():\n lr = pd.read_sql(qry, sqdb['cx'])*rw\n else:\n lr = pd.read_sql('select avg(loggingresidues/roundwoodproducts) lr from tpo', sqdb['cx'])*rw\n lrVect.append(lr.lr[0])\n tHarv.append(rw)\n erWPu.append(WPu(rw,lr,yr).lr[0])\nmm_histHarvest['erWPu'] = erWPu\nmm_histHarvest['loggingresidues'] = lrVect\nmm_histHarvest['totalharvest'] = tHarv",
"Total emissions reduction from harvested wood products\nSum of emissions reductions from harvested wood with and without LR utilization",
"mm_histHarvest['erTotal'] = mm_histHarvest.erWPo+mm_histHarvest.erWPu\nmm_histHarvest.to_sql('mm_emreduc', sqdb['cx'], if_exists='replace')\nmm_histHarvest['bioe_pct'] = mm_histHarvest.year.apply(bioPct)\nmm_histHarvest['bioe_t'] = mm_histHarvest.bioe_pct * mm_histHarvest.loggingresidues * 1e6* constants['wDens']['value']/2204.62\nmm_histHarvest.to_sql()\n\nmm_histHarvest\n\nsns.set_style(\"whitegrid\")\nfig2, ax2 = plt.subplots(figsize=(12, 10))\nax2 = sns.barplot(x ='year', y='erTotal', data=mm_histHarvest.sort_values('year'))\nax2.set_ylabel('Emissions reduction (MT CO2e)')\nax2.set_title('Emissions reductions resulting \\nfrom roundwood harvest in CA')\nax2.set_xticklabels(ax2.get_xticklabels(),rotation=90)\n\n[fig2.savefig('graphics/ann_hh_em_reduc.{}'.format(i)) for i in ['pdf','png']]\n\nsns.set_style(\"whitegrid\")\nfig, ax = plt.subplots()\nax = sns.barplot(x ='year', y='erTotal', hue=\"ownership\", data=tpoData.sort_values('year'))\nax.set_ylabel('Emissions reduction (MT CO2e)')\nax.set_title('Emissions reductions resulting from roundwood harvest in CA')\n\n[fig.savefig('graphics/harv_em_reductions.{}'.format(i)) for i in ['pdf','png']]",
"Total emissions reductions from roundwood harvesting in CA, 2012",
"pd.read_sql('select sum(\"erTotal\") from tpo_emreduc where year = \"2012\"', sqdb['cx'])",
"Emissions from un-utilized logging residuals\nFrom logging residuals not used in bioenergy, emmisions are produced from combustion of the residual material or from decomposition of the material over time. To calculate the ratio of burned to decompsed logging residues I begin with the CARB estimate of PM2.5 produced from forest management:",
"tName = 'cpe_allyears'\nsqdb['crs'].executescript('drop table if exists {0};'.format(tName))\nfor y in [2000, 2005, 2010, 2012, 2015]:\n url = 'http://www.arb.ca.gov/app/emsinv/2013/emsbyeic.csv?F_YR={0}&F_DIV=0&F_SEASON=A&SP=2013&SPN=2013_Almanac&F_AREA=CA'\n df = pd.read_csv(url.format(y))\n df.to_sql(tName, sqdb['cx'], if_exists = 'append')\n\npmAnn = pd.read_sql('''\n select year,\n eicsoun,\n \"PM2_5\"*365 an_pm25_av\n from cpe_allyears\n where eicsoun = 'FOREST MANAGEMENT';\n ''', sqdb['cx'])\npmAnn",
"Estimate biomass, CO2, CH4 and BC from PM2.5\nTo estimate total biomass from PM2.5 I assume 90% consumption of biomass in piles and use the relationship of pile tonnage to PM emissions calculated using the Piled Fuels Biomass and Emissions Calculator provided by the Washington State Department of Natural resources. This calculator is based on the Consume fire behavior model published by the US Forest Service.",
"pfbec = pd.read_csv('fera_pile_cemissions.csv', header=1)\nward = ut.gData('13UQtRfNBSJ81PXxbYSnB2LrjHePNcvhJhrsxRBjHpoY', 475419971)\ndef sp2bio(pm, species = 'PM2.5 (tons)'):\n return pm * (pfbec[species]/pfbec['Pile Biomass (tons)'])\n\ndef bioPm(pm):\n return pm * (pfbec['Pile Biomass (tons)']/pfbec['PM2.5 (tons)'])\nco2t = lambda x: sp2bio(x,'CO2 (tons)')\nch4t = lambda x: sp2bio(x,'CH4 (tons)')\n\npmAnn['biomass_t']=pmAnn.an_pm25_av.apply(bioPm)\npmAnn['co2_t'] = pmAnn.biomass_t.apply(co2t)\npmAnn['ch4_t'] = pmAnn.biomass_t.apply(ch4t)\npmAnn['ch4_co2e'] = pmAnn.ch4_t * 56\npmAnn['bc_co2e']= pmAnn.an_pm25_av.apply(ut.pm2bcgwpPiles)\npmAnn['t_co2e']= pmAnn.co2_t + pmAnn.ch4_co2e + pmAnn.bc_co2e\n\nprint tabulate(pmAnn[['YEAR','EICSOUN','co2_t','ch4_co2e','bc_co2e','t_co2e']], headers = ['Year','Emissions source','CO2 (t)', 'CH4 (tCO2e)', 'BC (tCO2e)', 'Pile Burn Total (tCO2e)'],tablefmt=\"pipe\")",
"Estimating GHG emissions from decomposition of unitilized logging slash\nTo provide a full picture of the emissions from residual material produced from commercial timber harvesting in California, decomposition of unutilized logging residuals left on-site that are not burned must be accounted for. To establish the fraction of logging residue that is left to decompose, residues burned and used in bioenergy are subtracted from the total reported by the TPO:\n\nTo calculate the GHG emissions from decomposition of piles we use the following equation:",
"annLrAvg = pd.read_sql('''with ann as (select sum(loggingresidues) lr\n from tpo\n group by year)\n select avg(lr) foo\n from ann;''', sqdb['cx'])['foo'][0]\npctLR_bio = (np.average(pmAnn.biomass_t)/1e6)/annLrAvg\n\nannLrAvg\n\npmAnn\n\nlr_t = 1e6*tpoData.loggingresidues*constants['wDens']['value']/2204.62\ntpoData['unused_lr'] = 1e6*(lr_t-(pctLR_bio*lr_t))\ntpoData['burned_lr'] = 1e6*lr_t*(np.average(pmAnn.biomass_t)/(annLrAvg*1e6))\ntpoData['unburned_lr'] = (lr_t*1e6) - tpoData.bioe_t - tpoData.burned_lr\ntpoData['unburned_lr_co2e'] = tpoData.unburned_lr.apply(ut.co2eDecomp)\ntpoData",
"Biomass residuals from non-commercial management activities\nData from TPO does not account for forest management activities that do not result in commercial products (timber sales, biomass sales). To estimate the amount of residual material produced from non commercial management activities we use data from the US Forest Service (FACTS) and from CalFires timber harvest plan data. \nForest Service ACtivity Tracking System (FACTS)\nData from TPO does not account for forest management activities that do not result in commercial products (timber sales, biomass sales). We use a range of 10-35 BDT/acre to convert acres reported in FACTS to volume.",
"pd.read_excel('lf/FACTS_Tabular_092115.xlsx', sheetname = 'CategoryCrosswalk').to_sql('facts_cat', sqdb['cx'], if_exists = 'replace')\npd.read_csv('pd/facts_notimber.csv').to_sql('facts_notimber', sqdb['cx'], if_exists='replace')",
"Querying FACTS\nThe USFS reports Hazardous Fuels Treatment (HFT) activities as well as Timber Sales (TS) derived from the FACTS database. We use these two datasets to estimate the number of acres treated that did not produce commercial material (sawlogs or biomass) and where burning was not used. The first step is to elimina all treatments in the HFT dataset that included timber sales. We accomplish this by eliminating all rows in the HFT dataset that have identical FACTS_ID fields in the TS dataset. We further filter the HFT dataset by removing any planned but not executed treatements (nbr_units1 >0 below -- nbr_units1 references NBR_UNITS_ACCOMPLISHED in the USFS dataset, see metadata for HFT here), and use text matching in the 'ACTIVITY' and 'METHOD' fields to remove any rows that contain reference to 'burning' or 'fire'. Finally, we remove all rows that that reference 'Biomass' in the method category as it is assumed that this means material was removed for bioenergy.",
"usfs_acres = pd.read_sql('''select\n sum(nbr_units1) acres,\n method,\n strftime('%Y',date_compl) year,\n cat.\"ACTIVITY\" activity,\n cat.\"TENTATIVE_CATEGORY\" r5_cat\n from facts_notimber n \n join facts_cat cat\n on (n.activity = cat.\"ACTIVITY\") \n where date_compl is not null\n and nbr_units1 > 0\n and cat.\"TENTATIVE_CATEGORY\" != 'Burning'\n and cat.\"ACTIVITY\" not like '%ire%'\n and method not like '%Burn%'\n and method != 'Biomass'\n group by cat.\"ACTIVITY\",\n year,\n method,\n cat.\"TENTATIVE_CATEGORY\"\n order by year;''', con = sqdb['cx'])",
"Converting acres to cubic feet\nFACTS reports in acres. To estimate the production of biomass from acres treated we use a range of 10-35 BDT/acre. We assume that actual biomass residuals per acre are normally distributed with a mean of 22.5 and a standard deviation of (35-10)/4 = 6.25",
"def sumBDT(ac, maxbdt = 35, minbdt = 10):\n av = (maxbdt + minbdt)/2\n stdev = (float(maxbdt) - float(minbdt))/4 \n d_frac = (ac-np.floor(ac))*np.random.normal(av, stdev, 1).clip(min=0)[0]\n t_bdt = np.sum(np.random.normal(av,stdev,np.floor(ac)).clip(min=0))\n return d_frac+t_bdt\n\nusfs_acres['bdt'] = usfs_acres['acres'].apply(sumBDT)\nusfs_an_bdt = usfs_acres.groupby(['year']).sum()",
"Weighted average wood density\nAverage wood density weighted by harvested species percent. Derived from McIver and Morgan, Table 4",
"wood_dens = ut.gData('138FWlGeW57MKdcz2UkWxtWV4o50SZO8sduB1R6JOFp8', 1297253755)\nwavg_dens =sum(wood_dens.pct/100 * wood_dens.density_lbscuft)",
"Annual unutilized management residuals\n\n[x] Public lands non-commercial management residuals \n[ ] Private land non-commercial management residuals\n[x] Public lands logging residuals\n[x] Private lands logging residuals",
"cat_codes = {'nf_ncmr': 'Unburned, non-commercial management residuals from National Forest lands',\n 'nf_lr': 'Logging residuals generated from timber sales on National Forest lands',\n 'opriv_lr': 'Logging residuals generated from timber sales on non-industrial private forest lands',\n 'fi_lr': 'Logging residuals generated from timber sales on industrial private lands',\n 'opub_lr': 'Logging residuals generated from timber sales on industrial private lands'}\n\nusfs_an_bdt['cuft']= usfs_an_bdt.bdt *wavg_dens\nresid_stats=pd.DataFrame((usfs_an_bdt.iloc[6:,2]/1000000).describe())\nresid_stats.columns = ['nf_ncmr']\nresid_stats['nf_lr']=tpoData[tpoData.ownership.str.contains('National Forest')]['loggingresidues'].describe()\nresid_stats['opriv_lr']=tpoData[tpoData.ownership.str.contains('Other Private')]['loggingresidues'].describe()\nresid_stats['fi_lr']=tpoData[tpoData.ownership.str.contains('Forest Industry')]['loggingresidues'].describe()\nresid_stats['opub_lr']=tpoData[tpoData.ownership.str.contains('Other Public')]['loggingresidues'].describe()\nresid_stats\n\nprint tabulate(resid_stats, headers = resid_stats.columns.tolist(), tablefmt ='pipe')",
"Estimating combined GHG and SLCP emissions from unutilized residues\nOnly a fraction of the",
"ureg = UnitRegistry()\nureg.define('cubic foot = cubic_centimeter/ 3.53147e-5 = cubic_foot' )\nureg.define('million cubic foot = cubic_foot*1000000 = MMCF' )\nureg.define('board foot sawlog = cubic_foot / 5.44 = BF_saw')\nureg.define('board foot veneer = cubic_foot / 5.0 = BF_vo')\nureg.define('board foot bioenergy = cubic_foot / 1.0 = BF_bio')\nureg.define('bone-dry unit = cubic_foot * 96 = BDU')\n"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fja05680/pinkfish | examples/120.sell-short/strategy.ipynb | mit | [
"sell-short-in-may-and-go-away\nsee: https://en.wikipedia.org/wiki/Sell_in_May\nThe reason for this example is to demonstrate short selling (algo), and short selling using adjust_percent function (algo2). \nalgo - Sell short in May and go away, buy to cover in Nov\nalgo2 - first trading day of the month, adjust position to 50% \n(Select the one you want to call in the Strategy.run() function",
"import datetime\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nimport pinkfish as pf\n\n# Format price data\npd.options.display.float_format = '{:0.2f}'.format\n\n%matplotlib inline\n\n# Set size of inline plots\n'''note: rcParams can't be in same cell as import matplotlib\n or %matplotlib inline\n \n %matplotlib notebook: will lead to interactive plots embedded within\n the notebook, you can zoom and resize the figure\n \n %matplotlib inline: only draw static images in the notebook\n'''\nplt.rcParams[\"figure.figsize\"] = (10, 7)\n\npf.DEBUG = False",
"Some global data",
"#symbol = '^GSPC'\nsymbol = 'SPY'\ncapital = 10000\nstart = datetime.datetime(2015, 10, 30)\n#start = datetime.datetime(*pf.SP500_BEGIN)\nend = datetime.datetime.now()",
"Define Strategy Class",
"class Strategy:\n\n def __init__(self, symbol, capital, start, end):\n\n self.symbol = symbol\n self.capital = capital\n self.start = start\n self.end = end\n \n self.ts = None\n self.tlog = None\n self.dbal = None\n self.stats = None\n\n def _algo(self):\n pf.TradeLog.cash = self.capital\n\n for i, row in enumerate(self.ts.itertuples()):\n\n date = row.Index.to_pydatetime()\n close = row.close; \n end_flag = pf.is_last_row(self.ts, i)\n shares = 0\n\n # Buy to cover (at the open on first trading day in Nov)\n if self.tlog.shares > 0:\n if (row.month == 11 and row.first_dotm) or end_flag:\n shares = self.tlog.buy2cover(date, row.open)\n\n # Sell short (at the open on first trading day in May)\n else:\n if row.month == 5 and row.first_dotm:\n shares = self.tlog.sell_short(date, row.open)\n\n if shares > 0:\n pf.DBG(\"{0} SELL SHORT {1} {2} @ {3:.2f}\".format(\n date, shares, self.symbol, row.open))\n elif shares < 0:\n pf.DBG(\"{0} BUY TO COVER {1} {2} @ {3:.2f}\".format(\n date, -shares, self.symbol, row.open))\n # Record daily balance\n self.dbal.append(date, close)\n\n def _algo2(self):\n pf.TradeLog.cash = self.capital\n\n for i, row in enumerate(self.ts.itertuples()):\n\n date = row.Index.to_pydatetime()\n close = row.close; \n end_flag = pf.is_last_row(self.ts, i)\n shares = 0\n\n # On the first day of the month, adjust short position to 50%\n if (row.first_dotm or end_flag):\n weight = 0 if end_flag else 0.5\n self.tlog.adjust_percent(date, close, weight, pf.Direction.SHORT)\n\n # Record daily balance\n self.dbal.append(date, close)\n\n def run(self):\n self.ts = pf.fetch_timeseries(self.symbol)\n self.ts = pf.select_tradeperiod(self.ts, self.start, self.end,\n use_adj=True)\n # add calendar columns\n self.ts = pf.calendar(self.ts)\n \n self.tlog = pf.TradeLog(self.symbol)\n self.dbal = pf.DailyBal()\n \n self.ts, self.start = pf.finalize_timeseries(self.ts, self.start)\n\n # Pick either algo or algo2\n self._algo()\n #self._algo2()\n \n self._get_logs()\n self._get_stats()\n \n \n def _get_logs(self):\n self.rlog = self.tlog.get_log_raw()\n self.tlog = self.tlog.get_log()\n self.dbal = self.dbal.get_log(self.tlog)\n\n def _get_stats(self):\n self.stats = pf.stats(self.ts, self.tlog, self.dbal, self.capital)",
"Run Strategy",
"s = Strategy(symbol, capital, start, end)\ns.run()\n\ns.rlog.head()\n\ns.tlog.head()\n\ns.dbal.tail()",
"Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats",
"benchmark = pf.Benchmark(symbol, s.capital, s.start, s.end)\nbenchmark.run()",
"Plot Equity Curves: Strategy vs Benchmark",
"pf.plot_equity_curve(s.dbal, benchmark=benchmark.dbal)",
"Plot Trades",
"pf.plot_trades(s.dbal, benchmark=benchmark.dbal)",
"Bar Graph: Strategy vs Benchmark",
"df = pf.plot_bar_graph(s.stats, benchmark.stats)\ndf"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
HumanCompatibleAI/imitation | experiments/mce_irl.ipynb | mit | [
"Demonstration of MCE IRL code & environments\nThis is just tabular environments & vanilla MCE IRL.",
"%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nimport copy\n\nimport numpy as np\nimport seaborn as sns\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport scipy\nimport torch as th\n\nimport imitation.algorithms.tabular_irl as tirl\nimport imitation.envs.examples.model_envs as menv\n\nsns.set(context=\"notebook\")\n\nnp.random.seed(42)",
"IRL on a random MDP\nTesting both linear reward models & MLP reward models.",
"mdp = menv.RandomMDP(\n n_states=16,\n n_actions=3,\n branch_factor=2,\n horizon=10,\n random_obs=True,\n obs_dim=5,\n generator_seed=42,\n)\nV, Q, pi = tirl.mce_partition_fh(mdp)\nDt, D = tirl.mce_occupancy_measures(mdp, pi=pi)\ndemo_counts = D @ mdp.observation_matrix\n(obs_dim,) = demo_counts.shape\n\nrmodel = tirl.LinearRewardModel(obs_dim)\nopt = th.optim.Adam(rmodel.parameters(), lr=0.1)\nD_fake = tirl.mce_irl(mdp, opt, rmodel, D, linf_eps=1e-1)\n\nrmodel = tirl.MLPRewardModel(obs_dim, [32, 32])\nopt = th.optim.Adam(rmodel.parameters(), lr=0.1)\nD_fake = tirl.mce_irl(mdp, opt, rmodel, D, linf_eps=1e-2)",
"Same thing, but on grid world\nThe true reward here is not linear in the reduced feature space (i.e $(x,y)$ coordinates). Finding an appropriate linear reward is impossible (as I will demonstration), but an MLP should Just Work(tm).",
"# Same experiments, but on grid world\nmdp = menv.CliffWorld(width=7, height=4, horizon=8, use_xy_obs=True)\nV, Q, pi = tirl.mce_partition_fh(mdp)\nDt, D = tirl.mce_occupancy_measures(mdp, pi=pi)\ndemo_counts = D @ mdp.observation_matrix\n(obs_dim,) = demo_counts.shape\nrmodel = tirl.LinearRewardModel(obs_dim)\nopt = th.optim.Adam(rmodel.parameters(), lr=1.0)\nD_fake = tirl.mce_irl(mdp, opt, rmodel, D, linf_eps=0.1)\n\nmdp.draw_value_vec(D)\nplt.title(\"Cliff World $p(s)$\")\nplt.xlabel(\"x-coord\")\nplt.ylabel(\"y-coord\")\nplt.show()\n\nmdp.draw_value_vec(D_fake)\nplt.title(\"Occupancy for linear reward function\")\nplt.show()\nplt.subplot(1, 2, 1)\nmdp.draw_value_vec(rmodel(th.as_tensor(mdp.observation_matrix)).detach().numpy())\nplt.title(\"Inferred reward\")\nplt.subplot(1, 2, 2)\nmdp.draw_value_vec(mdp.reward_matrix)\nplt.title(\"True reward\")\nplt.show()\n\nrmodel = tirl.MLPRewardModel(\n obs_dim,\n [\n 1024,\n ],\n activation=th.nn.ReLU,\n)\nopt = th.optim.Adam(rmodel.parameters(), lr=1e-3)\nD_fake_mlp = tirl.mce_irl(mdp, opt, rmodel, D, linf_eps=3e-2, print_interval=250)\nmdp.draw_value_vec(D_fake_mlp)\nplt.title(\"Occupancy for MLP reward function\")\nplt.show()\nplt.subplot(1, 2, 1)\nmdp.draw_value_vec(rmodel(th.as_tensor(mdp.observation_matrix)).detach().numpy())\nplt.title(\"Inferred reward\")\nplt.subplot(1, 2, 2)\nmdp.draw_value_vec(mdp.reward_matrix)\nplt.title(\"True reward\")\nplt.show()",
"Notice that the inferred reward is absolutely nothing like the true reward, but the occupancy measure still (roughly) matches the true occupancy measure."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kratzert/RRMPG | examples/model_api_example.ipynb | mit | [
"Model API Example\nIn this notebook, we'll explore some functionality of the models of this package. We'll work with the coupled CemaneigeGR4j model that is implemented in rrmpg.models module. The data we'll use, comes from the CAMELS [1] data set. For some basins, the data is provided within this Python library and can be easily imported using the CAMELSLoader class implemented in the rrmpg.data module.\nIn summary we'll look at:\n- How you can create a model instance.\n- How we can use the CAMELSLoader.\n- How you can fit the model parameters to observed discharge by:\n - Using one of SciPy's global optimizer\n - Monte-Carlo-Simulation\n- How you can use a fitted model to calculate the simulated discharge.\n[1] Addor, N., A.J. Newman, N. Mizukami, and M.P. Clark, 2017: The CAMELS data set: catchment attributes and meteorology for large-sample studies. version 2.0. Boulder, CO: UCAR/NCAR. doi:10.5065/D6G73C3Q",
"# Imports and Notebook setup\nfrom timeit import timeit\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom rrmpg.models import CemaneigeGR4J\nfrom rrmpg.data import CAMELSLoader\nfrom rrmpg.tools.monte_carlo import monte_carlo\nfrom rrmpg.utils.metrics import calc_nse",
"Create a model\nAs a first step let us have a look how we can create one of the models implemented in rrmpg.models. Basically, for all models we have two different options:\n1. Initialize a model without specific model parameters.\n2. Initialize a model with specific model parameters.\nThe documentation provides a list of all model parameters. Alternatively we can look at help() for the model (e.g. help(CemaneigeGR4J)).\nIf no specific model parameters are provided upon intialization, random parameters will be generated that are in between the default parameter bounds. We can look at these bounds by calling .get_param_bounds() method on the model object and check the current parameter values by calling .get_params() method.\nFor now we don't know any specific parameter values, so we'll create one with random parameters.",
"model = CemaneigeGR4J()\nmodel.get_params()",
"Here we can see the six model parameters of CemaneigeGR4J model and their current values.\nUsing the CAMELSLoader\nTo have data to start with, we can use the CAMELSLoader class to load data of provided basins from the CAMELS dataset. To get a list of all available basins that are provided within this library, we can use the .get_basin_numbers() method. For now we will use the provided basin number 01031500.",
"df = CAMELSLoader().load_basin('01031500')\ndf.head()",
"Next we will split the data into a calibration period, which we will use to find a set of good model parameters, and a validation period, we will use the see how good our model works on unseen data. As in the CAMELS data set publication, we will use the first 15 hydrological years for calibration. The rest of the data will be used for validation.\nBecause the index of the dataframe is in pandas Datetime format, we can easily split the dataframe into two parts",
"# calcute the end date of the calibration period\nend_cal = pd.to_datetime(f\"{df.index[0].year + 15}/09/30\", yearfirst=True)\n\n# validation period starts one day later\nstart_val = end_cal + pd.DateOffset(days=1)\n\n# split the data into two parts\ncal = df[:end_cal].copy()\nval = df[start_val:].copy()",
"Fit the model to observed discharge\nAs already said above, we'll look at two different methods implemented in this library:\n1. Using one of SciPy's global optimizer\n2. Monte-Carlo-Simulation\nUsing one of SciPy's global optimizer\nEach model has a .fit() method. This function uses the global optimizer differential evolution from the scipy package to find the set of model parameters that produce the best simulation, regarding the provided observed discharge array.\nThe inputs for this function can be found in the documentation or the help().",
"help(model.fit)",
"We don't know any values for the initial states of the storages, so we will ignore them for now. For the missing mean temperature, we calculate a proxy from the minimum and maximum daily temperature. The station height can be retrieved from the CAMELSLoader class via the .get_station_height() method.",
"# calculate mean temp for calibration and validation period\ncal['tmean'] = (cal['tmin(C)'] + cal['tmax(C)']) / 2\nval['tmean'] = (val['tmin(C)'] + val['tmax(C)']) / 2\n\n# load the gauge station height\nheight = CAMELSLoader().get_station_height('01031500')",
"Now we are ready to fit the model and retrieve a good set of model parameters from the optimizer. Again, this will be done with the calibration data. Because the model methods also except pandas Series, we can call the function as follows.",
"# We don't have an initial value for the snow storage, so we omit this input\nresult = model.fit(cal['QObs(mm/d)'], cal['prcp(mm/day)'], cal['tmean'], \n cal['tmin(C)'], cal['tmax(C)'], cal['PET'], height)",
"result is an object defined by the scipy library and contains the optimized model parameters, as well as some more information on the optimization process. Let us have a look at this object:",
"result",
"The relevant information here is:\n- fun is the final value of our optimization criterion (the mean-squared-error in this case)\n- message describes the cause of the optimization termination\n- nfev is the number of model simulations\n- sucess is a flag wether or not the optimization was successful\n- x are the optimized model parameters\nNext, let us set the model parameters to the optimized ones found by the search. Therefore we need to create a dictonary containing one key for each model parameter and as the corresponding value the optimized parameter. As mentioned before, the list of model parameter names can be retrieved by the model.get_parameter_names() function. We can then create the needed dictonary by the following lines of code:",
"params = {}\n\nparam_names = model.get_parameter_names()\n\nfor i, param in enumerate(param_names):\n params[param] = result.x[i]\n\n# This line set the model parameters to the ones specified in the dict\nmodel.set_params(params)\n\n# To be sure, let's look at the current model parameters\nmodel.get_params()",
"Also it might not be clear at the first look, this are the same parameters as the ones specified in result.x. In result.x they are ordered according to the ordering of the _param_list specified in each model class, where ass the dictonary output here is alphabetically sorted.\nMonte-Carlo-Simulation\nNow let us have a look how we can use the Monte-Carlo-Simulation implemented in rrmpg.tools.monte_carlo.",
"help(monte_carlo)",
"As specified in the help text, all model inputs needed for a simulation must be provided as keyword arguments. The keywords need to match the names specified in the model.simulate() function. Let us create a new model instance and see how this works for the CemaneigeGR4J model.",
"model2 = CemaneigeGR4J()\n\n# Let use run MC for 1000 runs, which is in the same range as the above optimizer\nresult_mc = monte_carlo(model2, num=10000, qobs=cal['QObs(mm/d)'], \n prec=cal['prcp(mm/day)'], mean_temp=cal['tmean'],\n min_temp=cal['tmin(C)'], max_temp=cal['tmax(C)'],\n etp=cal['PET'], met_station_height=height)\n\n# Get the index of the best fit (smallest mean squared error)\nidx = np.argmin(result_mc['mse'][~np.isnan(result_mc['mse'])])\n\n# Get the optimal parameters and set them as model parameters\noptim_params = result_mc['params'][idx]\n\nparams = {}\n\nfor i, param in enumerate(param_names):\n params[param] = optim_params[i]\n\n# This line set the model parameters to the ones specified in the dict\nmodel2.set_params(params)",
"Calculate simulated discharge\nWe now have two models, optimized by different methods. Let's calculate the simulated streamflow of each model and compare the results! Each model has a .simulate() method, that returns the simulated discharge for the inputs we provide to this function.",
"# simulated discharge of the model optimized by the .fit() function\nval['qsim_fit'] = model.simulate(val['prcp(mm/day)'], val['tmean'], \n val['tmin(C)'], val['tmax(C)'], \n val['PET'], height)\n\n# simulated discharge of the model optimized by monte-carlo-sim\nval['qsim_mc'] = model2.simulate(val['prcp(mm/day)'], val['tmean'], \n val['tmin(C)'], val['tmax(C)'], \n val['PET'], height)\n\n# Calculate and print the Nash-Sutcliff-Efficiency for both simulations\nnse_fit = calc_nse(val['QObs(mm/d)'], val['qsim_fit'])\nnse_mc = calc_nse(val['QObs(mm/d)'], val['qsim_mc'])\n\nprint(\"NSE of the .fit() optimization: {:.4f}\".format(nse_fit))\nprint(\"NSE of the Monte-Carlo-Simulation: {:.4f}\".format(nse_mc))",
"What do this number mean? Let us have a look at some window of the simulated timeseries and compare them to the observed discharge:",
"# Plot last full hydrological year of the simulation\n%matplotlib notebook\nstart_date = pd.to_datetime(\"2013/10/01\", yearfirst=True)\nend_date = pd.to_datetime(\"2014/09/30\", yearfirst=True)\nplt.plot(val.loc[start_date:end_date, 'QObs(mm/d)'], label='Qobs')\nplt.plot(val.loc[start_date:end_date, 'qsim_fit'], label='Qsim .fit()')\nplt.plot(val.loc[start_date:end_date, 'qsim_mc'], label='Qsim mc')\nplt.legend()",
"The result is not perfect, but it is not bad either! And since this package is also about speed, let us also check how long it takes to simulate the discharge for the entire validation period (19 years of data).",
"%%timeit \nmodel.simulate(val['prcp(mm/day)'], val['tmean'], \n val['tmin(C)'], val['tmax(C)'], \n val['PET'], height)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
matmodlab/matmodlab2 | notebooks/PoroplasticFitting.ipynb | bsd-3-clause | [
"Poroplastic Data Fitting",
"import numpy as np\nfrom numpy import *\nfrom bokeh import *\nfrom bokeh.plotting import *\noutput_notebook()\nfrom matmodlab2 import *\nfrom pandas import read_excel\nfrom scipy.optimize import leastsq\ndiff = lambda x: np.ediff1d(x, to_begin=0.)\ntrace = lambda x, s='SIG': x[s+'11'] + x[s+'22'] + x[s+'33']\nRTJ2 = lambda x: sqrt(((x['SIG11']-x['SIG22'])**2 + \n (x['SIG22']-x['SIG33'])**2 + \n (x['SIG33']-x['SIG22'])**2)/6.)",
"Summary\nIn the cells to follow, the following material parameters were found\n$$\\begin{align}\nB_0 &= 14617807286.8\\\nB_1 &= 40384983097.2\\\nB_2 &= 385649437.858\\\nP_0 & = −164761936.257 \\\nP_1 & = 3.20119273834e−10\\\nP_2 & = 7.39166987894e−18\\\nP_3 & = 0.0983914345654\\\nG_1 & = 9647335534.93 \\\nG_2 & = 2.3838775292e−09 \\\nG_3 & = −7.40942609805e−07\\\n\\end{align}$$\nRead in the Data\nRead in the hydrostatic data and compute derived values.",
"df = read_excel('porodata.xlsx', sheetname='hydrostatic')\ndf['EV'] = trace(df, 'STRAIN')\ndf['I1'] = trace(df, 'SIG')\ndf['dEV'] = diff(df['EV'])\ndf['dI1'] = diff(df['I1'])",
"Hydrostatic Response\nElastic Unloading Curve\nPlot the pressure vs. volume strain curve and determine the section in which elastic unloading occurs",
"plot = figure(x_axis_label='Volume Strain', y_axis_label='Pressure')\nplot.circle(-df['EV'], -df['I1']/3.)\nplot.text(-df['EV'], -df['I1']/3.,\n text=range(len(df)),text_color=\"#333333\",\n text_align=\"left\", text_font_size=\"5pt\")\nshow(plot)",
"It appears that the unloading occurs at data point 101 and continues until the end of the data. This curve will be used to fit the bulk modulus parameters. Below, scipy is used to optimize the parameters to the curve.",
"kfun = lambda B0, B1, B2, I1: B0 + B1 * exp(-B2 / abs(I1))\ndef kmm_bulk(x, fac, I1, K):\n B0, B1, B2 = x * fac\n return K - kfun(B0, B1, B2, I1)\n\nimax = 101\ndf1 = df.iloc[imax:].copy()\nK = np.array(df1['dI1'] / 3. / df1['dEV'])\nb0 = np.array((K[-1], K[0] - K[-1], 1e9))\nfac = 1e9\nB, icov = leastsq(kmm_bulk, b0/fac, args=(fac, df1['I1'], K))\nB0, B1, B2 = B * fac\nB0, B1, B2\n\nplot = figure(x_axis_label='Bulk Modulus', y_axis_label='Pressure')\nplot.circle(-df1['I1']/3., K)\nplot.line(-df['I1']/3., kfun(B0, B1, B2, df['I1']), color='red')\nshow(plot)",
"Poro response\nWith the bulk response determined, find the porosity parameters",
"df['EP'] = df['I1'] / 3. / kfun(B0, B1, B2, df['I1']) - df['EV']\np3 = max(df['EP'])\ndf['PORO'] = p3 - df['EP']\nplot = figure(x_axis_label='Plastic Strain', y_axis_label='Pressure')\nplot.circle(df['EP'], -df['I1']/3.)\nshow(plot)\n\nplot = figure(x_axis_label='Pressure', y_axis_label='PORO')\ndf2 = df.iloc[:imax].copy()\nplot.circle(-df2['I1']/3., df2['PORO'])\nshow(plot)\n\ndef pfun(P0, P1, P2, P3, I1):\n xi = -I1 / 3. + P0\n return P3 * exp(-(P1 + P2 * xi) * xi)\n \ndef kmm_poro(x, fac, I1, P):\n p0, p1, p2, p3 = asarray(x) * fac\n return P - pfun(p0, p1, p2, p3, I1)\n\np0 = (1, 1, 1, p3)\nfac = np.array([1e8, 1e-10, 1e-18, 1])\np, icov = leastsq(kmm_poro, p0, args=(fac, df2['I1'], df2['PORO']))\nP0, P1, P2, P3 = p * fac\nP0, P1, P2, P3\n\nplot = figure(x_axis_label='Pressure', y_axis_label='PORO')\nplot.circle(-df2['I1']/3., df2['PORO'], legend='Data')\nplot.line(-df2['I1']/3., pfun(P0, P1, P2, P3, df2['I1']), color='red', legend='Fit')\nshow(plot)",
"Shear Response",
"keys = (2.5, 5.0, 7.5, 10.0, 12.5, 15.0, 22.5, 30.0)\ncolors = ('red', 'blue', 'orange', 'purple', \n 'green', 'black', 'magenta', 'teal', 'cyan')\ndf2 = {}\np = figure(x_axis_label='I1', y_axis_label='Sqrt[J2]')\np1 = figure(x_axis_label='Axial Strain', y_axis_label='Axial Stress')\nfor (i, key) in enumerate(keys):\n key = 'txc p={0:.01f}MPa'.format(key)\n x = read_excel('porodata.xlsx', sheetname=key)\n x['I1'] = trace(x, 'SIG')\n x['RTJ2'] = RTJ2(x)\n df2[key] = x\n p.circle(-df2[key]['I1'], df2[key]['RTJ2'], legend=key[4:], color=colors[i])\n \n # determine where hydrostatic preload ends\n j = nonzero(x['SIG11'] - x['SIG22'])[0]\n E0, S0 = df2[key]['STRAIN11'][j[0]], df2[key]['SIG11'][j[0]]\n p1.circle(-df2[key]['STRAIN11'][j]+E0, -df2[key]['SIG11'][j]+S0,\n legend=key[4:], color=colors[i])\n\np.legend.orientation = 'horizontal'\nshow(p1)\nshow(p)",
"The axial stress versus axial strain plot shows that the response is linear, meaning that the elastic modulus is constant.",
"key = 'txc p=2.5MPa'\nj = nonzero(df2[key]['SIG11'] - df2[key]['SIG22'])[0]\ndf3 = df2[key].iloc[j].copy()\nE0, S0 = df3['STRAIN11'].iloc[0], df3['SIG11'].iloc[0]\nEF, SF = df3['STRAIN11'].iloc[-1], df3['SIG11'].iloc[-1]\nE = (SF - S0) / (EF - E0)\nprint('{0:E}'.format(E))",
"The shear modulus can now be determined",
"G = lambda I1: 3 * kfun(B0, B1, B2, I1) * E / (9 * kfun(B0, B1, B2, I1) - E)\ngfun = lambda g0, g1, g2, rtj2: g0 * (1 - g1 * exp(-g2 * rtj2)) / (1 - g1)\ndef kmm_shear(x, fac, rtj2, G):\n g0, g1, g2 = asarray(x) * fac\n return G - gfun(g0, g1, g2, rtj2)\n\ng = asarray(G(df3['I1']))\ng0 = (g[0], .0001, 0)\nfac = 1.\ng, icov = leastsq(kmm_shear, g0, args=(fac, RTJ2(df3), g))\nG0, G1, G2 = g * fac\nG0, G1, G2\n\np2 = figure(x_axis_label='Sqrt[J2]', y_axis_label='Shear Modulus')\np2.circle(RTJ2(df3), G(df3['I1']))\np2.line(RTJ2(df3), gfun(G0, G1, G2, RTJ2(df3)), color='red')\nshow(p2)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
widdowquinn/notebooks | stan_model_radon.ipynb | mit | [
"Bayesian Multilevel Modelling using PyStan\nThis is a tutorial, following through Chris Fonnesbeck's primer on using PyStan with Bayesian Multilevel Modelling.\n\nMultilevel model: a regression model in which constituent model parameters are given probability models, which implies that they can vary by group. These are generalisations of regression modelling.\nHierarchical model: a multilevel model where parameters are nested within one another.\n\nExample: Radon contamination\nRadon is a radioactive gas that enters homes through contact points with the ground. The EPA conducted a study of radon levels in 80,000 houses. There were two important predictors:\n\nmeasurement in the basement, or ground floor (radon expected to be higher in basements)\nuranium level (correlates positively with radon level)\n\nWe will model radon levels in a single state: Minnesota. The hierarchy in this example is households, which exist within counties.\nComments\nIn the first instance, we have a model where output is measured radon level as a function of the floor of the house at which the radon was measured (basement or ground floor), and the prevailing radon level.\nOur estimate of the parameter of prevailing radon level can be considered a prediction of the prevailing radon level.\nThe prevailing radon level may be taken to be that for the state (counties pooled) or that for the county (unpooled), or as some intermediate representation.\nThe model is multilevel because we are sampling the two parameters of prevailing radon level, and the effect of changing floor, from a probabilistic distribution.\nThe model is hierarchical because households exist within counties (which exist within the state).\nWe already have the model 'outputs': data for household radon level measurements, with their counties; and inputs: the floor level at which the measurements were taken. We are attempting to estimate the parameters for alternative formulations of the model, and to assess which model is the best explanation for the observed data/best predictor for prevailing radon level. With a good model, we could go forward to predict new radon levels, given a county and floor.\nBuilding the model\nWe first import the necessary modules:\n\npylab: MatLab-like Python inline matrix maths and visualisation\nnumpy: Numerical approaches in Python\npandas: R-like dataframes in Python\nseaborn: prettier graphics than the pylab default\npystan: Python implementation of Stan",
"%pylab inline\n\nimport numpy as np\nimport pandas as pd\nimport pystan\nimport seaborn as sns\n\nsns.set_context('notebook')",
"Data import and cleanup\nNext we import the radon data. For cleanup, we strip whitespace from column headers, restrict data to Minnesota (MN) and add a unique numerical identifier for each county.",
"# Import radon data\nsrrs2 = pd.read_csv('data/srrs2.dat')\nsrrs2.columns = srrs2.columns.map(str.strip)\n\n# Make a combined state and county ID, by household\nsrrs_mn = srrs2.assign(fips=srrs2.stfips * 1000 + srrs2.cntyfips)[srrs2.state == 'MN']\n\n# Check data\nsrrs_mn.head()",
"We import uranium data for each county, creating a unique identifier for each county to match that in srrs.",
"# Obtain the uranium level as a county-level predictor\ncty = pd.read_csv('data/cty.dat')\ncty_mn = cty[cty.st == 'MN'].copy() # MN only data\n\n# Make a combined state and county id, by county\ncty_mn['fips'] = 1000 * cty_mn.stfips + cty_mn.ctfips\n\n# Check data\ncty_mn.head()",
"It is convenient to bring all the data into a single dataframe with radon and uranium data byhousehold, so we merge on the basis of the unique county identifier, to assign uranium data across all households in a county.",
"# Combine data into a single dataframe\nsrrs_mn = srrs_mn.merge(cty_mn[['fips', 'Uppm']], on='fips') # Get uranium level by household (on county basis)\nsrrs_mn = srrs_mn.drop_duplicates(subset='idnum') # Lose duplicate houses\nu = np.log(srrs_mn.Uppm) # log-transform uranium level\nn = len(srrs_mn) # number of households\n\n# Check data\nsrrs_mn.head()\n\nsrrs_mn.columns",
"We create a dictionary associating each county with a unique index code, for use in Stan.",
"# Index counties with a lookup dictionary\nsrrs_mn.county = srrs_mn.county.str.strip()\nmn_counties = srrs_mn.county.unique()\ncounties = len(mn_counties)\ncounty_lookup = dict(zip(mn_counties, range(len(mn_counties))))",
"For construction of the Stan model, it is convenient to have the relevant variables as local copies - this aids readability.\n\nindex code for each county\nradon activity\nlog radon activity\nwhich floor measurement was taken",
"# Make local copies of variables\ncounty = srrs_mn['county_code'] = srrs_mn.county.replace(county_lookup).values\nradon = srrs_mn.activity\nsrrs_mn['log_radon'] = log_radon = np.log(radon + 0.1).values\nfloor_measure = srrs_mn.floor.values",
"Modelling distribution of radon in MN\nVisual inspection of the variation in (log) observed radon levels shows a broad range of values. We aim to determine the contributions of the prevailing radon level and the floor at which radon level is measured, to produce this distribution of observed values.",
"srrs_mn.activity.apply(lambda x: np.log(x + 0.1)).hist(bins=25);",
"Conventional approaches\nTwo conventional alternatives to modelling, pooling and not pooling represent two extremes of a tradeoff between variance and bias.\nThe bias-variance tradeoff\nWhere the variable we are trying to predict is $Y$, as a function of covariates $X$, we assume a relationship $Y = f(X) + \\epsilon$ where the error term $\\epsilon$ is distributed normally with mean zero: $\\epsilon \\sim N(0, \\sigma_{\\epsilon})$.\nWe estimate a model $\\hat{f}(X)$ of $f(X)$ using some technique. This gives us squared prediction error: $\\textrm{Err}(x) = E[(Y − \\hat{f}(x))^2]$. That squared error can be decomposed into:\n$$\\textrm{Err}(x)=(E[\\hat{f} (x)] − f(x))^2 + E[(\\hat{f}(x) − E[\\hat{f}(x)])^2] + \\sigma^2_e$$\nwhere\n\n$E[\\hat{f} (x)] − f(x))^2$ is the square of the difference between the model $\\hat{f}(x)$ and the 'true' relationship $f(x)$, i.e. the square of the bias\n$E[(\\hat{f}(x) − E[\\hat{f}(x)])^2]$ is the square of the difference between the mean behaviour of the model and the observed behaviour of this model, i.e. the square of the variance\n$\\sigma^2_e$ is the noise of the 'true' relationship that cannot be captured in any model, i.e. the irreducible error\n\nWith a known true model, and an infinite amount of data, it is in principle possible to reduce both bias and variance to zero. In reality, both sources of error exist, and we choose to minimise bias and/or variance.\nThe trade-off in the radon model\nTaking $y = \\log(\\textrm{radon})$, floor measurements (basement or ground) as $x$, where $i$ indicates the house, and $j[i]$ is the county to which a house 'belongs'. Then $\\alpha$ is the radon level across all counties, and $\\alpha_{j[i]}$ is the radon level in a single county; $\\beta$ is the influence of the choice of floor at which measurement is made; and $\\epsilon$ is some other error (measurement error, temporal variation in a house, or variation among houses).\nWe take two approaches:\n\nComplete pooling - treat all counties the same, and estimate a single radon level: $y_i = \\alpha + \\beta x_i + \\epsilon_i$\nNo pooling - treat each county independently: $y_i = \\alpha_{j[i]} + \\beta x_i + \\epsilon_i$\n\nWhen we do not pool, we will likely obtain quite different parameter estimates $\\alpha_{j[i]}$ for each county - especially when there are few observations in a county. As new data is gathered, these estimates are likely to change radically. This is therefore a model with high variance.\nAlternatively, by pooling all counties, we will obtain a single estimate for $\\alpha$, but this value may deviate quite far from the true situation in some or all counties. This is therefore a model with high bias.\nSo, if we treat all counties as the same, we have a biased estimate, but if we treat them as individuals, we have high variance - the bias-variance tradeoff. It may be the case that neither extreme produces a good model for the real behaviour: models that minimise bias to produce a high variance error are overfit; those that minimise variance to produce a strong bias error are underfit.\nSpecifying the pooled model in Stan\nTo build a model in Stan, we need to define data, parameters, and the model itself. This is done by creating strings in the Stan language, rather than having an API that provides a constructor for the model.\nWe construct the data block to comprise the number of samples (N, int), with vectors of log-radon measurements (y, a vector of length N) and the floor measurement covariates (x, vector, length N).",
"# Construct the data block.\npooled_data = \"\"\"\ndata {\n int<lower=0> N;\n vector[N] x;\n vector[N] y;\n}\n\"\"\"",
"Next we initialise parameters, which here are linear model coefficients (beta, a vector of length 2) that represent both $\\alpha$ and $\\beta$ in the pooled model definition, as beta[1] and beta[2] are assumed to lie on a Normal distribution, and the Normal distribution scale parameter sigma defining errors in the model's prediction of the output (y, defined later), which is constrained to be positive.",
"# Initialise parameters\npooled_parameters = \"\"\"\nparameters {\n vector[2] beta;\n real<lower=0> sigma;\n}\n\"\"\"",
"Finally we specify the model, with log(radon) measurements as a normal sample, having a mean that is a function of the choice of floor at which the measurement was made, $y \\sim N(\\beta[1] + \\beta[2]x, \\sigma_e)$",
"pooled_model = \"\"\"\nmodel {\n y ~ normal(beta[1] + beta[2] * x, sigma);\n}\n\"\"\"",
"Running the pooled model in Stan\nWe need to map Python variables to those in the stan model, and pass the data, parameters and model strings above to stan. We also need to specify how many iterations of sampling we want, and how many parallel chains to sample (here, 1000 iterations of 2 chains).\nThis is where explicitly-named local variables are convenient for definition of Stan models.\nCalling pystan.stan doesn't just define the model, ready to fit - it runs the fitting immediately.",
"pooled_data_dict = {'N': len(log_radon),\n 'x': floor_measure,\n 'y': log_radon}\n\npooled_fit = pystan.stan(model_code=pooled_data + pooled_parameters + pooled_model,\n data=pooled_data_dict,\n iter=1000,\n chains=2)",
"Once the fit has been run, the sample can be extracted for visualisation and summarisation. Specifying permuted=True means that all fitting chains are merged and warmup samples are discarded and that a dictionary is returned, with samples for each parameter:",
"# Collect the sample\npooled_sample = pooled_fit.extract(permuted=True)",
"The output is an OrderedDict with two keys of interest to us: beta and sigma. sigma describes the estimated error term, and beta describes the estimated values of $\\alpha$ and $\\beta$ for each iteration:",
"# Inspect the sample\npooled_sample['beta']",
"While it can be very interesting to see the results for individual iterations (and how they vary), for now we are interested in the mean values of these estimates:",
"# Get mean values for parameters, from the sample\n# b0 = common radon value across counties (alpha)\n# m0 = variation in radon level with change in floor (beta)\nb0, m0 = pooled_sample['beta'].T.mean(1)\n\n# What are the fitted parameters\nprint(\"alpha: {0}, beta: {1}\".format(b0, m0))",
"We can visualise how well this pooled model fits the observed data:",
"# Plot the fitted model (red line) against observed values (blue points)\nplt.scatter(srrs_mn.floor, np.log(srrs_mn.activity + 0.1))\nxvals = np.linspace(-0.1, 1.2)\nplt.plot(xvals, m0 * xvals + b0, 'r--')\nplt.title(\"Fitted model\")\nplt.xlabel(\"Floor\")\nplt.ylabel(\"log(radon)\");",
"The answer is: not terribly badly (the fitted line runs convincingly through the centre of the data, and plausibly describes the trend), but not terribly well, either. The observed points vary widely about the fitted model, implying that the prevailing radon level varies quite widely, and we might expect different gradients if we chose different subsets of the data.\nThe main error in this model fit is due to bias, because the pooling approach is an an inaccurate representation of the underlying radon level, taken across all measurements.\nSpecifying the unpooled model in Stan\nFor the unpooled model, we have the parameter $\\alpha_{j[i]}$, representing a list of (independent) mean values, one for each county. Otherwise the model is the same as for the pooled example, with shared parameters for the effect of which floor is being measured, and the standard deviation of the error.\nWe construct the data, parameters and model blocks in a similar way to before. We define the number of samples (N, int), and two vectors of log-radon measurements (y, length N) and floor measurement covariates (x, length N). The main difference to before is that we define a list of counties (these are the indices 1..85 defined above, rather than county names), one for each sample:",
"unpooled_data = \"\"\"\ndata {\n int<lower=0> N;\n int<lower=1, upper=85> county[N];\n vector[N] x;\n vector[N] y;\n}\n\"\"\"",
"We define three parameters: $\\alpha_{j[i]}$ - one radon level per county (a - as a vector of length 85, one value per county); change in radon level by floor, $\\beta$ (beta, a real value), and the Normal distribution scale parameter sigma, as before:",
"unpooled_parameters = \"\"\"\nparameters {\n vector[85] a;\n real beta;\n real<lower=0, upper=100> sigma;\n}\n\"\"\"",
"We also define transformed parameters, for convenience. This defines a new variable $\\hat{y}$ (y_hat, a vector with one value per sample) which is our estimate/prediction of log(radon) value per household. This could equally well be done in the model block - we don't need to generate a transformed parameter, but for more complex models this is a useful technique to improve readability and maintainability.",
"unpooled_transformed_parameters = \"\"\"\ntransformed parameters {\n vector[N] y_hat;\n \n for (i in 1:N)\n y_hat[i] <- beta * x[i] + a[county[i]];\n}\n\"\"\"",
"Using this transformed parameter, the model form is now $y \\sim N(\\hat{y}, \\sigma_e)$, making explicit that we are fitting parameters that result in the model predicting a household radon measurement, and we are estimating the error of this prediction against the observed values:",
"unpooled_model = \"\"\"\nmodel {\n y ~ normal(y_hat, sigma);\n}\n\"\"\"",
"Running the unpooled model in Stan\nWe again map Python variables to those used in the stan model, then pass the data, parameters (transformed and untransformed) and the model to stan. We again specify 1000 iterations of 2 chains.\nNote that we have to offset our Python indices for counties by 1, as Python counts from zero, but Stan counts from 1.",
"# Map data\nunpooled_data_dict = {'N': len(log_radon),\n 'county': county + 1, # Stan counts start from 1\n 'x': floor_measure,\n 'y': log_radon}\n\n# Fit model\nunpooled_fit = pystan.stan(model_code=unpooled_data + unpooled_parameters +\n unpooled_transformed_parameters + unpooled_model,\n data=unpooled_data_dict,\n iter=1000,\n chains=2)",
"We can extract the sample from the fit for visualisation and summarisation. This time we do not use the permuted=True option. This returns a StanFit4Model object, from which we can extract the fitted estimates for a parameter using indexing, like a dictionary, e.g. unpooled_fit['beta'], and this will return a numpy ndarray of values. For $\\alpha$ (a) we get a 1000x85 array, for $\\beta$ (beta) we get a 1000x1 array. Mean and standard deviation (and other summary statistics) can be calculated from these.\nWhen extracting vectors of $\\alpha_{j[i]}$ (radon levels per county) and the associated standard errors, we use a pd.Series object, for compatibility with pandas. This allows us to specify an index, which is the list of county names in mn_counties.",
"# Extract fit of radon by county\nunpooled_estimates = pd.Series(unpooled_fit['a'].mean(0), index=mn_counties)\nunpooled_se = pd.Series(unpooled_fit['a'].std(0), index=mn_counties)\n\n# Inspect estimates\nunpooled_estimates.head()",
"To inspect the variation in predicted radon levels at county resolution, we can plot the mean of each estimate with its associated standard error. To structure this visually, we'll reorder the counties such that we plot counties from lowest to highest.",
"# Get row order of estimates as an index: low to high radon\norder = unpooled_estimates.sort_values().index\n\n# Plot mean radon estimates with stderr, following low to high radon order\nplt.scatter(range(len(unpooled_estimates)), unpooled_estimates[order])\nfor i, m, se in zip(range(len(unpooled_estimates)),\n unpooled_estimates[order],\n unpooled_se[order]):\n plt.plot([i,i], [m - se, m + se], 'b-')\nplt.xlim(-1, 86)\nplt.ylim(-1, 4)\nplt.xlabel('Ordered county')\nplt.ylabel('Radon estimate');",
"From this visual inspection, we can see that there is one county with a relatively low predicted radon level, and about five with relatively high levels. This reinforces our suggestion that a pooled estimate is likely to exhibit significant bias.\nPlot comparison of pooled and unpooled estimates\nWe can make direct visual comparisons between pooled and unpooled estimates for all counties, but here we do so for a specific subset:",
"# Define subset of counties\nsample_counties = ('LAC QUI PARLE', 'AITKIN', 'KOOCHICHING',\n 'DOUGLAS', 'CLAY', 'STEARNS', 'RAMSEY',\n 'ST LOUIS')\n\n# Make plot\nfig, axes = plt.subplots(2, 4, figsize=(12, 6),\n sharex=True, sharey=True)\naxes = axes.ravel() # turn axes into a flattened array\nm = unpooled_fit['beta'].mean(0)\nfor i, c in enumerate(sample_counties):\n # Get unpooled estimates and set common x values\n b = unpooled_estimates[c]\n xvals = np.linspace(-0.2, 1.2)\n \n # Plot household data\n x = srrs_mn.floor[srrs_mn.county == c]\n y = srrs_mn.log_radon[srrs_mn.county == c]\n axes[i].scatter(x + np.random.randn(len(x)) * 0.01, y, alpha=0.4)\n \n # Plot models\n axes[i].plot(xvals, m * xvals + b) # unpooled\n axes[i].plot(xvals, m0 * xvals + b0, 'r--') # pooled\n \n # Add labels and ticks\n axes[i].set_xticks([0, 1])\n axes[i].set_xticklabels(['basement', 'floor'])\n axes[i].set_ylim(-1, 3)\n axes[i].set_title(c)\n if not i % 2:\n axes[i].set_ylabel('log radon level')",
"By visual inspection, we can see that using unpooled county estimates for prevailing radon level has resulted in models that deviate from the pooled estimates, correcting for its bias. However, we can also see that for counties with few observations, the fitted estimates track the observations very closely, suggesting that there has been overfitting. The attempt to minimise error due to bias has resulted in the introduction of greater error due to variance in the dataset.\nConclusion\nNeither model does perfectly:\n\nFor identification of counties with a predicted prevailing high radon level, pooling is useless (because all counties are modelled with the same level)\nHowever, we ought not to trust any unpooled estimates that were produced using few observations on a county\n\nIdeally, we would have an intermediate form of model that optimally minimises the errors due to both bias and variance.\nPooling and Multilevel/Hierarchical Models\npooled model\nWhen we pool data, we imply that they are sampled from the same model. This ignores all variation (other than sampling variation) among the units being sampled. That is to say, observations $y_1, y_2, \\ldots, y_k$ share common parameter(s) $\\theta$:\n\nunpooled model\nIf we analyse our data with an unpooled model, we separate our data out into groups (which may be as extreme as one group per sample), which implies that the groups are sampled independently from separate models because the differences between sampling units are too great for them to be reasonably combined. That is to say, observations (or grouped observations) $y_1, y_2, \\ldots, y_k$ have independent parameters $\\theta_1, \\theta_2, \\ldots, \\theta_k$.\n\npartial pooling/hierarchical modelling\nIn a hierarchical, or partial pooling model, model parameters are instead viewed as a sample from a population distribution of parameters, so the unpooled model parameters $\\theta_1, \\theta_2, \\ldots, \\theta_k$ can be sampled from a single distribution $N(\\mu, \\sigma^2)$.\n\nOne of the great advantages of Bayesian modelling (as opposed to linear regression modelling) is the relative ease with which one can specify multilevel models and fit them using Hamiltonian Monte Carlo.\nPartial Pooling\nA simple model\nThe simplest possible partial pooling model for the radon dataset is one that estimates radon levels, with no other predictors (i.e. ignoring the effect of floor). This is a compromise between pooled (mean of all counties) and unpooled (county-level means), and approximates a weighted average (by sample size) of unpooled county means, and the pooled mean:\n$$\\hat{\\alpha} \\approx \\frac{(n_j/\\sigma_y^2)\\bar{y}j + (1/\\sigma{\\alpha}^2)\\bar{y}}{(n_j/\\sigma_y^2) + (1/\\sigma_{\\alpha}^2)}$$\n\n$\\hat{\\alpha}$ - partially-pooled estimate of radon level\n$n_j$ - number of samples in county $j$\n$\\bar{y}_j$ - estimated mean for county $j$\n$\\sigma_y^2$ - s.e. of $\\bar{y}_j$, variability of the county mean\n$\\bar{y}$ - pooled mean estimate for $\\alpha$\n$\\sigma_{\\alpha}^2$ - s.e. of $\\bar{y}$\n\nSpecifying the model\nWe can define this in stan, specifying data, parameters, transformed parameters and model blocks. 
The model is built up as follows.\nOur observed log(radon) measurements ($y$) approximate an intermediate transformed parameter $\\hat{y}$, which is normally distributed with variance $\\sigma_y^2$:\n$$y \\sim N(\\hat{y}, \\sigma_y^2)$$\nThe transformed variable $\\hat{y}$ is the value of $\\alpha$ associated with the county $i$ ($i=1,\\ldots,N$) in which each household is found.\n$$\\hat{y} = {\\alpha_1, \\ldots, \\alpha_N}$$\nThe value of $\\alpha$ for each county $i$ is Normally distributed with mean $10\\mu_{\\alpha}$ and variance $\\sigma_{\\alpha}^2$. That is, there is a common mean and variance underlying each of the prevailing radon levels in each county.\n$$\\alpha_i \\sim N(10\\mu_{\\alpha}, \\sigma_{\\alpha}^2), i = 1,\\ldots,N$$\nThe value $\\mu_{\\alpha}$ is Normally distributed around 0, with unit variance:\n$$\\mu_{\\alpha} \\sim N(0, 1)$$\nIn data:\n* N will be the number of samples (int)\n* county will be a list of N values from 1-85, specifying the county index for each measurement\n* y will be a vector of log(radon) measurements, one per household/sample.\nWe define parameters:\n\na (vector, one value per county), representing $\\alpha$, the vector of prevailing radon levels for each county.\nmu_a, a real corresponding to $\\mu_{\\alpha}$, the mean radon level underlying the distribution from which the county levels are drawn.\nsigma_a is $\\sigma_{\\alpha}$, the standard deviation of the radon level distribution underlying the county levels: variability of county means about the average.\nsigma_y is $\\sigma_y$, the standard deviation of the measurement/sampling error: residual error of the observations.",
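"Before writing the Stan code, a quick numeric illustration of the weighted-average approximation above may help. The numbers below are made up for illustration only; they are not taken from the radon data.",
"# Illustrative sketch of the partial-pooling weighted average (made-up numbers, not the radon data)\ndef partial_pool_estimate(n_j, ybar_j, ybar, sigma_y2, sigma_a2):\n    w_county = n_j / sigma_y2   # weight on the county mean\n    w_pooled = 1. / sigma_a2    # weight on the pooled mean\n    return (w_county * ybar_j + w_pooled * ybar) / (w_county + w_pooled)\n\n# a county with many samples stays close to its own mean\nprint(partial_pool_estimate(n_j=100, ybar_j=2.0, ybar=1.3, sigma_y2=0.6, sigma_a2=0.1))\n\n# a county with few samples is shrunk towards the pooled mean\nprint(partial_pool_estimate(n_j=2, ybar_j=2.0, ybar=1.3, sigma_y2=0.6, sigma_a2=0.1))",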
"partial_pooling = \"\"\"\ndata {\n int<lower=0> N;\n int<lower=1,upper=85> county[N];\n vector[N] y;\n}\nparameters {\n vector[85] a;\n real mu_a;\n real<lower=0,upper=100> sigma_a;\n real<lower=0,upper=100> sigma_y;\n}\ntransformed parameters {\n vector[N] y_hat;\n for(i in 1:N)\n y_hat[i] <- a[county[i]];\n}\nmodel {\n mu_a ~ normal(0, 1);\n a ~ normal(10 * mu_a, sigma_a);\n \n y ~ normal(y_hat, sigma_y);\n}\n\"\"\"",
"We map Python variables onto the model data (remembering to offset counts/indices by 1, as Stan counts from 1, not from 0):",
"partial_pool_data = {'N': len(log_radon),\n 'county': county + 1,\n 'y': log_radon}",
"Finally, we fit the model, to estimate $\\mu_{\\alpha}$, and $\\alpha_i, i=1,\\ldots,N$:",
"partial_pool_fit = pystan.stan(model_code=partial_pooling,\n data=partial_pool_data,\n iter=1000, chains=2)",
"We're interested primarily in the county-level estimates of prevailing radon levels, so we obtain the sample estimates for a:",
"sample_trace = partial_pool_fit['a'] \nmeans = sample_trace.mean(axis=0) # county-level estimates\nsd = sample_trace.std(axis=0)\nsamples, counties = sample_trace.shape\nn_county = srrs_mn.groupby('county')['idnum'].count() # number of samples from each county",
"We're going to compare the results from our partially-pooled model to the unpooled model above.",
"# Obtain unpooled estimates\nunpooled = pd.DataFrame({'n': n_county,\n 'm': unpooled_estimates,\n 'sd': unpooled_se})\nunpooled['se'] = unpooled.sd/np.sqrt(unpooled.n)\n\n# Construct axes for results\nfig, axes = plt.subplots(1, 2, figsize=(14,6),\n sharex=True, sharey=True)\njitter = np.random.normal(scale=0.1, size=counties) # avoid overplotting counties\n\n# Plot unpooled estimates\naxes[0].plot(unpooled.n + jitter, unpooled.m, 'b.') # means\nfor j, row in zip(jitter, unpooled.iterrows()):\n name, dat = row\n axes[0].plot([dat.n + j, dat.n + j], [dat.m - dat.se, dat.m + dat.se], 'b-')\n\n# Plot partially-pooled estimates\naxes[1].scatter(n_county.values + jitter, means)\nfor j, n, m, s in zip(jitter, n_county.values, means, sd):\n axes[1].plot([n + j, n + j], [m - s, m + s], 'b-')\n\n# Add line for underlying mean\nfor ax in axes:\n ax.hlines(sample_trace.mean(), 0.9, 100, linestyles='--') # underlying mean from partial model\n\n# Set axis limits/scale (shared x/y - need only to set one axis)\naxes[0].set_xscale('log')\naxes[0].set_xlim(1, 100)\naxes[0].set_ylim(-0.5, 3.5)\n\n# Set axis titles\naxes[0].set_title(\"Unpooled model estimates\")\naxes[1].set_title(\"Partially pooled model estimates\");",
"By inspection, there is quite a difference between unpooled and partially-pooled estimates of prevailing county-level radon level, especially as smaller sample sizes. The unpooled estimates at smaller sample sizes are both more extreme, and more imprecise.\nPartial pooling: varying intercept\nWe can extend this partial pooling to a linear model of the relationship between measured log(radon), the prevailing county radon level, and the floor at which the measurement was made. In the linear model, the measured radon level in a household $y_i$ is a function of the floor at which measurement took place, $x_i$, with parameters $\\alpha_{j[i]}$ (the prevailing radon level in the county) and $\\beta$ (the influence of the floor), and residual error $\\epsilon_i$.\n$$y_i = \\alpha_{j[i]} + \\beta x_i + \\epsilon_i$$\nIn this linear model, the prevailing radon level $\\alpha_j[i]$ is the intercept, with random Normal effect:\n$$\\alpha_{j[i]} \\sim N(\\mu_{\\alpha}, \\sigma_{\\alpha}^2$$\nThe residual error is also sampled from a Normal distribution:\n$$\\epsilon_i \\sim N(0, \\sigma_y^2$$\nThis approach is similar to a least squares regression, but the multilevel modelling approach allows parameter distributions - information to be shared across groups, which can lead to more reasonable estimates of parameters with relatively little data. In this example, using a common distribution for prevailing county-level radon spreads the information about likely radon levels such that our estimates for counties with few observations should be less extreme.\nSpecifying the model\nWe define the model in stan, as usual specifying data, parameters, transformed parameters and model blocks. The model is built up as follows.\nOur observed log(radon) measurements ($y$ approximate an intermediate transformed parameter $\\hat{y}$, which is normally distributed with variance $\\sigma_y^2$. $\\sigma_y$ is sampled from a Uniform distribution.\n$$y \\sim N(\\hat{y}, \\sigma_y^2)$$\n$$\\sigma_{y} \\sim U(0, 100)$$\nThe transformed variable $\\hat{y}$ is a linear function of $x_i$, the floor at which radon is measured. The parameters are the value of $\\alpha$ associated with the county $i$ ($i=1,\\ldots,N$) in which each household is found, and the effect due to which floor is used for measurement.\n$$\\hat{y_i} = {\\alpha_{j[i]} + \\beta x_i}$$\nThe value of $\\alpha$ for each county $i$, is Normally distributed with mean $\\mu_{\\alpha}$ and variance $\\sigma_{\\alpha}^2$. $\\sigma_{\\alpha}$ is sampled from a Uniform distribution, between 0 and 100. $\\mu_{\\alpha}$ is an unconstrained real value. 
There is a common mean and variance underlying each of the prevailing radon levels in each county.\n$$\\alpha_i \\sim N(\\mu_{\\alpha}, \\sigma_{\\alpha}^2)$$\n$$\\sigma_{\\alpha} \\sim U(0, 100)$$\nThe value of $\\beta$ is assumed to be Normally distributed about zero, with unit variance:\n$$\\beta \\sim N(0, 1)$$\nIn data:\n* J is the number of counties (int)\n* N is the number of samples (int)\n* county is a list of N values from 1-85, specifying the county index for each measurement\n* x is a vector indicating the floor at which the radon measurement was taken in each household\n* y is a vector of log(radon) measurements, one per household/sample.\nWe define parameters:\n\na (vector, one value per county), representing $\\alpha$, the vector of prevailing radon levels for each county.\nb (real) representing $\\beta$, the effect of floor choice\nmu_a, a real corresponding to $\\mu_{\\alpha}$, the mean radon level underlying the distribution from which the county levels are drawn.\nsigma_a is $\\sigma_{\\alpha}$, the standard deviation of the radon level distribution underlying the county levels: variability of county means about the average.\nsigma_y is $\\sigma_y$, the standard deviation of the measurement/sampling error: residual error of the observations.",
"varying_intercept = \"\"\"\ndata {\n int<lower=0> J;\n int<lower=0> N;\n int<lower=1,upper=J> county[N];\n vector[N] x;\n vector[N] y;\n}\nparameters {\n vector[J] a;\n real b;\n real mu_a;\n real<lower=0,upper=100> sigma_a;\n real<lower=0,upper=100> sigma_y;\n}\ntransformed parameters {\n vector[N] y_hat;\n for (i in 1:N)\n y_hat[i] <- a[county[i]] + x[i] * b;\n}\nmodel {\n sigma_a ~ uniform(0, 100);\n a ~ normal(mu_a, sigma_a);\n \n b ~ normal(0,1);\n \n sigma_y ~ uniform(0, 100);\n y ~ normal(y_hat, sigma_y);\n}\n\"\"\"",
"As usual, we map Python variables to those in the model, and run the fit:",
"varying_intercept_data = {'N': len(log_radon),\n 'J': len(n_county),\n 'county': county + 1,\n 'x': floor_measure,\n 'y': log_radon}\n\nvarying_intercept_fit = pystan.stan(model_code=varying_intercept,\n data=varying_intercept_data,\n iter=1000, chains=2)",
"We can then collect the county-level estimates of prevailing radon, the intercept of the model, $\\alpha_{j[i]}$, from a (1000 iterations x 85 counties):",
"a_sample = pd.DataFrame(varying_intercept_fit['a'])",
"We can visualise the distribution of these estimates, by county, with a boxplot:",
"plt.figure(figsize=(16, 6))\ng = sns.boxplot(data=a_sample, whis=np.inf, color=\"c\")\ng.set_xticklabels(mn_counties, rotation=90) # label counties\ng;\n\n# 2x2 plot of parameter estimate data\nfig, axes = plt.subplots(2, 2, figsize=(10, 6))\n\n# density plot of sigma_a estimate\nsns.kdeplot(varying_intercept_fit['sigma_a'], ax=axes[0][0])\naxes[0][0].set_xlim(varying_intercept_fit['sigma_a'].min(), varying_intercept_fit['sigma_a'].max())\n\n# scatterplot of sigma_a estimate\naxes[0][1].plot(varying_intercept_fit['sigma_a'], 'o', alpha=0.3)\n\n# density plot of beta estimate\nsns.kdeplot(varying_intercept_fit['b'], ax=axes[1][0])\naxes[1][0].set_xlim(varying_intercept_fit['b'].min(), varying_intercept_fit['b'].max())\n\n# scatterplot of beta estimate\naxes[1][1].plot(varying_intercept_fit['b'], 'o', alpha=0.3)\n\n# titles/labels\naxes[0][0].set_title(\"sigma_a\")\naxes[1][0].set_title(\"b\")\naxes[0][0].set_ylabel(\"frequency\")\naxes[1][0].set_ylabel(\"frequency\")\naxes[0][0].set_xlabel(\"value\")\naxes[1][0].set_xlabel(\"value\");\naxes[0][1].set_ylabel(\"sigma_a\")\naxes[1][1].set_ylabel(\"b\")\naxes[0][1].set_xlabel(\"iteration\")\naxes[1][1].set_xlabel(\"iteration\");\n\nvarying_intercept_fit['sigma_a'].min(), varying_intercept_fit['sigma_a'].max()\n\npystan.__version__"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sassoftware/sas-viya-programming | communities/Exporting Data from CAS using Python.ipynb | apache-2.0 | [
"Exporting Data from CAS using Python\nWhile the save action can export data to many formats and data sources, there are also ways of easily converting CAS table data to formats on the client as well. Keep in mind though that while you can export large data sets on the server, you may not want to attempt to bring tens of gigabytes of data down to the client using these methods.\nWhile you can always use the fetch action to get the data from a CAS table, you might just want to export the data to a file. To make this easier, the CASTable objects support the same to_XXX methods as Pandas DataFrames. This includes to_csv, to_dict, to_excel, to_html, and others. Behind the scenes, the fetch action is called and the resulting DataFrame is exported to the file corresponding to the export method used. Let's look at some examples.\nFirst we need a connection to the server.",
"import swat\n\nconn = swat.CAS(host, port, username, password)",
"For purposes of this example, we will load some data into the server to work with. You may already have tables in your server that you can use.",
"tbl = conn.read_csv('https://raw.githubusercontent.com/sassoftware/sas-viya-programming/master/data/cars.csv')\ntbl\n\ntbl.head()",
"Now that we have a CASTable object to work with, we can export the data from the CAS table that it references to a local file. We'll start with CSV. The to_csv method will return a string of CSV data if you don't specify a filename. We'll do it that way in the following code.",
"print(tbl.to_csv())\n\nprint(tbl.to_html())\n\nprint(tbl.to_latex())",
"There are many other to_XXX methods on the CASTable object, each of which corresponds to the same to_XXX method on Pandas DataFrames. The CASTable methods take the same arguments as the DataFrame counterparts, so you can read the Pandas documentation for more information.",
"conn.close()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dsacademybr/PythonFundamentos | Cap03/Notebooks/DSA-Python-Cap03-05-Metodos.ipynb | gpl-3.0 | [
"<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 3</font>\nDownload: http://github.com/dsacademybr",
"# Versão da Linguagem Python\nfrom platform import python_version\nprint('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())",
"Métodos",
"# Criando uma lista\nlst = [100, -2, 12, 65, 0]\n\n# Usando um método do objeto lista\nlst.append(10)\n\n# Imprimindo a lista\nlst\n\n# Usando um método do objeto lista\nlst.count(10)\n\n# A função help() explica como utilizar cada método de um objeto\nhelp(lst.count)\n\n# A função dir() mostra todos os métodos e atributos de um objeto\ndir(lst)\n\na = 'Isso é uma string'\n\n# O método de um objeto pode ser chamado dentro de uma função, como print()\nprint (a.split())",
"Fim\nObrigado\nVisite o Blog da Data Science Academy - <a href=\"http://blog.dsacademy.com.br\">Blog DSA</a>"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
AllenDowney/ProbablyOverthinkingIt | negative_binomial.ipynb | mit | [
"Binomial and negative binomial distributions\nToday's post is prompted by this question from Reddit:\n\nHow do I calculate the distribution of the number of selections (with replacement) \nI need to make before obtaining k? For example, let's say I am picking marbles from \na bag with replacement. There is a 10% chance of green and 90% of black. I want k=5 green \nmarbles. What is the distribution number of times I need to take a marble before getting 5? \nI believe this is a geometric distribution. I see how to calculate the cumulative \nprobability given n picks, but I would like to generalize it so that for any value of k \n(number of marbles I want), I can tell you the mean, 10% and 90% probability for the \nnumber of times I need to pick from it.\nAnother way of saying this is, how many times do I need to pull on a slot machine \nbefore it pays out given that each pull is independent?\n\nNote: I've changed the notation in the question to be consistent with convention.",
"from __future__ import print_function, division\n\nimport thinkplot\nfrom thinkstats2 import Pmf, Cdf\n\nfrom scipy import stats\nfrom scipy import special\n\n%matplotlib inline",
"Solution\nThere are two ways to solve this problem. One is to relate the desired distribution to the binomial distribution. \nIf the probability of success on every trial is p, the probability of getting the kth success on the nth trial is\nPMF(n; k, p) = BinomialPMF(k-1; n-1, p) p\n\nThat is, the probability of getting k-1 successes in n-1 trials, times the probability of getting the kth success on the nth trial.\nHere's a function that computes it:",
"def MakePmfUsingBinom(k, p, high=100):\n pmf = Pmf()\n for n in range(1, high):\n pmf[n] = stats.binom.pmf(k-1, n-1, p) * p\n return pmf",
"And here's an example using the parameters in the question.",
"pmf = MakePmfUsingBinom(5, 0.1, 200)\nthinkplot.Pdf(pmf)",
"We can solve the same problem using the negative binomial distribution, but it requires some translation from the parameters of the problem to the conventional parameters of the binomial distribution.\nThe negative binomial PMF is the probability of getting r non-terminal events before the kth terminal event. (I am using \"terminal event\" instead of \"success\" and \"non-terminal\" event instead of \"failure\" because in the context of the negative binomial distribution, the use of \"success\" and \"failure\" is often reversed.)\nIf n is the total number of events, n = k + r, so\nr = n - k\n\nIf the probability of a terminal event on every trial is p, the probability of getting the kth terminal event on the nth trial is\nPMF(n; k, p) = NegativeBinomialPMF(n-k; k, p) p\n\nThat is, the probability of n-k non-terminal events on the way to getting the kth terminal event.\nHere's a function that computes it:",
"def MakePmfUsingNbinom(k, p, high=100):\n pmf = Pmf()\n for n in range(1, high):\n r = n-k\n pmf[n] = stats.nbinom.pmf(r, k, p)\n return pmf",
"Here's the same example:",
"pmf2 = MakePmfUsingNbinom(5, 0.1, 200)\nthinkplot.Pdf(pmf2)",
"And confirmation that the results are the same within floating point error.",
"diffs = [abs(pmf[n] - pmf2[n]) for n in pmf]\nmax(diffs)",
"Using the PMF, we can compute the mean and standard deviation:",
"pmf.Mean(), pmf.Std()",
"To compute percentiles, we can convert to a CDF (which computes the cumulative sum of the PMF)",
"cdf = Cdf(pmf)\nscale = thinkplot.Cdf(cdf)",
"And here are the 10th and 90th percentiles.",
"cdf.Percentile(10), cdf.Percentile(90)",
"Copyright 2016 Allen Downey\nMIT License: http://opensource.org/licenses/MIT"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
giacomov/3ML | docs/notebooks/Building_Plugins_from_TimeSeries.ipynb | bsd-3-clause | [
"%matplotlib notebook\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom threeML import *\nfrom threeML.io.package_data import get_path_of_data_file\n\nimport warnings\nwarnings.simplefilter('ignore')",
"Constructing plugins from TimeSeries\nMany times we encounter event lists or sets of spectral histograms from which we would like to derive a single or set of plugins. For this purpose, we provide the TimeSeriesBuilder which provides a unified interface to time series data. Here we will demonstrate how to construct plugins from different data types.\nConstructing time series objects from different data types\nThe TimeSeriesBuilder currently supports reading of the following data type:\n* A generic PHAII data file\n* GBM TTE/CSPEC/CTIME files\n* LAT LLE files\nIf you would like to build a time series from your own custom data, consider creating a TimeSeriesBuilder.from_your_data() class method.\nGBM Data\nBuilding plugins from GBM is achieved in the following fashion",
"cspec_file = get_path_of_data_file('datasets/glg_cspec_n3_bn080916009_v01.pha')\ntte_file = get_path_of_data_file('datasets/glg_tte_n3_bn080916009_v01.fit.gz')\ngbm_rsp = get_path_of_data_file('datasets/glg_cspec_n3_bn080916009_v00.rsp2')\n\n\ngbm_cspec = TimeSeriesBuilder.from_gbm_cspec_or_ctime('nai3_cspec',\n cspec_or_ctime_file=cspec_file,\n rsp_file=gbm_rsp)\n\ngbm_tte = TimeSeriesBuilder.from_gbm_tte('nai3_tte',\n tte_file=tte_file,\n rsp_file=gbm_rsp)",
"LAT LLE data\nLAT LLE data is constructed in a similar fashion",
"lle_file = get_path_of_data_file('datasets/gll_lle_bn080916009_v10.fit')\nft2_file = get_path_of_data_file('datasets/gll_pt_bn080916009_v10.fit')\nlle_rsp = get_path_of_data_file('datasets/gll_cspec_bn080916009_v10.rsp')\n\nlat_lle = TimeSeriesBuilder.from_lat_lle('lat_lle',\n lle_file=lle_file,\n ft2_file=ft2_file,\n rsp_file=lle_rsp)",
"Viewing Lightcurves and selecting source intervals\nAll time series objects share the same commands to get you to a plugin. \nLet's have a look at the GBM TTE lightcurve.",
"threeML_config['lightcurve']['lightcurve color'] = '#07AE44'\n\nfig = gbm_tte.view_lightcurve(start=-20,stop=200)",
"Perhaps we want to fit the time interval from 0-10 seconds. We make a selection like this:",
"threeML_config['lightcurve']['selection color'] = '#4C3CB7'\n\ngbm_tte.set_active_time_interval('0-10')\nfig = gbm_tte.view_lightcurve(start=-20,stop=200);",
"For event list style data like time tagged events, the selection is exact. However, pre-binned data in the form of e.g. PHAII files will have the selection automatically adjusted to the underlying temporal bins.\nSeveral discontinuous time selections can be made.\nFitting a polynomial background\nIn order to get to a plugin, we need to model and create an estimated background in each channel ($B_i$) for our interval of interest. The process that we have implemented is to fit temporal off-source regions to polynomials ($P(t;\\vec{\\theta})$) in time. First, a polynomial is fit to the total count rate. From this fit we determine the best polynomial order via a likelihood ratio test, unless the user supplies a polynomial order in the constructor or directly via the polynomial_order attribute. Then, this order of polynomial is fit to every channel in the data.\nFrom the polynomial fit, the polynomial is integrated in time over the active source interval to estimate the count rate in each channel. The estimated background and background errors then stored for each channel.\n$$ B_i = \\int_{T_1}^{T_2}P(t;\\vec{\\theta}) {\\rm d}t $$",
"threeML_config['lightcurve']['background color'] = '#FC2530'\n\ngbm_tte.set_background_interval('-24--5','100-200')\ngbm_tte.view_lightcurve(start=-20,stop=200);",
"For event list data, binned or unbinned background fits are possible. For pre-binned data, only a binned fit is possible.",
"gbm_tte.set_background_interval('-24--5','100-200',unbinned=False)",
"Saving the background fit\nThe background polynomial coefficients can be saved to disk for faster manipulation of time series data.",
"gbm_tte.save_background('background_store',overwrite=True)\n\ngbm_tte_reloaded = TimeSeriesBuilder.from_gbm_tte('nai3_tte',\n tte_file=tte_file,\n rsp_file=gbm_rsp,\n restore_background='background_store.h5')\n\nfig = gbm_tte_reloaded.view_lightcurve(-10,200)",
"Creating a plugin\nWith our background selections made, we can now create a plugin instance. In the case of GBM data, this results in a DispersionSpectrumLike\nplugin. Please refer to the Plugins documentation for more details.",
"gbm_plugin = gbm_tte.to_spectrumlike()\n\ngbm_plugin.display()",
"Time-resolved binning and plugin creation\nIt is possible to temporally bin time series. There are up to four methods provided depending on the type of time series being used:\n\nConstant cadence (all time series)\nCustom (all time series)\nSignificance (all time series)\nBayesian Blocks (event lists)\n\nConstant Cadence\nConstant cadence bins are defined by a start and a stop time along with a time delta.",
"gbm_tte.create_time_bins(start=0, stop=10, method='constant', dt=2.)\n\ngbm_tte.bins.display()",
"Custom\nCustom time bins can be created by providing a contiguous list of start and stop times.",
"time_edges = np.array([.5,.63,20.,21.])\n\nstarts = time_edges[:-1]\n\nstops = time_edges[1:]\n\ngbm_tte.create_time_bins(start=starts, stop=stops, method='custom')\n\ngbm_tte.bins.display()",
"Significance\nTime bins can be created by specifying a significance of signal to background if a background fit has been performed.",
"gbm_tte.create_time_bins(start=0., stop=50., method='significance', sigma=25)\n\ngbm_tte.bins.display()",
"Bayesian Blocks\nThe Bayesian Blocks algorithm (Scargle et al. 2013) can be used to bin event list by looking for significant changes in the rate.",
"gbm_tte.create_time_bins(start=0., stop=50., method='bayesblocks', p0=.01, use_background=True)\n\ngbm_tte.bins.display()",
"Working with bins\nThe light curve can be displayed by supplying the use_binner option to display the time binning",
"fig = gbm_tte.view_lightcurve(use_binner=True)",
"The bins can all be writted to a PHAII file for analysis via OGIPLike.",
"gbm_tte.write_pha_from_binner(file_name='out', overwrite=True,\n force_rsp_write = False) # if you need to write the RSP to a file. We try to choose the best option for you.",
"Similarly, we can create a list of plugins directly from the time series.",
"my_plugins = gbm_tte.to_spectrumlike(from_bins=True)"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
wd15/fmks | sandbox/ch-benchmark.ipynb | mit | [
"Using Dask with fMKS\nfMKS is currently being developed with Dask support. Currently the generate_cahn_hilliard_data function generates data using Dask. This is an embarrisegly parallel workflow as typically for MKS many Cahn-Hilliard simulations are required to calibrate the model. The following is tested using both the threaded and multiprocessing schedulers. Currently the author can not get the distributed scheduler working.",
"import numpy as np\nimport dask.array as da\nfrom fmks.data.cahn_hilliard import generate_cahn_hilliard_data\nimport dask.threaded\nimport dask.multiprocessing",
"The function time_ch calls generate_cahn_hilliard_data to generate the data. generate_cahn_hilliard_data returns the microstructure and response as a tuple. compute is called on the response field with certain number of workers and with a scheduler.",
"def time_ch(num_workers,\n get,\n shape=(48, 200, 200),\n chunks=(1, 200, 200),\n n_steps=100):\n generate_cahn_hilliard_data(shape,\n chunks=chunks,\n n_steps=n_steps)[1].compute(num_workers=num_workers,\n get=get)",
"Threaded Timings",
"for n_proc in (8, 4, 2, 1):\n print(n_proc, \"thread(s)\")\n %timeit time_ch(n_proc, dask.threaded.get)",
"Multiprocessing Timings",
"for n_proc in (8, 4, 2, 1):\n print(n_proc, \"process(es)\")\n %timeit time_ch(n_proc, dask.multiprocessing.get)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nick-youngblut/SIPSim | ipynb/bac_genome/priming_exp/validation_sample/.ipynb_checkpoints/X12C.700.45.01_fracRichness-checkpoint.ipynb | mit | [
"Running SIPSim pipeline to simulate priming_exp gradient dataset\n\nBasing simulation params off of priming_exp dataset\nBasing starting community diversity on mean percent abundances in all fraction samples for the gradient\nOther parameters are 'default'\n\nSetting variables",
"workDir = '/home/nick/notebook/SIPSim/dev/priming_exp/validation_sample/X12C.700.45_fracRichness/'\ngenomeDir = '/home/nick/notebook/SIPSim/dev/priming_exp/genomes/'\nallAmpFrags = '/home/nick/notebook/SIPSim/dev/bac_genome1210/validation/ampFrags.pkl'\notuTableFile = '/var/seq_data/priming_exp/data/otu_table.txt'\nmetaDataFile = '/var/seq_data/priming_exp/data/allsample_metadata_nomock.txt'\nprimerFile = '/home/nick/notebook/SIPSim/dev/515F-806R.fna'\n\ncdhit_dir = '/home/nick/notebook/SIPSim/dev/priming_exp/CD-HIT/'\nR_dir = '/home/nick/notebook/SIPSim/lib/R/'\nfigureDir = '/home/nick/notebook/SIPSim/figures/'\n\n# simulation params\ncomm_richness = 6901\nseq_per_fraction = ['lognormal', 10.096, 1.116]\n\n# for making genome_map file for genome fragment simulation\ntaxonMapFile = os.path.join(cdhit_dir, 'target_taxa.txt')\ngenomeFilterFile = os.path.join(cdhit_dir, 'genomeFile_seqID_filt.txt')\nabundFile = os.path.join('/home/nick/notebook/SIPSim/dev/priming_exp/exp_info', 'X12C.700.45_frac_OTU.txt')\n\n# misc\nnprocs = 20",
"Init",
"import glob\nimport cPickle as pickle\nimport copy\nfrom IPython.display import Image\n\n%load_ext rpy2.ipython\n\n%%R\nlibrary(ggplot2)\nlibrary(dplyr)\nlibrary(tidyr)\nlibrary(gridExtra)\n\nif not os.path.isdir(workDir):\n os.makedirs(workDir)",
"Creating a community file from the fraction relative abundances",
"%%R -i abundFile\n# reading priming experiment OTU table\ntbl.abund = read.delim(abundFile, sep='\\t')\ntbl.abund %>% head\n\n%%R\ntbl.comm = tbl.abund %>%\n rename('taxon_name' = OTUId,\n 'rel_abund_perc' = mean_perc_abund) %>%\n select(taxon_name, rel_abund_perc) %>%\n mutate(library = '1',\n rank = row_number(-rel_abund_perc)) %>%\n arrange(rank)\n \ntbl.comm %>% head\n\n%%R\n# rescaling rel_abund_perc so sum(rel_abund_perc) = 100\ntbl.comm = tbl.comm %>%\n group_by(library) %>%\n mutate(total = sum(rel_abund_perc)) %>% \n ungroup() %>%\n mutate(rel_abund_perc = rel_abund_perc * 100 / total) %>%\n select(library, taxon_name, rel_abund_perc, rank)\n \ntbl.comm %>% head\n\n%%R -i comm_richness\n# number of OTUs\nn.OTUs = tbl.comm$taxon_name %>% unique %>% length\ncat('Number of OTUs:', n.OTUs, '\\n')\n\n# assertion\ncat('Community richness = number of OTUs? ', comm_richness == n.OTUs, '\\n')\n\n%%R -i workDir\n\ncommFile = paste(c(workDir, 'comm.txt'), collapse='/')\nwrite.table(tbl.comm, commFile, sep='\\t', quote=F, row.names=F)",
"Plotting community distribution",
"%%R -i workDir\n\ncommFile = paste(c(workDir, 'comm.txt'), collapse='/')\ncomm = read.delim(commFile, sep='\\t')\ncomm %>% head\n\n%%R -w 900 -h 350\n\nggplot(comm, aes(rank, rel_abund_perc)) +\n geom_point() +\n labs(x='Rank', y='% relative abundance', title='Priming experiment community abundance distribution') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )",
"Simulating fragments\nMaking a genome index file to map genome fasta files to OTUs\n\nWill be used for community simulation\nJust OTUs with association to genomes",
"%%R -i taxonMapFile -i genomeFilterFile \n\ntaxonMap = read.delim(taxonMapFile, sep='\\t') %>%\n select(target_genome, OTU) %>%\n distinct()\ntaxonMap %>% nrow %>% print\ntaxonMap %>% head(n=3) %>% print\n\nbreaker = '----------------\\n'\ncat(breaker)\n\ngenomeFilter = read.delim(genomeFilterFile, sep='\\t', header=F) \ngenomeFilter %>% nrow %>% print\ngenomeFilter %>% head(n=3) %>% print\n\ncat(breaker)\n\ncomm = read.delim(commFile, sep='\\t') \ncomm %>% nrow %>% print\ncomm %>% head(n=3) %>% print\n\n%%R\ntaxonMap$OTU %>% table %>% sort(decreasing=T) %>% head\n\n%%R\n\ntbl.j = inner_join(taxonMap, genomeFilter, c('target_genome' = 'V1')) %>%\n rename('fasta_file' = V2) %>%\n select(OTU, fasta_file, target_genome)\n\ntbl.j %>% head(n=3)\n\n%%R\ntbl.j$OTU %>% table %>% sort(decreasing=T) %>% head\n\n%%R\ntbl.j2 = inner_join(tbl.j, comm, c('OTU' = 'taxon_name')) \n\nn.target.genomes = tbl.j2$OTU %>% unique %>% length\ncat('Number of target OTUs: ', n.target.genomes, '\\n')\ncat('--------', '\\n')\ntbl.j2 %>% head(n=3)\n\n%%R -i workDir\n\noutFile = paste(c(workDir, 'target_genome_index.txt'), collapse='/')\nwrite.table(tbl.j2, outFile, sep='\\t', quote=F, row.names=F, col.names=F)",
"Plotting community abundance distribution of target genomes",
"%%R -w 900 -h 350\n\nggplot(tbl.j2, aes(rank, rel_abund_perc)) +\n geom_point(size=3, shape='O', color='red') +\n labs(x='Rank', y='% relative abundance', title='Priming experiment community abundance distribution') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )",
"Simulating fragments of genomes that match priming_exp bulk OTUs",
"!cd $workDir; \\\n SIPSim fragments \\\n target_genome_index.txt \\\n --fp $genomeDir \\\n --fr $primerFile \\\n --fld skewed-normal,9000,2500,-5 \\\n --flr None,None \\\n --nf 10000 \\\n --np $nprocs \\\n 2> ampFrags.log \\\n > ampFrags.pkl ",
"Appending fragments from randomly selected genomes of total dataset (n=1210)\n\nThis is to obtain the richness of the bulk soil community\nRandom OTUs will be named after non-target OTUs in comm file\n\nMaking list of non-target OTUs",
"%%R -i workDir\n# loading files\n\n## target genome index (just OTUs with associated genome)\ninFile = paste(c(workDir, 'target_genome_index.txt'), collapse='/')\ntbl.target = read.delim(inFile, sep='\\t', header=F)\ncolnames(tbl.target) = c('OTUId', 'fasta_file', 'genome_name')\n\n## comm file of total community OTUs \ncommFile = paste(c(workDir, 'comm.txt'), collapse='/')\ntbl.comm = read.delim(commFile, sep='\\t')\n\n%%R\n# just OTUs w/out an associated genome\ntbl.j = anti_join(tbl.comm, tbl.target, c('taxon_name' = 'OTUId'))\nn.nontarget.genomes = tbl.j$taxon_name %>% length\ncat('Number of non-target genomes: ', n.nontarget.genomes, '\\n')\ncat('---------\\n')\ntbl.j %>% head(n=5)\n\n%%R -i comm_richness\n# checking assumptions\ncat('Target + nonTarget richness = total community richness?: ',\n n.target.genomes + n.nontarget.genomes == comm_richness, '\\n')\n\n%%R -i workDir\n# writing out non-target OTU file\noutFile = paste(c(workDir, 'comm_nonTarget.txt'), collapse='/')\nwrite.table(tbl.j, outFile, sep='\\t', quote=F, row.names=F)",
"Randomly selecting amplicon fragment length-GC KDEs from total genome pool",
"# List of non-target OTUs\ninFile = os.path.join(workDir, 'comm_nonTarget.txt')\nnonTarget = pd.read_csv(inFile, sep='\\t')['taxon_name'].tolist()\n\nprint 'Number of non-target OTUs: {}'.format(len(nonTarget))\nnonTarget[:4]\n\n# loading amplicon fragments from full genome KDE dataset\ninFile = os.path.join(workDir, 'ampFrags.pkl')\nampFrag_target = []\nwith open(inFile, 'rb') as iFH:\n ampFrag_target = pickle.load(iFH)\nprint 'Target OTU richness: {}'.format(len(ampFrag_target))\n\n# loading amplicon fragments from full genome KDE dataset\nampFrag_all = []\nwith open(allAmpFrags, 'rb') as iFH:\n ampFrag_all = pickle.load(iFH)\nprint 'Count of frag-GC KDEs for all genomes: {}'.format(len(ampFrag_all)) \n\n# random selection from list\n#target_richness = len(ampFrag_target)\n\ntarget_richness = len(ampFrag_target)\nrichness_needed = comm_richness - target_richness\nprint 'Number of random taxa needed to reach richness: {}'.format(richness_needed)\n\nif richness_needed > 0:\n index = range(target_richness)\n index = np.random.choice(index, richness_needed)\n \n ampFrag_rand = []\n for i in index:\n sys.stderr.write('{},'.format(i))\n ampFrag_rand.append(copy.deepcopy(ampFrag_all[i]))\nelse:\n ampFrag_rand = []\n\n# renaming randomly selected KDEs by non-target OTU-ID\nfor i in range(len(ampFrag_rand)):\n ampFrag_rand[i][0] = nonTarget[i]\n\n# appending random taxa to target taxa and writing\noutFile = os.path.join(workDir, 'ampFrags_wRand.pkl')\n\nwith open(outFile, 'wb') as oFH:\n x = ampFrag_target + ampFrag_rand\n print 'Number of taxa in output: {}'.format(len(x))\n pickle.dump(x, oFH)",
"Converting fragments to kde object",
"!cd $workDir; \\\n SIPSim fragment_kde \\\n ampFrags_wRand.pkl \\\n > ampFrags_wRand_kde.pkl",
"Adding diffusion",
"!cd $workDir; \\\n SIPSim diffusion \\\n ampFrags_wRand_kde.pkl \\\n --np $nprocs \\\n > ampFrags_wRand_kde_dif.pkl ",
"Making an incorp config file",
"!cd $workDir; \\\n SIPSim incorpConfigExample \\\n --percTaxa 0 \\\n --percIncorpUnif 100 \\\n > PT0_PI100.config",
"Adding isotope incorporation to BD distribution",
"!cd $workDir; \\\n SIPSim isotope_incorp \\\n ampFrags_wRand_kde_dif.pkl \\\n PT0_PI100.config \\\n --comm comm.txt \\\n --np $nprocs \\\n > ampFrags_wRand_kde_dif_incorp.pkl",
"Calculating BD shift from isotope incorporation",
"!cd $workDir; \\\n SIPSim BD_shift \\\n ampFrags_wRand_kde_dif.pkl \\\n ampFrags_wRand_kde_dif_incorp.pkl \\\n --np $nprocs \\\n > ampFrags_wRand_kde_dif_incorp_BD-shift.txt",
"Simulating gradient fractions",
"!cd $workDir; \\\n SIPSim gradient_fractions \\\n comm.txt \\\n > fracs.txt",
"Simulating an OTU table",
"!cd $workDir; \\\n SIPSim OTU_table \\\n ampFrags_wRand_kde_dif_incorp.pkl \\\n comm.txt \\\n fracs.txt \\\n --abs 1e9 \\\n --np $nprocs \\\n > OTU_abs1e9.txt",
"Plotting taxon abundances",
"%%R -i workDir\nsetwd(workDir)\n\n# loading file\ntbl = read.delim('OTU_abs1e9.txt', sep='\\t')\n\n%%R\n## BD for G+C of 0 or 100\nBD.GCp0 = 0 * 0.098 + 1.66\nBD.GCp100 = 1 * 0.098 + 1.66\n\n%%R -w 800 -h 300\n# plotting absolute abundances\n\ntbl.s = tbl %>%\n group_by(library, BD_mid) %>%\n summarize(total_count = sum(count))\n\n## plot\np = ggplot(tbl.s, aes(BD_mid, total_count)) +\n geom_area(stat='identity', alpha=0.3, position='dodge') +\n geom_histogram(stat='identity') +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n theme_bw() +\n theme( \n text = element_text(size=16) \n )\np\n\n%%R -w 800 -h 300\n# plotting number of taxa at each BD\n\ntbl.nt = tbl %>%\n filter(count > 0) %>%\n group_by(library, BD_mid) %>%\n summarize(n_taxa = n())\n\n## plot\np = ggplot(tbl.nt, aes(BD_mid, n_taxa)) +\n geom_area(stat='identity', alpha=0.3, position='dodge') +\n geom_histogram(stat='identity') +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density', y='Number of taxa') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np\n\n%%R -w 800 -h 250\n# plotting relative abundances\n\n## plot\np = ggplot(tbl, aes(BD_mid, count, fill=taxon)) +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np + geom_area(stat='identity', position='dodge', alpha=0.5)\n\n%%R -w 800 -h 250\n\np + geom_area(stat='identity', position='fill')",
"Subsampling from the OTU table",
"dist,loc,scale = seq_per_fraction\n\n!cd $workDir; \\\n SIPSim OTU_subsample \\\n --dist $dist \\\n --dist_params mean:$loc,sigma:$scale \\\n --walk 2 \\\n --min_size 10000 \\\n --max_size 200000 \\\n OTU_abs1e9.txt \\\n > OTU_abs1e9_sub.txt ",
"Testing/Plotting seq count distribution of subsampled fraction samples",
"%%R -h 300 -i workDir\nsetwd(workDir)\n\ntbl = read.csv('OTU_abs1e9_sub.txt', sep='\\t') \n\ntbl.s = tbl %>% \n group_by(library, fraction) %>%\n summarize(total_count = sum(count)) %>%\n ungroup() %>%\n mutate(library = as.character(library))\n\nggplot(tbl.s, aes(total_count)) +\n geom_density(fill='blue')\n\n%%R -h 300 -w 600\nsetwd(workDir)\n\ntbl.s = tbl %>%\n group_by(fraction, BD_min, BD_mid, BD_max) %>%\n summarize(total_count = sum(count)) \n\nggplot(tbl.s, aes(BD_mid, total_count)) +\n geom_point() +\n geom_line() +\n labs(x='Buoyant density', y='Total sequences') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )",
"Getting list of target taxa",
"%%R -i workDir\n\ninFile = paste(c(workDir, 'target_genome_index.txt'), collapse='/')\n\ntbl.target = read.delim(inFile, sep='\\t', header=F)\ncolnames(tbl.target) = c('OTUId', 'genome_file', 'genome_ID', 'X', 'Y', 'Z')\ntbl.target = tbl.target %>% distinct(OTUId)\n\n\ncat('Number of target OTUs: ', tbl.target$OTUId %>% unique %>% length, '\\n')\ncat('----------\\n')\ntbl.target %>% head(n=3)",
"Plotting abundance distributions",
"%%R -w 800 -h 250\n# plotting relative abundances\n\ntbl = tbl %>% \n group_by(fraction) %>%\n mutate(rel_abund = count / sum(count))\n\n\n## plot\np = ggplot(tbl, aes(BD_mid, count, fill=taxon)) +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np + geom_area(stat='identity', position='dodge', alpha=0.5)\n\n%%R -w 800 -h 250\n\np = ggplot(tbl, aes(BD_mid, rel_abund, fill=taxon)) +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np + geom_area(stat='identity')",
"Abundance distribution of just target taxa",
"%%R\n\ntargets = tbl.target$OTUId %>% as.vector %>% unique \n\ntbl.f = tbl %>%\n filter(taxon %in% targets)\n\ntbl.f %>% head\n\n%%R -w 800 -h 250\n# plotting absolute abundances\n\n## plot\np = ggplot(tbl.f, aes(BD_mid, count, fill=taxon)) +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np + geom_area(stat='identity', position='dodge', alpha=0.5)\n\n%%R -w 800 -h 250\n# plotting relative abundances\n\np = ggplot(tbl.f, aes(BD_mid, rel_abund, fill=taxon)) +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np + geom_area(stat='identity')",
"Plotting 'true' taxon abundance distribution (from priming exp dataset)",
"%%R -i metaDataFile\n# loading priming_exp metadata file\n\nmeta = read.delim(metaDataFile, sep='\\t')\nmeta %>% head(n=4)\n\n%%R -i otuTableFile\n# loading priming_exp OTU table \n\ntbl.otu.true = read.delim(otuTableFile, sep='\\t') %>%\n select(OTUId, starts_with('X12C.700.28')) \ntbl.otu.true %>% head(n=3)\n\n%%R\n# editing table\ntbl.otu.true.w = tbl.otu.true %>%\n gather('sample', 'count', 2:ncol(tbl.otu.true)) %>%\n mutate(sample = gsub('^X', '', sample)) %>%\n group_by(sample) %>%\n mutate(rel_abund = count / sum(count)) %>%\n ungroup() %>%\n filter(count > 0)\ntbl.otu.true.w %>% head(n=5)\n\n%%R\ntbl.true.j = inner_join(tbl.otu.true.w, meta, c('sample' = 'Sample'))\ntbl.true.j %>% as.data.frame %>% head(n=3)\n\n%%R -w 800 -h 300 -i workDir\n# plotting number of taxa at each BD\n\ntbl = read.csv('OTU_abs1e9_sub.txt', sep='\\t') \n\ntbl.nt = tbl %>%\n filter(count > 0) %>%\n group_by(library, BD_mid) %>%\n summarize(n_taxa = n())\n\n## plot\np = ggplot(tbl.nt, aes(BD_mid, n_taxa)) +\n geom_area(stat='identity', alpha=0.5) +\n geom_point() +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density', y='Number of taxa') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np\n\n%%R -w 700 -h 350\n\ntbl.true.j.s = tbl.true.j %>%\n filter(count > 0) %>%\n group_by(sample, Density) %>%\n summarize(n_taxa = sum(count > 0))\n\nggplot(tbl.true.j.s, aes(Density, n_taxa)) +\n geom_area(stat='identity', alpha=0.5) +\n geom_point() +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density', y='Number of taxa') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )",
"Plotting total counts for each sample",
"%%R -h 300 -w 600\ntbl.true.j.s = tbl.true.j %>%\n group_by(sample, Density) %>%\n summarize(total_count = sum(count)) \n\nggplot(tbl.true.j.s, aes(Density, total_count)) +\n geom_point() +\n geom_line() +\n labs(x='Buoyant density', y='Total sequences') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )",
"Plotting abundance distribution of target OTUs",
"%%R\ntbl.true.j.f = tbl.true.j %>%\n filter(OTUId %in% targets) %>%\n arrange(OTUId, Density) %>%\n group_by(sample)\ntbl.true.j.f %>% head(n=3) %>% as.data.frame\n\n%%R -w 800 -h 250\n# plotting relative abundances\n\n## plot\nggplot(tbl.true.j.f, aes(Density, rel_abund, fill=OTUId)) +\n geom_area(stat='identity') +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )",
"Combining true and simulated OTU tables for target taxa",
"%%R\ntbl.f.e = tbl.f %>%\n mutate(library = 'simulation') %>%\n rename('density' = BD_mid) %>%\n select(-BD_min, -BD_max)\n\ntbl.true.e = tbl.true.j.f %>% \n select('taxon' = OTUId,\n 'fraction' = sample,\n 'density' = Density,\n count, rel_abund) %>%\n mutate(library = 'true') \n \n \ntbl.sim.true = rbind(tbl.f.e, tbl.true.e) %>% as.data.frame\ntbl.f.e = data.frame()\ntbl.true.e = data.frame()\n\ntbl.sim.true %>% head(n=3)\n\n%%R\n# check\ncat('Number of target taxa: ', tbl.sim.true$taxon %>% unique %>% length, '\\n')",
"Abundance distributions of each target taxon",
"%%R -w 900 -h 3500\n\ntbl.sim.true.f = tbl.sim.true %>%\n ungroup() %>%\n filter(density >= 1.677) %>%\n filter(density <= 1.761) %>%\n group_by(taxon) %>%\n mutate(mean_rel_abund = mean(rel_abund)) %>%\n ungroup()\n\ntbl.sim.true.f$taxon = reorder(tbl.sim.true.f$taxon, -tbl.sim.true.f$mean_rel_abund)\n\nggplot(tbl.sim.true.f, aes(density, rel_abund, color=library)) +\n geom_point() +\n geom_line() +\n theme_bw() +\n facet_wrap(~ taxon, ncol=4, scales='free_y')\n\n%%R\ntbl.otu.true.w %>% \n filter(OTUId == 'OTU.1') %>%\n as.data.frame()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ecervera/mindstorms-nb | task/navigation_teacher.ipynb | mit | [
"Exercici de navegació\n<span title=\"Roomba navigating around furniture\"><img src=\"img/roomba.jpg\" align=\"right\" width=200></span>\nUn robot mòbil com el Roomba de la imatge ha d'evitar xocar amb els obstacles del seu entorn, i si arriba a col·lisionar, ha de reaccionar per a no fer, ni fer-se mal.\nAmb el sensor de tacte no podem evitar el xoc, però si detectar-lo un cop es produeix, i reaccionar.\nL'objectiu d'aquest exercici és programar el següent comportament en el robot:\n\nmentre no detecte res, el robot va cap avant\nsi el sensor detecta un xoc, el robot anirà cap enrere i girarà\n\nConnecteu el robot:",
"from functions import connect, touch, forward, backward, left, right, stop, disconnect\nfrom time import sleep\nconnect()",
"Versió 1.0\nUtilitzeu el codi de l'exemple anterior del bucle while: només heu d'afegir que, quan xoque, el robot vaja cap enrere, gire una mica (cap al vostre costat preferit), i pare.",
"while not touch():\n forward()\nbackward()\nsleep(1)\nleft()\nsleep(1)\nstop()",
"Versió 2.0\nSe suposa que la maniobra del robot li permet evitar l'obstacle, i per tant tornar a anar cap avant. Com ho podem programar?\nCal repetir tot el bloc d'instruccions del comportament, incloent el bucle. Cap problema, els llenguatges de programació permeten posar un bucle dins d'un altre, el que s'anomena bucles anidats.\nUtilitzeu un bucle for per a repetir 5 vegades el codi anterior.",
"for ...:\n while ...:\n ...\n ...\n\nfor i in range(5):\n while not touch():\n forward()\n backward()\n sleep(1)\n left()\n sleep(1)\n stop()",
"Versió 3.0\n<img src=\"img/interrupt.png\" align=\"right\">\nI si en lloc de repetir 10 o 20 vegades, volem que el robot continue fins que el parem nosaltres? Ho podem fer amb un bucle infinit, i indicarem al programa que pare amb el botó interrupt kernel.\nEn Python, un bucle infinit s'escriu així:\npython\nwhile True:\n statement\nQuan s'interromp el programa, s'abandona la instrucció que s'estava executant en eixe moment, i cal parar el robot. En Python, aquest procés s'anomena excepció i es gestiona d'aquesta manera:\npython\ntry:\n while True:\n statement # ací anirà el comportament\nexcept KeyboardInterrupt:\n statement # ací pararem el robot\nUtilitzeu un bucle infinit per a repetir el comportament del robot fins que el pareu.",
"try:\n while True:\n while not touch():\n forward()\n backward()\n sleep(1)\n left()\n sleep(1)\nexcept KeyboardInterrupt:\n stop()",
"Versió 4.0\nEl comportament del robot, girant sempre cap al mateix costat, és una mica previsible, no vos sembla?\nAnem a introduir un component d'atzar: en els llenguatges de programació, existeixen els generadors de números aleatoris, que són com els daus dels ordinadors.\nExecuteu el següent codi vàries vegades amb Ctrl+Enter i comproveu els resultats.",
"from random import random\nrandom()",
"La funció random és com llançar un dau, però en compte de donar una valor d'1 a 6, dóna un número real entre 0 i 1.\nAleshores, el robot pot utilitzar eixe valor per a decidir si gira a esquerra o dreta. Com? Doncs si el valor és major que 0.5, gira a un costat, i si no, cap a l'altre. Aleshores, girarà a l'atzar, amb una probabilitat del 50% per a cada costat.\nIncorporeu la decisió a l'atzar per a girar al codi de la versió anterior:",
"try:\n while True:\n while not touch():\n forward()\n backward()\n sleep(1)\n if random() > 0.5:\n left()\n else:\n right()\n sleep(1)\nexcept KeyboardInterrupt:\n stop()",
"Recapitulem\nAbans de continuar, desconnecteu el robot:",
"disconnect()",
"Tot el que hem vist en aquest exercici:\n\nbucles anidats\nexcepcions\nnúmeros aleatoris\n\nNo està malament, quasi hem vist el temari d'un primer curs de programació, i això només amb un sensor!\nPassem a vore doncs el següent sensor.\n>>> Sensor de so"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ericmjl/Network-Analysis-Made-Simple | archive/7-game-of-thrones-case-study-instructor.ipynb | mit | [
"Let's change gears and talk about Game of thrones or shall I say Network of Thrones.\nIt is suprising right? What is the relationship between a fatansy TV show/novel and network science or python(it's not related to a dragon).\nIf you haven't heard of Game of Thrones, then you must be really good at hiding. Game of Thrones is the hugely popular television series by HBO based on the (also) hugely popular book series A Song of Ice and Fire by George R.R. Martin. In this notebook, we will analyze the co-occurrence network of the characters in the Game of Thrones books. Here, two characters are considered to co-occur if their names appear in the vicinity of 15 words from one another in the books.\n\nAndrew J. Beveridge, an associate professor of mathematics at Macalester College, and Jie Shan, an undergraduate created a network from the book A Storm of Swords by extracting relationships between characters to find out the most important characters in the book(or GoT).\nThe dataset is publicly avaiable for the 5 books at https://github.com/mathbeveridge/asoiaf. This is an interaction network and were created by connecting two characters whenever their names (or nicknames) appeared within 15 words of one another in one of the books. The edge weight corresponds to the number of interactions. \nCredits:\nBlog: https://networkofthrones.wordpress.com\nMath Horizons Article: https://www.maa.org/sites/default/files/pdf/Mathhorizons/NetworkofThrones%20%281%29.pdf",
"import pandas as pd\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport community\nimport numpy as np\nimport warnings\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline",
"Let's load in the datasets",
"book1 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book1-edges.csv')\nbook2 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book2-edges.csv')\nbook3 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book3-edges.csv')\nbook4 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book4-edges.csv')\nbook5 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book5-edges.csv')",
"The resulting DataFrame book1 has 5 columns: Source, Target, Type, weight, and book. Source and target are the two nodes that are linked by an edge. A network can have directed or undirected edges and in this network all the edges are undirected. The weight attribute of every edge tells us the number of interactions that the characters have had over the book, and the book column tells us the book number.",
"book1.head()",
"Once we have the data loaded as a pandas DataFrame, it's time to create a network. We create a graph for each book. It's possible to create one MultiGraph instead of 5 graphs, but it is easier to play with different graphs.",
"G_book1 = nx.Graph()\nG_book2 = nx.Graph()\nG_book3 = nx.Graph()\nG_book4 = nx.Graph()\nG_book5 = nx.Graph()",
"Let's populate the graph with edges from the pandas DataFrame.",
"for row in book1.iterrows():\n G_book1.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])\n\nfor row in book2.iterrows():\n G_book2.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])\nfor row in book3.iterrows():\n G_book3.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])\nfor row in book4.iterrows():\n G_book4.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])\nfor row in book5.iterrows():\n G_book5.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])\n\nbooks = [G_book1, G_book2, G_book3, G_book4, G_book5]",
"Let's have a look at these edges.",
"list(G_book1.edges(data=True))[16]\n\nlist(G_book1.edges(data=True))[400]",
"Finding the most important node i.e character in these networks.\nIs it Jon Snow, Tyrion, Daenerys, or someone else? Let's see! Network Science offers us many different metrics to measure the importance of a node in a network as we saw in the first part of the tutorial. Note that there is no \"correct\" way of calculating the most important node in a network, every metric has a different meaning.\nFirst, let's measure the importance of a node in a network by looking at the number of neighbors it has, that is, the number of nodes it is connected to. For example, an influential account on Twitter, where the follower-followee relationship forms the network, is an account which has a high number of followers. This measure of importance is called degree centrality.\nUsing this measure, let's extract the top ten important characters from the first book (book[0]) and the fifth book (book[4]).",
"deg_cen_book1 = nx.degree_centrality(books[0])\n\ndeg_cen_book5 = nx.degree_centrality(books[4])\n\nsorted(deg_cen_book1.items(), key=lambda x:x[1], reverse=True)[0:10]\n\nsorted(deg_cen_book5.items(), key=lambda x:x[1], reverse=True)[0:10]\n\n# Plot a histogram of degree centrality\nplt.hist(list(nx.degree_centrality(G_book4).values()))\nplt.show()\n\nd = {}\nfor i, j in dict(nx.degree(G_book4)).items():\n if j in d:\n d[j] += 1\n else:\n d[j] = 1\nx = np.log2(list((d.keys())))\ny = np.log2(list(d.values()))\nplt.scatter(x, y, alpha=0.9)\nplt.show()",
"Exercise\nCreate a new centrality measure, weighted_degree(Graph, weight) which takes in Graph and the weight attribute and returns a weighted degree dictionary. Weighted degree is calculated by summing the weight of the all edges of a node and find the top five characters according to this measure.",
"def weighted_degree(G, weight):\n result = dict()\n for node in G.nodes():\n weight_degree = 0\n for n in G.edges([node], data=True):\n weight_degree += n[2]['weight']\n result[node] = weight_degree\n return result\n\nplt.hist(list(weighted_degree(G_book1, 'weight').values()))\nplt.show()\n\nsorted(weighted_degree(G_book1, 'weight').items(), key=lambda x:x[1], reverse=True)[0:10]",
"Let's do this for Betweeness centrality and check if this makes any difference\nHaha, evil laugh",
"# First check unweighted, just the structure\n\nsorted(nx.betweenness_centrality(G_book1).items(), key=lambda x:x[1], reverse=True)[0:10]\n\n# Let's care about interactions now\n\nsorted(nx.betweenness_centrality(G_book1, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]",
"PageRank\nThe billion dollar algorithm, PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.",
"# by default weight attribute in pagerank is weight, so we use weight=None to find the unweighted results\nsorted(nx.pagerank_numpy(G_book1, weight=None).items(), key=lambda x:x[1], reverse=True)[0:10]\n\nsorted(nx.pagerank_numpy(G_book1, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]",
"Is there a correlation between these techniques?\nExercise\nFind the correlation between these four techniques.\n\npagerank\nbetweenness_centrality\nweighted_degree\ndegree centrality",
"cor = pd.DataFrame.from_records([nx.pagerank_numpy(G_book1, weight='weight'), nx.betweenness_centrality(G_book1, weight='weight'), weighted_degree(G_book1, 'weight'), nx.degree_centrality(G_book1)])\n\n# cor.T\n\ncor.T.corr()",
"Evolution of importance of characters over the books\nAccording to degree centrality the most important character in the first book is Eddard Stark but he is not even in the top 10 of the fifth book. The importance changes over the course of five books, because you know stuff happens ;)\nLet's look at the evolution of degree centrality of a couple of characters like Eddard Stark, Jon Snow, Tyrion which showed up in the top 10 of degree centrality in first book.\nWe create a dataframe with character columns and index as books where every entry is the degree centrality of the character in that particular book and plot the evolution of degree centrality Eddard Stark, Jon Snow and Tyrion.\nWe can see that the importance of Eddard Stark in the network dies off and with Jon Snow there is a drop in the fourth book but a sudden rise in the fifth book",
"evol = [nx.degree_centrality(book) for book in books]\nevol_df = pd.DataFrame.from_records(evol).fillna(0)\nevol_df[['Eddard-Stark', 'Tyrion-Lannister', 'Jon-Snow']].plot()\n\nset_of_char = set()\nfor i in range(5):\n set_of_char |= set(list(evol_df.T[i].sort_values(ascending=False)[0:5].index))\nset_of_char",
"Exercise\nPlot the evolution of weighted degree centrality of the above mentioned characters over the 5 books, and repeat the same exercise for betweenness centrality.",
"evol_df[list(set_of_char)].plot(figsize=(29,15))\n\nevol = [nx.betweenness_centrality(graph, weight='weight') for graph in [G_book1, G_book2, G_book3, G_book4, G_book5]]\nevol_df = pd.DataFrame.from_records(evol).fillna(0)\n\nset_of_char = set()\nfor i in range(5):\n set_of_char |= set(list(evol_df.T[i].sort_values(ascending=False)[0:5].index))\n\n\nevol_df[list(set_of_char)].plot(figsize=(19,10))",
"So what's up with Stannis Baratheon?",
"nx.draw(nx.barbell_graph(5, 1), with_labels=True)\n\nsorted(nx.degree_centrality(G_book5).items(), key=lambda x:x[1], reverse=True)[:5]\n\nsorted(nx.betweenness_centrality(G_book5).items(), key=lambda x:x[1], reverse=True)[:5]",
"Community detection in Networks\nA network is said to have community structure if the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set of nodes is densely connected internally.\nWe will use louvain community detection algorithm to find the modules in our graph.",
"plt.figure(figsize=(15, 15))\n\npartition = community.best_partition(G_book1)\nsize = float(len(set(partition.values())))\npos = nx.kamada_kawai_layout(G_book1)\ncount = 0\ncolors = ['red', 'blue', 'yellow', 'black', 'brown', 'purple', 'green', 'pink']\nfor com in set(partition.values()):\n list_nodes = [nodes for nodes in partition.keys()\n if partition[nodes] == com]\n nx.draw_networkx_nodes(G_book1, pos, list_nodes, node_size = 20,\n node_color = colors[count])\n count = count + 1\n\n\n\nnx.draw_networkx_edges(G_book1, pos, alpha=0.2)\nplt.show()\n\nd = {}\nfor character, par in partition.items():\n if par in d:\n d[par].append(character)\n else:\n d[par] = [character]\nd\n\nnx.draw(nx.subgraph(G_book1, d[3]))\n\nnx.draw(nx.subgraph(G_book1, d[1]))\n\nnx.density(G_book1)\n\nnx.density(nx.subgraph(G_book1, d[4]))\n\nnx.density(nx.subgraph(G_book1, d[4]))/nx.density(G_book1)",
"Exercise\nFind the most important node in the partitions according to degree centrality of the nodes.",
"max_d = {}\ndeg_book1 = nx.degree_centrality(G_book1)\n\nfor group in d:\n temp = 0\n for character in d[group]:\n if deg_book1[character] > temp:\n max_d[group] = character\n temp = deg_book1[character]\n\nmax_d",
"A bit about power law in networks",
"G_random = nx.erdos_renyi_graph(100, 0.1)\n\nnx.draw(G_random)\n\nG_ba = nx.barabasi_albert_graph(100, 2)\n\nnx.draw(G_ba)\n\n# Plot a histogram of degree centrality\nplt.hist(list(nx.degree_centrality(G_random).values()))\nplt.show()\n\nplt.hist(list(nx.degree_centrality(G_ba).values()))\nplt.show()\n\nG_random = nx.erdos_renyi_graph(2000, 0.2)\nG_ba = nx.barabasi_albert_graph(2000, 20)\n\nd = {}\nfor i, j in dict(nx.degree(G_random)).items():\n if j in d:\n d[j] += 1\n else:\n d[j] = 1\nx = np.log2(list((d.keys())))\ny = np.log2(list(d.values()))\nplt.scatter(x, y, alpha=0.9)\nplt.show()\n\nd = {}\nfor i, j in dict(nx.degree(G_ba)).items():\n if j in d:\n d[j] += 1\n else:\n d[j] = 1\nx = np.log2(list((d.keys())))\ny = np.log2(list(d.values()))\nplt.scatter(x, y, alpha=0.9)\nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
csc-training/python-introduction | notebooks/examples/Extra Scikit-learn.ipynb | mit | [
"Classification with Scikit-learn\nFirst we use pandas to read in the csv file and separate the Y (target class, final column in CSV) from the X, the predicting values.",
"import pandas as pd\n\ndata = pd.read_csv(\"../data/iris.data\")\n\n# convert to NumPy arrays because they are the easiest to handle in sklearn\nvariables = data.drop([\"class\"], axis=1).as_matrix()\nclasses = data[[\"class\"]].as_matrix().reshape(-1)\n\n# import cross-validation scorer and KNeighborsClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.neighbors import KNeighborsClassifier\n\ntrain_X, test_X, train_Y, test_Y = train_test_split(variables, classes)\n\n# initialize classifier object\nclassifier = KNeighborsClassifier()\n\n# fit the object using training data and sample labels\nclassifier.fit(train_X, train_Y)\n\n# evaluate the results for held-out test sample\nclassifier.score(test_X, test_Y)\n# value is the mean accuracy \n\n# if we wanted to predict values for unseen data, we would use the predict()-method\n\nclassifier.predict(test_X) # note no known Y-values passed",
"Exercise\n\nImport the classifier object ``sklearn.svm.SVC```\ninitialize it\nfit it with the training data (no need to split a second time)\nevaluate the quality of the created classifier using score()\n\nPipelining and cross-validation\nIt's common to want to preprocess data somehow or in general have several steps. This can be easily done with the Pipeline class. \nThere are typically parameters involved and you might want to select the best possible parameter.",
"from sklearn.decomposition import PCA # pca is a subspace method that projects the data into a lower-dimensional space\n\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.neighbors import KNeighborsClassifier\n\n\npca = PCA(n_components=2)\nknn = KNeighborsClassifier(n_neighbors=3)\n\nfrom sklearn.pipeline import Pipeline\n\npipeline = Pipeline([(\"pca\", pca), (\"kneighbors\", knn)])\n\nparameters_grid = dict(\n pca__n_components=[1,2,3,4],\n kneighbors__n_neighbors=[1,2,3,4,5,6]\n )\ngrid_search = GridSearchCV(pipeline, parameters_grid)\ngrid_search.fit(train_X, train_Y)\ngrid_search.best_estimator_\n\n# you can now test agains the held out part\ngrid_search.best_estimator_.score(test_X, test_Y)",
"Exercise\nThere is another dataset, \"breast-cancer-wisconsin.data\". For a description see [here] (https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/) . \nIt contains samples with patient ID (that you should remove), measurements and as last the doctors judgment of the biopsy: malignant or benign.\nRead in the file and create a classifier.\nYou can alternately just split the input and use some classifier or do a grid cross-validation over a larger space of potential parameters."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Danghor/Formal-Languages | ANTLR4-Python/Earley-Parser/Earley-Parser.ipynb | gpl-2.0 | [
"from IPython.core.display import HTML\nwith open('../../style.css', 'r') as file:\n css = file.read()\nHTML(css)",
"Implementing an Earley Parser\nA Grammar for Grammars\nEarley's algorithm has two inputs:\n- a grammar $G$ and\n- a string $s$.\nIt then checks whether the string $s$ can be parsed with the given grammar.\nIn order to input the grammar in a natural way, we first have to develop a parser for grammars.\nAn example grammar that we want to parse is stored in the file simple.g.",
"!cat simple.g",
"We use <span style=\"font-variant:small-caps;\">Antlr</span> to develop a parser for this Grammar.\nThe pure grammar to parse this type of grammar is stored in\nthe file Pure.g4.",
"!cat Pure.g4",
"The annotated grammar is stored in the file Grammar.g4.",
"!cat -n Grammar.g4",
"We start by generating both scanner and parser.",
"!antlr4 -Dlanguage=Python3 Grammar.g4\n\nfrom GrammarLexer import GrammarLexer\nfrom GrammarParser import GrammarParser\nimport antlr4",
"The function parse_grammar takes a filename as its argument and returns the grammar that is stored in the given file. The grammar is represented as list of rules. Each rule is represented as a tuple. The example below will clarify this structure.",
"def parse_grammar(filename):\n input_stream = antlr4.FileStream(filename)\n lexer = GrammarLexer(input_stream)\n token_stream = antlr4.CommonTokenStream(lexer)\n parser = GrammarParser(token_stream)\n grammar = parser.start()\n return grammar.g\n\nparse_grammar('simple.g')",
"Earley's Algorithm\nGiven a context-free grammar $G = \\langle V, \\Sigma, R, S \\rangle$ and a string $s = x_1x_2 \\cdots x_n \\in \\Sigma^$ of length $n$, \nan Earley item* is a pair of the form\n$$\\langle A \\rightarrow \\alpha \\bullet \\beta, k \\rangle$$\nsuch that \n- $(A \\rightarrow \\alpha \\beta) \\in R\\quad$ and\n- $k \\in {0,1,\\cdots,n}$. \nThe class EarleyItem represents a single Earley item.\n- mVariable is the variable $A$,\n- mAlpha is $\\alpha$,\n- mBeta is $\\beta$, and\n- mIndex is $k$.\nSince we later have to store objects of class EarleyItem in sets, we have to implement the functions\n- __eq__,\n- __ne__,\n- __hash__.\nIt is easiest to implement __hash__ by first converting the object into a string. Hence we also\nimplement the function __repr__, that converts an EarleyItem into a string.",
"class EarleyItem():\n def __init__(self, variable, alpha, beta, index):\n self.mVariable = variable\n self.mAlpha = alpha\n self.mBeta = beta\n self.mIndex = index\n \n def __eq__(self, other):\n return isinstance(other, EarleyItem) and \\\n self.mVariable == other.mVariable and \\\n self.mAlpha == other.mAlpha and \\\n self.mBeta == other.mBeta and \\\n self.mIndex == other.mIndex\n \n def __ne__(self, other):\n return not self.__eq__(other)\n \n def __hash__(self):\n return hash(self.__repr__())\n \n def __repr__(self):\n alphaStr = ' '.join(self.mAlpha)\n betaStr = ' '.join(self.mBeta)\n return f'<{self.mVariable} → {alphaStr} • {betaStr}, {self.mIndex}>'",
"Given an Earley item self, the function isComplete checks, whether the Earley item self has the form\n$$\\langle A \\rightarrow \\alpha \\bullet, k \\rangle,$$\ni.e. whether the $\\bullet$ is at the end of the grammar rule.",
"def isComplete(self):\n return self.mBeta == ()\n\nEarleyItem.isComplete = isComplete\ndel isComplete",
"The function sameVar(self, C) checks, whether the item following the dot is the same as the variable \ngiven as argument, i.e. sameVar(self, C) returns True if self is an Earley item of the form\n$$\\langle A \\rightarrow \\alpha \\bullet C\\beta, k \\rangle.$$",
"def sameVar(self, C):\n return len(self.mBeta) > 0 and self.mBeta[0] == C\n\nEarleyItem.sameVar = sameVar\ndel sameVar",
"The function scan(self, t) checks, whether the item following the dot matches the token t, \ni.e. scan(self, t) returns True if self is an Earley item of the form\n$$\\langle A \\rightarrow \\alpha \\bullet t\\beta, k \\rangle.$$\nThe argument $t$ can either be the name of a token or a literal.",
"def scan(self, t):\n if len(self.mBeta) > 0:\n return self.mBeta[0] == t or self.mBeta[0] == \"'\" + t + \"'\"\n return False\n\nEarleyItem.scan = scan\ndel scan",
"Given an Earley item, this function returns the name of the variable following the dot. If there is no variable following the dot, the function returns None. The function can distinguish variables from token names because variable names consist only of lower case letters.",
"def nextVar(self):\n if len(self.mBeta) > 0:\n var = self.mBeta[0]\n if var[0] != \"'\" and var.islower():\n return var\n return None\n\nEarleyItem.nextVar = nextVar\ndel nextVar",
"The function moveDot(self) moves the $\\bullet$ in the Earley item self, where self has the form \n$$\\langle A \\rightarrow \\alpha \\bullet \\beta, k \\rangle$$\nover the next variable, token, or literal in $\\beta$. It assumes that $\\beta$ is not empty.",
"def moveDot(self):\n return EarleyItem(self.mVariable, \n self.mAlpha + (self.mBeta[0],), \n self.mBeta[1:], \n self.mIndex)\n\nEarleyItem.moveDot = moveDot\ndel moveDot",
"The class Grammar represents a context free grammar. It stores a list of the rules of the grammar.\nEach grammar rule of the form\n$$ a \\rightarrow \\beta $$\nis stored as the tuple $(a,) + \\beta$. The start symbol is assumed to be the variable on the left hand side of\nthe first rule. To distinguish syntactical variables form tokens, variables contain only lower case letters,\nwhile tokens either contain only upper case letters or they start and end with a single quote character \"'\".",
"class Grammar():\n def __init__(self, Rules):\n self.mRules = Rules ",
"The function startItem returns the Earley item\n$$ \\langle\\hat{S} \\rightarrow \\bullet S, 0\\rangle $$\nwhere $S$ is the start variable of the given grammar and $\\hat{S}$ is a new variable.",
"def startItem(self):\n return EarleyItem('Start', (), (self.startVar(),), 0)\n\nGrammar.startItem = startItem\ndel startItem",
"The function finishItem returns the Earley item\n$$ \\langle\\hat{S} \\rightarrow S \\bullet, 0\\rangle $$\nwhere $S$ is the start variable of the given grammar and $\\hat{S}$ is a new variable.",
"def finishItem(self):\n return EarleyItem('Start', (self.startVar(),), (), 0)\n\nGrammar.finishItem = finishItem\ndel finishItem",
"The function startVar returns the start variable of the grammar. It is assumed that\nthe first rule grammar starts with the start variable of the grammar.",
"def startVar(self):\n return self.mRules[0][0]\n\nGrammar.startVar = startVar\ndel startVar",
"The function toString creates a readable presentation of the grammar rules.",
"def toString(self):\n result = ''\n for head, *body in self.mRules:\n result += f'{head}: {body};\\n'\n return result\n\nGrammar.__str__ = toString\ndel toString",
"The class EarleyParser implements the parsing algorithm of Jay Earley.\nThe class maintains the following member variables:\n- mGrammar is the grammar that is used to parse the given token string.\n- mString is the list of tokens and literals that has to be parsed.\nAs a hack, the first element of this list in None.\n Therefore, mString[i] is the ith token.\n- mStateList is a list of sets of Earley items. If $n$ is the length of the given token string\n (excluding the first element None), then $Q_i = \\texttt{mStateList}[i]$. \n The idea is that the set $Q_i$ is the set of those Earley items that the parser could be in \n when it has read the tokens mString[1], $\\cdots$, mString[n]. $Q_0$ is initialized as follows:\n $$ Q_0 = \\bigl{\\langle\\hat{S} \\rightarrow \\bullet S, 0\\rangle\\bigr}. $$\nThe Earley items are interpreted as follows: If we have\n$$ \\langle C \\rightarrow \\alpha \\bullet \\beta, k\\rangle \\in Q_i, $$\nthen we know the following:\n- After having read the tokens mString[:k+1] the parser tries to parse the variable $C$\n in the token string mString[k+1:].\n- After having read the token string mString[k+1:i+1] the parser has already recognized $\\alpha$\n and now needs to recognize $\\beta$ in the token string mString[i+1:] in order to parse the variable $C$.",
"class EarleyParser():\n def __init__(self, grammar, TokenList):\n self.mGrammar = grammar \n self.mString = [None] + TokenList # dirty hack so mString[1] is first token\n self.mStateList = [set() for i in range(len(TokenList)+1)] \n print('Grammar:\\n')\n print(self.mGrammar)\n print(f'Input: {self.mString}\\n')\n self.mStateList[0] = { self.mGrammar.startItem() }",
"The method parse implements Earley's algorithm. For all states \n$Q_1$, $\\cdots$, $Q_n$ we proceed as follows:\n- We apply the completion operation followed by the prediction operation.\n This is done until no more states are added to $Q_i$. \n(The inner while loop is not necessary if the grammar does not contain $\\varepsilon$-rules.)\n- Finally, the scanning operation is applied to $Q_i$.\nAfter $Q_i$ has been computed, we proceed to compute $Q_{i+1}$.\nParsing is successful iff\n$$ \\langle\\hat{S} \\rightarrow S \\bullet, 0\\rangle \\in Q_n $$",
"def parse(self):\n \"run Earley's algorithm\"\n n = len(self.mString) - 1 # mString[0] = None\n for i in range(0, n+1):\n if i + 1 <= n:\n next_token = self.mString[i+1]\n else:\n next_token = 'EOF'\n print('_' * 80)\n print(f'next token = {next_token}')\n print('_' * 80)\n change = True\n while change:\n change = self.complete(i)\n change = self.predict(i) or change\n self.scan(i)\n # print states\n print(f'\\nQ{i}:')\n Qi = self.mStateList[i]\n for item in Qi: \n print(item)\n if i + 1 <= n:\n print(f'\\nQ{i+1}:')\n Qip1 = self.mStateList[i+1]\n for item in Qip1: \n print(item)\n if self.mGrammar.finishItem() in self.mStateList[-1]:\n print('Parsing successful!')\n else:\n print('Parsing failed!')\n\nEarleyParser.parse = parse\ndel parse",
"The method complete(self, i) applies the completion operation to the state $Q_i$:\nIf we have\n- $\\langle C \\rightarrow \\gamma \\bullet, j\\rangle \\in Q_i$ and\n- $\\langle A \\rightarrow \\beta \\bullet C \\delta, k\\rangle \\in Q_j$,\nthen the parser tried to parse the variable $C$ after having read mString[:j+1]\nand we know that \n$$ C \\Rightarrow^ \\texttt{mString[j+1:i+1]}, $$\ni.e. the parser has recognized $C$ after having read mString[j+1:i+1].\nTherefore the parser should proceed to recognize $\\delta$ in state $Q_i$.\nTherefore we add the Earley item* $\\langle A \\rightarrow \\beta C \\bullet \\delta,k\\rangle$ to the set $Q_i$:\n$$\\langle C \\rightarrow \\gamma \\bullet, j\\rangle \\in Q_i \\wedge\n \\langle A \\rightarrow \\beta \\bullet C \\delta, k\\rangle \\in Q_j \\;\\rightarrow\\;\n Q_i := Q_i \\cup \\bigl{ \\langle A \\rightarrow \\beta C \\bullet \\delta, k\\rangle \\bigr}\n$$",
"def complete(self, i):\n change = False\n added = True\n Qi = self.mStateList[i]\n while added:\n added = False\n newQi = set()\n for item in Qi:\n if item.isComplete():\n C = item.mVariable\n j = item.mIndex\n Qj = self.mStateList[j]\n for newItem in Qj:\n if newItem.sameVar(C):\n moved = newItem.moveDot()\n newQi.add(moved)\n if not (newQi <= Qi):\n change = True\n added = True\n print(\"completion:\")\n for newItem in newQi:\n if newItem not in Qi:\n print(f'{newItem} added to Q{i}')\n self.mStateList[i] |= newQi\n Qi = self.mStateList[i]\n return change\n \nEarleyParser.complete = complete\ndel complete",
"The method self.predict(i) applies the prediction operation to the state $Q_i$: \nIf $\\langle A \\rightarrow \\beta \\bullet C \\delta, k \\rangle \\in Q_j$, then\nthe parser tries to recognize $C\\delta$ after having read mString[:j+1]. To this end\nit has to parse $C$ in the string mString[j+1:].\nTherefore, if $C \\rightarrow \\gamma$ is a rule of our grammar,\nwe add the Earley item $\\langle C \\rightarrow \\bullet \\gamma, j\\rangle$ to the set $Q_j$:\n$$ \\langle A \\rightarrow \\beta \\bullet C \\delta, k\\rangle \\in Q_j \n \\wedge (C \\rightarrow \\gamma) \\in R \n \\;\\rightarrow\\;\n Q_j := Q_j \\cup\\bigl{ \\langle C \\rightarrow \\bullet\\gamma, j\\rangle\\bigr}.\n$$\nAs the right hand side $\\gamma$ might start with a variable, the function uses a fix point iteration\nuntil no more Earley items are added to $Q_j$.",
"def predict(self, i):\n change = False\n added = True\n Qi = self.mStateList[i]\n while added:\n added = False\n newQi = set()\n for item in Qi:\n c = item.nextVar()\n if c != None:\n for rule in self.mGrammar.mRules:\n if c == rule[0]:\n newQi.add(EarleyItem(c, (), rule[1:], i))\n if not (newQi <= Qi):\n change = True\n added = True\n print(\"prediction:\")\n for newItem in newQi:\n if newItem not in Qi:\n print(f'{newItem} added to Q{i}')\n self.mStateList[i] |= newQi\n Qi = self.mStateList[i]\n return change\n\nEarleyParser.predict = predict\ndel predict",
"The function self.scan(i) applies the scanning operation to the state $Q_i$.\nIf $\\langle A \\rightarrow \\beta \\bullet a \\gamma, k\\rangle \\in Q_i$ and $a$ is a token,\nthen the parser tries to recognize the right hand side of the grammar rule\n$$ A \\rightarrow \\beta a \\gamma$$ \nand after having read mString[k+1:i+1] it has already recognized $\\beta$.\nIf we now have mString[i+1] == a, then the parser still has to recognize $\\gamma$ in mString[i+2:].\nTherefore, the Earley object $\\langle A \\rightarrow \\beta a \\bullet \\gamma, k\\rangle$ is added to\nthe set $Q_{i+1}$:\n$$\\langle A \\rightarrow \\beta \\bullet a \\gamma, k\\rangle \\in Q_i \\wedge x_{i+1} = a\n \\;\\rightarrow\\;\n Q_{i+1} := Q_{i+1} \\cup \\bigl{ \\langle A \\rightarrow \\beta a \\bullet \\gamma, k\\rangle \\bigr}\n$$",
"def scan(self, i):\n Qi = self.mStateList[i]\n n = len(self.mString) - 1 # remember mStateList[0] == None\n if i + 1 <= n:\n a = self.mString[i+1]\n for item in Qi:\n if item.scan(a):\n self.mStateList[i+1].add(item.moveDot())\n print('scanning:')\n print(f'{item.moveDot()} added to Q{i+1}')\n\nEarleyParser.scan = scan\ndel scan\n\nimport re",
"The function tokenize transforms the string s into a list of tokens. See below for an example.",
"def tokenize(s):\n '''Transform the string s into a list of tokens. The string s\n is supposed to represent an arithmetic expression.\n '''\n lexSpec = r'''([ \\t]+) | # blanks and tabs\n ([1-9][0-9]*|0) | # number\n ([()]) | # parentheses \n ([-+*/]) | # arithmetical operators\n (.) # unrecognized character\n '''\n tokenList = re.findall(lexSpec, s, re.VERBOSE)\n result = []\n for ws, number, parenthesis, operator, error in tokenList:\n if ws: # skip blanks and tabs\n continue\n elif number:\n result += [ 'NUMBER' ]\n elif parenthesis:\n result += [ parenthesis ]\n elif operator:\n result += [ operator ]\n else:\n result += [ f'ERROR({error})']\n return result\n\ntokenize('1 + 2 * 3')",
"The function test takes two arguments.\n- file is the name of a file containing a grammar,\n- word is a string that should be parsed.\nword is first tokenized. Then the resulting token list is parsed using Earley's algorithm.",
"def test(file, word): \n Rules = parse_grammar(file)\n grammar = Grammar(Rules)\n TokenList = tokenize(word)\n ep = EarleyParser(grammar, TokenList)\n ep.parse()\n\ntest('simple.g', '1 + 2 * 3')",
"The command below cleans the directory. If you are running windows, you have to replace rmwith del.",
"!rm GrammarLexer.* GrammarParser.* Grammar.tokens GrammarListener.py Grammar.interp\n!rm -r __pycache__\n\n!ls"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AllenDowney/ThinkBayes2 | examples/normal.ipynb | mit | [
"Think Bayes\nSecond Edition\nCopyright 2020 Allen B. Downey\nLicense: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)",
"# If we're running on Colab, install empiricaldist\n# https://pypi.org/project/empiricaldist/\n\nimport sys\nIN_COLAB = 'google.colab' in sys.modules\n\nif IN_COLAB:\n !pip install empiricaldist\n\n# Get utils.py and create directories\n\nimport os\n\nif not os.path.exists('utils.py'):\n !wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py\n \nif not os.path.exists('figs'):\n !mkdir figs\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nfrom empiricaldist import Pmf, Cdf\nfrom utils import decorate, savefig",
"Univariate normal\nGenerate data",
"from scipy.stats import norm\n\ndata = norm(10, 2).rvs(20)\ndata\n\nn = len(data)\nxbar = np.mean(data)\ns2 = np.var(data)\n\nn, xbar, s2",
"Grid algorithm",
"mus = np.linspace(8, 12, 101)\nprior_mu = Pmf(1, mus)\nprior_mu.index.name = 'mu'\n\nsigmas = np.linspace(0.01, 5, 100)\nps = sigmas**-2\nprior_sigma = Pmf(ps, sigmas)\nprior_sigma.index.name = 'sigma'\n\nfrom utils import make_joint\n\nprior = make_joint(prior_mu, prior_sigma)\n\nfrom utils import normalize\n\ndef update_norm(prior, data):\n \"\"\"Update the prior based on data.\n \n prior: joint distribution of mu and sigma\n data: sequence of observations\n \"\"\"\n X, Y, Z = np.meshgrid(prior.columns, prior.index, data)\n likelihood = norm(X, Y).pdf(Z).prod(axis=2)\n\n posterior = prior * likelihood\n normalize(posterior)\n\n return posterior\n\nposterior = update_norm(prior, data)\n\nfrom utils import marginal\n\nposterior_mu_grid = marginal(posterior, 0)\nposterior_sigma_grid = marginal(posterior, 1)\n\nposterior_mu_grid.plot()\ndecorate(title='Posterior distribution of mu')\n\nposterior_sigma_grid.plot(color='C1')\ndecorate(title='Posterior distribution of sigma')",
"Update\nMostly following notation in Murphy, Conjugate Bayesian analysis of the Gaussian distribution",
"m0 = 0\nkappa0 = 0\nalpha0 = 0\nbeta0 = 0\n\nm_n = (kappa0 * m0 + n * xbar) / (kappa0 + n)\nm_n\n\nkappa_n = kappa0 + n\nkappa_n\n\nalpha_n = alpha0 + n/2\nalpha_n\n\nbeta_n = beta0 + n*s2/2 + n * kappa0 * (xbar-m0)**2 / (kappa0 + n) / 2\nbeta_n\n\ndef update_normal(prior, summary):\n m0, kappa0, alpha0, beta0 = prior\n n, xbar, s2 = summary\n\n m_n = (kappa0 * m0 + n * xbar) / (kappa0 + n)\n kappa_n = kappa0 + n\n alpha_n = alpha0 + n/2\n beta_n = (beta0 + n*s2/2 + \n n * kappa0 * (xbar-m0)**2 / (kappa0 + n) / 2)\n\n return m_n, kappa_n, alpha_n, beta_n\n\nprior = 0, 0, 0, 0\nsummary = n, xbar, s2\nupdate_normal(prior, summary)",
"Posterior distribution of sigma",
"from scipy.stats import invgamma\n\ndist_sigma2 = invgamma(alpha_n, scale=beta_n)\n\ndist_sigma2.mean()\n\ndist_sigma2.std()\n\nsigma2s = np.linspace(0.01, 20, 101)\nps = dist_sigma2.pdf(sigma2s)\nposterior_sigma2_invgammas = Pmf(ps, sigma2s)\nposterior_sigma2_invgammas.normalize()\n\nposterior_sigma2_invgammas.plot()\ndecorate(xlabel='$\\sigma^2$',\n ylabel='PDF',\n title='Posterior distribution of variance')\n\nsigmas = np.sqrt(sigma2s)\nposterior_sigma_invgammas = Pmf(ps, sigmas)\nposterior_sigma_invgammas.normalize()\n\nposterior_sigma_grid.make_cdf().plot(color='gray', label='grid')\nposterior_sigma_invgammas.make_cdf().plot(color='C1', label='invgamma')\n\ndecorate(xlabel='$\\sigma$',\n ylabel='PDF',\n title='Posterior distribution of standard deviation')\n\nposterior_sigma_invgammas.mean(), posterior_sigma_grid.mean()\n\nposterior_sigma_invgammas.std(), posterior_sigma_grid.std()\n\n2 / np.sqrt(2 * (n-1))",
"Posterior distribution of mu",
"from scipy.stats import t as student_t\n\ndef make_student_t(df, loc, scale):\n return student_t(df, loc=loc, scale=scale)\n\ndf = 2 * alpha_n\nprecision = alpha_n * kappa_n / beta_n\ndist_mu = make_student_t(df, m_n, 1/np.sqrt(precision))\n\ndist_mu.mean()\n\ndist_mu.std()\n\nnp.sqrt(4/n)\n\nmus = np.linspace(8, 12, 101)\nps = dist_mu.pdf(mus)\nposterior_mu_student = Pmf(ps, mus)\nposterior_mu_student.normalize()\n\nposterior_mu_student.plot()\ndecorate(xlabel='$\\mu$',\n ylabel='PDF',\n title='Posterior distribution of mu')\n\nposterior_mu_grid.make_cdf().plot(color='gray', label='grid')\nposterior_mu_student.make_cdf().plot(label='invgamma')\ndecorate(xlabel='$\\mu$',\n ylabel='CDF',\n title='Posterior distribution of mu')\n\ndef make_posterior_mu(m_n, kappa_n, alpha_n, beta_n):\n df = 2 * alpha_n\n loc = m_n\n precision = alpha_n * kappa_n / beta_n\n dist_mu = make_student_t(df, loc, 1/np.sqrt(precision))\n return dist_mu",
"Posterior joint distribution",
"mu_mesh, sigma2_mesh = np.meshgrid(mus, sigma2s)\n\njoint = (dist_sigma2.pdf(sigma2_mesh) * \n norm(m_n, sigma2_mesh/kappa_n).pdf(mu_mesh))\njoint_df = pd.DataFrame(joint, columns=mus, index=sigma2s)\n\nfrom utils import plot_contour\n\nplot_contour(joint_df)\ndecorate(xlabel='$\\mu$',\n ylabel='$\\sigma^2$',\n title='Posterior joint distribution')",
"Sampling from posterior predictive",
"sample_sigma2 = dist_sigma2.rvs(1000)\n\nsample_mu = norm(m_n, sample_sigma2 / kappa_n).rvs()\n\nsample_pred = norm(sample_mu, np.sqrt(sample_sigma2)).rvs()\n\ncdf_pred = Cdf.from_seq(sample_pred)\ncdf_pred.plot()\n\nsample_pred.mean(), sample_pred.var()",
"Analytic posterior predictive",
"df = 2 * alpha_n\nprecision = alpha_n * kappa_n / beta_n / (kappa_n+1)\ndist_pred = make_student_t(df, m_n, 1/np.sqrt(precision))\n\nxs = np.linspace(2, 16, 101)\nys = dist_pred.cdf(xs)\n\nplt.plot(xs, ys, color='gray', label='student t')\ncdf_pred.plot(label='sample')\n\ndecorate(title='Predictive distribution')\n\ndef make_posterior_pred(m_n, kappa_n, alpha_n, beta_n):\n df = 2 * alpha_n\n loc = m_n\n precision = alpha_n * kappa_n / beta_n / (kappa_n+1)\n dist_pred = make_student_t(df, loc, 1/np.sqrt(precision))\n return dist_pred",
"Multivariate normal\nGenerate data",
"mean = [10, 20]\n\nsigma_x = 2\nsigma_y = 3\nrho = 0.3\ncov = rho * sigma_x * sigma_y \n\nSigma = [[sigma_x**2, cov], [cov, sigma_y**2]]\nSigma\n\nfrom scipy.stats import multivariate_normal\n\nn = 20\ndata = multivariate_normal(mean, Sigma).rvs(n)\ndata\n\nn = len(data)\nn\n\nxbar = np.mean(data, axis=0)\nxbar\n\nS = np.cov(data.transpose())\nS\n\nnp.corrcoef(data.transpose())\n\nstds = np.sqrt(np.diag(S))\nstds\n\ncorrcoef = S / np.outer(stds, stds)\ncorrcoef\n\ndef unpack_cov(S):\n stds = np.sqrt(np.diag(S))\n corrcoef = S / np.outer(stds, stds)\n return stds[0], stds[1], corrcoef[0][1]\n\nsigma_x, sigma_y, rho = unpack_cov(S)\nsigma_x, sigma_y, rho\n\ndef pack_cov(sigma_x, sigma_y, rho):\n cov = sigma_x * sigma_y * rho\n return np.array([[sigma_x**2, cov], [cov, sigma_y**2]])\n\npack_cov(sigma_x, sigma_y, rho)\n\nS",
"Update",
"m_0 = 0\nLambda_0 = 0\nnu_0 = 0\nkappa_0 = 0\n\nm_n = (kappa_0 * m_0 + n * xbar) / (kappa_0 + n)\nm_n\n\nxbar\n\ndiff = (xbar - m_0)\nD = np.outer(diff, diff)\nD\n\nLambda_n = Lambda_0 + S + n * kappa_0 * D / (kappa_0 + n)\nLambda_n\n\nS\n\nnu_n = nu_0 + n\nnu_n\n\nkappa_n = kappa_0 + n\nkappa_n",
"Posterior distribution of covariance",
"from scipy.stats import invwishart\n\ndef make_invwishart(nu, Lambda):\n d, _ = Lambda.shape\n return invwishart(nu, scale=Lambda * (nu - d - 1))\n\ndist_cov = make_invwishart(nu_n, Lambda_n)\n\ndist_cov.mean()\n\nS\n\nsample_Sigma = dist_cov.rvs(1000)\nnp.mean(sample_Sigma, axis=0)\n\nres = [unpack_cov(Sigma) for Sigma in sample_Sigma]\n\nsample_sigma_x, sample_sigma_y, sample_rho = np.transpose(res)\nsample_sigma_x.mean(), sample_sigma_y.mean(), sample_rho.mean()\n\nunpack_cov(S)\n\nCdf.from_seq(sample_sigma_x).plot(label=r'$\\sigma_x$')\nCdf.from_seq(sample_sigma_y).plot(label=r'$\\sigma_y$')\n\ndecorate(xlabel='Standard deviation',\n ylabel='CDF',\n title='Posterior distribution of standard deviation')\n\nCdf.from_seq(sample_rho).plot()\n\ndecorate(xlabel='Coefficient of correlation',\n ylabel='CDF',\n title='Posterior distribution of correlation')",
"Evaluate the Inverse Wishart PDF",
"num = 51\nsigma_xs = np.linspace(0.01, 10, num)\n\nsigma_ys = np.linspace(0.01, 10, num)\n\nrhos = np.linspace(-0.3, 0.9, num)\n\nindex = pd.MultiIndex.from_product([sigma_xs, sigma_ys, rhos],\n names=['sigma_x', 'sigma_y', 'rho'])\njoint = Pmf(0, index)\njoint.head()\n\ndist_cov.pdf(S)\n\nfor sigma_x, sigma_y, rho in joint.index:\n Sigma = pack_cov(sigma_x, sigma_y, rho)\n joint.loc[sigma_x, sigma_y, rho] = dist_cov.pdf(Sigma)\n \njoint.normalize()\n\nfrom utils import pmf_marginal\n\nposterior_sigma_x = pmf_marginal(joint, 0)\nposterior_sigma_y = pmf_marginal(joint, 1)\nmarginal_rho = pmf_marginal(joint, 2)\n\nposterior_sigma_x.mean(), posterior_sigma_y.mean(), marginal_rho.mean()\n\nunpack_cov(S)\n\nposterior_sigma_x.plot(label='$\\sigma_x$')\nposterior_sigma_y.plot(label='$\\sigma_y$')\n\ndecorate(xlabel='Standard deviation',\n ylabel='PDF',\n title='Posterior distribution of standard deviation')\n\nposterior_sigma_x.make_cdf().plot(color='gray')\nposterior_sigma_y.make_cdf().plot(color='gray')\n\nCdf.from_seq(sample_sigma_x).plot(label=r'$\\sigma_x$')\nCdf.from_seq(sample_sigma_y).plot(label=r'$\\sigma_y$')\n\ndecorate(xlabel='Standard deviation',\n ylabel='CDF',\n title='Posterior distribution of standard deviation')\n\nmarginal_rho.make_cdf().plot(color='gray')\n\nCdf.from_seq(sample_rho).plot()\n\ndecorate(xlabel='Coefficient of correlation',\n ylabel='CDF',\n title='Posterior distribution of correlation')",
"Posterior distribution of mu",
"m_n\n\nsample_mu = [multivariate_normal(m_n, Sigma/kappa_n).rvs()\n for Sigma in sample_Sigma]\n\nsample_mu0, sample_mu1 = np.transpose(sample_mu)\n\nsample_mu0.mean(), sample_mu1.mean()\n\nxbar\n\nsample_mu0.std(), sample_mu1.std()\n\n2 / np.sqrt(n), 3 / np.sqrt(n)\n\nCdf.from_seq(sample_mu0).plot(label=r'$\\mu_0$ sample')\nCdf.from_seq(sample_mu1).plot(label=r'$\\mu_1$ sample')\n\ndecorate(xlabel=r'$\\mu$',\n ylabel='CDF',\n title=r'Posterior distribution of $\\mu$')",
"Multivariate student t\nLet's use this implementation",
"from scipy.special import gammaln\n\ndef multistudent_pdf(x, mean, shape, df):\n return np.exp(logpdf(x, mean, shape, df))\n\ndef logpdf(x, mean, shape, df):\n p = len(mean)\n vals, vecs = np.linalg.eigh(shape)\n logdet = np.log(vals).sum()\n valsinv = np.array([1.0/v for v in vals])\n U = vecs * np.sqrt(valsinv)\n dev = x - mean\n maha = np.square(dev @ U).sum(axis=-1)\n\n t = 0.5 * (df + p)\n A = gammaln(t)\n B = gammaln(0.5 * df)\n C = p/2. * np.log(df * np.pi)\n D = 0.5 * logdet\n E = -t * np.log(1 + (1./df) * maha)\n\n return A - B - C - D + E\n\n\nd = len(m_n)\nx = m_n\nmean = m_n\ndf = nu_n - d + 1\nshape = Lambda_n / kappa_n\nmultistudent_pdf(x, mean, shape, df)\n\nmu0s = np.linspace(8, 12, 91)\nmu1s = np.linspace(18, 22, 101)\n\nmu_mesh = np.dstack(np.meshgrid(mu0s, mu1s))\nmu_mesh.shape\n\nps = multistudent_pdf(mu_mesh, mean, shape, df)\n\njoint = pd.DataFrame(ps, columns=mu0s, index=mu1s)\nnormalize(joint)\n\nplot_contour(joint)\n\nfrom utils import marginal\n\nposterior_mu0_student = marginal(joint, 0)\nposterior_mu1_student = marginal(joint, 1)\n\nposterior_mu0_student.make_cdf().plot(color='gray', label=r'$\\mu_0 multi t$')\nposterior_mu1_student.make_cdf().plot(color='gray', label=r'$\\mu_1 multi t$')\n\nCdf.from_seq(sample_mu0).plot(label=r'$\\mu_0$ sample')\nCdf.from_seq(sample_mu1).plot(label=r'$\\mu_1$ sample')\n\ndecorate(xlabel=r'$\\mu$',\n ylabel='CDF',\n title=r'Posterior distribution of $\\mu$')",
"Compare to analytic univariate distributions",
"prior = 0, 0, 0, 0\nsummary = n, xbar[0], S[0][0]\nsummary\n\nparams = update_normal(prior, summary)\nparams\n\ndist_mu0 = make_posterior_mu(*params)\ndist_mu0.mean(), dist_mu0.std()\n\nmu0s = np.linspace(7, 12, 101)\nps = dist_mu0.pdf(mu0s)\nposterior_mu0 = Pmf(ps, index=mu0s)\nposterior_mu0.normalize()\n\nprior = 0, 0, 0, 0\nsummary = n, xbar[1], S[1][1]\nsummary\n\nparams = update_normal(prior, summary)\nparams\n\ndist_mu1 = make_posterior_mu(*params)\ndist_mu1.mean(), dist_mu1.std()\n\nmu1s = np.linspace(17, 23, 101)\nps = dist_mu1.pdf(mu1s)\nposterior_mu1 = Pmf(ps, index=mu1s)\nposterior_mu1.normalize()\n\nposterior_mu0.make_cdf().plot(label=r'$\\mu_0$ uni t', color='gray')\nposterior_mu1.make_cdf().plot(label=r'$\\mu_1$ uni t', color='gray')\n\nCdf.from_seq(sample_mu0).plot(label=r'$\\mu_0$ sample')\nCdf.from_seq(sample_mu1).plot(label=r'$\\mu_1$ sample')\n\ndecorate(xlabel=r'$\\mu$',\n ylabel='CDF',\n title=r'Posterior distribution of $\\mu$')",
"Sampling from posterior predictive",
"sample_pred = [multivariate_normal(mu, Sigma).rvs()\n for mu, Sigma in zip(sample_mu, sample_Sigma)]\n\nsample_x0, sample_x1 = np.transpose(sample_pred)\n\nsample_x0.mean(), sample_x1.mean()\n\nsample_x0.std(), sample_x1.std()\n\nprior = 0, 0, 0, 0\nsummary = n, xbar[0], S[0][0]\nparams = update_normal(prior, summary)\ndist_x0 = make_posterior_pred(*params)\ndist_x0.mean(), dist_x0.std()\n\nx0s = np.linspace(2, 18, 101)\nps = dist_x0.pdf(x0s)\npred_x0 = Pmf(ps, index=x0s)\npred_x0.normalize()\n\nprior = 0, 0, 0, 0\nsummary = n, xbar[1], S[1][1]\nparams = update_normal(prior, summary)\ndist_x1 = make_posterior_pred(*params)\ndist_x1.mean(), dist_x1.std()\n\nx1s = np.linspace(10, 30, 101)\nps = dist_x1.pdf(x1s)\npred_x1 = Pmf(ps, index=x1s)\npred_x1.normalize()\n\npred_x0.make_cdf().plot(label=r'$x_0$ student t', color='gray')\npred_x1.make_cdf().plot(label=r'$x_1$ student t', color='gray')\n\nCdf.from_seq(sample_x0).plot(label=r'$x_0$ sample')\nCdf.from_seq(sample_x1).plot(label=r'$x_1$ sample')\n\ndecorate(xlabel='Quantity',\n ylabel='CDF',\n title='Posterior predictive distributions')",
"Comparing to the multivariate student t",
"d = len(m_n)\nx = m_n\nmean = m_n\ndf = nu_n - d + 1\nshape = Lambda_n * (kappa_n+1) / kappa_n\nmultistudent_pdf(x, mean, shape, df)\n\nx0s = np.linspace(0, 20, 91)\nx1s = np.linspace(10, 30, 101)\n\nx_mesh = np.dstack(np.meshgrid(x0s, x1s))\nx_mesh.shape\n\nps = multistudent_pdf(x_mesh, mean, shape, df)\n\njoint = pd.DataFrame(ps, columns=x0s, index=x1s)\nnormalize(joint)\n\nplot_contour(joint)\n\nfrom utils import marginal\n\nposterior_x0_student = marginal(joint, 0)\nposterior_x1_student = marginal(joint, 1)\n\nposterior_x0_student.make_cdf().plot(color='gray', label=r'$x_0$ multi t')\nposterior_x1_student.make_cdf().plot(color='gray', label=r'$x_1$ multi t')\n\nCdf.from_seq(sample_x0).plot(label=r'$x_0$ sample')\nCdf.from_seq(sample_x1).plot(label=r'$x_1$ sample')\n\ndecorate(xlabel='Quantity',\n ylabel='CDF',\n title='Posterior predictive distributions')",
"Bayesian linear regression\nGenerate data",
"inter, slope = 5, 2\nsigma = 3\nn = 20\n\nxs = norm(0, 3).rvs(n)\nxs = np.sort(xs)\nys = inter + slope * xs + norm(0, sigma).rvs(20)\n\nplt.plot(xs, ys, 'o');\n\nimport statsmodels.api as sm\n\nX = sm.add_constant(xs)\nX\n\nmodel = sm.OLS(ys, X)\nresults = model.fit()\nresults.summary()\n\nbeta_hat = results.params\nbeta_hat\n\n# k = results.df_model\nk = 2\n\ns2 = results.resid @ results.resid / (n - k)\ns2\n\ns2 = results.ssr / (n - k)\ns2\n\nnp.sqrt(s2)",
"Grid algorithm",
"beta0s = np.linspace(2, 8, 71)\nprior_inter = Pmf(1, beta0s, name='inter')\nprior_inter.index.name = 'Intercept'\n\nbeta1s = np.linspace(1, 3, 61)\nprior_slope = Pmf(1, beta1s, name='slope')\nprior_slope.index.name = 'Slope'\n\nsigmas = np.linspace(1, 6, 51)\nps = sigmas**-2\nprior_sigma = Pmf(ps, sigmas, name='sigma')\nprior_sigma.index.name = 'Sigma'\nprior_sigma.normalize()\n\nprior_sigma.plot()\n\nfrom utils import make_joint\n\ndef make_joint3(pmf1, pmf2, pmf3):\n \"\"\"Make a joint distribution with three parameters.\n \n pmf1: Pmf object\n pmf2: Pmf object\n pmf3: Pmf object\n \n returns: Pmf representing a joint distribution\n \"\"\"\n joint2 = make_joint(pmf2, pmf1).stack()\n joint3 = make_joint(pmf3, joint2).stack()\n return Pmf(joint3)\n\nprior3 = make_joint3(prior_slope, prior_inter, prior_sigma)\nprior3.head()\n\nfrom utils import normalize\n\ndef update_optimized(prior, data):\n \"\"\"Posterior distribution of regression parameters\n `slope`, `inter`, and `sigma`.\n \n prior: Pmf representing the joint prior\n data: DataFrame with columns `x` and `y`\n \n returns: Pmf representing the joint posterior\n \"\"\"\n xs = data['x']\n ys = data['y']\n sigmas = prior.columns\n likelihood = prior.copy()\n\n for slope, inter in prior.index:\n expected = slope * xs + inter\n resid = ys - expected\n resid_mesh, sigma_mesh = np.meshgrid(resid, sigmas)\n densities = norm.pdf(resid_mesh, 0, sigma_mesh)\n likelihood.loc[slope, inter] = densities.prod(axis=1)\n \n posterior = prior * likelihood\n normalize(posterior)\n return posterior\n\ndata = pd.DataFrame(dict(x=xs, y=ys))\n\nfrom utils import normalize\n\nposterior = update_optimized(prior3.unstack(), data)\nnormalize(posterior)\n\nfrom utils import marginal\n\nposterior_sigma_grid = marginal(posterior, 0)\nposterior_sigma_grid.plot(label='grid')\n\ndecorate(title='Posterior distribution of sigma')\n\njoint_posterior = marginal(posterior, 1).unstack()\nplot_contour(joint_posterior)\n\nposterior_beta0_grid = marginal(joint_posterior, 0)\nposterior_beta1_grid = marginal(joint_posterior, 1)\n\nposterior_beta0_grid.make_cdf().plot(label=r'$\\beta_0$')\nposterior_beta1_grid.make_cdf().plot(label=r'$\\beta_1$')\n\ndecorate(title='Posterior distributions of parameters')",
"Posterior distribution of sigma\nAccording to Gelman et al, the posterior distribution of $\\sigma^2$ is scaled inverse chi2 with $\\nu=n-k$ and scale $s^2$.\nAccording to Wikipedia, that's equivalent to inverse gamma with parameters $\\nu/2$ and $\\nu s^2 / 2$.",
"nu = n-k\nnu/2, nu*s2/2\n\nfrom scipy.stats import invgamma\n\ndist_sigma2 = invgamma(nu/2, scale=nu*s2/2)\ndist_sigma2.mean()\n\nsigma2s = np.linspace(0.01, 30, 101)\nps = dist_sigma2.pdf(sigma2s)\nposterior_sigma2_invgamma = Pmf(ps, sigma2s)\nposterior_sigma2_invgamma.normalize()\n\nposterior_sigma2_invgamma.plot()\n\nsigmas = np.sqrt(sigma2s)\nposterior_sigma_invgamma = Pmf(ps, sigmas)\nposterior_sigma_invgamma.normalize()\n\nposterior_sigma_invgamma.mean(), posterior_sigma_grid.mean()\n\nposterior_sigma_grid.make_cdf().plot(color='gray', label='grid')\nposterior_sigma_invgamma.make_cdf().plot(label='invgamma')\n\ndecorate(title='Posterior distribution of sigma')",
"Posterior distribution of sigma, updatable version\nPer the Wikipedia page: https://en.wikipedia.org/wiki/Bayesian_linear_regression",
"Lambda_0 = np.zeros((k, k))\nLambda_n = Lambda_0 + X.T @ X\nLambda_n\n\nfrom scipy.linalg import inv\n\nmu_0 = np.zeros(k)\nmu_n = inv(Lambda_n) @ (Lambda_0 @ mu_0 + X.T @ X @ beta_hat)\nmu_n\n\na_0 = 0\na_n = a_0 + n / 2\na_n\n\nb_0 = 0\nb_n = b_0 + (ys.T @ ys + \n mu_0.T @ Lambda_0 @ mu_0 - \n mu_n.T @ Lambda_n @ mu_n) / 2\nb_n\n\na_n, nu/2\n\nb_n, nu * s2 / 2",
"Sampling the posterior of the parameters",
"sample_sigma2 = dist_sigma2.rvs(1000)\n\nsample_sigma = np.sqrt(sample_sigma2)\n\nfrom scipy.linalg import inv\n\nV_beta = inv(X.T @ X)\nV_beta\n\nsample_beta = [multivariate_normal(beta_hat, V_beta * sigma2).rvs()\n for sigma2 in sample_sigma2]\n\nnp.mean(sample_beta, axis=0)\n\nbeta_hat\n\nnp.std(sample_beta, axis=0)\n\nresults.bse\n\nsample_beta0, sample_beta1 = np.transpose(sample_beta)\n\nCdf.from_seq(sample_beta0).plot(label=r'$\\beta_0$')\nCdf.from_seq(sample_beta1).plot(label=r'$\\beta_1$')\n\ndecorate(title='Posterior distributions of the parameters')",
"Posterior using multivariate Student t",
"x = beta_hat\nmean = beta_hat\ndf = (n - k)\nshape = (V_beta * s2)\nmultistudent_pdf(x, mean, shape, df)\n\nlow, high = sample_beta0.min(), sample_beta0.max()\nlow, high\n\nbeta0s = np.linspace(0.9*low, 1.1*high, 101)\n\nlow, high = sample_beta1.min(), sample_beta1.max()\n\nbeta1s = np.linspace(0.9*low, 1.1*high, 91)\n\nbeta0_mesh, beta1_mesh = np.meshgrid(beta0s, beta1s)\n\nbeta_mesh = np.dstack(np.meshgrid(beta0s, beta1s))\nbeta_mesh.shape\n\nps = multistudent_pdf(beta_mesh, mean, shape, df)\nps.shape\n\njoint = pd.DataFrame(ps, columns=beta0s, index=beta1s)\n\nfrom utils import normalize\n\nnormalize(joint)\n\nfrom utils import plot_contour\n\nplot_contour(joint)\ndecorate(xlabel=r'$\\beta_0$',\n ylabel=r'$\\beta_1$')\n\nmarginal_beta0_student = marginal(joint, 0)\nmarginal_beta1_student = marginal(joint, 1)\n\nfrom utils import marginal\n\nposterior_beta0_grid.make_cdf().plot(color='gray', label=r'grid $\\beta_0$')\nposterior_beta1_grid.make_cdf().plot(color='gray', label=r'grid $\\beta_1$')\n\nmarginal_beta0_student.make_cdf().plot(label=r'student $\\beta_0$', color='gray')\nmarginal_beta1_student.make_cdf().plot(label=r'student $\\beta_0$', color='gray')\n\nCdf.from_seq(sample_beta0).plot(label=r'sample $\\beta_0$')\nCdf.from_seq(sample_beta1).plot(label=r'sample $\\beta_1$')\n\ndecorate()",
"Sampling the predictive distribution",
"t = [X @ beta + norm(0, sigma).rvs(n)\n for beta, sigma in zip(sample_beta, sample_sigma)]\npredictions = np.array(t)\npredictions.shape\n\nlow, median, high = np.percentile(predictions, [5, 50, 95], axis=0)\n\nplt.plot(xs, ys, 'o')\nplt.plot(xs, median)\nplt.fill_between(xs, low, high, color='C1', alpha=0.3)",
"Modeling the predictive distribution",
"xnew = [1, 2, 3]\nXnew = sm.add_constant(xnew)\nXnew\n\nt = [Xnew @ beta + norm(0, sigma).rvs(len(xnew))\n for beta, sigma in zip(sample_beta, sample_sigma)]\npredictions = np.array(t)\npredictions.shape\n\nx0, x1, x2 = predictions.T\n\nCdf.from_seq(x0).plot()\nCdf.from_seq(x1).plot()\nCdf.from_seq(x2).plot()\n\nmu_new = Xnew @ beta_hat\nmu_new\n\ncov_new = s2 * (np.eye(len(xnew)) + Xnew @ V_beta @ Xnew.T)\ncov_new\n\nx = mu_new\nmean = mu_new\ndf = (n - k)\nshape = cov_new\nmultistudent_pdf(x, mean, shape, df)\n\ny1s = np.linspace(0, 20, 51)\ny0s = np.linspace(0, 20, 61)\ny2s = np.linspace(0, 20, 71)\n\nmesh = np.stack(np.meshgrid(y0s, y1s, y2s), axis=-1)\nmesh.shape\n\nps = multistudent_pdf(mesh, mean, shape, df)\nps.shape\n\nps /= ps.sum()\nps.sum()\n\np1s = ps.sum(axis=1).sum(axis=1)\np1s.shape\n\np0s = ps.sum(axis=0).sum(axis=1)\np0s.shape\n\np2s = ps.sum(axis=0).sum(axis=0)\np2s.shape\n\npmf_y0 = Pmf(p0s, y0s)\npmf_y1 = Pmf(p1s, y1s)\npmf_y2 = Pmf(p2s, y2s)\n\npmf_y0.mean(), pmf_y1.mean(), pmf_y2.mean()\n\npmf_y0.make_cdf().plot(color='gray')\npmf_y1.make_cdf().plot(color='gray')\npmf_y2.make_cdf().plot(color='gray')\n\nCdf.from_seq(x0).plot()\nCdf.from_seq(x1).plot()\nCdf.from_seq(x2).plot()\n\nstop",
"Leftovers\nRelated discussion saved for the future\nhttps://stats.stackexchange.com/questions/78177/posterior-covariance-of-normal-inverse-wishart-not-converging-properly",
"from scipy.stats import chi2\n\n\nclass NormalInverseWishartDistribution(object):\n def __init__(self, mu, lmbda, nu, psi):\n self.mu = mu\n self.lmbda = float(lmbda)\n self.nu = nu\n self.psi = psi\n self.inv_psi = np.linalg.inv(psi)\n\n def sample(self):\n sigma = np.linalg.inv(self.wishartrand())\n return (np.random.multivariate_normal(self.mu, sigma / self.lmbda), sigma)\n\n def wishartrand(self):\n dim = self.inv_psi.shape[0]\n chol = np.linalg.cholesky(self.inv_psi)\n foo = np.zeros((dim,dim))\n\n for i in range(dim):\n for j in range(i+1):\n if i == j:\n foo[i,j] = np.sqrt(chi2.rvs(self.nu-(i+1)+1))\n else:\n foo[i,j] = np.random.normal(0,1)\n return np.dot(chol, np.dot(foo, np.dot(foo.T, chol.T)))\n\n def posterior(self, data):\n n = len(data)\n mean_data = np.mean(data, axis=0)\n sum_squares = np.sum([np.array(np.matrix(x - mean_data).T * np.matrix(x - mean_data)) for x in data], axis=0)\n mu_n = (self.lmbda * self.mu + n * mean_data) / (self.lmbda + n)\n lmbda_n = self.lmbda + n\n nu_n = self.nu + n\n dev = mean_data - self.mu\n psi_n = (self.psi + sum_squares + \n self.lmbda * n / (self.lmbda + n) * np.array(dev.T @ dev))\n return NormalInverseWishartDistribution(mu_n, lmbda_n, nu_n, psi_n)\n\n \n\nx = NormalInverseWishartDistribution(np.array([0,0])-3,1,3,np.eye(2))\nsamples = [x.sample() for _ in range(100)]\ndata = [np.random.multivariate_normal(mu,cov) for mu,cov in samples]\ny = NormalInverseWishartDistribution(np.array([0,0]),1,3,np.eye(2))\nz = y.posterior(data)\n\nprint('mu_n: {0}'.format(z.mu))\n\nprint('psi_n: {0}'.format(z.psi))\n\nfrom scipy.linalg import inv\nfrom scipy.linalg import cholesky\n\ndef wishartrand(nu, Lambda):\n d, _ = Lambda.shape\n chol = cholesky(Lambda)\n foo = np.empty((d, d))\n\n for i in range(d):\n for j in range(i+1):\n if i == j:\n foo[i,j] = np.sqrt(chi2.rvs(nu-(i+1)+1))\n else:\n foo[i,j] = np.random.normal(0, 1)\n \n return np.dot(chol, np.dot(foo, np.dot(foo.T, chol.T)))\n\nsample = [wishartrand(nu_n, Lambda_n) for i in range(1000)]\n\nnp.mean(sample, axis=0)\n\nLambda_n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
daniestevez/jupyter_notebooks | dslwp/DSLWP-B deorbit.ipynb | gpl-3.0 | [
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\n\nfrom astropy.time import Time\n\nimport subprocess\n\n# Larger figure size\nfig_size = [10, 6]\nplt.rcParams['figure.figsize'] = fig_size\n\nmjd_unixtimestamp_offset = 10587.5\nseconds_in_day = 3600 * 24\n\ndef mjd2unixtimestamp(m):\n return (m - mjd_unixtimestamp_offset) * seconds_in_day\n\ndef unixtimestamp2mjd(u):\n return u / seconds_in_day + mjd_unixtimestamp_offset\n\ndef load_orbit_file(path):\n ncols = 8\n data = np.fromfile(path, sep=' ')\n return data.reshape((data.size // ncols, ncols))",
"Keys for each of the columns in the orbit (Keplerian state) report.",
"utc = 0\nsma = 1\necc = 2\ninc = 3\nraan = 4\naop = 5\nma = 6\nta = 7",
"Plot the orbital parameters which are vary significantly between different tracking files.",
"#fig1 = plt.figure(figsize = [15,8], facecolor='w')\nfig_peri = plt.figure(figsize = [15,8], facecolor='w')\nfig_peri_deorbit = plt.figure(figsize = [15,8], facecolor='w')\nfig_apo = plt.figure(figsize = [15,8], facecolor='w')\nfig3 = plt.figure(figsize = [15,8], facecolor='w')\nfig4 = plt.figure(figsize = [15,8], facecolor='w')\nfig4_rap = plt.figure(figsize = [15,8], facecolor='w')\nfig5 = plt.figure(figsize = [15,8], facecolor='w')\nfig6 = plt.figure(figsize = [15,8], facecolor='w')\n#sub1 = fig1.add_subplot(111)\nsub_peri = fig_peri.add_subplot(111)\nsub_peri_deorbit = fig_peri_deorbit.add_subplot(111)\nsub_apo = fig_apo.add_subplot(111)\nsub3 = fig3.add_subplot(111)\nsub4 = fig4.add_subplot(111)\nsub4_rap = fig4_rap.add_subplot(111)\nsub5 = fig5.add_subplot(111)\nsub6 = fig6.add_subplot(111)\n\nsubs = [sub_peri, sub_peri_deorbit, sub_apo, sub3, sub4, sub4_rap, sub5, sub6]\n\nfor file in ['orbit_deorbit.txt', 'orbit_deorbit2.txt', 'orbit_deorbit3.txt']:\n orbit = load_orbit_file(file)\n\n t = Time(mjd2unixtimestamp(orbit[:,utc]), format='unix')\n\n #sub1.plot(t.datetime, orbit[:,sma])\n sub_peri.plot(t.datetime, orbit[:,sma]*(1-orbit[:,ecc]))\n \n deorbit_sel = (mjd2unixtimestamp(orbit[:,utc]) >= 1564012800) & (mjd2unixtimestamp(orbit[:,utc]) <= 1564963200)\n if np.any(deorbit_sel):\n sub_peri_deorbit.plot(t[deorbit_sel].datetime, orbit[deorbit_sel,sma]*(1-orbit[deorbit_sel,ecc]))\n \n sub_apo.plot(t.datetime, orbit[:,sma]*(1+orbit[:,ecc]))\n sub3.plot(t.datetime, orbit[:,ecc])\n sub4.plot(t.datetime, orbit[:,aop])\n sub4_rap.plot(t.datetime, np.fmod(orbit[:,aop] + orbit[:,raan],360))\n sub5.plot(t.datetime, orbit[:,inc])\n sub6.plot(t.datetime, orbit[:,raan])\n\nsub_peri.axhline(y = 1737, color='red')\nsub_peri_deorbit.axhline(y = 1737, color='red')\n\nmonth_locator = mdates.MonthLocator()\nday_locator = mdates.DayLocator()\n\nfor sub in subs:\n sub.set_xlabel('Time')\n sub.xaxis.set_major_locator(month_locator)\n sub.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m'))\n sub.xaxis.set_tick_params(rotation=45)\nsub_peri_deorbit.xaxis.set_major_locator(day_locator)\nsub_peri_deorbit.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d'))\n\n#sub1.set_ylabel('SMA (km)')\nsub_peri.set_ylabel('Periapsis radius (km)')\nsub_peri_deorbit.set_ylabel('Periapsis radius (km)')\nsub_apo.set_ylabel('Apoapsis radius (km)')\nsub3.set_ylabel('ECC')\nsub4.set_ylabel('AOP (deg)')\nsub4_rap.set_ylabel('RAOP (deg)')\nsub5.set_ylabel('INC (deg)')\nsub6.set_ylabel('RAAN (deg)')\n\n#sub1.set_title('Semi-major axis')\nsub_peri.set_title('Periapsis radius')\nsub_peri_deorbit.set_title('Periapsis radius')\nsub_apo.set_title('Apoapsis radius')\nsub3.set_title('Eccentricity')\nsub4.set_title('Argument of periapsis')\nsub4_rap.set_title('Right ascension of periapsis')\nsub5.set_title('Inclination')\nsub6.set_title('Right ascension of ascending node')\n\nfor sub in subs:\n sub.legend(['Before periapsis lowering', 'After periapsis lowering', 'Latest ephemeris'])\n \nsub_peri.legend(['Before periapsis lowering', 'After periapsis lowering', 'Latest ephemeris', 'Lunar radius']);\nsub_peri_deorbit.legend(['Before periapsis lowering', 'After periapsis lowering', 'Latest ephemeris', 'Lunar radius']);"
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] |
georgetown-analytics/yelp-classification | machine_learning/User_Sample_test_draft_ed.ipynb | mit | [
"import json\nimport pandas as pd\nimport re\nimport random\nfrom scipy import sparse\nimport numpy as np\nfrom pymongo import MongoClient\nfrom nltk.corpus import stopwords\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom sklearn import svm\nfrom sklearn.decomposition import LatentDirichletAllocation\nfrom sklearn.base import BaseEstimator, TransformerMixin\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.pipeline import Pipeline, FeatureUnion\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn import svm\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.metrics import precision_score\nfrom sklearn.metrics import log_loss\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\nfrom gensim import corpora, models, similarities, matutils\nimport tqdm\nimport sys\nsys.path.append('/Users/ed/yelp-classification/machine_learning')\nimport yelp_ml as yml\n#reload(yml)\n\n\nlh_neg = open('../input/negative-words.txt', 'r', encoding = \"ISO-8859-1\").read()\nlh_neg = lh_neg.split('\\n')\nlh_pos = open('../input/positive-words.txt', 'r', encoding = \"ISO-8859-1\").read()\nlh_pos = lh_pos.split('\\n')\nusers = json.load(open('../input/many_reviews_dictionary.json'))\n\nword_list = list(set(lh_pos + lh_neg))\n\n\n#Fix users JSON\nusers_dict = {}\nuser_ids = []\n\nusers_dict = {}\nuser_ids = []\n\nfor list in users['reviews']:\n users_dict[list[0]['user_id']]= list\n\n\n\n\nfor list_reviews in users['reviews']:\n user_ids.append(list_reviews[0]['user_id'])\n \n#We have 228 users, creat a new dictionary where the user_ids are the keys and the entries are a list of reviews\n\n \nwith open('cleaned_large_user_dictionary.json', 'w') as outfile:\n json.dump(users_dict, outfile)\n",
"Try running a few tests on a subset of users, the keys are our unique user IDs. We proceed as follows for each user ID:\n1.Create a user dataframe with the following columns:•(review_text, review rating, business_id)\n2.Create a list of unique business IDs for that user\n3.Connect to the MongoDB server and pull all of the reviews for the restaurants that the user has reviewed\n4.Create a restaurant dataframe with the following columns:•(review_text, biz rating, business_id)\n5.Do a 80/20 training/test split, randomizing over the set of user' reviewed restaurants\n6.Train the LSI model on the set of training reviews, get the number of topics used in fitting\n7.Set up the FeatureUnion with the desired features, then fit according to the train reviews and transform the train reviews \n8.",
"#####Test Machine Learning Algorithms\nip = 'Insert IP here'\nconn = MongoClient(ip, 27017)\nconn.database_names()\ndb = conn.get_database('cleaned_data')\nreviews = db.get_collection('restaurant_reviews')\n",
"1.Create a user dataframe with the following columns:•(review_text, review rating, business_id)",
"useridlist =[]\n\nfor user in users_dict.keys():\n useridlist.append(user)\nprint(useridlist[1])\n\ndef make_user_df(user_specific_reviews):\n #Input:\n #user_specific_reviews: A list of reviews for a specific user\n #Output: A dataframe with the columns (user_reviews, user_ratings, biz_ids)\n user_reviews = []\n user_ratings = []\n business_ids = []\n\n for review in user_specific_reviews:\n user_reviews.append(review['text'])\n user_ratings.append(review['stars'])\n business_ids.append(review['business_id'])\n\n ###WE SHOULD MAKE THE OUR OWN PUNCTUATION RULES\n #https://www.tutorialspoint.com/python/string_translate.htm\n #I'm gonna have to go and figure out what this does -ed\n #user_reviews = [review.encode('utf-8').translate(None, string.punctuation) for review in user_reviews]\n\n user_df = pd.DataFrame({'review_text': user_reviews, 'rating': user_ratings, 'biz_id': business_ids})\n return user_df\n\n\n#test to make users_dict,make_user_df works \nuser_specific_reviews = users_dict[useridlist[0]]\nx= make_user_df(user_specific_reviews)\nx.head()\n",
"2.Create a list of unique business IDs for that user",
"business_ids = list(set(user['biz_id']))",
"3.Connect to the MongoDB server and pull all of the reviews for the restaurants that the user has reviewed",
"restreview = {}\n\nfor i in range(0, len(business_ids)):\n rlist = []\n for obj in reviews.find({'business_id':business_ids[i]}):\n rlist.append(obj)\n restreview[business_ids[i]] = rlist\n \n\n",
"4.Create a restaurant dataframe with the following columns:•(review_text, biz rating, business_id)",
"restaurant_df = yml.make_biz_df(user, restreview)",
"5.Do a 80/20 training/test split, randomizing over the set of user' reviewed restaurants",
" #Create a training and test sample from the user reviewed restaurants\nsplit_samp = .30\nrandom_int = random.randint(1, len(business_ids)-1)\nlen_random = int(len(business_ids) * split_samp)\ntest_set = business_ids[random_int:random_int+len_random]\ntraining_set = business_ids[0:random_int]+business_ids[random_int+len_random:len(business_ids)]\ntrain_reviews, train_ratings = [], []\n\n \n\n#Create a list of training reviews and training ratings\nfor rest_id in training_set:\n train_reviews.extend(list(user_df[user_df['biz_id'] == rest_id]['review_text']))\n train_ratings.extend(list(user_df[user_df['biz_id'] == rest_id]['rating']))\n\n \n\n\n#Transform the star labels into a binary class problem, 0 if rating is < 4 else 1\ntrain_labels = [1 if x >=4 else 0 for x in train_ratings]\n ",
"6.Train the LSI model on the set of training reviews, get the number of topics used in fitting",
"#this is just for my understand of how the model is working under the hood\ndef fit_lsi(train_reviews):\n #Input: train_reviews is a list of reviews that will be used to train the LSI feature transformer\n #Output: A trained LSI model and the transformed training reviews\n\n texts = [[word for word in review.lower().split() if (word not in stop_words)]\n for review in train_reviews]\n \n dictionary = corpora.Dictionary(texts)\n\n corpus = [dictionary.doc2bow(text) for text in texts]\n\n numpy_matrix = matutils.corpus2dense(corpus, num_terms=10000)\n singular_values = np.linalg.svd(numpy_matrix, full_matrices=False, compute_uv=False)\n mean_sv = sum(list(singular_values))/len(singular_values)\n topics = int(mean_sv)\n\n tfidf = models.TfidfModel(corpus)\n corpus_tfidf = tfidf[corpus]\n\n lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=topics)\n\n return lsi, topics, dictionary\n\n#Fit LSI model and return number of LSI topics\nlsi, topics, dictionary = yml.fit_lsi(train_reviews)\n \n ",
"7.Set up the FeatureUnion with the desired features, then fit according to the train reviews and transform the train reviews",
"#Make a FeatureUnion object with the desired features then fit to train reviews\ncomb_features = yml.make_featureunion()\ncomb_features.fit(train_reviews)\n \ntrain_features = comb_features.transform(train_reviews)\ntrain_lsi = yml.get_lsi_features(train_reviews, lsi, topics, dictionary)\ntrain_features = sparse.hstack((train_features, train_lsi))\ntrain_features = train_features.todense()\n \n \n\n#fit each model in turn \nmodel_runs = [(True, False, False, False, False), (False, True, False, False, False), \n (False, False, True, False, False), (False, False, False, True, False),\n (False, False, False, False, True)]\n\ntest_results = {}\n\nfor i in tqdm.tqdm(range(0, len(model_runs))):\n clf = yml.fit_model(train_features, train_labels, svm_clf = model_runs[i][0], \n RandomForest = model_runs[i][1], nb = model_runs[i][2])\n threshold = 0.7\n error = yml.test_user_set(test_set, clf, restaurant_df, user_df, comb_features, threshold, lsi, topics, dictionary)\n test_results[clf] = error\n \n\n\n#Get top predictions\n\nfor key in test_results.keys():\n results = test_results[test_results.keys()[0]]\n log_loss = yml.get_log_loss()\n print log_loss\n \n"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mauriciogtec/PropedeuticoDataScience2017 | Alumnos/Victor Quintero/Tarea 2.ipynb | mit | [
"Parte I\n<li>¿Por qué una matriz equivale a una transformación lineal entre espacios vectoriales?\n\nR= Por que una matriz A al multiplicar a un vector X lo transforma en el vector b (Ax = b). Convierte vectores en vectores.\n\n\n <li>¿Cuál es el efecto de transformación lineal de una matriz diagonal y el de una matriz ortogonal?</li>\n\nR= El efecto de transformación lineal de una matriz diagonal equivale a multiplicar la matriz X por un escar. Las matrices ortonormales preservan la norma y el volumen de los vectores.\n\n\n<li>¿Qué es la descomposición en valores singulares de una matriz?</li>\n\nR= Es representar una matriz como producto de tres matices, las cuales se pueden interpretar como transformaciones geométricas: una rotción, un escalamiento o redimensión y otra rotación. En otras palabras, nos dice que toda transformación lineal es una rotación, redimensión de ejes canónicos y otra rotación.\n\n\n<li>¿Qué es diagonalizar una matriz y que representan los eigenvectores?</li>\n\n R= Diagonalizar una matriz es representarla como una multiplición de 3 matirces ($W, D, W^{t}$) donde W es ortogonal y D diagonal. Esto nos sirve para encontrar la base de eigenvectores. Los eigenvectores representan dirección (ejes) dentro de una transformación lineal, la cual es un reescalamiento o rotación. Por lo tanto al no repsentar un cambio de sentido, representan un reescalamiento.\n\n\n<li>¿Intuitivamente qué son los eigenvectores?</li>\n\n R = Son valores que reescalan dentro de una transformación lineal. Un eigenvector de valor de valor 1, por ejemplo, va a mantener el tamaño en la transformación. \n\n<li>¿Cómo interpretas la descomposición en valores singulares como una composición de tres tipos de transformaciones\n lineales simples?</li>\n\nR = Toda transformación lineal es una rotación, redimensión de ejes canónicos y otra rotación. Por lo que la primera matriz se encarga de hacer una rotación, la segunda matriz (diagonal) contiene a los eigenvectores por lo que se encarga de hacer una redimensión de los ejes canónicos, y la tercera matriz se encarda de dar una última rotación.\n\n\n<li>¿Qué relación hay entre la descomposición en valores singulares y la diagonalización?</li>\n\n R= que ambos sirven para representar una matriz con el producto de 3 matrices. Donde la segunda matriz es diagonal y contiene eigenvectores.\n\n<li>¿Cómo se usa la descomposición en valores singulares para dar un aproximación de rango menor a una matriz?</li>\n\n R= Al hacer la descomposición, tenemos la matriz diagonal con los eigenvectores acomodados en columnas de mayor a menor importancia, por lo que basta tomar solo un número menor de esas columnas y renglones de las otras matrices para tener una aproximación a la matriz original. \n\n<li>Describe el método de minimización por descenso gradiente</li>\n\nR= En este método sirve para encontrar el valor mínimo de un función (un valor x que minimice la función F(x)). El método lo que nos dice es que tomemos un punto cualquiera $x_{0}$ de la función, luego para iniciar la busqueda de un valor que minimice la función ($x_{1}$) vamos a restarle a $x_{0}$ alpha veces el gradiente de F(x) (recordemos que el gradiente apunta al máximo ascenso por lo que el gradiente negativo apunta al máximo descenso), alpha nos indica la magnitud del siguiente paso, por lo que si es muy chica el proceso se puede volver muy tardado, pero si es muy grande el algoritmo diverge. 
Conforme nos vamos acercando al valor mínimo, el gradiente se va volviendo más pequeño y nuestros valores $ x_{t}$ y $x_{t+1}$ se van acercando más y más, por lo que el proceso termina cuando $x_{t} = x_{t+1}$ o la diferencia entre ambos es un valor muy pequeño previamente fijado. La formula de este proceso es: $x_{t+1}=x_{t} - \\alpha \\nabla F(x_{t})$\n\n <li>Menciona 4 ejemplo de problemas de optimización (dos con restricciones y dos sin restricciones) que te parecan interesantes como Científico de Datos</li>\n\n1) Asignción de tripulación en la industria aérea (Crew Scheduling Problem).- El objetivo es asignar la tripulación a todos los vuelos sin incurrir en diferentes restricciones que se tienen a nivel persona (máximo en horas de trabajo, número de días fuera de base, etc.) al menor costo posible. Por lo regular este costo es el primero o segundo mayor para una aerolínea.\n\n2) En el sector agricultor, maximizar las ganancias determinando cuanto sembrar de cada cultivo para satisfacer cierto pronóstico de demanda.\n\n3) Determinar cuanto dinero poner en cada cajero de cierta ciudad y cada cuanto resurtirlo para dar un nivel correcto de servicio sin tener que tener parado mucho dinero.\n\n4) En la industria de acero, reducir el desperdicio generado por cortar placas grandes en piezas más pequeñas, determinando patrones de corte.\n\n\n# Parte II\n\n# Ejercicio 1: Script para aproximar una imagen\n\nVamos a aproximar una imagen blanco y negro utilizando la descomposición SVD.",
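"To make the gradient descent update described in Part I concrete, here is a minimal sketch of the iteration $x_{t+1}=x_{t} - \\alpha \\nabla F(x_{t})$. The example function F, the step size alpha and the stopping tolerance below are chosen purely for illustration.",
"import numpy as np\n\ndef gradient_descent(grad_F, x0, alpha=0.1, tol=1e-8, max_iter=10000):\n    #Iterate x_{t+1} = x_t - alpha * grad_F(x_t) until the step is smaller than tol\n    x = np.asarray(x0, dtype=float)\n    for _ in range(max_iter):\n        x_new = x - alpha * grad_F(x)\n        if np.linalg.norm(x_new - x) < tol:\n            return x_new\n        x = x_new\n    return x\n\ndef grad_F(v):\n    #Gradient of the illustrative function F(x, y) = (x - 3)**2 + 2*(y + 1)**2\n    return np.array([2*(v[0] - 3), 4*(v[1] + 1)])\n\nprint(gradient_descent(grad_F, x0=[0.0, 0.0]))  #approaches the minimizer [3, -1]",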
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom PIL import Image\n\n#Pedir el path del archivo\nIM=input(\"Introducir el path del archivo: \")\n\n#Pedir el grado de aproximación k\nk=input(\"Grado de aproximación k: \")\nk=int(k)\n\n#Ejemplo de la imagen que usé\n#IM = \"ImagenTarea2.png\"\n\nimg = Image.open(IM) \nimgmat = np.array(list(img.getdata(band=0)), float) #Hago un array con la informacion de los pixeles\nimgmat.shape = (img.size[1], img.size[0]) #Redimenciono el array en base al numero de pixeles\nimgmat = np.matrix(imgmat) #Convierto el array en matriz\nplt.imshow(imgmat, cmap='gray'); #Para visualizar la imagen usando matplotlib\ntitle = \"Imagen original:\"\nplt.title(title)\nplt.show()\n\nU, S, V = np.linalg.svd(imgmat) #Descomposicion svd\n\nmatreconstruida = np.matrix(U[:, :k]) * np.diag(S[:k]) * np.matrix(V[:k, :])\nplt.imshow(matreconstruida, cmap='gray')\ntitle = \"Aproximación de grado k = %s\" % k\nplt.title(title)\nplt.show()",
"Ahora vamos a elegir diferentes grados de aproximación a la imagen original usando un ciclo \"For\":",
"for i in range(5, 85, 15):\n matreconstruida = np.matrix(U[:, :i]) * np.diag(S[:i]) * np.matrix(V[:i, :])\n plt.imshow(matreconstruida, cmap='gray')\n title = \"Aproximación de grado k = %s\" % i\n plt.title(title)\n plt.show()",
"¿Qué tiene que ver este proyecto con compresión de imágenes?\nR= Al descomponer la imagen nos estamos quedando únicamente con información relevante de la misma, por lo que podemos reconstruirla posteriormente utilizando únicamente un porcentaje de los vectores de U, Sigma y V, dependiendo de la fidelidad de la imagen que queramos. Como se puede observar en el ejercicio, con un porcentaje bajo de vectores podemos reconstruir la imagen sin alterar mucho su fidelidad, por lo que en lugar de guardar toda la matriz, basta con guardar ese porcentaje de vectores y así ahorrar memoria.\nEjercicio 2: Cálculo de pseudoinversa y resoluver sistemas de ecuaciones\nProgramar una función que dada cualquier matriz devuelva la pseudoinversa usando la descomposición SVD. Hacer otra función que resuelva un sistema de ecuaciones de la forma Ax=b usando la pseudoinversa.",
"from copy import copy, deepcopy\n\ndef pseudoinversa(A):\n U, S, V = np.linalg.svd(A)\n \n m, n = A.shape\n\n D = np.empty([m,n])\n \n D = D * 0\n \n for k in range (n):\n D[k,k] = 1\n \n S = D * S # Vuelvo a S una matriz diagonal mXn\n \n pseudo = deepcopy(S)\n \n for i in range (n): #Calculo pseudo inversa de sigma\n if pseudo[i,i] != 0:\n pseudo[i,i] = 1/pseudo[i,i]\n \n pseudo = pseudo.transpose()\n VT = V.transpose()\n UT = U.transpose()\n \n w = np.dot(VT,pseudo)\n pseudo = np.dot(w,UT)\n \n return pseudo\n \n \ndef resuelve(A,b):\n y= pseudoinversa(A)\n x = np.dot(y,b)\n return x",
"Ejemplo para ver que la función resuelve de manera correcta el sistema de ecuaciones:",
"A = np.array([[2, 1, 3], [4, -1, 3], [-2, 5, 5]])\nb = np.array([[17],[31],[-5]])\n\nresuelve(A,b)",
"Jugar con la función donde b puede tomar distintos valores y A=[[1,1],[0,0]]:",
"A = np.array([[1,1],[0,0]])\nb= np.array([[5],[0]])\n\nresuelve(A,b)",
"a) Si b esta en la imagen de A (La imagen es [x,0]) devuelve la solución al sistema de manera correcta. Si b no esta en la imagen (ej. b= [1,1]) devuelve la solución al sistema considerando la imagen, que es la solución más cercana, en el ejemplo b=[1,1] devuelve la solución al sistema considerando b=[1,0].\nb) ¿La solución resultante es única? No, ya que para diferentes valore de b, existe el mismo valor de x. Esto sucede porque la matriz es singular.\nc) Cambiar a: A=[[1,1],[0,1e-32]]. ¿La solución es única? Sí, para cada diferente valor de b1 y b2, devuelve un valor único de x1 y x2. ¿Cambia el valor devuelto de x en cada posible valor de b del punto anterior? sí, debido a que esta matriz si es invertible con el metodo de la pseudoinversa, aunque prácticamente sea la misma matriz que en el punto anterior.",
"A = np.array([[1,1],[0,1e-32]])\nb= np.array([[5],[0]])\n\nresuelve(A,b)",
"Ejercicio 3: Ajuste de mínimos cuadrados",
"import pandas as pd\n\nz = pd.read_csv(\"https://raw.githubusercontent.com/mauriciogtec/PropedeuticoDataScience2017/master/Tarea/study_vs_sat.csv\",index_col = False)\n\n\nm, n = z.shape\n\nSX= z.iloc[0][0]\nSY = z.iloc[0][1]\nSXX = z.iloc[0][0] **2\nSYY = z.iloc[0][1] **2\nSXY = z.iloc[0][0] * z.iloc[0][1]\n\n\nfor i in range (1,m):\n SX += z.iloc[i][0]\n SY += z.iloc[i][1]\n SXX += z.iloc[i][0] **2\n SYY += z.iloc[i][1] **2\n SXY += z.iloc[i][0] * z.iloc[i][1]\n\nBeta = (m*SXY - SX*SY) / (m*SXX- SX**2)\nAlpha = (1/m)*SY - Beta*(1/m)*SX\n\nfuncion= \"Sat_score ~ \" + str(Alpha) + \" + \" + str(Beta) + \"Study_hours\"\nprint(z,\"\\n \",\"\\n \",funcion) ",
"<li> ¿Cuál es el gradiente de la función que se quiere optimizar? R= El Vector [1, Study_hours]\n\nProgramar una función que reciba los valores alpha, beta y el vector Study_hours y devuelva un vector array de numpy de predicciones alpha + beta * Study_hours_i, con un vaor por cada individuo.",
"def sat_score(Alpha,Beta,Study_hours):\n m, = Study_hours.shape\n \n Satscore= [0]\n for i in range (m-1):\n Satscore += [0]\n Satscore = np.array([Satscore])\n Satscore= Satscore.transpose()\n \n for j in range (m):\n Satscore[j,0]= Alpha + Beta * Study_hours[j]\n \n return Satscore\n \n\nSH= z.iloc[:,0]\nsat_s = sat_score(353.164879499,25.3264677779,SH)\n\nplt.scatter(SH, sat_s)\nplt.title('Scatter: Study hours vs Sat Score')\nplt.xlabel('Study_hours')\nplt.ylabel('Sat_score')\nplt.show()\n\nsat_s",
"<li><strong>(Avanzado)</strong> Usen la libreria <code>matplotlib</code> par visualizar las predicciones con alpha y beta solución contra los valores reales de sat_score.",
"SS= z.iloc[:,1]\ng1 = (SH,SS)\ng2 = (SH,sat_s)\n\n \ndata = (g1, g2)\ncolors = (\"green\", \"red\")\ngroups = (\"Real\", \"Forecast: Alpha + Beta * Study_Hours\") \n\nfig, ax = plt.subplots()\nfor data, color, group in zip(data, colors, groups):\n x, y = data\n ax.scatter(x, y, alpha=0.8, c=color, edgecolors='none', s=30, label=group)\n \nplt.title('Real vs Forecast')\nplt.legend(loc=0)\nplt.show()\n\n",
"<li> Definan un numpy array X de dos columnas, la primera con unos en todas sus entradas y la segunda y la segunda con la variable Study_hours. Observen que <code>X*[alpha,beta]</code> nos devuelve <code>alpha + beta*study_hours_i</code> en cada entrada y que entonces el problema se vuelve <code>sat_score ~ X*[alpha,beta]</code>",
"x=[1.]\ny= [z.iloc[0,0]]\nfor i in range (19):\n x += [1]\n y += [z.iloc[i+1,0]]\n\nX = np.array([x,y])\nX = X.transpose()\n\nalpha = 353.164879499\nbeta = 25.3264677779\nab=np.array([[alpha],[beta]])\nR = np.dot(X,ab)\nR\n\n ",
"<li>Calculen la pseudoinversa X^+ de X y computen <code>(X^+)*sat_score</code> para obtener alpha y beta soluciones.</li>",
"Xpseudo= pseudoinversa(X)\nSscore= z.iloc[:,1]\n\nab=np.dot(Xpseudo,Sscore)\nab",
"<li>Comparen la solución anterior con la de la fórmula directa de solución exacta <code>(alpha,beta)=(X^t*X)^(-1)*X^t*study_hours</code>.</li>",
"SH= z.iloc[:,0]\nSscore= z.iloc[:,1]\nXT = X.transpose()\nXT2 = np.dot(XT,X)\nXTI = np.linalg.inv(XT2)\n\nw= np.dot(XTI,XT)\nab = np.dot(w,Sscore)\nab\n",
"La solución es la misma con ambos métodos"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub | notebooks/cmcc/cmip6/models/sandbox-2/atmos.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: CMCC\nSource ID: SANDBOX-2\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:50\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-2', 'atmos')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Overview\n2. Key Properties --> Resolution\n3. Key Properties --> Timestepping\n4. Key Properties --> Orography\n5. Grid --> Discretisation\n6. Grid --> Discretisation --> Horizontal\n7. Grid --> Discretisation --> Vertical\n8. Dynamical Core\n9. Dynamical Core --> Top Boundary\n10. Dynamical Core --> Lateral Boundary\n11. Dynamical Core --> Diffusion Horizontal\n12. Dynamical Core --> Advection Tracers\n13. Dynamical Core --> Advection Momentum\n14. Radiation\n15. Radiation --> Shortwave Radiation\n16. Radiation --> Shortwave GHG\n17. Radiation --> Shortwave Cloud Ice\n18. Radiation --> Shortwave Cloud Liquid\n19. Radiation --> Shortwave Cloud Inhomogeneity\n20. Radiation --> Shortwave Aerosols\n21. Radiation --> Shortwave Gases\n22. Radiation --> Longwave Radiation\n23. Radiation --> Longwave GHG\n24. Radiation --> Longwave Cloud Ice\n25. Radiation --> Longwave Cloud Liquid\n26. Radiation --> Longwave Cloud Inhomogeneity\n27. Radiation --> Longwave Aerosols\n28. Radiation --> Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --> Boundary Layer Turbulence\n31. Turbulence Convection --> Deep Convection\n32. Turbulence Convection --> Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --> Large Scale Precipitation\n35. Microphysics Precipitation --> Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --> Optical Cloud Properties\n38. Cloud Scheme --> Sub Grid Scale Water Distribution\n39. Cloud Scheme --> Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --> Isscp Attributes\n42. Observation Simulation --> Cosp Attributes\n43. Observation Simulation --> Radar Inputs\n44. Observation Simulation --> Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --> Orographic Gravity Waves\n47. Gravity Waves --> Non Orographic Gravity Waves\n48. Solar\n49. Solar --> Solar Pathways\n50. Solar --> Solar Constant\n51. Solar --> Orbital Parameters\n52. Solar --> Insolation Ozone\n53. Volcanos\n54. Volcanos --> Volcanoes Treatment \n1. Key Properties --> Overview\nTop level key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of atmospheric model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.4. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.5. High Top\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the orography.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n",
"4.2. Changes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n",
"5. Grid --> Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Discretisation --> Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n",
"6.3. Scheme Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation function order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.4. Horizontal Pole\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal discretisation pole singularity treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7. Grid --> Discretisation --> Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType of vertical coordinate system",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere dynamical core",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the dynamical core of the model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Timestepping Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTimestepping framework type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of the model prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Dynamical Core --> Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTop boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Top Heat\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary heat treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Top Wind\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary wind treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Dynamical Core --> Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nType of lateral boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Dynamical Core --> Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nHorizontal diffusion scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal diffusion scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Dynamical Core --> Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nTracer advection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.3. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.4. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracer advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Dynamical Core --> Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMomentum advection schemes name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Scheme Staggering Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Radiation --> Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nShortwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nShortwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Radiation --> Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.3. Other Flourinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Radiation --> Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18. Radiation --> Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Radiation --> Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Radiation --> Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21. Radiation --> Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Radiation --> Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLongwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLongwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23. Radiation --> Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Other Flourinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Radiation --> Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Physical Reprenstation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25. Radiation --> Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Radiation --> Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27. Radiation --> Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28. Radiation --> Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere convection and turbulence",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. Turbulence Convection --> Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nBoundary layer turbulence scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBoundary layer turbulence scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.3. Closure Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nBoundary layer turbulence scheme closure order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Counter Gradient\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"31. Turbulence Convection --> Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDeep convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Turbulence Convection --> Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nShallow convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nshallow convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nshallow convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n",
"32.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Microphysics Precipitation --> Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.2. Hydrometeors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35. Microphysics Precipitation --> Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLarge scale cloud microphysics processes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the atmosphere cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.3. Atmos Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n",
"36.4. Uses Separate Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.6. Prognostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.7. Diagnostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.8. Prognostic Variables\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37. Cloud Scheme --> Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37.2. Cloud Inhomogeneity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Cloud Scheme --> Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale water distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"38.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale water distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale water distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"38.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale water distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"39. Cloud Scheme --> Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale ice distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"39.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale ice distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"39.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale ice distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"39.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of observation simulator characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Observation Simulation --> Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator ISSCP top height estimation methodUo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. Top Height Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator ISSCP top height direction",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42. Observation Simulation --> Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator COSP run configuration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42.2. Number Of Grid Points\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of grid points",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.3. Number Of Sub Columns\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of sub-cloumns used to simulate sub-grid variability",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.4. Number Of Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of levels",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43. Observation Simulation --> Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nCloud simulator radar frequency (Hz)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43.2. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator radar type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"43.3. Gas Absorption\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses gas absorption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"43.4. Effective Radius\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses effective radius",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"44. Observation Simulation --> Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator lidar ice type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"44.2. Overlap\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator lidar overlap",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"45.2. Sponge Layer\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.3. Background\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBackground wave distribution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.4. Subgrid Scale Orography\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSubgrid scale orography effects taken into account.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46. Gravity Waves --> Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"46.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47. Gravity Waves --> Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"47.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n",
"47.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of solar insolation of the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"49. Solar --> Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"50. Solar --> Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the solar constant.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"50.2. Fixed Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"50.3. Transient Characteristics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nsolar constant transient characteristics (W m-2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51. Solar --> Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"51.2. Fixed Reference Date\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"51.3. Transient Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescription of transient orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51.4. Computation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used for computing orbital parameters.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"52. Solar --> Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"54. Volcanos --> Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
JakeColtman/BayesianSurvivalAnalysis | Basic Presentation.ipynb | mit | [
"import lifelines\nimport pymc as pm\nfrom pyBMA.CoxPHFitter import CoxPHFitter\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom numpy import log\nfrom datetime import datetime\nimport pandas as pd\n%matplotlib inline ",
"The first step in any data analysis is acquiring and munging the data\nOur starting data set can be found here:\n http://jakecoltman.com in the pyData post\nIt is designed to be roughly similar to the output from DCM's path to conversion\nDownload the file and transform it into something with the columns:\nid,lifetime,age,male,event,search,brand\nwhere lifetime is the total time that we observed someone not convert for and event should be 1 if we see a conversion and 0 if we don't. Note that all values should be converted into ints\nIt is useful to note that end_date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165)",
"####Data munging here\n\n###Parametric Bayes\n#Shout out to Cam Davidson-Pilon\n\n## Example fully worked model using toy data\n## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html\n## Note that we've made some corrections \n\nN = 2500\n\n##Generate some random data \nlifetime = pm.rweibull( 2, 5, size = N )\nbirth = pm.runiform(0, 10, N)\ncensor = ((birth + lifetime) >= 10)\nlifetime_ = lifetime.copy()\nlifetime_[censor] = 10 - birth[censor]\n\n\nalpha = pm.Uniform('alpha', 0, 20)\nbeta = pm.Uniform('beta', 0, 20)\n\[email protected]\ndef survival(value=lifetime_, alpha = alpha, beta = beta ):\n return sum( (1-censor)*(log( alpha/beta) + (alpha-1)*log(value/beta)) - (value/beta)**(alpha))\n\nmcmc = pm.MCMC([alpha, beta, survival ] )\nmcmc.sample(50000, 30000)\n\npm.Matplot.plot(mcmc)\nmcmc.trace(\"alpha\")[:]",
"Problems: \n1 - Try to fit your data from section 1 \n2 - Use the results to plot the distribution of the median\n\nNote that the media of a Weibull distribution is:\n$$β(log 2)^{1/α}$$",
"#### Fit to your data here\n\n#### Plot the distribution of the median",
"Problems:\n4 - Try adjusting the number of samples for burning and thinnning\n5 - Try adjusting the prior and see how it affects the estimate",
"#### Adjust burn and thin, both paramters of the mcmc sample function\n\n#### Narrow and broaden prior",
"Problems:\n7 - Try testing whether the median is greater than a different values",
"#### Hypothesis testing",
"If we want to look at covariates, we need a new approach. \nWe'll use Cox proprtional hazards, a very popular regression model.\nTo fit in python we use the module lifelines:\nhttp://lifelines.readthedocs.io/en/latest/",
"### Fit a cox proprtional hazards model",
"Once we've fit the data, we need to do something useful with it. Try to do the following things:\n1 - Plot the baseline survival function\n\n2 - Predict the functions for a particular set of features\n\n3 - Plot the survival function for two different set of features\n\n4 - For your results in part 3 caculate how much more likely a death event is for one than the other for a given period of time",
"#### Plot baseline hazard function\n\n#### Predict\n\n#### Plot survival functions for different covariates\n\n#### Plot some odds",
"Model selection\nDifficult to do with classic tools (here)\nProblem:\n1 - Calculate the BMA coefficient values\n\n2 - Try running with different priors",
"#### BMA Coefficient values\n\n#### Different priors"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
anthonyng2/FX-Trading-with-Python-and-Oanda | Oanda v20 REST-oandapyV20/06.00 Position Management.ipynb | mit | [
"<!--NAVIGATION-->\n< Trade Management | Contents | Transaction History >\nPosition Management\nOANDA REST-V20 API Wrapper Doc on Position\nOANDA API Getting Started\nOANDA DOC on Position",
"import pandas as pd\nimport oandapyV20\nimport oandapyV20.endpoints.positions as positions\nimport configparser\n\nconfig = configparser.ConfigParser()\nconfig.read('../config/config_v20.ini')\naccountID = config['oanda']['account_id']\naccess_token = config['oanda']['api_key']\n\nclient = oandapyV20.API(access_token=access_token)",
"List all Positions for an Account.",
"r = positions.PositionList(accountID=accountID)\n\nclient.request(r)\n\nprint(r.response)",
"List all open Positions for an Account.",
"r = positions.OpenPositions(accountID=accountID)\n\nclient.request(r)",
"Get the details of a single instrument’s position in an Account",
"instrument = \"AUD_USD\"\n\nr = positions.PositionDetails(accountID=accountID, instrument=instrument)\n\nclient.request(r)",
"Closeout the open Position regarding instrument in an Account.",
"data = {\n \"longUnits\": \"ALL\"\n}\n\nr = positions.PositionClose(accountID=accountID,\n instrument=instrument,\n data=data)\n\nclient.request(r)",
"<!--NAVIGATION-->\n< Trade Management | Contents | Transaction History >"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Parsl/parsl_demos | Bash-Tutorial.ipynb | apache-2.0 | [
"Parsl Bash Tutorial\nThis tutorial will show you how to run Bash scripts as Parsl apps. \nLoad parsl\nImport parsl, and check the module version. This tutorial requires version 0.2.0 or above.",
"# Import Parsl\nimport parsl\nfrom parsl import *\n\nprint(parsl.__version__) # The version should be v0.2.1+",
"Define resources\nTo execute parsl we need to first define a set of resources on which the apps can run. Here we use a pool of threads.",
"workers = ThreadPoolExecutor(max_workers=4)\n# We pass the workers to the DataFlowKernel which will execute our Apps over the workers.\ndfk = DataFlowKernel(executors=[workers])",
"Defining Bash Apps\nTo demonstrate how to run apps written as Bash scripts, we use two mock science applications: simulate.sh and stats.sh. The simulation.sh script serves as a trivial proxy for any more complex scientific simulation application. It generates and prints a set of one or more random integers in the range [0-2^62) as controlled by its command line arguments. The stats.sh script serves as a trivial model of an \"analysis\" program. It reads N files each containing M integers and prints the average of all those numbers to stdout. Like simulate.sh it logs environmental information to stderr.\nThe following cell show how apps can be composed from arbitrary Bash scripts. The simulate signature shows how variables can be passed to the Bash script (e.g., \"sim_steps\") as well as how standard Parsl parameters are managed (e.g., \"stdout\").",
"@App('bash', dfk)\ndef simulate(sim_steps=1, sim_range=100, sim_values=5, outputs=[], stdout=None, stderr=None):\n # The bash app function requires that the bash script is returned from the function as a \n # string. Positional and Keyword args to the fn() are formatted into the cmd_line string\n # All arguments to the app function are made available at the time of string formatting a\n # string assigned to cmd_line.\n \n # Here we compose the command-line call to simulate.sh with keyword arguments to simulate()\n # and redirect stdout to the first file listed in the outputs list.\n return '''echo \"sim_steps: {sim_steps}\\nsim_range: {sim_range}\\nsim_values: {sim_values}\"\n echo \"Starting run at $(date)\"\n $PWD/bin/simulate.sh --timesteps {sim_steps} --range {sim_range} --nvalues {sim_values} > {outputs[0]}\n echo \"Done at $(date)\"\n ls\n '''",
"Running Bash Apps\nNow that we've defined an app, let's run 10 parallel instances of it using a for loop. Each run will write 100 random numbers, each between 0 and 99, to the output file.\nIn order to track files created by Bash apps, a list of data futures (one for each file in the outputs[] list) is made available as an attribute of the AppFuture returned upon calling the decorated app fn. \n<App_Future> = App_Function(... , outputs=['x.txt', 'y.txt'...])\n[<DataFuture> ... ] = <App_Future>.outputs",
"simulated_results = []\n# Launch 10 parallel runs of simulate() and put the futures in a list\nfor sim_index in range(10):\n sim_fut = simulate(sim_steps=1,\n sim_range=100,\n sim_values=100,\n outputs = ['stdout.{0}.txt'.format(sim_index)], \n stderr='stderr.{0}.txt'.format(sim_index)) \n simulated_results.extend([sim_fut])",
"Handling Futures\nThe variable \"simulated_results\" contains a list of AppFutures, each corresponding to a running bash app.\nNow let's print the status of the 10 jobs by checking if the app futures are done.\nNote: you can re-run this step until all the jobs complete (all status are True) or go on, as a later step will block until all the jobs are complete.",
"print ([i.done() for i in simulated_results])",
"Retrieving Results\nEach of the Apps return one DataFuture. Here we put all of these (data futures of file outputs) together into a list (simulation_outputs). This is done by iterating over each of the AppFutures and taking the first and only DataFuture in it's outputs list.",
"# Grab just the data futures for the output files from each simulation\nsimulation_outputs = [i.outputs[0] for i in simulated_results]",
"Defining a Second Bash App\nWe now explore how Parsl can be used to block on results. Let's define another app, analyze(), that calls stats.sh to find the average of the numbers in a set of files.",
"@App('bash', dfk)\ndef analyze(inputs=[], stdout=None, stderr=None):\n # Here we compose the commandline for stats.sh that take a list of filenames as arguments\n # Since we want a space separated list, rather than a python list (e.g: ['x.txt', 'y.txt'])\n # we create a string by joining the filenames of each item in the inputs list and using\n # that string to format the cmd_line explicitly\n input_files = ' '.join([i for i in inputs])\n return '$PWD/bin/stats.sh {0}'.format(input_files)",
"Blocking on Results\nWe call analyze with the list of data futures as inputs. This will block until all the simulate runs have completed and the data futures have 'resolved'. Finally, we print the result when it is ready.",
"results = analyze(inputs=simulation_outputs, \n stdout='analyze.out', \n stderr='analyze.err')\nresults.result()\nwith open('analyze.out', 'r') as f:\n print(f.read())"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
vivekec/datascience | tutorials/python/Ipython files/py basics/OOPS basics.ipynb | gpl-3.0 | [
"Aggregation (HAS-A)\nPassing an object of Class 1 as an argument of class 2 constructer",
"class A():\n def __init__(self, a, b, c):\n self.a = a\n self.b = b\n self.c = c\n\n def addNums():\n self.b + self.c\n\nclass B():\n def __init__(self, d, e, A):\n self.d = d\n self.e = e\n self.A = A\n\n def addAllNums(self):\n x = self.d + self.e + self.A.b + self.A.c\n return x\n\nobjA = A(\"hi\", 2, 6)\nobjB = B(5, 9, objA)\nobjB.addAllNums()",
"Association (USES-A)\nPassing object of class 1 as an argument of class 2 methods",
"class A():\n def __init__(self, a, b, c):\n self.a = a\n self.b = b\n self.c = c\n\n def addNums():\n self.b + self.c\n\nclass B():\n def __init__(self, d, e):\n self.d = d\n self.e = e \n\n def addAllNums(self, arg):\n x = self.d + self.e + arg.b + arg.c\n return x\n\nobjA = A(\"hi\", 2, 6)\nobjB = B(5, 9)\nobjB.addAllNums(objA)",
"Composition (PART-OF)\nObject of class 1 is defined inside the constructor of class 2",
"class A():\n def __init__(self, a, b, c):\n self.a = a\n self.b = b\n self.c = c\n\n def addNums():\n self.b + self.c\n\nclass B():\n def __init__(self, d, e):\n self.d = d\n self.e = e\n self.objA = A(\"hi\", 2, 6)\n\n def addAllNums(self):\n x = self.d + self.e + self.objA.b + self.objA.c\n return x\n\n\nobjB = B(5, 9)\nobjB.addAllNums()",
"Inheritance (IS-A)",
"class A():\n def __init__(self, a, b, c):\n self.a = a\n self.b = b\n self.c = c\n\n def addNums():\n self.b + self.c\n\nclass B(A):\n def __init__(self, a, b, c, d, e):\n# A.__init__(self, a, b, c)\n super().__init__(a, b, c)\n self.d = d\n self.e = e\n \n\n def addAllNums(self):\n x = self.a + self.b + self.c + self.d + self.e\n return x\n\n\nobjB = B(1, 2, 3, 5, 9)\nobjB.addAllNums()",
"Function overriding",
"class A():\n def __init__(self, a, b, c):\n self.a = a\n self.b = b\n self.c = c\n\n def addNums(self):\n return self.b * self.c\n\nclass B(A):\n def __init__(self, a, b, c, d, e):\n super().__init__(a, b, c)\n self.d = d\n self.e = e\n \n def addNums(self):\n return self.d + self.e\n \n def check(self):\n print(\"Class B func:\", self.addNums())\n print(\"Class B func:\", super().addNums())\n\n\nobjB = B(1,2,3,5, 9)\nobjB.check()",
"There is no function overloading\nGives no error but only last defined function is executed",
"class A():\n def f1(self, x):\n return x\n def f1(self, x, y):\n return x, y\n\n\nobjA = A()\nobjA.f1(8,5)\n# objA.f1(8) # Gives error\n\n# How to work with function overloading\nclass A():\n def f1(self,name = None):\n if name is None:\n return 5\n else:\n return name\n\nobjA = A()\nprint(objA.f1())\nobjA.f1(8)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
shareactorIO/pipeline | oreilly.ml/high-performance-tensorflow/notebooks/02_Feed_Queue_HDFS.ipynb | apache-2.0 | [
"Feed Dataset through Queue to Tensorflow from HDFS\nPopulate HDFS with Sample Dataset",
"%%bash\n\nhadoop fs -copyFromLocal /root/datasets/csv/ /csv\n\n%%bash\n\nhadoop fs -ls /csv",
"Open a Terminal through Jupyter Notebook\n(Menu Bar -> Terminal -> New Terminal)\n\nCreate Queue and Feed Tensorflow Graph\nRun the Next Cell to Display the Code",
"%%bash\n\ncat /root/src/main/python/queue/tensorflow_hdfs.py",
"Run the following in the Terminal\nqueue_hdfs"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hvillanua/deep-learning | tensorboard/Anna_KaRNNa_Summaries.ipynb | mit | [
"Anna KaRNNa\nIn this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\nThis network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.\n<img src=\"assets/charseq.jpeg\" width=\"500\">",
"import time\nfrom collections import namedtuple\n\nimport numpy as np\nimport tensorflow as tf",
"First we'll load the text file and convert it into integers for our network to use.",
"with open('anna.txt', 'r') as f:\n text=f.read()\nvocab = set(text)\nvocab_to_int = {c: i for i, c in enumerate(vocab)}\nint_to_vocab = dict(enumerate(vocab))\nchars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)\n\ntext[:100]\n\nchars[:100]",
"Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.\nHere I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.\nThe idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.",
"def split_data(chars, batch_size, num_steps, split_frac=0.9):\n \"\"\" \n Split character data into training and validation sets, inputs and targets for each set.\n \n Arguments\n ---------\n chars: character array\n batch_size: Size of examples in each of batch\n num_steps: Number of sequence steps to keep in the input and pass to the network\n split_frac: Fraction of batches to keep in the training set\n \n \n Returns train_x, train_y, val_x, val_y\n \"\"\"\n \n slice_size = batch_size * num_steps\n n_batches = int(len(chars) / slice_size)\n \n # Drop the last few characters to make only full batches\n x = chars[: n_batches*slice_size]\n y = chars[1: n_batches*slice_size + 1]\n \n # Split the data into batch_size slices, then stack them into a 2D matrix \n x = np.stack(np.split(x, batch_size))\n y = np.stack(np.split(y, batch_size))\n \n # Now x and y are arrays with dimensions batch_size x n_batches*num_steps\n \n # Split into training and validation sets, keep the virst split_frac batches for training\n split_idx = int(n_batches*split_frac)\n train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]\n val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]\n \n return train_x, train_y, val_x, val_y\n\ntrain_x, train_y, val_x, val_y = split_data(chars, 10, 200)\n\ntrain_x.shape\n\ntrain_x[:,:10]",
"I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.",
"def get_batch(arrs, num_steps):\n batch_size, slice_size = arrs[0].shape\n \n n_batches = int(slice_size/num_steps)\n for b in range(n_batches):\n yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]\n\ndef build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,\n learning_rate=0.001, grad_clip=5, sampling=False):\n \n if sampling == True:\n batch_size, num_steps = 1, 1\n\n tf.reset_default_graph()\n \n # Declare placeholders we'll feed into the graph\n with tf.name_scope('inputs'):\n inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')\n x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')\n \n with tf.name_scope('targets'):\n targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')\n y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')\n y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])\n \n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n # Build the RNN layers\n with tf.name_scope(\"RNN_cells\"):\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)\n \n with tf.name_scope(\"RNN_init_state\"):\n initial_state = cell.zero_state(batch_size, tf.float32)\n\n # Run the data through the RNN layers\n with tf.name_scope(\"RNN_forward\"):\n outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)\n \n final_state = state\n \n # Reshape output so it's a bunch of rows, one row for each cell output\n with tf.name_scope('sequence_reshape'):\n seq_output = tf.concat(outputs, axis=1,name='seq_output')\n output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')\n \n # Now connect the RNN outputs to a softmax layer and calculate the cost\n with tf.name_scope('logits'):\n softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),\n name='softmax_w')\n softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')\n logits = tf.matmul(output, softmax_w) + softmax_b\n tf.summary.histogram('softmax_w', softmax_w)\n tf.summary.histogram('softmax_b', softmax_b)\n\n with tf.name_scope('predictions'):\n preds = tf.nn.softmax(logits, name='predictions')\n tf.summary.histogram('predictions', preds)\n \n with tf.name_scope('cost'):\n loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')\n cost = tf.reduce_mean(loss, name='cost')\n tf.summary.scalar('cost', cost)\n\n # Optimizer for training, using gradient clipping to control exploding gradients\n with tf.name_scope('train'):\n tvars = tf.trainable_variables()\n grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)\n train_op = tf.train.AdamOptimizer(learning_rate)\n optimizer = train_op.apply_gradients(zip(grads, tvars))\n \n merged = tf.summary.merge_all()\n \n # Export the nodes \n export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',\n 'keep_prob', 'cost', 'preds', 'optimizer', 'merged']\n Graph = namedtuple('Graph', export_nodes)\n local_dict = locals()\n graph = Graph(*[local_dict[each] for each in export_nodes])\n \n return graph",
"Hyperparameters\nHere I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.",
"batch_size = 100\nnum_steps = 100\nlstm_size = 512\nnum_layers = 2\nlearning_rate = 0.001",
"Training\nTime for training which is is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.",
"!mkdir -p checkpoints/anna\n\nepochs = 10\nsave_every_n = 100\ntrain_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)\n\nmodel = build_rnn(len(vocab), \n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nsaver = tf.train.Saver(max_to_keep=100)\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n train_writer = tf.summary.FileWriter('./logs/2/train', sess.graph)\n test_writer = tf.summary.FileWriter('./logs/2/test')\n \n # Use the line below to load a checkpoint and resume training\n #saver.restore(sess, 'checkpoints/anna20.ckpt')\n \n n_batches = int(train_x.shape[1]/num_steps)\n iterations = n_batches * epochs\n for e in range(epochs):\n \n # Train network\n new_state = sess.run(model.initial_state)\n loss = 0\n for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):\n iteration = e*n_batches + b\n start = time.time()\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 0.5,\n model.initial_state: new_state}\n summary, batch_loss, new_state, _ = sess.run([model.merged, model.cost, \n model.final_state, model.optimizer], \n feed_dict=feed)\n loss += batch_loss\n end = time.time()\n print('Epoch {}/{} '.format(e+1, epochs),\n 'Iteration {}/{}'.format(iteration, iterations),\n 'Training loss: {:.4f}'.format(loss/b),\n '{:.4f} sec/batch'.format((end-start)))\n \n train_writer.add_summary(summary, iteration)\n \n if (iteration%save_every_n == 0) or (iteration == iterations):\n # Check performance, notice dropout has been set to 1\n val_loss = []\n new_state = sess.run(model.initial_state)\n for x, y in get_batch([val_x, val_y], num_steps):\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n summary, batch_loss, new_state = sess.run([model.merged, model.cost, \n model.final_state], feed_dict=feed)\n val_loss.append(batch_loss)\n \n test_writer.add_summary(summary, iteration)\n\n print('Validation loss:', np.mean(val_loss),\n 'Saving checkpoint!')\n #saver.save(sess, \"checkpoints/anna/i{}_l{}_{:.3f}.ckpt\".format(iteration, lstm_size, np.mean(val_loss)))\n\ntf.train.get_checkpoint_state('checkpoints/anna')",
"Sampling\nNow that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\nThe network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.",
"def pick_top_n(preds, vocab_size, top_n=5):\n p = np.squeeze(preds)\n p[np.argsort(p)[:-top_n]] = 0\n p = p / np.sum(p)\n c = np.random.choice(vocab_size, 1, p=p)[0]\n return c\n\ndef sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n prime = \"Far\"\n samples = [c for c in prime]\n model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)\n saver = tf.train.Saver()\n with tf.Session() as sess:\n saver.restore(sess, checkpoint)\n new_state = sess.run(model.initial_state)\n for c in prime:\n x = np.zeros((1, 1))\n x[0,0] = vocab_to_int[c]\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n\n for i in range(n_samples):\n x[0,0] = c\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n \n return ''.join(samples)\n\ncheckpoint = \"checkpoints/anna/i3560_l512_1.122.ckpt\"\nsamp = sample(checkpoint, 2000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i200_l512_2.432.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i600_l512_1.750.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i1000_l512_1.484.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
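To make the split_data and get_batch windowing in the notebook above concrete, here is a toy run of the same slicing on a short integer sequence. It is standalone NumPy and does not touch the notebook's variables; the sequence and sizes are made up for illustration.

```python
import numpy as np

seq = np.arange(20)                      # pretend these are character ids
batch_size, num_steps = 2, 3

slice_size = batch_size * num_steps
n_batches = len(seq) // slice_size       # 3 full batches fit
x = seq[:n_batches * slice_size]         # inputs
y = seq[1:n_batches * slice_size + 1]    # targets = inputs shifted by one character

x = np.stack(np.split(x, batch_size))    # shape (2, 9): one long row per batch slot
y = np.stack(np.split(y, batch_size))

# Each batch is a sliding window of width num_steps over those rows.
for b in range(n_batches):
    print(x[:, b*num_steps:(b+1)*num_steps], y[:, b*num_steps:(b+1)*num_steps])
```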
google-aai/tf-serving-k8s-tutorial | jupyter/resnet_model_understanding.ipynb | apache-2.0 | [
"Understanding Resnet Model Features\nWe know that the Resnet model works well, but why does it work? How can we have confidence that it is searching out the correct features? A recent paper, Axiomatic Attribution for Deep Networks, shows that averaging gradients taken along a path of images from a blank image (e.g. pure black or grey) to the actual image, can robustly predict sets of pixels that have a strong impact on the overall classification of the image. The below code shows how to modify the TF estimator code to analyze model behavior of different images.",
"import csv\nimport io\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport pickle\nimport requests\nimport tensorflow as tf\n\nfrom io import BytesIO\nfrom PIL import Image\nfrom subprocess import call",
"Constants",
"_DEFAULT_IMAGE_SIZE = 224\n_NUM_CHANNELS = 3\n_LABEL_CLASSES = 1001\n\nRESNET_SIZE = 50 # We're loading a resnet-50 saved model.\n\n# Model directory\nMODEL_DIR='resnet_model_checkpoints'\nVIS_DIR='visualization'\n\n# RIEMANN STEPS is the number of steps in a Riemann Sum.\n# This is used to compute an approximate the integral of gradients by supplying\n# images on the path from a blank image to the original image.\nRIEMANN_STEPS = 30\n\n# Return the top k classes and probabilities, so we can also visualize model inference\n# against other contending classes besides the most likely class.\nTOP_K = 5\n",
"Download model checkpoint\nThe next step is to load the researcher's saved checkpoint into our estimator. We will download it from\nhttp://download.tensorflow.org/models/official/resnet50_2017_11_30.tar.gz using the following commands.",
"import urllib.request\n\nurllib.request.urlretrieve(\"http://download.tensorflow.org/models/official/resnet50_2017_11_30.tar.gz \", \"resnet.tar.gz\")\n\n#unzip the file into a directory called resnet\ncall([\"mkdir\", MODEL_DIR])\ncall([\"tar\", \"-zxvf\", \"resnet.tar.gz\", \"-C\", MODEL_DIR])\n\n# Make sure you see model checkpoint files in this directory\nos.listdir(MODEL_DIR)",
"Import the Model Architecture\nIn order to reconstruct the Resnet neural network used to train the Imagenet model, we need to load the architecture pieces. During the setup step, we checked out https://github.com/tensorflow/models/tree/v1.4.0/official/resnet. We can now load functions and constants from resnet_model.py into the notebook.",
"%run ../models/official/resnet/resnet_model.py #TODO: modify directory based on where you git cloned the TF models.",
"Image preprocessing functions\nNote that preprocessing functions are called during training as well (see https://github.com/tensorflow/models/blob/master/official/resnet/imagenet_main.py and https://github.com/tensorflow/models/blob/master/official/resnet/vgg_preprocessing.py), so we will need to extract relevant logic from these functions. Below is a simplified preprocessing code that normalizes the image's pixel values.\nFor simplicity, we assume the client provides properly-sized images 224 x 224 x 3 in batches. It will become clear later that sending images over ip in protobuf format can be more easily handled by storing a 4d tensor. The only preprocessing required here is to subtract the mean.",
"def preprocess_images(images):\n \"\"\"Preprocesses the image by subtracting out the mean from all channels.\n Args:\n image: A 4D `Tensor` representing a batch of images.\n Returns:\n image pixels normalized to be between -0.5 and 0.5\n \"\"\"\n return tf.to_float(images) / 255 - 0.5",
"Resnet Model Functions\nWe are going to create two estimators here since we need to run two model predictions. \n\n\nThe first prediction computes the top labels for the image by returning the argmax_k top logits. \n\n\nThe second prediction returns a sequence of gradients along the straightline path from a purely grey image (127.5, 127.5, 127.5) to the final image. We use grey here because the resnet model transforms this pixel value to all 0s.\n\n\nBelow is the resnet model function.",
"def resnet_model_fn(features, labels, mode):\n \"\"\"Our model_fn for ResNet to be used with our Estimator.\"\"\"\n\n # Preprocess images as necessary for resnet\n features = preprocess_images(features['images'])\n\n # This network must be IDENTICAL to that used to train.\n network = imagenet_resnet_v2(RESNET_SIZE, _LABEL_CLASSES)\n\n # tf.estimator.ModeKeys.TRAIN will be false since we are predicting.\n logits = network(\n inputs=features, is_training=(mode == tf.estimator.ModeKeys.TRAIN))\n\n # Instead of the top 1 result, we can now return top k!\n top_k_logits, top_k_classes = tf.nn.top_k(logits, k=TOP_K)\n top_k_probs = tf.nn.softmax(top_k_logits)\n predictions = {\n 'classes': top_k_classes,\n 'probabilities': top_k_probs\n }\n\n\n return tf.estimator.EstimatorSpec(\n mode=mode,\n predictions=predictions, \n )\n",
"Gradients Model Function\nThe Gradients model function takes as input a single image (a 4d tensor of dimension [1, 244, 244, 3]) and expands it to a series of images (tensor dimension [RIEMANN_STEPS + 1, 244, 244, 3]), where each image is simply a \"fractional\" image, with image 0 being pure gray to image RIEMANN_STEPS being the original image. The gradients are then computed for each of these images, and various outputs are returned.\nNote: Each step is a single inference that returns an entire gradient pixel map.\nThe total gradient map evaluation can take a couple minutes!",
"def gradients_model_fn(features, labels, mode):\n \"\"\"Our model_fn for ResNet to be used with our Estimator.\"\"\"\n \n # Supply the most likely class from features dict to determine which logit function\n # to use gradients along the\n most_likely_class = features['most_likely_class']\n \n # Features here is a 4d tensor of ONE image. Normalize it as in training and serving.\n features = preprocess_images(features['images'])\n\n # This network must be IDENTICAL to that used to train.\n network = imagenet_resnet_v2(RESNET_SIZE, _LABEL_CLASSES)\n\n # path_features should have dim [RIEMANN_STEPS + 1, 224, 224, 3]\n path_features = tf.zeros([1, 224, 224, 3])\n for i in range(1, RIEMANN_STEPS + 1):\n path_features = tf.concat([path_features, features * i / RIEMANN_STEPS], axis=0)\n \n # Path logits should evaluate logits for each path feature and return a 2d array for all path images and classes\n path_logits = network(inputs=path_features, is_training=(mode == tf.estimator.ModeKeys.TRAIN))\n\n # The logit we care about is only that pertaining to the most likely class\n # The most likely class contains only a single integer, so retrieve it.\n target_logits = path_logits[:, most_likely_class[0]]\n \n # Compute gradients for each image with respect to each logit\n gradients = tf.gradients(target_logits, path_features)\n \n # Multiply elementwise to the original image to get weighted gradients for each pixel.\n gradients = tf.squeeze(tf.multiply(gradients, features))\n \n predictions = {\n 'path_features': path_features, # for debugging\n 'path_logits': path_logits, # for debugging\n 'target_logits': target_logits, # use this to verify that the riemann integral works out\n 'path_features': path_features, # for displaying path images\n 'gradients': gradients # for displaying gradient images and computing integrated gradient\n }\n\n\n return tf.estimator.EstimatorSpec(\n mode=mode,\n predictions=predictions, # This is the returned value\n )\n",
"Estimators\nLoad in the model_fn using the checkpoints from MODEL_DIR. This will initialize our weights which we will then use to run backpropagation to find integrated gradients.",
"# Load this model into our estimator\nresnet_estimator = tf.estimator.Estimator(\n model_fn=resnet_model_fn, # Call our generate_model_fn to create model function\n model_dir=MODEL_DIR, # Where to look for model checkpoints\n #config not needed\n)\n\ngradients_estimator = tf.estimator.Estimator(\n model_fn=gradients_model_fn, # Call our generate_model_fn to create model function\n model_dir=MODEL_DIR, # Where to look for model checkpoints\n #config not needed\n)",
"Create properly sized image in numpy\nLoad whatever image you would like (local or url), and resize to 224 x 224 x 3 using opencv2.",
"def resize_and_pad_image(img, output_image_dim):\n \"\"\"Resize the image to make it IMAGE_DIM x IMAGE_DIM pixels in size.\n\n If an image is not square, it will pad the top/bottom or left/right\n with black pixels to ensure the image is square.\n\n Args:\n img: the input 3-color image\n output_image_dim: resized and padded output length (and width)\n\n Returns:\n resized and padded image\n \"\"\"\n\n old_size = img.size # old_size[0] is in (width, height) format\n\n ratio = float(output_image_dim) / max(old_size)\n new_size = tuple([int(x * ratio) for x in old_size])\n # use thumbnail() or resize() method to resize the input image\n\n # thumbnail is a in-place operation\n\n # im.thumbnail(new_size, Image.ANTIALIAS)\n\n scaled_img = img.resize(new_size, Image.ANTIALIAS)\n # create a new image and paste the resized on it\n\n padded_img = Image.new(\"RGB\", (output_image_dim, output_image_dim))\n padded_img.paste(scaled_img, ((output_image_dim - new_size[0]) // 2,\n (output_image_dim - new_size[1]) // 2))\n\n return padded_img\n\nIMAGE_PATH = 'https://www.popsci.com/sites/popsci.com/files/styles/1000_1x_/public/images/2017/09/depositphotos_33210141_original.jpg?itok=MLFznqbL&fc=50,50'\nIMAGE_NAME = os.path.splitext(os.path.basename(IMAGE_PATH))[0]\nprint(IMAGE_NAME)\n\nimage = None\nif 'http' in IMAGE_PATH:\n resp = requests.get(IMAGE_PATH)\n image = Image.open(BytesIO(resp.content))\nelse:\n image = Image.open(IMAGE_PATH) # Parse the image from your local disk.\n# Resize and pad the image\nimage = resize_and_pad_image(image, _DEFAULT_IMAGE_SIZE)\nfeature = np.asarray(image)\nfeature = np.array([feature])\n\n# Display the image to validate\nimgplot = plt.imshow(feature[0])\nplt.show()",
"Prediction Input Function\nSince we are analyzing the model using the estimator api, we need to provide an input function for prediction. Fortunately, there are built-in input functions that can read from numpy arrays, e.g. tf.estimator.inputs.numpy_input_fn.",
"label_predictions = resnet_estimator.predict(\n tf.estimator.inputs.numpy_input_fn(\n x={'images': feature},\n shuffle=False\n )\n)\n\nlabel_dict = next(label_predictions)\n\n\n# Print out probabilities and class names\nclassval = label_dict['classes']\nprobsval = label_dict['probabilities']\nlabels = []\nwith open('client/imagenet1000_clsid_to_human.txt', 'r') as f:\n label_reader = csv.reader(f, delimiter=':', quotechar='\\'')\n for row in label_reader:\n labels.append(row[1][:-1])\n# The served model uses 0 as the miscellaneous class, and so starts indexing\n# the imagenet images from 1. Subtract 1 to reference the text correctly.\nclassval = [labels[x - 1] for x in classval]\nclass_and_probs = [str(p) + ' : ' + c for c, p in zip(classval, probsval)]\nfor j in range(0, 5):\n print(class_and_probs[j])",
"Computing Gradients\nRun the gradients estimator to retrieve a generator of metrics and gradient pictures, and pickle the images.",
"# make the visualization directory\nIMAGE_DIR = os.path.join(VIS_DIR, IMAGE_NAME)\ncall(['mkdir', '-p', IMAGE_DIR])\n\n\n# Get one of the top classes. 0 picks out the best, 1 picks out second best, etc...\nbest_label = label_dict['classes'][0]\n\n# Compute gradients with respect to this class\ngradient_predictions = gradients_estimator.predict(\n tf.estimator.inputs.numpy_input_fn(\n x={'images': feature, 'most_likely_class': np.array([best_label])},\n shuffle=False\n )\n)\n\n# Start computing the sum of gradients (to be used for integrated gradients)\nint_gradients = np.zeros((224, 224, 3))\ngradients_and_logits = []\n\n# Print gradients along the path, and pickle them\nfor i in range(0, RIEMANN_STEPS + 1):\n gradient_dict = next(gradient_predictions)\n gradient_map = gradient_dict['gradients']\n print('Path image %d: gradient: %f, logit: %f' % (i, np.sum(gradient_map), gradient_dict['target_logits']))\n # Gradient visualization output pickles\n pickle.dump(gradient_map, open(os.path.join(IMAGE_DIR, 'path_gradient_' + str(i) + '.pkl'), \"wb\" ))\n int_gradients = np.add(int_gradients, gradient_map)\n gradients_and_logits.append((np.sum(gradient_map), gradient_dict['target_logits']))\n \npickle.dump(int_gradients, open(os.path.join(IMAGE_DIR, 'int_gradients.pkl'), \"wb\" ))\npickle.dump(gradients_and_logits, open(os.path.join(IMAGE_DIR, 'gradients_and_logits.pkl'), \"wb\" ))",
"Visualization\nIf you simply want to play around with visualization, unpickle the result from above so you do not have to rerun prediction again. The following visualizes the gradients with different amplification of pixels, and prints their derivatives and logits as well to view where the biggest differentiators lie. You can also modify the INTERPOLATION flag to increase the \"fatness\" of pixels.\nBelow are two examples of visualization methods: one computing the gradient value normalized to between 0 and 1, and another visualizing absolute deviation from the median.\nPlotting individual image gradients along path\nFirst, let us plot the individual gradient value for all gradient path images. Pay special attention to the images with a large positive gradient (i.e. in the direction of increasing logit for the most likely class). Do the pixel gradients resemble the image class you are trying to detect?",
"AMPLIFICATION = 2.0\nINTERPOLATION = 'none'\n\ngradients_and_logits = pickle.load(open(os.path.join(IMAGE_DIR, 'gradients_and_logits.pkl'), \"rb\" ))\nfor i in range(0, RIEMANN_STEPS + 1):\n gradient_map = pickle.load(open(os.path.join(IMAGE_DIR, 'path_gradient_' + str(i) + '.pkl'), \"rb\" ))\n min_grad = np.ndarray.min(gradient_map)\n max_grad = np.ndarray.max(gradient_map)\n median_grad = np.median(gradient_map)\n gradient_and_logit = gradients_and_logits[i]\n\n plt.figure(figsize=(10,10))\n plt.subplot(121)\n plt.title('Image %d: grad: %.2f, logit: %.2f' % (i, gradient_and_logit[0], gradient_and_logit[1]))\n imgplot = plt.imshow((gradient_map - min_grad) / (max_grad - min_grad),\n interpolation=INTERPOLATION)\n plt.subplot(122)\n plt.title('Image %d: grad: %.2f, logit: %.2f' % (i, gradient_and_logit[0], gradient_and_logit[1]))\n imgplot = plt.imshow(np.abs(gradient_map - median_grad) * AMPLIFICATION / max(max_grad - median_grad, median_grad - min_grad),\n interpolation=INTERPOLATION)\n plt.show()",
"Plot the Integrated Gradient\nWhen integrating over all gradients along the path, the result is an image that captures larger signals from pixels with the large gradients. Is the integrated gradient a clear representation of what it is trying to detect?",
"AMPLIFICATION = 2.0\nINTERPOLATION = 'none'\n\n# Plot the integrated gradients\nint_gradients = pickle.load(open(os.path.join(IMAGE_DIR, 'int_gradients.pkl'), \"rb\" ))\nmin_grad = np.ndarray.min(int_gradients)\nmax_grad = np.ndarray.max(int_gradients)\nmedian_grad = np.median(int_gradients)\nplt.figure(figsize=(15,15))\nplt.subplot(131)\nimgplot = plt.imshow((int_gradients - min_grad) / (max_grad - min_grad),\n interpolation=INTERPOLATION)\nplt.subplot(132)\nimgplot = plt.imshow(np.abs(int_gradients - median_grad) * AMPLIFICATION / max(max_grad - median_grad, median_grad - min_grad),\n interpolation=INTERPOLATION)\nplt.subplot(133)\nimgplot = plt.imshow(feature[0])\nplt.show()\n\n# Verify that the average of gradients is equal to the difference in logits\nprint('total logit diff: %f' % (gradients_and_logits[RIEMANN_STEPS][1] - gradients_and_logits[0][1]))\nprint('sum of integrated gradients: %f' % (np.sum(int_gradients) / RIEMANN_STEPS + 1))",
"Plot the integrated gradients for each channel\nWe can also visualize individual pixel contributions from different RGB channels.\nCan you think of any other visualization ideas to try out?",
"AMPLIFICATION = 2.0\nINTERPOLATION = 'none'\n\n# Show red-green-blue channels for integrated gradients\nfor channel in range(0, 3):\n gradient_channel = int_gradients[:,:,channel]\n min_grad = np.ndarray.min(gradient_channel)\n max_grad = np.ndarray.max(gradient_channel)\n median_grad = np.median(gradient_channel)\n plt.figure(figsize=(10,10))\n plt.subplot(121)\n imgplot = plt.imshow((gradient_channel - min_grad) / (max_grad - min_grad),\n interpolation=INTERPOLATION,\n cmap='gray')\n plt.subplot(122)\n imgplot = plt.imshow(np.abs(gradient_channel - median_grad) * AMPLIFICATION / max(max_grad - median_grad, median_grad - min_grad),\n interpolation=INTERPOLATION,\n cmap='gray')\n plt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
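The notebook above implements the Riemann-sum approximation of integrated gradients inside a TensorFlow estimator. A framework-free sketch of the same formula follows, with a toy gradient function standing in for the network; the function and variable names are illustrative, not taken from the notebook. The final print checks the completeness property the notebook also verifies: the attributions sum to the difference in the function value between the input and the baseline.

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=30):
    """Approximate IG(x) = (x - baseline) * average of grad_fn along the straight path."""
    total = np.zeros_like(x)
    for i in range(steps + 1):
        point = baseline + (float(i) / steps) * (x - baseline)
        total += grad_fn(point)
    avg_grad = total / (steps + 1)
    return (x - baseline) * avg_grad

# Toy example: f(x) = sum(x**2), so grad f = 2x.
grad_fn = lambda p: 2.0 * p
x, baseline = np.array([1.0, 2.0]), np.zeros(2)
ig = integrated_gradients(x, baseline, grad_fn, steps=1000)
print(ig.sum(), (x**2).sum() - (baseline**2).sum())   # both close to 5.0
```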
GEMScienceTools/rmtk | notebooks/vulnerability/derivation_fragility/R_mu_T_dispersion/SPO2IDA/spo2ida.ipynb | agpl-3.0 | [
"SPO2IDA\nThis methodology uses the SPO2IDA tool described in Vamvatsikos and Cornell (2006) to convert static pushover curves into $16\\%$, $50\\%$, and $84\\%$ IDA curves. The SPO2IDA tool is based on empirical relationships obtained from a large database of incremental dynamic analysis results. This procedure is applicable to any kind of multi-linear capacity curve and it is suitable for single-building fragility curve estimation. Individual fragility curves can later be combined into a single fragility curve that considers the inter-building uncertainty. The figure below illustrates the IDA curves estimated using this methodology for a given capacity curve.\n<img src=\"../../../../../figures/spo2ida.jpg\" width=\"500\" align=\"middle\">\nNote: To run the code in a cell:\n\nClick on the cell to select it.\nPress SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.",
"from rmtk.vulnerability.derivation_fragility.R_mu_T_dispersion.SPO2IDA import SPO2IDA_procedure \nfrom rmtk.vulnerability.common import utils\n%matplotlib inline ",
"Load capacity curves\nIn order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual. In case multiple capacity curves are input, a spectral shape also needs to be defined.\n\nPlease provide the location of the file containing the capacity curves using the parameter capacity_curves_file.\nPlease also provide a spectral shape using the parameter input_spectrum if multiple capacity curves are used.",
"capacity_curves_file = \"../../../../../../rmtk_data/capacity_curves_Vb-dfloor.csv\"\ninput_spectrum = \"../../../../../../rmtk_data/FEMAP965spectrum.txt\"\n\ncapacity_curves = utils.read_capacity_curves(capacity_curves_file)\nSa_ratios = utils.get_spectral_ratios(capacity_curves, input_spectrum)\nutils.plot_capacity_curves(capacity_curves)",
"Idealise pushover curves\nIn order to use this methodology the pushover curves need to be idealised. Please choose an idealised shape using the parameter idealised_type. The valid options for this methodology are \"bilinear\" and \"quadrilinear\". Idealised curves can also be directly provided as input by setting the field Idealised to TRUE in the input file defining the capacity curves.",
"idealised_type = \"quadrilinear\"\n\nidealised_capacity = utils.idealisation(idealised_type, capacity_curves)\nutils.plot_idealised_capacity(idealised_capacity, capacity_curves, idealised_type)",
"Load damage state thresholds\nPlease provide the path to your damage model file using the parameter damage_model_file in the cell below. Currently only interstorey drift damage model type is supported.",
"damage_model_file = \"../../../../../../rmtk_data/damage_model_ISD.csv\"\n\ndamage_model = utils.read_damage_model(damage_model_file)",
"Calculate fragility functions\nThe damage threshold dispersion is calculated and integrated with the record-to-record dispersion through Monte Carlo simulations. Please enter the number of Monte Carlo samples to be performed using the parameter montecarlo_samples in the cell below.",
"montecarlo_samples = 50\n\nfragility_model = SPO2IDA_procedure.calculate_fragility(capacity_curves, idealised_capacity, damage_model, montecarlo_samples, Sa_ratios, 1)",
"Plot fragility functions\nThe following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:\n* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions",
"minIML, maxIML = 0.01, 2\n\nutils.plot_fragility_model(fragility_model, minIML, maxIML)\n\nprint fragility_model",
"Save fragility functions\nThe derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:\n1. taxonomy: This parameter specifies a taxonomy string for the the fragility functions.\n2. minIML and maxIML: These parameters define the bounds of applicability of the functions.\n3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are \"csv\" and \"nrml\".",
"taxonomy = \"RC\"\nminIML, maxIML = 0.01, 2.00\noutput_type = \"csv\"\noutput_path = \"../../../../../../rmtk_data/output/\"\n\nutils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)",
"Obtain vulnerability function\nA vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level. \nThe following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:\n1. cons_model_file: This parameter specifies the path of the consequence model file.\n2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.\n3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are \"lognormal\", \"beta\", and \"PMF\".",
"cons_model_file = \"../../../../../../rmtk_data/cons_model.csv\"\nimls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, \n 0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00]\ndistribution_type = \"lognormal\"\n\ncons_model = utils.read_consequence_model(cons_model_file)\nvulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model, \n imls, distribution_type)",
"Plot vulnerability function",
"utils.plot_vulnerability_model(vulnerability_model)",
"Save vulnerability function\nThe derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:\n1. taxonomy: This parameter specifies a taxonomy string for the the fragility functions.\n3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are \"csv\" and \"nrml\".",
"taxonomy = \"RC\"\noutput_type = \"nrml\"\noutput_path = \"../../../../../../rmtk_data/output/\"\n\nutils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
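The fragility-to-vulnerability step above multiplies the fraction of buildings in each damage state by the consequence model's damage ratios. Below is a minimal NumPy sketch of that calculation at a single intensity level; the exceedance probabilities and damage ratios are made-up numbers, not values from the RMTK damage or consequence models.

```python
import numpy as np

# Probability of reaching or exceeding each damage state DS1..DS4 at one intensity level.
p_exceed = np.array([0.90, 0.60, 0.30, 0.10])
# Damage (loss) ratio associated with each damage state in the consequence model.
damage_ratios = np.array([0.10, 0.30, 0.60, 1.00])

# Fraction of buildings *in* each state is the difference of successive exceedance probabilities.
p_in_state = p_exceed - np.append(p_exceed[1:], 0.0)

mean_loss_ratio = np.sum(p_in_state * damage_ratios)
print(p_in_state, mean_loss_ratio)   # [0.3 0.3 0.2 0.1] 0.34
```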
cathalmccabe/PYNQ | boards/Pynq-Z1/base/notebooks/video/opencv_face_detect_webcam.ipynb | bsd-3-clause | [
"OpenCV Face Detection Webcam\nIn this notebook, opencv face detection will be applied to webcam images.\nTo run all cells in this notebook a webcam and HDMI output monitor are required. \nReferences:\nhttps://github.com/Itseez/opencv/blob/master/data/haarcascades/haarcascade_frontalface_default.xml\nhttps://github.com/Itseez/opencv/blob/master/data/haarcascades/haarcascade_eye.xml\nStep 1: Load the overlay",
"from pynq.overlays.base import BaseOverlay\nfrom pynq.lib.video import *\nbase = BaseOverlay(\"base.bit\")",
"Step 2: Initialize Webcam and HDMI Out",
"# monitor configuration: 640*480 @ 60Hz\nMode = VideoMode(640,480,24)\nhdmi_out = base.video.hdmi_out\nhdmi_out.configure(Mode,PIXEL_BGR)\nhdmi_out.start()\n\n# monitor (output) frame buffer size\nframe_out_w = 1920\nframe_out_h = 1080\n# camera (input) configuration\nframe_in_w = 640\nframe_in_h = 480\n\n# initialize camera from OpenCV\nimport cv2\n\nvideoIn = cv2.VideoCapture(0)\nvideoIn.set(cv2.CAP_PROP_FRAME_WIDTH, frame_in_w);\nvideoIn.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_in_h);\n\nprint(\"Capture device is open: \" + str(videoIn.isOpened()))",
"Step 3: Show input frame on HDMI output",
"# Capture webcam image\nimport numpy as np\n\nret, frame_vga = videoIn.read()\n\n# Display webcam image via HDMI Out\nif (ret): \n outframe = hdmi_out.newframe()\n outframe[0:480,0:640,:] = frame_vga[0:480,0:640,:]\n hdmi_out.writeframe(outframe)\nelse:\n raise RuntimeError(\"Failed to read from camera.\")",
"Step 4: Now use matplotlib to show image inside notebook",
"# Output webcam image as JPEG\n%matplotlib inline \nfrom matplotlib import pyplot as plt\nimport numpy as np\nplt.imshow(frame_vga[:,:,[2,1,0]])\nplt.show()",
"Step 5: Apply the face detection to the input",
"import cv2\n\nnp_frame = frame_vga\n\nface_cascade = cv2.CascadeClassifier(\n '/home/xilinx/jupyter_notebooks/base/video/data/'\n 'haarcascade_frontalface_default.xml')\neye_cascade = cv2.CascadeClassifier(\n '/home/xilinx/jupyter_notebooks/base/video/data/'\n 'haarcascade_eye.xml')\n\ngray = cv2.cvtColor(np_frame, cv2.COLOR_BGR2GRAY)\nfaces = face_cascade.detectMultiScale(gray, 1.3, 5)\n\nfor (x,y,w,h) in faces:\n cv2.rectangle(np_frame,(x,y),(x+w,y+h),(255,0,0),2)\n roi_gray = gray[y:y+h, x:x+w]\n roi_color = np_frame[y:y+h, x:x+w]\n\n eyes = eye_cascade.detectMultiScale(roi_gray)\n for (ex,ey,ew,eh) in eyes:\n cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)",
"Step 6: Show results on HDMI output",
"# Output OpenCV results via HDMI\noutframe[0:480,0:640,:] = frame_vga[0:480,0:640,:]\nhdmi_out.writeframe(outframe)",
"Step 7: Now use matplotlib to show image inside notebook",
"# Output OpenCV results via matplotlib\n%matplotlib inline \nfrom matplotlib import pyplot as plt\nimport numpy as np\nplt.imshow(np_frame[:,:,[2,1,0]])\nplt.show()",
"Step 8: Release camera and HDMI",
"videoIn.release()\nhdmi_out.stop()\ndel hdmi_out"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
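The plots in the notebook above reorder channels with frame[:, :, [2, 1, 0]] because OpenCV stores images as BGR while matplotlib expects RGB. An equivalent, more explicit conversion uses cv2.cvtColor; the sketch below runs on a synthetic frame so it is self-contained and does not depend on the webcam.

```python
import cv2
import numpy as np
from matplotlib import pyplot as plt

bgr = np.zeros((480, 640, 3), dtype=np.uint8)
bgr[:, :, 0] = 255                           # pure blue in OpenCV's BGR order

rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)   # same effect as bgr[:, :, [2, 1, 0]]
plt.imshow(rgb)                              # displays as blue, as expected
plt.show()
```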
ledeprogram/algorithms | class6/donow/Skinner_Barnaby_DoNow_6.ipynb | gpl-3.0 | [
"1. Import the necessary packages to read in the data, plot, and create a linear regression model",
"import pandas as pd\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport statsmodels.formula.api as smf",
"2. Read in the hanford.csv file",
"df = pd.read_csv(\"data/hanford.csv\")\n\ndf",
"3. Calculate the basic descriptive statistics on the data",
"df['Exposure'].mean()\n\ndf['Exposure'].describe()",
"4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?",
"df.corr()\n\ndf.plot(kind='scatter', x='Mortality', y='Exposure')",
"5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure",
"lm = smf.ols(formula='Mortality~Exposure',data=df).fit()\nlm.params\n\n\n\nintercept, Exposure = lm.params\nMortality = Exposure*10+intercept\n\nMortality",
"6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)\n7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10",
"intercept, Exposure = lm.params\nMortality = Exposure*10+intercept\n\nMortality"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
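Task 6 of the exercise above (plot the regression line and compute r^2) is stated but never coded. One possible completion is sketched below, assuming the same df and lm objects created earlier in that notebook; the sort step is only there so the fitted line plots cleanly.

```python
import matplotlib.pyplot as plt

# Scatter of the data with the fitted line overlaid (reuses df and lm from the notebook).
ax = df.plot(kind='scatter', x='Exposure', y='Mortality')
intercept, slope = lm.params
xs = df['Exposure'].sort_values()
ax.plot(xs, intercept + slope * xs, color='red')
plt.show()

# Coefficient of determination from the fitted statsmodels OLS results.
print(lm.rsquared)
```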
tensorflow/docs-l10n | site/ja/r1/tutorials/keras/basic_text_classification.ipynb | apache-2.0 | [
"Copyright 2018 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n#@title MIT License\n#\n# Copyright (c) 2017 François Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.",
"映画レビューのテキスト分類\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/r1/tutorials/keras/basic_text_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/r1/tutorials/keras/basic_text_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nNote: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳はベストエフォートであるため、この翻訳が正確であることや英語の公式ドキュメントの 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリtensorflow/docsにプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [email protected] メーリングリストにご連絡ください。\nここでは、映画のレビューをそのテキストを使って肯定的か否定的かに分類します。これは、二値分類あるいは2クラス分類という問題の例であり、機械学習において重要でいろいろな応用が可能なものです。\nここでは、Internet Movie Databaseから抽出した50,000件の映画レビューを含む、 IMDB dataset を使います。レビューは訓練用とテスト用に25,000件ずつに分割されています。訓練用とテスト用のデータは均衡しています。言い換えると、それぞれが同数の肯定的及び否定的なレビューを含んでいます。\nここでは、TensorFlowを使ってモデルを構築・訓練するためのハイレベルなAPIである tf.kerasを使用します。tf.kerasを使ったもう少し高度なテキスト分類のチュートリアルについては、 MLCC Text Classification Guideを参照してください。",
"# keras.datasets.imdb is broken in 1.13 and 1.14, by np 1.16.3\n!pip install tf_nightly\n\nimport tensorflow.compat.v1 as tf\n\nfrom tensorflow import keras\n\nimport numpy as np\n\nprint(tf.__version__)",
"IMDB datasetのダウンロード\nIMDBデータセットは、TensorFlowにパッケージ化されています。それは前処理済みのものであり、(単語の連なりである)レビューが、整数の配列に変換されています。そこでは整数が辞書中の特定の単語を表します。\n次のコードは、IMDBデータセットをあなたのパソコンにダウンロードします。(すでにダウンロードしていれば、キャッシュされたコピーを使用します)",
"imdb = keras.datasets.imdb\n\n(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)",
"num_words=10000という引数は、訓練データ中に出てくる単語のうち、最も頻繁に出現する10,000個を保持するためのものです。データサイズを管理可能にするため、稀にしか出現しない単語は破棄されます。\nデータを調べる\nデータの形式を理解するために少し時間を割いてみましょう。このデータセットは前処理済みで、サンプルそれぞれが、映画レビューの中の単語を表す整数の配列になっています。ラベルはそれぞれ、0または1の整数値で、0が否定的レビュー、1が肯定的なレビューを示しています。",
"print(\"Training entries: {}, labels: {}\".format(len(train_data), len(train_labels)))",
"レビューのテキストは複数の整数に変換されており、それぞれの整数が辞書の中の特定の単語を表します。最初のレビューがどのようなものか見てみましょう。",
"print(train_data[0])",
"映画のレビューはそれぞれ長さが異なっていることでしょう。次のコードで、最初と2つ目のレビューの単語の数を見てみます。ニューラルネットワークへの入力は同じ長さでなければならないため、後ほどその問題を解決する必要があります。",
"len(train_data[0]), len(train_data[1])",
"整数を単語に戻してみる\n整数をテキストに戻す方法を知っていると便利です。整数を文字列にマッピングする辞書オブジェクトを検索するためのヘルパー関数を定義します。",
"# 単語を整数にマッピングする辞書\nword_index = imdb.get_word_index()\n\n# インデックスの最初の方は予約済み\nword_index = {k:(v+3) for k,v in word_index.items()}\nword_index[\"<PAD>\"] = 0\nword_index[\"<START>\"] = 1\nword_index[\"<UNK>\"] = 2 # unknown\nword_index[\"<UNUSED>\"] = 3\n\nreverse_word_index = dict([(value, key) for (key, value) in word_index.items()])\n\ndef decode_review(text):\n return ' '.join([reverse_word_index.get(i, '?') for i in text])",
"decode_reviewを使うと、最初のレビューのテキストを表示できます。",
"decode_review(train_data[0])",
"データの準備\nレビュー(整数の配列)は、ニューラルネットワークに投入する前に、テンソルに変換する必要があります。これには2つの方法があります。\n\n配列をワンホット(one-hot)エンコーディングと同じように、単語の出現を表す0と1のベクトルに変換します。例えば、[3, 5]という配列は、インデックス3と5を除いてすべてゼロの10,000次元のベクトルになります。そして、これをネットワークの最初の層、すなわち、浮動小数点のベクトルデータを扱うことができるDense(全結合)層とします。ただし、これは単語数×レビュー数の行列が必要なメモリ集約的な方法です。\nもう一つの方法では、配列をパディングによって同じ長さに揃え、サンプル数 * 長さの最大値の形の整数テンソルにします。そして、この形式を扱うことができるEmbedding(埋め込み)層をネットワークの最初の層にします。\n\nこのチュートリアルでは、後者を採用することにします。\n映画レビューは同じ長さでなければならないので、長さを標準化する pad_sequences 関数を使うことにします。",
"train_data = keras.preprocessing.sequence.pad_sequences(train_data,\n value=word_index[\"<PAD>\"],\n padding='post',\n maxlen=256)\n\ntest_data = keras.preprocessing.sequence.pad_sequences(test_data,\n value=word_index[\"<PAD>\"],\n padding='post',\n maxlen=256)",
"サンプルの長さを見てみましょう。",
"len(train_data[0]), len(train_data[1])",
"次に、パディング済みの最初のサンプルを確認します。",
"print(train_data[0])",
"モデルの構築\nニューラルネットワークは、層を積み重ねることで構成されます。この際、2つの大きな決定が必要です。\n\nモデルにいくつの層を設けるか?\n層ごとに何個の隠れユニットを使用するか?\n\nこの例では、入力データは単語インデックスの配列で構成されています。推定の対象となるラベルは、0または1です。この問題のためのモデルを構築しましょう。",
"# 入力の形式は映画レビューで使われている語彙数(10,000語)\nvocab_size = 10000\n\nmodel = keras.Sequential()\nmodel.add(keras.layers.Embedding(vocab_size, 16))\nmodel.add(keras.layers.GlobalAveragePooling1D())\nmodel.add(keras.layers.Dense(16, activation=tf.nn.relu))\nmodel.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))\n\nmodel.summary()",
"これらの層は、分類器を構成するため一列に積み重ねられます。\n\n最初の層はEmbedding(埋め込み)層です。この層は、整数にエンコードされた語彙を受け取り、それぞれの単語インデックスに対応する埋め込みベクトルを検索します。埋め込みベクトルは、モデルの訓練の中で学習されます。ベクトル化のために、出力行列には次元が1つ追加されます。その結果、次元は、(batch, sequence, embedding)となります。\n次は、GlobalAveragePooling1D(1次元のグローバル平均プーリング)層です。この層は、それぞれのサンプルについて、シーケンスの次元方向に平均値をもとめ、固定長のベクトルを返します。この結果、モデルは最も単純な形で、可変長の入力を扱うことができるようになります。\nこの固定長の出力ベクトルは、16個の隠れユニットを持つ全結合(Dense)層に受け渡されます。\n最後の層は、1個の出力ノードに全結合されます。シグモイド(sigmoid)活性化関数を使うことで、値は確率あるいは確信度を表す0と1の間の浮動小数点数となります。\n\n隠れユニット\n上記のモデルには、入力と出力の間に、2つの中間層あるいは「隠れ」層があります。出力(ユニット、ノード、またはニューロン)は、その層の内部表現の次元数です。言い換えると、このネットワークが学習によって内部表現を獲得する際の自由度ということです。\nモデルにより多くの隠れユニットがある場合(内部表現空間の次元数がより大きい場合)、または、より多くの層がある場合、あるいはその両方の場合、ネットワークはより複雑な内部表現を学習することができます。しかしながら、その結果として、ネットワークの計算量が多くなるほか、学習してほしくないパターンを学習するようになります。学習してほしくないパターンとは、訓練データでの性能は向上するものの、テスト用データの性能が向上しないパターンです。この問題を過学習(overfitting)といいます。この問題は後ほど検証することになります。\n損失関数とオプティマイザ\nモデルを訓練するには、損失関数とオプティマイザが必要です。今回の問題は二値分類問題であり、モデルの出力は確率(1ユニットの層とシグモイド活性化関数)であるため、損失関数としてbinary_crossentropy(2値のクロスエントロピー)関数を使用することにします。\n損失関数の候補はこれだけではありません。例えば、mean_squared_error(平均二乗誤差)を使うこともできます。しかし、一般的には、確率を扱うにはbinary_crossentropyの方が適しています。binary_crossentropyは、確率分布の間の「距離」を測定する尺度です。今回の場合には、真の分布と予測値の分布の間の距離ということになります。\n後ほど、回帰問題を検証する際には(例えば家屋の値段を推定するとか)、もう一つの損失関数であるmean_squared_error(平均二乗誤差)の使い方を目にすることになります。\nさて、モデルのオプティマイザと損失関数を設定しましょう。",
"model.compile(optimizer=tf.keras.optimizers.Adam(),\n loss='binary_crossentropy',\n metrics=['accuracy'])",
"検証用データを作る\n訓練を行う際、モデルが見ていないデータでの正解率を検証したいと思います。もとの訓練用データから、10,000個のサンプルを取り分けて検証用データ(validation set)を作ります。(なぜ、ここでテスト用データを使わないのでしょう? 今回の目的は、訓練用データだけを使って、モデルの開発とチューニングを行うことです。その後、テスト用データを1回だけ使い、正解率を検証するのです。)",
"x_val = train_data[:10000]\npartial_x_train = train_data[10000:]\n\ny_val = train_labels[:10000]\npartial_y_train = train_labels[10000:]",
"モデルの訓練\n512個のサンプルからなるミニバッチを使って、40エポックモデルを訓練します。この結果、x_trainとy_trainに含まれるすべてのサンプルを40回繰り返すことになります。訓練中、検証用データの10,000サンプルを用いて、モデルの損失と正解率をモニタリングします。",
"history = model.fit(partial_x_train,\n partial_y_train,\n epochs=40,\n batch_size=512,\n validation_data=(x_val, y_val),\n verbose=1)",
"モデルの評価\nさて、モデルの性能を見てみましょう。2つの値が返されます。損失(エラーを示す数値であり、小さい方が良い)と正解率です。",
"results = model.evaluate(test_data, test_labels, verbose=2)\n\nprint(results)",
"この、かなり素朴なアプローチでも87%前後の正解率を達成しました。もっと高度なアプローチを使えば、モデルの正解率は95%に近づけることもできるでしょう。\n正解率と損失の時系列グラフを描く\nmodel.fit() は、訓練中に発生したすべてのことを記録した辞書を含むHistory オブジェクトを返します。",
"history_dict = history.history\nhistory_dict.keys()",
"4つのエントリがあります。それぞれが、訓練と検証の際にモニターしていた指標を示します。これを使って、訓練時と検証時の損失を比較するグラフと、訓練時と検証時の正解率を比較するグラフを作成することができます。",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\nacc = history.history['acc']\nval_acc = history.history['val_acc']\nloss = history.history['loss']\nval_loss = history.history['val_loss']\n\nepochs = range(1, len(acc) + 1)\n\n# \"bo\" は青いドット\nplt.plot(epochs, loss, 'bo', label='Training loss')\n# ”b\" は青い実線\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and validation loss')\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.legend()\n\nplt.show()\n\nplt.clf() # 図のクリア\nacc_values = history_dict['acc']\nval_acc_values = history_dict['val_acc']\n\nplt.plot(epochs, acc, 'bo', label='Training acc')\nplt.plot(epochs, val_acc, 'b', label='Validation acc')\nplt.title('Training and validation accuracy')\nplt.xlabel('Epochs')\nplt.ylabel('Accuracy')\nplt.legend()\n\nplt.show()",
"上記のグラフでは、点が訓練時の損失と正解率を、実線が検証時の損失と正解率を表しています。\n訓練時の損失がエポックごとに減少し、訓練時の正解率がエポックごとに上昇していることに気がつくはずです。繰り返すごとに指定された数値指標を最小化する勾配降下法を最適化に使用している場合に期待される動きです。\nこれは、検証時の損失と正解率には当てはまりません。20エポックを過ぎたあたりから、横ばいになっているようです。これが、過学習の一例です。モデルの性能が、訓練用データでは高い一方で、見たことの無いデータではそれほど高くないというものです。このポイントをすぎると、モデルが最適化しすぎて、訓練用データでは特徴的であるが、テスト用データには一般化できない内部表現を学習しています。\nこのケースの場合、20エポックを過ぎたあたりで訓練をやめることで、過学習を防止することが出来ます。後ほど、コールバックを使って、これを自動化する方法を紹介します。"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
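The closing paragraph of the notebook above says that stopping training before overfitting sets in can be automated with a callback. In Keras that is typically done with keras.callbacks.EarlyStopping; the sketch below reuses the notebook's model, partial_x_train, partial_y_train, x_val, and y_val, and the patience value is an arbitrary choice.

```python
from tensorflow import keras

early_stop = keras.callbacks.EarlyStopping(
    monitor='val_loss',          # watch the validation loss
    patience=2,                  # tolerate 2 epochs without improvement
    restore_best_weights=True)   # roll back to the best epoch's weights

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    callbacks=[early_stop],
                    verbose=1)
```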
bioasp/meneco | meneco.ipynb | gpl-3.0 | [
"Meneco demo\nFirst you need to install Meneco. For example with pip ...",
"pip install meneco",
"then you can import the necessary modules ...",
"from clyngor.as_pyasp import TermSet,Atom\nfrom urllib.request import urlopen\nfrom meneco.meneco import query, utils, sbml",
"Next, you can load a draft network from an sbml file ...",
"draft_sbml= urlopen('https://raw.githubusercontent.com/bioasp/meneco/master/Ectodata/ectocyc.sbml')\ndraftnet = sbml.readSBMLnetwork(draft_sbml, 'draft') ",
"load the seeds ...",
"seeds_sbml = urlopen('https://raw.githubusercontent.com/bioasp/meneco/master/Ectodata/seeds.sbml')\nseeds = sbml.readSBMLseeds(seeds_sbml)",
"and load the targets ...",
"targets_sbml = urlopen('https://raw.githubusercontent.com/bioasp/meneco/master/Ectodata/targets.sbml')\ntargets = sbml.readSBMLtargets(targets_sbml)",
"Then you can check the draft network for unproducible targets ...",
"model = query.get_unproducible(draftnet, targets, seeds)\nunproducible = set(a[0] for pred in model if pred == 'unproducible_target' for a in model[pred])\nprint('{0} unproducible targets:\\n\\t{1}\\n'.format(len(unproducible), '\\n\\t'.join(unproducible)))",
"You can load another reaction network like metacyc repair data base ...",
"repair_sbml = urlopen('https://raw.githubusercontent.com/bioasp/meneco/master/Ectodata/metacyc_16-5.sbml')\nrepairnet = sbml.readSBMLnetwork(repair_sbml, 'repairnet')",
"and combine the draft network with the repair database ...",
"combinet = draftnet\ncombinet = TermSet(combinet.union(repairnet))",
"and then check for which targets producibilty cannot be restored even with the combined networks ...",
"model = query.get_unproducible(combinet, targets, seeds)\nnever_producible = set(a[0] for pred in model if pred == 'unproducible_target' for a in model[pred])\nprint('{0} unreconstructable targets:\\n\\t{1}\\n'.format(len(never_producible), '\\n\\t'.join(never_producible)))",
"and for which targets the production paths are repairable ...",
"reconstructable_targets = unproducible.difference(never_producible)\nprint('{0} reconstructable targets:\\n\\t{1}\\n'.format(len(reconstructable_targets), '\\n\\t'.join(reconstructable_targets)))",
"You can compute the essential reactions for the repairable target ...",
"essential_reactions = set()\nfor t in reconstructable_targets:\n single_target = TermSet()\n single_target.add(Atom('target(\"' + t + '\")'))\n print('\\nComputing essential reactions for', t,'... ', end=' ')\n model = query.get_intersection_of_completions(draftnet, repairnet, seeds, single_target)\n print(' done.')\n essentials_lst = set(a[0] for pred in model if pred == 'xreaction' for a in model[pred])\n print('{0} essential reactions for target {1}:\\n\\t{2}'.format(len(essentials_lst), t, '\\n\\t'.join(essentials_lst)))\n essential_reactions = essential_reactions.union(essentials_lst)\nprint('Overall {0} essential reactions found:\\n\\t{1}\\n'.format(len(essential_reactions), '\\n\\t'.join(essential_reactions)))",
"You can compute a completion of minimal size suitable to produce all reconstructable targets ...",
"targets = TermSet(Atom('target(\"' + t+'\")') for t in reconstructable_targets)\nmodel = query.get_minimal_completion_size(draftnet, repairnet, seeds, targets)\none_min_sol_lst = set(a[0] for pred in model if pred == 'xreaction' for a in model[pred])\noptimum = len(one_min_sol_lst)\n\nprint('minimal size =',optimum)\n\nprint('One minimal completion of size {0}:\\n\\t{1}\\n'.format(\n optimum, '\\n\\t'.join(one_min_sol_lst)))",
"We can compute the common reactions in all completion with a given size ...",
"model = query.get_intersection_of_optimal_completions(draftnet, repairnet, seeds, targets, optimum)\ncautious_sol_lst = set(a[0] for pred in model if pred == 'xreaction' for a in model[pred])\n\nprint('Intersection of all solutions of size {0}:\\n\\t{1}\\n'.format(\n optimum, '\\n\\t'.join(cautious_sol_lst)))",
"We can compute the union of all completion with a given size ...",
"model = query.get_union_of_optimal_completions(draftnet, repairnet, seeds, targets, optimum)\nbrave_sol_lst = set(a[0] for pred in model if pred == 'xreaction' for a in model[pred])\n\nprint('Intersection of all solutions of size {0}:\\n\\t{1}\\n'.format(\n optimum, '\\n\\t'.join(brave_sol_lst)))",
"And finally compute all (for this notebook we print only the first three) completions with a given size ...",
"models = query.get_optimal_completions(draftnet, repairnet, seeds, targets, optimum)\ncount = 0\nfor model in models:\n one_min_sol_lst = set(a[0] for pred in model if pred == 'xreaction' for a in model[pred])\n count += 1\n print('Completion {0}:\\n\\t{1}\\n'.format(\n str(count), '\\n\\t'.join(one_min_sol_lst)))\n if count == 3: break",
"That's all folks!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
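Every query in the Meneco demo above extracts atoms with the same generator expression. A small helper, built only from the structures already shown (the answer-set model objects returned by meneco's query functions), keeps that pattern in one place; the function name is an invention for illustration.

```python
def atoms_of(model, predicate):
    """Collect the first argument of every atom of `predicate` in a meneco answer set."""
    return set(a[0] for pred in model if pred == predicate for a in model[pred])

# Example usage with the objects defined in the notebook above:
model = query.get_unproducible(draftnet, targets, seeds)
unproducible = atoms_of(model, 'unproducible_target')
print('{0} unproducible targets'.format(len(unproducible)))
```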
google/eng-edu | ml/cc/exercises/intro_to_neural_nets.ipynb | apache-2.0 | [
"#@title Copyright 2020 Google LLC. Double-click here for license information.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Introduction to Neural Nets\nThis Colab builds a deep neural network to perform more sophisticated linear regression than the earlier Colabs.\nLearning Objectives:\nAfter doing this Colab, you'll know how to do the following:\n\nCreate a simple deep neural network.\nTune the hyperparameters for a simple deep neural network.\n\nThe Dataset\nLike several of the previous Colabs, this Colab uses the California Housing Dataset.\nUse the right version of TensorFlow\nThe following hidden code cell ensures that the Colab will run on TensorFlow 2.X.",
"#@title Run on TensorFlow 2.x\n%tensorflow_version 2.x\nfrom __future__ import absolute_import, division, print_function, unicode_literals",
"Import relevant modules\nThe following hidden code cell imports the necessary code to run the code in the rest of this Colaboratory.",
"#@title Import relevant modules\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\nfrom tensorflow.keras import layers\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\n\n# The following lines adjust the granularity of reporting. \npd.options.display.max_rows = 10\npd.options.display.float_format = \"{:.1f}\".format\n\nprint(\"Imported modules.\")",
"Load the dataset\nLike most of the previous Colab exercises, this exercise uses the California Housing Dataset. The following code cell loads the separate .csv files and creates the following two pandas DataFrames:\n\ntrain_df, which contains the training set\ntest_df, which contains the test set",
"train_df = pd.read_csv(\"https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv\")\ntrain_df = train_df.reindex(np.random.permutation(train_df.index)) # shuffle the examples\ntest_df = pd.read_csv(\"https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv\")",
"Normalize values\nWhen building a model with multiple features, the values of each feature should cover roughly the same range. The following code cell normalizes datasets by converting each raw value to its Z-score. (For more information about Z-scores, see the Classification exercise.)",
"#@title Convert raw values to their Z-scores \n\n# Calculate the Z-scores of each column in the training set:\ntrain_df_mean = train_df.mean()\ntrain_df_std = train_df.std()\ntrain_df_norm = (train_df - train_df_mean)/train_df_std\n\n# Calculate the Z-scores of each column in the test set.\ntest_df_mean = test_df.mean()\ntest_df_std = test_df.std()\ntest_df_norm = (test_df - test_df_mean)/test_df_std\n\nprint(\"Normalized the values.\")",
"Represent data\nThe following code cell creates a feature layer containing three features:\n\nlatitude X longitude (a feature cross)\nmedian_income\npopulation\n\nThis code cell specifies the features that you'll ultimately train the model on and how each of those features will be represented. The transformations (collected in my_feature_layer) don't actually get applied until you pass a DataFrame to it, which will happen when we train the model.",
"# Create an empty list that will eventually hold all created feature columns.\nfeature_columns = []\n\n# We scaled all the columns, including latitude and longitude, into their\n# Z scores. So, instead of picking a resolution in degrees, we're going\n# to use resolution_in_Zs. A resolution_in_Zs of 1 corresponds to \n# a full standard deviation. \nresolution_in_Zs = 0.3 # 3/10 of a standard deviation.\n\n# Create a bucket feature column for latitude.\nlatitude_as_a_numeric_column = tf.feature_column.numeric_column(\"latitude\")\nlatitude_boundaries = list(np.arange(int(min(train_df_norm['latitude'])), \n int(max(train_df_norm['latitude'])), \n resolution_in_Zs))\nlatitude = tf.feature_column.bucketized_column(latitude_as_a_numeric_column, latitude_boundaries)\n\n# Create a bucket feature column for longitude.\nlongitude_as_a_numeric_column = tf.feature_column.numeric_column(\"longitude\")\nlongitude_boundaries = list(np.arange(int(min(train_df_norm['longitude'])), \n int(max(train_df_norm['longitude'])), \n resolution_in_Zs))\nlongitude = tf.feature_column.bucketized_column(longitude_as_a_numeric_column, \n longitude_boundaries)\n\n# Create a feature cross of latitude and longitude.\nlatitude_x_longitude = tf.feature_column.crossed_column([latitude, longitude], hash_bucket_size=100)\ncrossed_feature = tf.feature_column.indicator_column(latitude_x_longitude)\nfeature_columns.append(crossed_feature) \n\n# Represent median_income as a floating-point value.\nmedian_income = tf.feature_column.numeric_column(\"median_income\")\nfeature_columns.append(median_income)\n\n# Represent population as a floating-point value.\npopulation = tf.feature_column.numeric_column(\"population\")\nfeature_columns.append(population)\n\n# Convert the list of feature columns into a layer that will later be fed into\n# the model. \nmy_feature_layer = tf.keras.layers.DenseFeatures(feature_columns)",
"Build a linear regression model as a baseline\nBefore creating a deep neural net, find a baseline loss by running a simple linear regression model that uses the feature layer you just created.",
"#@title Define the plotting function.\n\ndef plot_the_loss_curve(epochs, mse):\n \"\"\"Plot a curve of loss vs. epoch.\"\"\"\n\n plt.figure()\n plt.xlabel(\"Epoch\")\n plt.ylabel(\"Mean Squared Error\")\n\n plt.plot(epochs, mse, label=\"Loss\")\n plt.legend()\n plt.ylim([mse.min()*0.95, mse.max() * 1.03])\n plt.show() \n\nprint(\"Defined the plot_the_loss_curve function.\")\n\n#@title Define functions to create and train a linear regression model\ndef create_model(my_learning_rate, feature_layer):\n \"\"\"Create and compile a simple linear regression model.\"\"\"\n # Most simple tf.keras models are sequential.\n model = tf.keras.models.Sequential()\n\n # Add the layer containing the feature columns to the model.\n model.add(feature_layer)\n\n # Add one linear layer to the model to yield a simple linear regressor.\n model.add(tf.keras.layers.Dense(units=1, input_shape=(1,)))\n\n # Construct the layers into a model that TensorFlow can execute.\n model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=my_learning_rate),\n loss=\"mean_squared_error\",\n metrics=[tf.keras.metrics.MeanSquaredError()])\n\n return model \n\n\ndef train_model(model, dataset, epochs, batch_size, label_name):\n \"\"\"Feed a dataset into the model in order to train it.\"\"\"\n\n # Split the dataset into features and label.\n features = {name:np.array(value) for name, value in dataset.items()}\n label = np.array(features.pop(label_name))\n history = model.fit(x=features, y=label, batch_size=batch_size,\n epochs=epochs, shuffle=True)\n\n # Get details that will be useful for plotting the loss curve.\n epochs = history.epoch\n hist = pd.DataFrame(history.history)\n rmse = hist[\"mean_squared_error\"]\n\n return epochs, rmse \n\nprint(\"Defined the create_model and train_model functions.\")",
"Run the following code cell to invoke the the functions defined in the preceding two code cells. (Ignore the warning messages.)\nNote: Because we've scaled all the input data, including the label, the resulting loss values will be much less than previous models. \nNote: Depending on the version of TensorFlow, running this cell might generate WARNING messages. Please ignore these warnings.",
"# The following variables are the hyperparameters.\nlearning_rate = 0.01\nepochs = 15\nbatch_size = 1000\nlabel_name = \"median_house_value\"\n\n# Establish the model's topography.\nmy_model = create_model(learning_rate, my_feature_layer)\n\n# Train the model on the normalized training set.\nepochs, mse = train_model(my_model, train_df_norm, epochs, batch_size, label_name)\nplot_the_loss_curve(epochs, mse)\n\ntest_features = {name:np.array(value) for name, value in test_df_norm.items()}\ntest_label = np.array(test_features.pop(label_name)) # isolate the label\nprint(\"\\n Evaluate the linear regression model against the test set:\")\nmy_model.evaluate(x = test_features, y = test_label, batch_size=batch_size)",
"Define a deep neural net model\nThe create_model function defines the topography of the deep neural net, specifying the following:\n\nThe number of layers in the deep neural net.\nThe number of nodes in each layer.\n\nThe create_model function also defines the activation function of each layer.",
"def create_model(my_learning_rate, my_feature_layer):\n \"\"\"Create and compile a simple linear regression model.\"\"\"\n # Most simple tf.keras models are sequential.\n model = tf.keras.models.Sequential()\n\n # Add the layer containing the feature columns to the model.\n model.add(my_feature_layer)\n\n # Describe the topography of the model by calling the tf.keras.layers.Dense\n # method once for each layer. We've specified the following arguments:\n # * units specifies the number of nodes in this layer.\n # * activation specifies the activation function (Rectified Linear Unit).\n # * name is just a string that can be useful when debugging.\n\n # Define the first hidden layer with 20 nodes. \n model.add(tf.keras.layers.Dense(units=20, \n activation='relu', \n name='Hidden1'))\n \n # Define the second hidden layer with 12 nodes. \n model.add(tf.keras.layers.Dense(units=12, \n activation='relu', \n name='Hidden2'))\n \n # Define the output layer.\n model.add(tf.keras.layers.Dense(units=1, \n name='Output')) \n \n model.compile(optimizer=tf.keras.optimizers.Adam(lr=my_learning_rate),\n loss=\"mean_squared_error\",\n metrics=[tf.keras.metrics.MeanSquaredError()])\n\n return model",
"Define a training function\nThe train_model function trains the model from the input features and labels. The tf.keras.Model.fit method performs the actual training. The x parameter of the fit method is very flexible, enabling you to pass feature data in a variety of ways. The following implementation passes a Python dictionary in which:\n\nThe keys are the names of each feature (for example, longitude, latitude, and so on).\nThe value of each key is a NumPy array containing the values of that feature. \n\nNote: Although you are passing every feature to model.fit, most of those values will be ignored. Only the features accessed by my_feature_layer will actually be used to train the model.",
"def train_model(model, dataset, epochs, label_name,\n batch_size=None):\n \"\"\"Train the model by feeding it data.\"\"\"\n\n # Split the dataset into features and label.\n features = {name:np.array(value) for name, value in dataset.items()}\n label = np.array(features.pop(label_name))\n history = model.fit(x=features, y=label, batch_size=batch_size,\n epochs=epochs, shuffle=True) \n\n # The list of epochs is stored separately from the rest of history.\n epochs = history.epoch\n \n # To track the progression of training, gather a snapshot\n # of the model's mean squared error at each epoch. \n hist = pd.DataFrame(history.history)\n mse = hist[\"mean_squared_error\"]\n\n return epochs, mse",
"Call the functions to build and train a deep neural net\nOkay, it is time to actually train the deep neural net. If time permits, experiment with the three hyperparameters to see if you can reduce the loss\nagainst the test set.",
"# The following variables are the hyperparameters.\nlearning_rate = 0.01\nepochs = 20\nbatch_size = 1000\n\n# Specify the label\nlabel_name = \"median_house_value\"\n\n# Establish the model's topography.\nmy_model = create_model(learning_rate, my_feature_layer)\n\n# Train the model on the normalized training set. We're passing the entire\n# normalized training set, but the model will only use the features\n# defined by the feature_layer.\nepochs, mse = train_model(my_model, train_df_norm, epochs, \n label_name, batch_size)\nplot_the_loss_curve(epochs, mse)\n\n# After building a model against the training set, test that model\n# against the test set.\ntest_features = {name:np.array(value) for name, value in test_df_norm.items()}\ntest_label = np.array(test_features.pop(label_name)) # isolate the label\nprint(\"\\n Evaluate the new model against the test set:\")\nmy_model.evaluate(x = test_features, y = test_label, batch_size=batch_size)",
"Task 1: Compare the two models\nHow did the deep neural net perform against the baseline linear regression model?",
"#@title Double-click to view a possible answer\n\n# Assuming that the linear model converged and\n# the deep neural net model also converged, please \n# compare the test set loss for each.\n# In our experiments, the loss of the deep neural \n# network model was consistently lower than \n# that of the linear regression model, which \n# suggests that the deep neural network model \n# will make better predictions than the \n# linear regression model.",
"Task 2: Optimize the deep neural network's topography\nExperiment with the number of layers of the deep neural network and the number of nodes in each layer. Aim to achieve both of the following goals:\n\nLower the loss against the test set.\nMinimize the overall number of nodes in the deep neural net. \n\nThe two goals may be in conflict.",
"#@title Double-click to view a possible answer\n\n# Many answers are possible. We noticed the \n# following trends:\n# * Two layers outperformed one layer, but \n# three layers did not perform significantly \n# better than two layers; two layers \n# outperformed one layer.\n# In other words, two layers seemed best. \n# * Setting the topography as follows produced \n# reasonably good results with relatively few \n# nodes:\n# * 10 nodes in the first layer.\n# * 6 nodes in the second layer.\n# As the number of nodes in each layer dropped\n# below the preceding, test loss increased. \n# However, depending on your application, hardware\n# constraints, and the relative pain inflicted \n# by a less accurate model, a smaller network \n# (for example, 6 nodes in the first layer and \n# 4 nodes in the second layer) might be \n# acceptable.",
"Task 3: Regularize the deep neural network (if you have enough time)\nNotice that the model's loss against the test set is much higher than the loss against the training set. In other words, the deep neural network is overfitting to the data in the training set. To reduce overfitting, regularize the model. The course has suggested several different ways to regularize a model, including:\n\nL1 regularization\nL2 regularization\nDropout regularization\n\nYour task is to experiment with one or more regularization mechanisms to bring the test loss closer to the training loss (while still keeping test loss relatively low). \nNote: When you add a regularization function to a model, you might need to tweak other hyperparameters. \nImplementing L1 or L2 regularization\nTo use L1 or L2 regularization on a hidden layer, specify the kernel_regularizer argument to tf.keras.layers.Dense. Assign one of the following methods to this argument:\n\ntf.keras.regularizers.l1 for L1 regularization\ntf.keras.regularizers.l2 for L2 regularization\n\nEach of the preceding methods takes an l parameter, which adjusts the regularization rate. Assign a decimal value between 0 and 1.0 to l; the higher the decimal, the greater the regularization. For example, the following applies L2 regularization at a strength of 0.01. \nmodel.add(tf.keras.layers.Dense(units=20, \n activation='relu',\n kernel_regularizer=tf.keras.regularizers.l2(l=0.01),\n name='Hidden1'))\nImplementing Dropout regularization\nYou implement dropout regularization as a separate layer in the topography. For example, the following code demonstrates how to add a dropout regularization layer between the first hidden layer and the second hidden layer:\n```\nmodel.add(tf.keras.layers.Dense( define first hidden layer)\nmodel.add(tf.keras.layers.Dropout(rate=0.25))\nmodel.add(tf.keras.layers.Dense( define second hidden layer)\n```\nThe rate parameter to tf.keras.layers.Dropout specifies the fraction of nodes that the model should drop out during training.",
"#@title Double-click for a possible solution\n\n# The following \"solution\" uses L2 regularization to bring training loss\n# and test loss closer to each other. Many, many other solutions are possible.\n\n\ndef create_model(my_learning_rate, my_feature_layer):\n \"\"\"Create and compile a simple linear regression model.\"\"\"\n\n # Discard any pre-existing version of the model.\n model = None\n\n # Most simple tf.keras models are sequential.\n model = tf.keras.models.Sequential()\n\n # Add the layer containing the feature columns to the model.\n model.add(my_feature_layer)\n\n # Describe the topography of the model. \n\n # Implement L2 regularization in the first hidden layer.\n model.add(tf.keras.layers.Dense(units=20, \n activation='relu',\n kernel_regularizer=tf.keras.regularizers.l2(0.04),\n name='Hidden1'))\n \n # Implement L2 regularization in the second hidden layer.\n model.add(tf.keras.layers.Dense(units=12, \n activation='relu', \n kernel_regularizer=tf.keras.regularizers.l2(0.04),\n name='Hidden2'))\n\n # Define the output layer.\n model.add(tf.keras.layers.Dense(units=1, \n name='Output')) \n \n model.compile(optimizer=tf.keras.optimizers.Adam(lr=my_learning_rate),\n loss=\"mean_squared_error\",\n metrics=[tf.keras.metrics.MeanSquaredError()])\n\n return model \n\n# Call the new create_model function and the other (unchanged) functions.\n\n# The following variables are the hyperparameters.\nlearning_rate = 0.007\nepochs = 140\nbatch_size = 1000\n\nlabel_name = \"median_house_value\"\n\n# Establish the model's topography.\nmy_model = create_model(learning_rate, my_feature_layer)\n\n# Train the model on the normalized training set.\nepochs, mse = train_model(my_model, train_df_norm, epochs, \n label_name, batch_size)\nplot_the_loss_curve(epochs, mse)\n\ntest_features = {name:np.array(value) for name, value in test_df_norm.items()}\ntest_label = np.array(test_features.pop(label_name)) # isolate the label\nprint(\"\\n Evaluate the new model against the test set:\")\nmy_model.evaluate(x = test_features, y = test_label, batch_size=batch_size) "
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ray-project/ray | doc/source/ray-air/examples/analyze_tuning_results.ipynb | apache-2.0 | [
"Analyzing results from hyperparameter tuning\nIn this example, we will go through how you can use Ray AIR to run a distributed hyperparameter experiment to find optimal hyperparameters for an XGBoost model.\nWhat we'll cover:\n- How to load data from an Sklearn example dataset\n- How to initialize an XGBoost trainer\n- How to define a search space for regular XGBoost parameters and for data preprocessors\n- How to fetch the best obtained result from the tuning run\n- How to fetch a dataframe to do further analysis on the results\nWe'll use the Covertype dataset provided from sklearn to train a multiclass classification task using XGBoost.\nIn this dataset, we try to predict the forst cover type (e.g. \"lodgehole pine\") from cartographic variables, like the distance to the closest road, or the hillshade at different times of the day. The features are binary, discrete and continuous and thus well suited for a decision-tree based classification task.\nYou can find more information about the dataset on the dataset homepage.\nWe will train XGBoost models on this dataset. Because model training performance can be influenced by hyperparameter choices, we will generate several different configurations and train them in parallel. Notably each of these trials will itself start a distributed training job to speed up training. All of this happens automatically within Ray AIR.\nFirst, let's make sure we have all dependencies installed:",
"!pip install -q \"ray[all]\" sklearn",
"Then we can start with some imports.",
"import pandas as pd\nfrom sklearn.datasets import fetch_covtype\n\nimport ray\nfrom ray import tune\nfrom ray.air import RunConfig\nfrom ray.train.xgboost import XGBoostTrainer\nfrom ray.tune.tune_config import TuneConfig\nfrom ray.tune.tuner import Tuner",
"We'll define a utility function to create a Ray Dataset from the Sklearn dataset. We expect the target column to be in the dataframe, so we'll add it to the dataframe manually.",
"def get_training_data() -> ray.data.Dataset:\n data_raw = fetch_covtype()\n df = pd.DataFrame(data_raw[\"data\"], columns=data_raw[\"feature_names\"])\n df[\"target\"] = data_raw[\"target\"]\n return ray.data.from_pandas(df)\n\n\ntrain_dataset = get_training_data()",
"Let's take a look at the schema here:",
"print(train_dataset)",
"Since we'll be training a multiclass prediction model, we have to pass some information to XGBoost. For instance, XGBoost expects us to provide the number of classes, and multiclass-enabled evaluation metrices.\nFor a good overview of commonly used hyperparameters, see our tutorial in the docs.",
"# XGBoost specific params\nparams = {\n \"tree_method\": \"approx\",\n \"objective\": \"multi:softmax\",\n \"eval_metric\": [\"mlogloss\", \"merror\"],\n \"num_class\": 8,\n \"min_child_weight\": 2\n}",
"With these parameters in place, we'll create a Ray AIR XGBoostTrainer.\nNote a few things here. First, we pass in a scaling_config to configure the distributed training behavior of each individual XGBoost training job. Here, we want to distribute training across 2 workers.\nThe label_column specifies which columns in the dataset contains the target values. params are the XGBoost training params defined above - we can tune these later! The datasets dict contains the dataset we would like to train on. Lastly, we pass the number of boosting rounds to XGBoost.",
"trainer = XGBoostTrainer(\n scaling_config={\"num_workers\": 2},\n label_column=\"target\",\n params=params,\n datasets={\"train\": train_dataset},\n num_boost_round=10,\n)",
"We can now create the Tuner with a search space to override some of the default parameters in the XGBoost trainer.\nHere, we just want to the XGBoost max_depth and min_child_weights parameters. Note that we specifically specified min_child_weight=2 in the default XGBoost trainer - this value will be overwritten during tuning.\nWe configure Tune to minimize the train-mlogloss metric. In random search, this doesn't affect the evaluated configurations, but it will affect our default results fetching for analysis later.\nBy the way, the name train-mlogloss is provided by the XGBoost library - train is the name of the dataset and mlogloss is the metric we passed in the XGBoost params above. Trainables can report any number of results (in this case we report 2), but most search algorithms only act on one of them - here we chose the mlogloss.",
"tuner = Tuner(\n trainer,\n run_config=RunConfig(verbose=1),\n param_space={\n \"params\": {\n \"max_depth\": tune.randint(2, 8), \n \"min_child_weight\": tune.randint(1, 10), \n },\n },\n tune_config=TuneConfig(num_samples=8, metric=\"train-mlogloss\", mode=\"min\"),\n)",
"Let's run the tuning. This will take a few minutes to complete.",
"results = tuner.fit()",
"Now that we obtained the results, we can analyze them. For instance, we can fetch the best observed result according to the configured metric and mode and print it:",
"# This will fetch the best result according to the `metric` and `mode` specified\n# in the `TuneConfig` above:\n\nbest_result = results.get_best_result()\n\nprint(\"Best result error rate\", best_result.metrics[\"train-merror\"])",
"For more sophisticated analysis, we can get a pandas dataframe with all trial results:",
"df = results.get_dataframe()\nprint(df.columns)",
"As an example, let's group the results per min_child_weight parameter and fetch the minimal obtained values:",
"groups = df.groupby(\"config/params/min_child_weight\")\nmins = groups.min()\n\nfor min_child_weight, row in mins.iterrows():\n print(\"Min child weight\", min_child_weight, \"error\", row[\"train-merror\"], \"logloss\", row[\"train-mlogloss\"])\n",
"As you can see in our example run, the min child weight of 2 showed the best prediction accuracy with 0.196929. That's the same as results.get_best_result() gave us!\nThe results.get_dataframe() returns the last reported results per trial. If you want to obtain the best ever observed results, you can pass the filter_metric and filter_mode arguments to results.get_dataframe(). In our example, we'll filter the minimum ever observed train-merror for each trial:",
"df_min_error = results.get_dataframe(filter_metric=\"train-merror\", filter_mode=\"min\")\ndf_min_error[\"train-merror\"]",
"The best ever observed train-merror is 0.196929, the same as the minimum error in our grouped results. This is expected, as the classification error in XGBoost usually goes down over time - meaning our last results are usually the best results.\nAnd that's how you analyze your hyperparameter tuning results. If you would like to have access to more analytics, please feel free to file a feature request e.g. as a Github issue or on our Discuss platform!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
wilselby/diy_driverless_car_ROS | rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb | bsd-2-clause | [
"<a href=\"https://colab.research.google.com/github/wilselby/diy_driverless_car_ROS/blob/ml-model/rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nDevelopment of an End-to-End ML Model for Navigating an RC car with a Camera\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/wilselby/diy_driverless_car_ROS/blob/ml-model/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/wilselby/diy_driverless_car_ROS/blob/ml-model/rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n</table>\n\nEnvironment Setup\nImport Dependencies",
"import os\nimport csv\nimport cv2\nimport matplotlib.pyplot as plt\nimport random\nimport pprint\n\nimport numpy as np\nfrom numpy import expand_dims\n\n%tensorflow_version 1.x\nimport tensorflow as tf\ntf.logging.set_verbosity(tf.logging.ERROR)\n\nfrom keras import backend as K\nfrom keras.models import Model, Sequential\nfrom keras.models import load_model\nfrom keras.layers import Dense, GlobalAveragePooling2D, MaxPooling2D, Lambda, Cropping2D\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.layers.core import Flatten, Dense, Dropout, SpatialDropout2D\nfrom keras.optimizers import Adam\nfrom keras.callbacks import ModelCheckpoint, TensorBoard\nfrom keras.callbacks import EarlyStopping, ReduceLROnPlateau\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras.preprocessing.image import load_img\nfrom keras.preprocessing.image import img_to_array \n \nfrom google.colab.patches import cv2_imshow\n \nimport sklearn\nfrom sklearn.model_selection import train_test_split\nimport pandas as pd\n\nprint(\"Tensorflow Version:\",tf.__version__)\nprint(\"Tensorflow Keras Version:\",tf.keras.__version__)\nprint(\"Eager mode: \", tf.executing_eagerly())\n",
"Confirm TensorFlow can see the GPU\nSimply select \"GPU\" in the Accelerator drop-down in Notebook Settings (either through the Edit menu or the command palette at cmd/ctrl-shift-P).",
"device_name = tf.test.gpu_device_name()\n\nif device_name != '/device:GPU:0':\n #raise SystemError('GPU device not found')\n print('GPU device not found')\nelse:\n print('Found GPU at: {}'.format(device_name))\n \n #GPU count and name\n !nvidia-smi -L",
"Load the Dataset\nDownload and Extract the Dataset",
"# Download the dataset\n!curl -O https://selbystorage.s3-us-west-2.amazonaws.com/research/office_2/office_2.tar.gz\n\ndata_set = 'office_2'\ntar_file = data_set + '.tar.gz'\n\n# Unzip the .tgz file\n# -x for extract\n# -v for verbose \n# -z for gnuzip\n# -f for file (should come at last just before file name)\n# -C to extract the zipped contents to a different directory\n!tar -xvzf $tar_file",
"Parse the CSV File",
"# Define path to csv file\ncsv_path = data_set + '/interpolated.csv'\n\n# Load the CSV file into a pandas dataframe\ndf = pd.read_csv(csv_path, sep=\",\")\n\n# Print the dimensions\nprint(\"Dataset Dimensions:\")\nprint(df.shape)\n\n# Print the first 5 lines of the dataframe for review\nprint(\"\\nDataset Summary:\")\ndf.head(5)\n ",
"Clean and Pre-process the Dataset\nRemove Unneccessary Columns",
"# Remove 'index' and 'frame_id' columns \ndf.drop(['index','frame_id'],axis=1,inplace=True)\n\n# Verify new dataframe dimensions\nprint(\"Dataset Dimensions:\")\nprint(df.shape)\n\n# Print the first 5 lines of the new dataframe for review\nprint(\"\\nDataset Summary:\")\ndf.head(5)",
"Detect Missing Data",
"# Detect Missing Values\nprint(\"Any Missing Values?: {}\".format(df.isnull().values.any()))\n\n# Total Sum\nprint(\"\\nTotal Number of Missing Values: {}\".format(df.isnull().sum().sum()))\n\n# Sum Per Column\nprint(\"\\nTotal Number of Missing Values per Column:\")\nprint(df.isnull().sum())",
"Remove Zero Throttle Values",
"# Determine if any throttle values are zeroes\nprint(\"Any 0 throttle values?: {}\".format(df['speed'].eq(0).any()))\n\n# Determine number of 0 throttle values:\nprint(\"\\nNumber of 0 throttle values: {}\".format(df['speed'].eq(0).sum()))\n\n# Remove rows with 0 throttle values\nif df['speed'].eq(0).any():\n df = df.query('speed != 0')\n \n # Reset the index\n df.reset_index(inplace=True,drop=True)\n \n# Verify new dataframe dimensions\nprint(\"\\nNew Dataset Dimensions:\")\nprint(df.shape)\ndf.head(5)",
"View Label Statistics",
"# Steering Command Statistics\nprint(\"\\nSteering Command Statistics:\")\nprint(df['angle'].describe())\n\nprint(\"\\nThrottle Command Statistics:\")\n# Throttle Command Statistics\nprint(df['speed'].describe())",
"View Histogram of Steering Commands",
"#@title Select the number of histogram bins\n\nnum_bins = 25 #@param {type:\"slider\", min:5, max:50, step:1}\n\nhist, bins = np.histogram(df['angle'], num_bins)\ncenter = (bins[:-1]+ bins[1:]) * 0.5\nplt.bar(center, hist, width=0.05)\n#plt.plot((np.min(df['angle']), np.max(df['angle'])), (samples_per_bin, samples_per_bin))\n\n# Normalize the histogram (150-300 for RBG)\n#@title Normalize the Histogram { run: \"auto\" }\nhist = True #@param {type:\"boolean\"}\n\nremove_list = []\nsamples_per_bin = 200\n\nif hist:\n for j in range(num_bins):\n list_ = []\n for i in range(len(df['angle'])):\n if df.loc[i,'angle'] >= bins[j] and df.loc[i,'angle'] <= bins[j+1]:\n list_.append(i)\n random.shuffle(list_)\n list_ = list_[samples_per_bin:]\n remove_list.extend(list_)\n\n print('removed:', len(remove_list))\n df.drop(df.index[remove_list], inplace=True)\n df.reset_index(inplace=True)\n df.drop(['index'],axis=1,inplace=True)\n print('remaining:', len(df))\n \n hist, _ = np.histogram(df['angle'], (num_bins))\n plt.bar(center, hist, width=0.05)\n plt.plot((np.min(df['angle']), np.max(df['angle'])), (samples_per_bin, samples_per_bin))",
"View a Sample Image",
"# View a Single Image \nindex = random.randint(0,df.shape[0]-1)\n\nimg_name = data_set + '/' + df.loc[index,'filename']\nangle = df.loc[index,'angle']\n\ncenter_image = cv2.imread(img_name)\ncenter_image_mod = cv2.resize(center_image, (320,180))\ncenter_image_mod = cv2.cvtColor(center_image_mod,cv2.COLOR_RGB2BGR)\n\n# Crop the image\nheight_min = 75 \nheight_max = center_image_mod.shape[0]\nwidth_min = 0\nwidth_max = center_image_mod.shape[1]\n\ncrop_img = center_image_mod[height_min:height_max, width_min:width_max]\n\nplt.subplot(2,1,1)\nplt.imshow(center_image_mod)\nplt.grid(False)\nplt.xlabel('angle: {:.2}'.format(angle))\nplt.show() \n\nplt.subplot(2,1,2)\nplt.imshow(crop_img)\nplt.grid(False)\nplt.xlabel('angle: {:.2}'.format(angle))\nplt.show() ",
"View Multiple Images",
"# Number of Images to Display\nnum_images = 4\n\n# Display the images\ni = 0\nfor i in range (i,num_images):\n index = random.randint(0,df.shape[0]-1)\n image_path = df.loc[index,'filename']\n angle = df.loc[index,'angle']\n img_name = data_set + '/' + image_path\n image = cv2.imread(img_name)\n image = cv2.resize(image, (320,180))\n image = cv2.cvtColor(image,cv2.COLOR_RGB2BGR)\n plt.subplot(num_images/2,num_images/2,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(image, cmap=plt.cm.binary)\n plt.xlabel('angle: {:.3}'.format(angle))\n i += 1",
"Split the Dataset\nDefine an ImageDataGenerator to Augment Images",
"# Create image data augmentation generator and choose augmentation types\ndatagen = ImageDataGenerator(\n #rotation_range=20,\n zoom_range=0.15,\n #width_shift_range=0.1,\n #height_shift_range=0.2,\n #shear_range=10,\n brightness_range=[0.5,1.0],\n \t #horizontal_flip=True,\n #vertical_flip=True,\n #channel_shift_range=100.0,\n fill_mode=\"reflect\")",
"View Image Augmentation Examples",
"# load the image\nindex = random.randint(0,df.shape[0]-1)\n\nimg_name = data_set + '/' + df.loc[index,'filename']\noriginal_image = cv2.imread(img_name)\noriginal_image = cv2.cvtColor(original_image,cv2.COLOR_RGB2BGR)\noriginal_image = cv2.resize(original_image, (320,180))\nlabel = df.loc[index,'angle']\n\n# convert to numpy array\ndata = img_to_array(original_image)\n\n# expand dimension to one sample\ntest = expand_dims(data, 0)\n\n# prepare iterator\nit = datagen.flow(test, batch_size=1)\n\n# generate batch of images\nbatch = it.next()\n\n# convert to unsigned integers for viewing\nimage_aug = batch[0].astype('uint8')\n\nprint(\"Augmenting a Single Image: \\n\")\n\nplt.subplot(2,1,1)\nplt.imshow(original_image)\nplt.grid(False)\nplt.xlabel('angle: {:.2}'.format(label))\nplt.show() \n\nplt.subplot(2,1,2)\nplt.imshow(image_aug)\nplt.grid(False)\nplt.xlabel('angle: {:.2}'.format(label))\nplt.show() \n\nprint(\"Multiple Augmentations: \\n\")\n# generate samples and plot\nfor i in range(0,num_images):\n\t# define subplot\n\tplt.subplot(num_images/2,num_images/2,i+1)\n\t# generate batch of images\n\tbatch = it.next()\n\t# convert to unsigned integers for viewing\n\timage = batch[0].astype('uint8')\n\t# plot raw pixel data\n\tplt.imshow(image)\n# show the figure\nplt.show()\n",
"Define a Data Generator",
"def generator(samples, batch_size=32, aug=0):\n num_samples = len(samples)\n\n while 1: # Loop forever so the generator never terminates\n for offset in range(0, num_samples, batch_size):\n batch_samples = samples[offset:offset + batch_size]\n\n #print(batch_samples)\n images = []\n angles = []\n for batch_sample in batch_samples:\n if batch_sample[5] != \"filename\":\n name = data_set + '/' + batch_sample[3]\n center_image = cv2.imread(name)\n center_image = cv2.cvtColor(center_image,cv2.COLOR_RGB2BGR)\n center_image = cv2.resize(\n center_image,\n (320, 180)) #resize from 720x1280 to 180x320\n angle = float(batch_sample[4])\n if not aug:\n images.append(center_image)\n angles.append(angle)\n else:\n data = img_to_array(center_image)\n sample = expand_dims(data, 0)\n it = datagen.flow(sample, batch_size=1)\n batch = it.next()\n image_aug = batch[0].astype('uint8')\n if random.random() < .5:\n image_aug = np.fliplr(image_aug)\n angle = -1 * angle\n images.append(image_aug)\n angles.append(angle)\n\n X_train = np.array(images)\n y_train = np.array(angles)\n\n yield sklearn.utils.shuffle(X_train, y_train)",
"Split the Dataset",
"samples = []\n\nsamples = df.values.tolist()\n\nsklearn.utils.shuffle(samples)\ntrain_samples, validation_samples = train_test_split(samples, test_size=0.2)\n\nprint(\"Number of traing samples: \", len(train_samples))\nprint(\"Number of validation samples: \", len(validation_samples))",
"Define Training and Validation Data Generators",
"batch_size_value = 32\nimg_aug = 0\n\ntrain_generator = generator(train_samples, batch_size=batch_size_value, aug=img_aug)\nvalidation_generator = generator(\n validation_samples, batch_size=batch_size_value, aug=0)",
"Compile and Train the Model\nBuild the Model",
"# Initialize the model\nmodel = Sequential()\n\n# trim image to only see section with road\n# (top_crop, bottom_crop), (left_crop, right_crop)\nmodel.add(Cropping2D(cropping=((height_min,0), (width_min,0)), input_shape=(180,320,3)))\n\n# Preprocess incoming data, centered around zero with small standard deviation\nmodel.add(Lambda(lambda x: (x / 255.0) - 0.5))\n\n# Nvidia model\nmodel.add(Convolution2D(24, (5, 5), activation=\"relu\", name=\"conv_1\", strides=(2, 2)))\nmodel.add(Convolution2D(36, (5, 5), activation=\"relu\", name=\"conv_2\", strides=(2, 2)))\nmodel.add(Convolution2D(48, (5, 5), activation=\"relu\", name=\"conv_3\", strides=(2, 2)))\nmodel.add(SpatialDropout2D(.5, dim_ordering='default'))\n\nmodel.add(Convolution2D(64, (3, 3), activation=\"relu\", name=\"conv_4\", strides=(1, 1)))\nmodel.add(Convolution2D(64, (3, 3), activation=\"relu\", name=\"conv_5\", strides=(1, 1)))\n\nmodel.add(Flatten())\n\nmodel.add(Dense(1164))\nmodel.add(Dropout(.5))\nmodel.add(Dense(100, activation='relu'))\nmodel.add(Dropout(.5))\nmodel.add(Dense(50, activation='relu'))\nmodel.add(Dropout(.5))\nmodel.add(Dense(10, activation='relu'))\nmodel.add(Dropout(.5))\nmodel.add(Dense(1))\n\nmodel.compile(loss='mse', optimizer=Adam(lr=0.001), metrics=['mse','mae','mape','cosine'])\n\n# Print model sumamry\nmodel.summary()",
"Setup Checkpoints",
"# checkpoint\nmodel_path = './model'\n\n!if [ -d $model_path ]; then echo 'Directory Exists'; else mkdir $model_path; fi\n\nfilepath = model_path + \"/weights-improvement-{epoch:02d}-{val_loss:.2f}.hdf5\"\ncheckpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='auto', period=1)",
"Setup Early Stopping to Prevent Overfitting",
"# The patience parameter is the amount of epochs to check for improvement\nearly_stop = EarlyStopping(monitor='val_loss', patience=10)",
"Reduce Learning Rate When a Metric has Stopped Improving",
"reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,\n patience=5, min_lr=0.001)",
"Setup Tensorboard",
"# Clear any logs from previous runs\n!rm -rf ./Graph/ \n\n# Launch Tensorboard\n!pip install -U tensorboardcolab\n\nfrom tensorboardcolab import *\n\ntbc = TensorBoardColab()\n\n# Configure the Tensorboard Callback\ntbCallBack = TensorBoard(log_dir='./Graph', \n histogram_freq=1,\n write_graph=True,\n write_grads=True,\n write_images=True,\n batch_size=batch_size_value,\n update_freq='epoch')\n",
"Load Existing Model",
"load = True #@param {type:\"boolean\"}\n\nif load:\n # Returns a compiled model identical to the previous one\n !curl -O https://selbystorage.s3-us-west-2.amazonaws.com/research/office_2/model.h5\n !mv model.h5 model/\n model_path_full = model_path + '/' + 'model.h5'\n model = load_model(model_path_full)\n print(\"Loaded previous model: {} \\n\".format(model_path_full))\nelse:\n print(\"No previous model loaded \\n\")",
"Train the Model",
"# Define step sizes\nSTEP_SIZE_TRAIN = len(train_samples) / batch_size_value\nSTEP_SIZE_VALID = len(validation_samples) / batch_size_value\n\n# Define number of epochs\nn_epoch = 5\n\n# Define callbacks\n# callbacks_list = [TensorBoardColabCallback(tbc)]\n# callbacks_list = [TensorBoardColabCallback(tbc), early_stop]\n# callbacks_list = [TensorBoardColabCallback(tbc), early_stop, checkpoint]\ncallbacks_list = [TensorBoardColabCallback(tbc), early_stop, checkpoint, reduce_lr]\n\n# Fit the model\nhistory_object = model.fit_generator(\n generator=train_generator,\n steps_per_epoch=STEP_SIZE_TRAIN,\n validation_data=validation_generator,\n validation_steps=STEP_SIZE_VALID,\n callbacks=callbacks_list,\n use_multiprocessing=True,\n epochs=n_epoch)\n",
"Save the Model",
"# Save model\nmodel_path_full = model_path + '/'\n\nmodel.save(model_path_full + 'model.h5')\nwith open(model_path_full + 'model.json', 'w') as output_json:\n output_json.write(model.to_json())",
"Evaluate the Model\nPlot the Training Results",
"# Plot the training and validation loss for each epoch\nprint('Generating loss chart...')\nplt.plot(history_object.history['loss'])\nplt.plot(history_object.history['val_loss'])\nplt.title('model mean squared error loss')\nplt.ylabel('mean squared error loss')\nplt.xlabel('epoch')\nplt.legend(['training set', 'validation set'], loc='upper right')\nplt.savefig(model_path + '/model.png')\n\n# Done\nprint('Done.')",
"Print Performance Metrics",
"scores = model.evaluate_generator(validation_generator, STEP_SIZE_VALID, use_multiprocessing=True)\n\nmetrics_names = model.metrics_names\n\nfor i in range(len(model.metrics_names)):\n print(\"Metric: {} - {}\".format(metrics_names[i],scores[i]))\n",
"Compute Prediction Statistics",
"# Define image loading function\ndef load_images(dataframe):\n \n # initialize images array\n images = []\n \n for i in dataframe.index.values:\n name = data_set + '/' + dataframe.loc[i,'filename']\n center_image = cv2.imread(name)\n center_image = cv2.resize(center_image, (320,180))\n images.append(center_image)\n \n return np.array(images)\n \n# Load images \ntest_size = 200\ndf_test = df.sample(frac=1).reset_index(drop=True)\ndf_test = df_test.head(test_size)\n\ntest_images = load_images(df_test)\n\nbatch_size = 32\npreds = model.predict(test_images, batch_size=batch_size, verbose=1)\n\n#print(\"Preds: {} \\n\".format(preds))\n\ntestY = df_test.iloc[:,4].values\n\n#print(\"Labels: {} \\n\".format(testY))\n\ndf_testY = pd.Series(testY)\ndf_preds = pd.Series(preds.flatten)\n\n# Replace 0 angle values\nif df_testY.eq(0).any():\n df_testY.replace(0, 0.0001,inplace=True)\n\n# Calculate the difference\ndiff = preds.flatten() - df_testY\npercentDiff = (diff / testY) * 100\nabsPercentDiff = np.abs(percentDiff)\n\n# compute the mean and standard deviation of the absolute percentage\n# difference\nmean = np.mean(absPercentDiff)\nstd = np.std(absPercentDiff)\nprint(\"[INFO] mean: {:.2f}%, std: {:.2f}%\".format(mean, std))\n\n# Compute the mean and standard deviation of the difference\nprint(diff.describe())\n\n# Plot a histogram of the prediction errors\nnum_bins = 25\nhist, bins = np.histogram(diff, num_bins)\ncenter = (bins[:-1]+ bins[1:]) * 0.5\nplt.bar(center, hist, width=0.05)\nplt.title('Historgram of Predicted Error')\nplt.xlabel('Steering Angle')\nplt.ylabel('Number of predictions')\nplt.xlim(-2.0, 2.0)\nplt.plot(np.min(diff), np.max(diff))\n\n# Plot a Scatter Plot of the Error\nplt.scatter(testY, preds)\nplt.xlabel('True Values ')\nplt.ylabel('Predictions ')\nplt.axis('equal')\nplt.axis('square')\nplt.xlim([-1.75,1.75])\nplt.ylim([-1.75,1.75])\nplt.plot([-1.75, 1.75], [-1.75, 1.75], color='k', linestyle='-', linewidth=.1)",
"Plot a Prediction",
"# Plot the image with the actual and predicted steering angle\nindex = random.randint(0,df_test.shape[0]-1)\nimg_name = data_set + '/' + df_test.loc[index,'filename']\ncenter_image = cv2.imread(img_name)\ncenter_image = cv2.cvtColor(center_image,cv2.COLOR_RGB2BGR)\ncenter_image_mod = cv2.resize(center_image, (320,180)) #resize from 720x1280 to 180x320\nplt.imshow(center_image_mod)\nplt.grid(False)\nplt.xlabel('Actual: {:.2f} Predicted: {:.2f}'.format(df_test.loc[index,'angle'],float(preds[index])))\nplt.show() \n",
"Visualize the Network\nShow the Model Summary",
"model.summary()",
"Access Individual Layers",
"# Creating a mapping of layer name ot layer details \n# We will create a dictionary layers_info which maps a layer name to its charcteristics\nlayers_info = {}\nfor i in model.layers:\n layers_info[i.name] = i.get_config()\n\n# Here the layer_weights dictionary will map every layer_name to its corresponding weights\nlayer_weights = {}\nfor i in model.layers:\n layer_weights[i.name] = i.get_weights()\n\npprint.pprint(layers_info['conv_5'])",
"Visualize the filters",
"# Visualize the first filter of each convolution layer\nlayers = model.layers\nlayer_ids = [2,3,4,6,7]\n\n#plot the filters\nfig,ax = plt.subplots(nrows=1,ncols=5)\nfor i in range(5):\n ax[i].imshow(layers[layer_ids[i]].get_weights()[0][:,:,:,0][:,:,0],cmap='gray')\n ax[i].set_title('Conv'+str(i+1))\n ax[i].set_xticks([])\n ax[i].set_yticks([])",
"Visualize the Saliency Map",
"!pip install -I scipy==1.2.*\n!pip install git+https://github.com/raghakot/keras-vis.git -U\n\n# import specific functions from keras-vis package\nfrom vis.utils import utils\nfrom vis.visualization import visualize_saliency, visualize_cam, overlay\n\n# View a Single Image \nindex = random.randint(0,df.shape[0]-1)\nimg_name = data_set + '/' + df.loc[index,'filename']\n\nsample_image = cv2.imread(img_name)\nsample_image = cv2.cvtColor(sample_image,cv2.COLOR_RGB2BGR)\nsample_image_mod = cv2.resize(sample_image, (320,180))\nplt.imshow(sample_image_mod)\n \nlayer_idx = utils.find_layer_idx(model, 'conv_5')\n\ngrads = visualize_saliency(model, \n layer_idx, \n filter_indices=None, \n seed_input=sample_image_mod,\n grad_modifier='absolute',\n backprop_modifier='guided')\n\nplt.imshow(grads, alpha = 0.6)\n",
"References:\nKeras, Regression, and CNNs\nRegression with Keras\nHow to use Keras fit and fit_generator\nImage Classification with Convolutional Neural Networks\nKeras Image Processing Documentation\nAttribution.ipynb\nA Guide to Understanding Convolutional Neural Networks (CNNs) using Visualization\nVisualizing attention on self driving car\nExploring Image Data Augmentation with Keras and Tensorflow\nTensorboard Documentation"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
radaniba/QuotaWatcher | QuotaWatcher.ipynb | gpl-2.0 | [
"This is a documentation for QuotaWatcher utility, a small cron job developed to monitor disk usage on GSC servers\nIn this notebook we will explain every part of the utility in order to have other people maintain the code easily\nAll the code is heavily pep8'd :) \n\nImporting needed Libraries",
"from __future__ import division\n\n__author__ = \"Rad <[email protected]>\"\n__license__ = \"GNU General Public License version 3\"\n__date__ = \"06/30/2015\"\n__version__ = \"0.2\"\n\ntry:\n import os\n from quota_logger import init_log\n import subprocess\n from prettytable import PrettyTable\n from smtplib import SMTP\n from smtplib import SMTPException\n from email.mime.text import MIMEText\n from argparse import ArgumentParser\nexcept ImportError:\n # Checks the installation of the necessary python modules\n import os\n import sys\n\n print((os.linesep * 2).join(\n [\"An error found importing one module:\", str(sys.exc_info()[1]), \"You need to install it Stopping...\"]))\n sys.exit(-2)",
"I like this way of importing libraries, if some libraries are not already installed, the system will exit. There is another room for improvement here, if a library does not exist, it is possile to install it automatically if we run the code as admin or with enough permission\nThe Notifier Class",
"class Notifier(object):\n\n suffixes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']\n\n def __init__(self, **kwargs):\n\n self.threshold = None\n self.path = None\n self.list = None\n self.email_sender = None\n self.email_password = None\n self.gmail_smtp = None\n self.gmail_smtp_port = None\n self.text_subtype = None\n self.cap_reached = False\n self.email_subject = None\n\n for (key, value) in kwargs.iteritems():\n if hasattr(self, key):\n setattr(self, key, value)\n\n self._log = init_log()\n",
"We init the class as an object containing some features, this object will have a threshold upon which there will be an email triggered to a recipient list. This obect is looking ath the size of each subdirectory in path. You need to create an email addresse and add some variables to your PATH ( will be discussed later)",
" @property\n def loggy(self):\n return self._log\n",
"We need to inherhit logging capabilities from the logging class we imported (see later the code of this class). This will allow us to log from within the class itself",
" @staticmethod\n def load_recipients_emails(emails_file):\n recipients = [line.rstrip('\\n') for line in open(emails_file) if not line[0].isspace()]\n return recipients\n",
"We need to lad the emails from a file created by the user. Usually I create 2 files, development_list containing only email adresses I will use for testing and production_list containing adresses I want to notify in production",
" @staticmethod\n def load_message_content(message_template_file, table):\n template_file = open(message_template_file, 'rb')\n template_file_content = template_file.read().replace(\n \"{{table}}\", table.get_string())\n template_file.close()\n return template_file_content\n",
"Inspired by MVC apps, we load message body from a template, this template will contain a placeholder called {{table}} that will contain the table of subdirectories and their respective sizes",
" def notify_user(self, email_receivers, table, template):\n \"\"\"This method sends an email\n :rtype : email sent to specified members\n \"\"\"\n # Create the message\n input_file = os.path.join(\n os.path.dirname(__file__), \"templates/\" + template + \".txt\")\n content = self.load_message_content(input_file, table)\n\n msg = MIMEText(content, self.text_subtype)\n\n msg[\"Subject\"] = self.email_subject\n msg[\"From\"] = self.email_sender\n msg[\"To\"] = ','.join(email_receivers)\n\n try:\n smtpObj = SMTP(self.gmail_smtp, self.gmail_smtp_port)\n # Identify yourself to GMAIL ESMTP server.\n smtpObj.ehlo()\n # Put SMTP connection in TLS mode and call ehlo again.\n smtpObj.starttls()\n smtpObj.ehlo()\n # Login to service\n smtpObj.login(user=self.email_sender, password=self.email_password)\n # Send email\n smtpObj.sendmail(self.email_sender, email_receivers, msg.as_string())\n # close connection and session.\n smtpObj.quit()\n except SMTPException as error:\n print \"Error: unable to send email : {err}\".format(err=error)\n",
"notify_user is the function that will send an email to the users upon request. It loads the message body template and injects the table in it.",
" @staticmethod\n def du(path):\n \"\"\"disk usage in kilobytes\"\"\"\n # return subprocess.check_output(['du', '-s',\n # path]).split()[0].decode('utf-8')\n try:\n p1 = subprocess.Popen(('ls', '-d', path), stdout=subprocess.PIPE)\n p2 = subprocess.Popen((os.environ[\"GNU_PARALLEL\"], '--no-notice', 'du', '-s', '2>&1'), stdin=p1.stdout,\n stdout=subprocess.PIPE)\n p3 = subprocess.Popen(\n ('grep', '-v', '\"Permission denied\"'), stdin=p2.stdout, stdout=subprocess.PIPE)\n output = p3.communicate()[0]\n except subprocess.CalledProcessError as e:\n raise RuntimeError(\"command '{0}' return with error (code {1}): {2}\".format(\n e.cmd, e.returncode, e.output))\n # return ''.join([' '.join(hit.split('\\t')) for hit in output.split('\\n')\n # if len(hit) > 0 and not \"Permission\" in hit and output[0].isdigit()])\n result = [' '.join(hit.split('\\t')) for hit in output.split('\\n')]\n for line in result:\n if line and len(line.split('\\n')) > 0 and \"Permission\" not in line and line[0].isdigit():\n return line.split(\" \")[0]\n",
"This is a wrapper of the famous du command. I use GNU_PARALLEL in case we have a lot of subdirectories and in case we don't want to wait for sequential processing. Note that we could have done this in multithreading as well",
" def du_h(self, nbytes):\n if nbytes == 0:\n return '0 B'\n i = 0\n while nbytes >= 1024 and i < len(self.suffixes) - 1:\n nbytes /= 1024.\n i += 1\n f = ('%.2f'.format(nbytes)).rstrip('0').rstrip('.')\n return '%s %s'.format(f, self.suffixes[i])\n",
"I didn't want to use the -h flag because we may want to sum up subdirectories sizes or doing other postprocessing, we'd rather keep them in a unified format (unit). For a more human readable format, we can use du_h() method",
" @staticmethod\n def list_folders(given_path):\n user_list = []\n for path in os.listdir(given_path):\n if not os.path.isfile(os.path.join(given_path, path)) and not path.startswith(\".\") and not path.startswith(\n \"archive\"):\n user_list.append(path)\n return user_list\n",
"we need at some point to return a list of subdirectories, each will be passed through the same function (du)",
" def notify(self):\n global cap_reached\n self._log.info(\"Loading recipient emails...\")\n list_of_recievers = self.load_recipients_emails(self.list)\n paths = self.list_folders(self.path)\n paths = [self.path + user for user in paths]\n sizes = []\n for size in paths:\n try:\n self._log.info(\"calculating disk usage for \" + size + \" ...\")\n sizes.append(int(self.du(size)))\n except Exception, e:\n self._log.exception(e)\n sizes.append(0)\n # sizes = [int(du(size).split(' ')[0]) for size in paths]\n # convert kilobytes to bytes\n sizes = [int(element) * 1000 for element in sizes]\n table = PrettyTable([\"Directory\", \"Size\"])\n table.align[\"Directory\"] = \"l\"\n table.align[\"Size\"] = \"r\"\n table.padding_width = 5\n table.border = False\n for account, size_of_account in zip(paths, sizes):\n if int(size_of_account) > int(self.threshold):\n table.add_row(\n [\"*\" + os.path.basename(account) + \"*\", \"*\" + self.du_h(size_of_account) + \"*\"])\n self.cap_reached = True\n else:\n table.add_row([os.path.basename(account), self.du_h(size_of_account)])\n # notify Admins\n table.add_row([\"TOTAL\", self.du_h(sum(sizes))])\n table.add_row([\"Usage\", str(sum(sizes) / 70000000000000)])\n self.notify_user(list_of_recievers, table, \"karey\")\n if self.cap_reached:\n self.notify_user(list_of_recievers, table, \"default_size_limit\")\n\n def run(self):\n self.notify()",
"Finally we create the function that will bring all this protocol together :\n\nRead the list of recievers\nload the path we want to look into\nfor each subdirectory calculate the size of it and append it to a list\ncreate a Table to be populated row by row\nadd subdirectories and their sizes\nCalculate the total of sizes in subdirectories\nIf one of the subdirectories has a size higher than the threshold specified, trigger the email\nReport the usage as a percentage",
"def arguments():\n \"\"\"Defines the command line arguments for the script.\"\"\"\n main_desc = \"\"\"Monitors changes in the size of dirs for a given path\"\"\"\n\n parser = ArgumentParser(description=main_desc)\n parser.add_argument(\"path\", default=os.path.expanduser('~'), nargs='?',\n help=\"The path to monitor. If none is given, takes the home directory\")\n parser.add_argument(\"list\", help=\"text file containing the list of persons to be notified, one per line\")\n parser.add_argument(\"-s\", \"--notification_subject\", default=None, help=\"Email subject of the notification\")\n parser.add_argument(\"-t\", \"--threshold\", default=2500000000000,\n help=\"The threshold that will trigger the notification\")\n parser.add_argument(\"-v\", \"--version\", action=\"version\",\n version=\"%(prog)s {0}\".format(__version__),\n help=\"show program's version number and exit\")\n return parser",
"The program takes in account : the path to examine, the list of emails in a file, the subject of the alert, the thresold that will trigger the email (here by defailt 2.5T)",
"def main():\n\n args = arguments().parse_args()\n notifier = Notifier()\n loggy = notifier.loggy\n # Set parameters\n loggy.info(\"Starting QuotaWatcher session...\")\n loggy.info(\"Setting parameters ...\")\n notifier.list = args.list\n notifier.threshold = args.threshold\n notifier.path = args.path\n\n # Configure the app\n try:\n loggy.info(\"Loading environment variables ...\")\n notifier.email_sender = os.environ[\"NOTIFIER_SENDER\"]\n notifier.email_password = os.environ[\"NOTIFIER_PASSWD\"]\n notifier.gmail_smtp = os.environ[\"NOTIFIER_SMTP\"]\n notifier.gmail_smtp_port = os.environ[\"NOTIFIER_SMTP_PORT\"]\n notifier.text_subtype = os.environ[\"NOTIFIER_SUBTYPE\"]\n notifier.email_subject = args.notification_subject\n notifier.cap_reached = False\n except Exception, e:\n loggy.exception(e)\n\n notifier.run()\n loggy.info(\"End of QuotaWatcher session\")\n\n",
"Note that in the main we load some environment variable that you should specify in advance. This is up to the user to fill these out, It is always preferable to declare these as environment variable, most of the time these are confidential so we better not show them here, it is always safe to set environment variable for these\nThat's it\nthis is an example of the LOG output.\n2015-07-03 10:40:46,968 - quota_logger - INFO - Starting QuotaWatcher session...\n2015-07-03 10:40:46,969 - quota_logger - INFO - Setting parameters ...\n2015-07-03 10:40:46,969 - quota_logger - INFO - Loading environment variables ...\n2015-07-03 10:40:46,969 - quota_logger - INFO - Loading recipient emails...\n2015-07-03 10:40:47,011 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/amcpherson ..\n.\n2015-07-03 11:21:09,442 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/andrewjlroth\n...\n2015-07-03 15:31:41,500 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/asteif ...\n2015-07-03 15:40:34,268 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/clefebvre ...\n2015-07-03 15:42:47,483 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/dgrewal ...\n2015-07-03 16:01:30,588 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/fdorri ...\n2015-07-03 16:03:43,850 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/fong ...\n2015-07-03 16:16:13,781 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/gha ...\n2015-07-03 16:16:38,673 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/jding ...\n2015-07-03 16:16:50,820 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/cdesouza ...\n2015-07-03 16:16:52,585 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/jrosner ...\n2015-07-03 16:27:30,684 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/jtaghiyar ...\n2015-07-03 16:28:16,982 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/kareys ...\n2015-07-03 19:21:07,607 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/hfarahani ...\n2015-07-03 19:22:07,618 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/jzhou ...\n2015-07-03 19:38:28,147 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/pipelines ...\n2015-07-03 19:53:20,771 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/projects ...\n2015-07-03 20:52:45,001 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/raniba ...\n2015-07-03 20:59:50,543 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/tfunnell ...\n2015-07-03 21:00:47,216 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/ykwang ...\n2015-07-03 21:03:30,277 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/azhang ...\n2015-07-03 21:03:30,820 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/softwares ...\n2015-07-03 21:03:42,679 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/sjewell ...\n2015-07-03 21:03:51,711 - quota_logger - INFO - calculating disk usage for /genesis/extscratch/shahlab/kastonl ...\n2015-07-03 21:04:52,536 - quota_logger - INFO - calculating disk 
usage for /genesis/extscratch/shahlab/amazloomian .\n..\n2015-07-03 21:07:43,501 - quota_logger - INFO - End of QuotaWatcher session\nAnd as of the email triggered, it will look like \n```\n THIS IS AN ALERT MESSAGE : DISK USAGE SPIKE \nThis is a warning message about the disk usage relative to the Shahlab group at GSC\nWe detected a spike > 2.5 T for some accounts and here is a list of the space usage per account reported today\nDirectory Size \namcpherson 1.96 TB \nandrewjlroth 390.19 GB \nasteif 2.05 TB \nclefebvre 16.07 GB \ndgrewal 1.61 TB \nfdorri 486.49 GB \n*fong* *9.67 TB* \ngha 50.7 GB \njding 638.72 GB \ncdesouza 56.15 GB \njrosner 1.82 TB \njtaghiyar 253.84 GB \n*kareys* *11.26 TB* \nhfarahani 1.09 TB \njzhou 1.19 TB \npipelines 2.1 TB \n*projects* *4.09 TB* \nraniba 2.03 TB \ntfunnell 1.02 TB \nykwang 1.71 TB \nazhang 108.4 MB \nsoftwares 34.67 GB \nsjewell 24.53 GB \nkastonl 118.51 GB \namazloomian 1.71 TB \nTOTAL 45.34 TB \nUsage 71.218%\n\nPlease do the necessary to remove temporary files and take the time to clean up your working directories\nThank you for your cooperation\n(am a cron job, don't reply to this message, if you have questions ask Ali)\nPS : This is a very close estimation, some directories may have strict permissions, for an accurate disk usage please make sure that you set your files permissions so that anyone can see them.\n```\nThe logger",
"import logging\nimport datetime\n\ndef init_log():\n current_time = datetime.datetime.now()\n logger = logging.getLogger(__name__)\n logger.setLevel(logging.INFO)\n handler = logging.FileHandler(current_time.isoformat()+'_quotawatcher.log')\n handler.setLevel(logging.INFO)\n # create a logging format\n formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')\n handler.setFormatter(formatter)\n logger.addHandler(handler)\n return logger",
"Before you start\nexport NOTIFIER_SENDER=\"[email protected]\"\nexport NOTIFIER_PASSWD=\"passwordhere\"\nexport NOTIFIER_SMTP=\"smtp.gmail.com\"\nexport NOTIFIER_SMTP_PORT=587\nexport NOTIFIER_SUBTYPE=\"plain\"\nexport GNU_PARALLEL=\"/path/to/your/gnu/parallel\"\nHow to run the program\npython quotawatcher.py /genesis/extscratch/shahlab/ dev_list -s \"Hey Test\" -t 2500000000000"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ioos/notebooks_demos | notebooks/2018-03-15-ssh-skillscore.ipynb | mit | [
"Investigating ocean models skill for sea surface height with IOOS catalog and Python\nThe IOOS catalog offers access to hundreds of datasets and data access services provided by the 11 regional associations.\nIn the past we demonstrate how to tap into those datasets to obtain sea surface temperature data from observations,\ncoastal velocity from high frequency radar data,\nand a simple model vs observation visualization of temperatures for the Boston Light Swim competition.\nIn this notebook we'll demonstrate a step-by-step workflow on how ask the catalog for a specific variable, extract only the model data, and match the nearest model grid point to an observation. The goal is to create an automated skill score for quick assessment of ocean numerical models.\nThe first cell is only to reduce iris' noisy output,\nthe notebook start on cell [2] with the definition of the parameters:\n- start and end dates for the search;\n- experiment name;\n- a bounding of the region of interest;\n- SOS variable name for the observations;\n- Climate and Forecast standard names;\n- the units we want conform the variables into;\n- catalogs we want to search.",
"import warnings\n\n# Suppresing warnings for a \"pretty output.\"\nwarnings.simplefilter(\"ignore\")\n\n%%writefile config.yaml\n\ndate:\n start: 2018-2-28 00:00:00\n stop: 2018-3-5 00:00:00\n\nrun_name: 'latest'\n\nregion:\n bbox: [-71.20, 41.40, -69.20, 43.74]\n crs: 'urn:ogc:def:crs:OGC:1.3:CRS84'\n\nsos_name: 'water_surface_height_above_reference_datum'\n\ncf_names:\n - sea_surface_height\n - sea_surface_elevation\n - sea_surface_height_above_geoid\n - sea_surface_height_above_sea_level\n - water_surface_height_above_reference_datum\n - sea_surface_height_above_reference_ellipsoid\n\nunits: 'm'\n\ncatalogs:\n - https://data.ioos.us/csw",
"To keep track of the information we'll setup a config variable and output them on the screen for bookkeeping.",
"import os\nimport shutil\nfrom datetime import datetime\n\nfrom ioos_tools.ioos import parse_config\n\nconfig = parse_config(\"config.yaml\")\n\n# Saves downloaded data into a temporary directory.\nsave_dir = os.path.abspath(config[\"run_name\"])\nif os.path.exists(save_dir):\n shutil.rmtree(save_dir)\nos.makedirs(save_dir)\n\nfmt = \"{:*^64}\".format\nprint(fmt(\"Saving data inside directory {}\".format(save_dir)))\nprint(fmt(\" Run information \"))\nprint(\"Run date: {:%Y-%m-%d %H:%M:%S}\".format(datetime.utcnow()))\nprint(\"Start: {:%Y-%m-%d %H:%M:%S}\".format(config[\"date\"][\"start\"]))\nprint(\"Stop: {:%Y-%m-%d %H:%M:%S}\".format(config[\"date\"][\"stop\"]))\nprint(\n \"Bounding box: {0:3.2f}, {1:3.2f},\"\n \"{2:3.2f}, {3:3.2f}\".format(*config[\"region\"][\"bbox\"])\n)",
"To interface with the IOOS catalog we will use the Catalogue Service for the Web (CSW) endpoint and python's OWSLib library.\nThe cell below creates the Filter Encoding Specification (FES) with configuration we specified in cell [2]. The filter is composed of:\n- or to catch any of the standard names;\n- not some names we do not want to show up in the results;\n- date range and bounding box for the time-space domain of the search.",
"def make_filter(config):\n from owslib import fes\n from ioos_tools.ioos import fes_date_filter\n\n kw = dict(\n wildCard=\"*\", escapeChar=\"\\\\\", singleChar=\"?\", propertyname=\"apiso:Subject\"\n )\n\n or_filt = fes.Or(\n [fes.PropertyIsLike(literal=(\"*%s*\" % val), **kw) for val in config[\"cf_names\"]]\n )\n\n not_filt = fes.Not([fes.PropertyIsLike(literal=\"GRIB-2\", **kw)])\n\n begin, end = fes_date_filter(config[\"date\"][\"start\"], config[\"date\"][\"stop\"])\n\n bbox_crs = fes.BBox(config[\"region\"][\"bbox\"], crs=config[\"region\"][\"crs\"])\n\n filter_list = [fes.And([bbox_crs, begin, end, or_filt, not_filt])]\n return filter_list\n\n\nfilter_list = make_filter(config)",
"We need to wrap OWSlib.csw.CatalogueServiceWeb object with a custom function,\nget_csw_records, to be able to paginate over the results.\nIn the cell below we loop over all the catalogs returns and extract the OPeNDAP endpoints.",
"from ioos_tools.ioos import get_csw_records, service_urls\nfrom owslib.csw import CatalogueServiceWeb\n\ndap_urls = []\nprint(fmt(\" Catalog information \"))\nfor endpoint in config[\"catalogs\"]:\n print(\"URL: {}\".format(endpoint))\n try:\n csw = CatalogueServiceWeb(endpoint, timeout=120)\n except Exception as e:\n print(\"{}\".format(e))\n continue\n csw = get_csw_records(csw, filter_list, esn=\"full\")\n OPeNDAP = service_urls(csw.records, identifier=\"OPeNDAP:OPeNDAP\")\n odp = service_urls(\n csw.records, identifier=\"urn:x-esri:specification:ServiceType:odp:url\"\n )\n dap = OPeNDAP + odp\n dap_urls.extend(dap)\n\n print(\"Number of datasets available: {}\".format(len(csw.records.keys())))\n\n for rec, item in csw.records.items():\n print(\"{}\".format(item.title))\n if dap:\n print(fmt(\" DAP \"))\n for url in dap:\n print(\"{}.html\".format(url))\n print(\"\\n\")\n\n# Get only unique endpoints.\ndap_urls = list(set(dap_urls))",
"We found 10 dataset endpoints but only 9 of them have the proper metadata for us to identify the OPeNDAP endpoint,\nthose that contain either OPeNDAP:OPeNDAP or urn:x-esri:specification:ServiceType:odp:url scheme.\nUnfortunately we lost the COAWST model in the process.\nThe next step is to ensure there are no observations in the list of endpoints.\nWe want only the models for now.",
"from ioos_tools.ioos import is_station\nfrom timeout_decorator import TimeoutError\n\n# Filter out some station endpoints.\nnon_stations = []\nfor url in dap_urls:\n try:\n if not is_station(url):\n non_stations.append(url)\n except (IOError, OSError, RuntimeError, TimeoutError) as e:\n print(\"Could not access URL {}.html\\n{!r}\".format(url, e))\n\ndap_urls = non_stations\n\nprint(fmt(\" Filtered DAP \"))\nfor url in dap_urls:\n print(\"{}.html\".format(url))",
"Now we have a nice list of all the models available in the catalog for the domain we specified.\nWe still need to find the observations for the same domain.\nTo accomplish that we will use the pyoos library and search the SOS CO-OPS services using the virtually the same configuration options from the catalog search.",
"from pyoos.collectors.coops.coops_sos import CoopsSos\n\ncollector_coops = CoopsSos()\n\ncollector_coops.set_bbox(config[\"region\"][\"bbox\"])\ncollector_coops.end_time = config[\"date\"][\"stop\"]\ncollector_coops.start_time = config[\"date\"][\"start\"]\ncollector_coops.variables = [config[\"sos_name\"]]\n\nofrs = collector_coops.server.offerings\ntitle = collector_coops.server.identification.title\nprint(fmt(\" Collector offerings \"))\nprint(\"{}: {} offerings\".format(title, len(ofrs)))",
"To make it easier to work with the data we extract the time-series as pandas tables and interpolate them to a common 1-hour interval index.",
"import pandas as pd\nfrom ioos_tools.ioos import collector2table\n\ndata = collector2table(\n collector=collector_coops,\n config=config,\n col=\"water_surface_height_above_reference_datum (m)\",\n)\n\ndf = dict(\n station_name=[s._metadata.get(\"station_name\") for s in data],\n station_code=[s._metadata.get(\"station_code\") for s in data],\n sensor=[s._metadata.get(\"sensor\") for s in data],\n lon=[s._metadata.get(\"lon\") for s in data],\n lat=[s._metadata.get(\"lat\") for s in data],\n depth=[s._metadata.get(\"depth\") for s in data],\n)\n\npd.DataFrame(df).set_index(\"station_code\")\n\nindex = pd.date_range(\n start=config[\"date\"][\"start\"].replace(tzinfo=None),\n end=config[\"date\"][\"stop\"].replace(tzinfo=None),\n freq=\"1H\",\n)\n\n# Preserve metadata with `reindex`.\nobservations = []\nfor series in data:\n _metadata = series._metadata\n series.index = series.index.tz_localize(None)\n obs = series.reindex(index=index, limit=1, method=\"nearest\")\n obs._metadata = _metadata\n observations.append(obs)",
"The next cell saves those time-series as CF-compliant netCDF files on disk,\nto make it easier to access them later.",
"import iris\nfrom ioos_tools.tardis import series2cube\n\nattr = dict(\n featureType=\"timeSeries\",\n Conventions=\"CF-1.6\",\n standard_name_vocabulary=\"CF-1.6\",\n cdm_data_type=\"Station\",\n comment=\"Data from http://opendap.co-ops.nos.noaa.gov\",\n)\n\n\ncubes = iris.cube.CubeList([series2cube(obs, attr=attr) for obs in observations])\n\noutfile = os.path.join(save_dir, \"OBS_DATA.nc\")\niris.save(cubes, outfile)",
"We still need to read the model data from the list of endpoints we found.\nThe next cell takes care of that.\nWe use iris, and a set of custom functions from the ioos_tools library,\nthat downloads only the data in the domain we requested.",
"from ioos_tools.ioos import get_model_name\nfrom ioos_tools.tardis import is_model, proc_cube, quick_load_cubes\nfrom iris.exceptions import ConstraintMismatchError, CoordinateNotFoundError, MergeError\n\nprint(fmt(\" Models \"))\ncubes = dict()\nfor k, url in enumerate(dap_urls):\n print(\"\\n[Reading url {}/{}]: {}\".format(k + 1, len(dap_urls), url))\n try:\n cube = quick_load_cubes(url, config[\"cf_names\"], callback=None, strict=True)\n if is_model(cube):\n cube = proc_cube(\n cube,\n bbox=config[\"region\"][\"bbox\"],\n time=(config[\"date\"][\"start\"], config[\"date\"][\"stop\"]),\n units=config[\"units\"],\n )\n else:\n print(\"[Not model data]: {}\".format(url))\n continue\n mod_name = get_model_name(url)\n cubes.update({mod_name: cube})\n except (\n RuntimeError,\n ValueError,\n ConstraintMismatchError,\n CoordinateNotFoundError,\n IndexError,\n ) as e:\n print(\"Cannot get cube for: {}\\n{}\".format(url, e))",
"Now we can match each observation time-series with its closest grid point (0.08 of a degree) on each model.\nThis is a complex and laborious task! If you are running this interactively grab a coffee and sit comfortably :-)\nNote that we are also saving the model time-series to files that align with the observations we saved before.",
"import iris\nfrom ioos_tools.tardis import (\n add_station,\n ensure_timeseries,\n get_nearest_water,\n make_tree,\n)\nfrom iris.pandas import as_series\n\nfor mod_name, cube in cubes.items():\n fname = \"{}.nc\".format(mod_name)\n fname = os.path.join(save_dir, fname)\n print(fmt(\" Downloading to file {} \".format(fname)))\n try:\n tree, lon, lat = make_tree(cube)\n except CoordinateNotFoundError:\n print(\"Cannot make KDTree for: {}\".format(mod_name))\n continue\n # Get model series at observed locations.\n raw_series = dict()\n for obs in observations:\n obs = obs._metadata\n station = obs[\"station_code\"]\n try:\n kw = dict(k=10, max_dist=0.08, min_var=0.01)\n args = cube, tree, obs[\"lon\"], obs[\"lat\"]\n try:\n series, dist, idx = get_nearest_water(*args, **kw)\n except RuntimeError as e:\n print(\"Cannot download {!r}.\\n{}\".format(cube, e))\n series = None\n except ValueError:\n status = \"No Data\"\n print(\"[{}] {}\".format(status, obs[\"station_name\"]))\n continue\n if not series:\n status = \"Land \"\n else:\n raw_series.update({station: series})\n series = as_series(series)\n status = \"Water \"\n print(\"[{}] {}\".format(status, obs[\"station_name\"]))\n if raw_series: # Save cube.\n for station, cube in raw_series.items():\n cube = add_station(cube, station)\n try:\n cube = iris.cube.CubeList(raw_series.values()).merge_cube()\n except MergeError as e:\n print(e)\n ensure_timeseries(cube)\n try:\n iris.save(cube, fname)\n except AttributeError:\n # FIXME: we should patch the bad attribute instead of removing everything.\n cube.attributes = {}\n iris.save(cube, fname)\n del cube\n print(\"Finished processing [{}]\".format(mod_name))",
"With the matched set of models and observations time-series it is relatively easy to compute skill score metrics on them. In cells [13] to [16] we apply both mean bias and root mean square errors to the time-series.",
"from ioos_tools.ioos import stations_keys\n\n\ndef rename_cols(df, config):\n cols = stations_keys(config, key=\"station_name\")\n return df.rename(columns=cols)\n\nfrom ioos_tools.ioos import load_ncs\nfrom ioos_tools.skill_score import apply_skill, mean_bias\n\ndfs = load_ncs(config)\n\ndf = apply_skill(dfs, mean_bias, remove_mean=False, filter_tides=False)\nskill_score = dict(mean_bias=df.to_dict())\n\n# Filter out stations with no valid comparison.\ndf.dropna(how=\"all\", axis=1, inplace=True)\ndf = df.applymap(\"{:.2f}\".format).replace(\"nan\", \"--\")\n\nfrom ioos_tools.skill_score import rmse\n\ndfs = load_ncs(config)\n\ndf = apply_skill(dfs, rmse, remove_mean=True, filter_tides=False)\nskill_score[\"rmse\"] = df.to_dict()\n\n# Filter out stations with no valid comparison.\ndf.dropna(how=\"all\", axis=1, inplace=True)\ndf = df.applymap(\"{:.2f}\".format).replace(\"nan\", \"--\")\n\nimport pandas as pd\n\n# Stringfy keys.\nfor key in skill_score.keys():\n skill_score[key] = {str(k): v for k, v in skill_score[key].items()}\n\nmean_bias = pd.DataFrame.from_dict(skill_score[\"mean_bias\"])\nmean_bias = mean_bias.applymap(\"{:.2f}\".format).replace(\"nan\", \"--\")\n\nskill_score = pd.DataFrame.from_dict(skill_score[\"rmse\"])\nskill_score = skill_score.applymap(\"{:.2f}\".format).replace(\"nan\", \"--\")",
"Last but not least we can assemble a GIS map, cells [17-23],\nwith the time-series plot for the observations and models,\nand the corresponding skill scores.",
"import folium\nfrom ioos_tools.ioos import get_coordinates\n\n\ndef make_map(bbox, **kw):\n line = kw.pop(\"line\", True)\n zoom_start = kw.pop(\"zoom_start\", 5)\n\n lon = (bbox[0] + bbox[2]) / 2\n lat = (bbox[1] + bbox[3]) / 2\n m = folium.Map(\n width=\"100%\", height=\"100%\", location=[lat, lon], zoom_start=zoom_start\n )\n\n if line:\n p = folium.PolyLine(\n get_coordinates(bbox), color=\"#FF0000\", weight=2, opacity=0.9,\n )\n p.add_to(m)\n return m\n\nbbox = config[\"region\"][\"bbox\"]\n\nm = make_map(bbox, zoom_start=8, line=True, layers=True)\n\nall_obs = stations_keys(config)\n\nfrom glob import glob\nfrom operator import itemgetter\n\nimport iris\nfrom folium.plugins import MarkerCluster\n\niris.FUTURE.netcdf_promote = True\n\nbig_list = []\nfor fname in glob(os.path.join(save_dir, \"*.nc\")):\n if \"OBS_DATA\" in fname:\n continue\n cube = iris.load_cube(fname)\n model = os.path.split(fname)[1].split(\"-\")[-1].split(\".\")[0]\n lons = cube.coord(axis=\"X\").points\n lats = cube.coord(axis=\"Y\").points\n stations = cube.coord(\"station_code\").points\n models = [model] * lons.size\n lista = zip(models, lons.tolist(), lats.tolist(), stations.tolist())\n big_list.extend(lista)\n\nbig_list.sort(key=itemgetter(3))\ndf = pd.DataFrame(big_list, columns=[\"name\", \"lon\", \"lat\", \"station\"])\ndf.set_index(\"station\", drop=True, inplace=True)\ngroups = df.groupby(df.index)\n\n\nlocations, popups = [], []\nfor station, info in groups:\n sta_name = all_obs[station]\n for lat, lon, name in zip(info.lat, info.lon, info.name):\n locations.append([lat, lon])\n popups.append(\"[{}]: {}\".format(name, sta_name))\n\nMarkerCluster(locations=locations, popups=popups, name=\"Cluster\").add_to(m)\n\ntitles = {\n \"coawst_4_use_best\": \"COAWST_4\",\n \"pacioos_hycom-global\": \"HYCOM\",\n \"NECOFS_GOM3_FORECAST\": \"NECOFS_GOM3\",\n \"NECOFS_FVCOM_OCEAN_MASSBAY_FORECAST\": \"NECOFS_MassBay\",\n \"NECOFS_FVCOM_OCEAN_BOSTON_FORECAST\": \"NECOFS_Boston\",\n \"SECOORA_NCSU_CNAPS\": \"SECOORA/CNAPS\",\n \"roms_2013_da_avg-ESPRESSO_Real-Time_v2_Averages_Best\": \"ESPRESSO Avg\",\n \"roms_2013_da-ESPRESSO_Real-Time_v2_History_Best\": \"ESPRESSO Hist\",\n \"OBS_DATA\": \"Observations\",\n}\n\nfrom itertools import cycle\n\nfrom bokeh.embed import file_html\nfrom bokeh.models import HoverTool, Legend\nfrom bokeh.palettes import Category20\nfrom bokeh.plotting import figure\nfrom bokeh.resources import CDN\nfrom folium import IFrame\n\n# Plot defaults.\ncolors = Category20[20]\ncolorcycler = cycle(colors)\ntools = \"pan,box_zoom,reset\"\nwidth, height = 750, 250\n\n\ndef make_plot(df, station):\n p = figure(\n toolbar_location=\"above\",\n x_axis_type=\"datetime\",\n width=width,\n height=height,\n tools=tools,\n title=str(station),\n )\n leg = []\n for column, series in df.iteritems():\n series.dropna(inplace=True)\n if not series.empty:\n if \"OBS_DATA\" not in column:\n bias = mean_bias[str(station)][column]\n skill = skill_score[str(station)][column]\n line_color = next(colorcycler)\n kw = dict(alpha=0.65, line_color=line_color)\n else:\n skill = bias = \"NA\"\n kw = dict(alpha=1, color=\"crimson\")\n line = p.line(\n x=series.index,\n y=series.values,\n line_width=5,\n line_cap=\"round\",\n line_join=\"round\",\n **kw\n )\n leg.append((\"{}\".format(titles.get(column, column)), [line]))\n p.add_tools(\n HoverTool(\n tooltips=[\n (\"Name\", \"{}\".format(titles.get(column, column))),\n (\"Bias\", bias),\n (\"Skill\", skill),\n ],\n renderers=[line],\n )\n )\n legend = Legend(items=leg, 
location=(0, 60))\n legend.click_policy = \"mute\"\n p.add_layout(legend, \"right\")\n p.yaxis[0].axis_label = \"Water Height (m)\"\n p.xaxis[0].axis_label = \"Date/time\"\n return p\n\n\ndef make_marker(p, station):\n lons = stations_keys(config, key=\"lon\")\n lats = stations_keys(config, key=\"lat\")\n\n lon, lat = lons[station], lats[station]\n html = file_html(p, CDN, station)\n iframe = IFrame(html, width=width + 40, height=height + 80)\n\n popup = folium.Popup(iframe, max_width=2650)\n icon = folium.Icon(color=\"green\", icon=\"stats\")\n marker = folium.Marker(location=[lat, lon], popup=popup, icon=icon)\n return marker\n\ndfs = load_ncs(config)\n\nfor station in dfs:\n sta_name = all_obs[station]\n df = dfs[station]\n if df.empty:\n continue\n p = make_plot(df, station)\n marker = make_marker(p, station)\n marker.add_to(m)\n\nfolium.LayerControl().add_to(m)\n\ndef embed_map(m):\n from IPython.display import HTML\n\n m.save(\"index.html\")\n with open(\"index.html\") as f:\n html = f.read()\n\n iframe = '<iframe srcdoc=\"{srcdoc}\" style=\"width: 100%; height: 750px; border: none\"></iframe>'\n srcdoc = html.replace('\"', \""\")\n return HTML(iframe.format(srcdoc=srcdoc))\n\n\nembed_map(m)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
texib/deeplearning_homework | Keras_LSTM2.ipynb | mit | [
"import keras",
"原始資料來源的 SQL,這是抽樣過的資料,當中也有一筆資料是修改過的,因為當天 Server 似乎出了一些問題,導至流量大幅下降",
"sql = \"\"\"\nSELECT \ndate,count(distinct cookie_pta) as uv\nfrom\nTABLE_DATE_RANGE(pixinsight.article_visitor_log_1_100_, TIMESTAMP('2017-01-01'), CURRENT_TIMESTAMP())\nwhere venue = 'pixnet'\ngroup by date\norder by date\n\"\"\"\n\nfrom os import environ\n# load and plot dataset\nimport pandas as pd\nfrom pandas import read_csv\nfrom pandas import datetime\nfrom matplotlib import pyplot\nimport matplotlib.dates as mdates\n\n%matplotlib notebook\n\n# %matplotlib inline \n\n# load dataset\ndef parser(x):\n return datetime.strptime(x, '%Y%m%d')\n\nseries = pd.read_gbq(sql,project_id=environ['PROJECT_ID'], verbose=False, private_key=environ['GOOGLE_KEY'])#,header=0, parse_dates=[0], index_col='date', squeeze=True, date_parser=parser)\nseries['date'] = pd.to_datetime(series['date'],format='%Y%m%d')\nseries.index = series['date']\ndel series['date']\n\n# summarize first few rows\nprint(series.head())\n",
"進行 scale to 0-1 ,方便作為 input 及 output (因為 sigmoid 介於 0~1 之間)",
"from sklearn.preprocessing import scale,MinMaxScaler\nscaler = MinMaxScaler()\n\nx = series.values\n\nx = x.reshape([x.shape[0],1])\n\nscaler.fit(x)\n\nx_scaled = scaler.transform(x)\n\npyplot.figure()\npyplot.plot(x_scaled)\npyplot.show()",
"產生 x,y pair\n\n舉列來說假設將 Step Size 設為 4 天,故一筆 Training Data ,為連續 4 天的流量。再來利用這4天的資料來預測第 5 天的流量\n綠色的部是 Training Data(前4天的資料),藍色的部份是需要被預測的部份。示意如下圖\n<img align=\"left\" width=\"50%\" src=\"./imgs/sequence_uv.png\" />",
"#往回看 30 天前的每一筆資料\nstep_size = 15\n\nprint(\"原始資料長度:{}\".format(x_scaled.shape))\n\ndef window_stack(a, stepsize=1, width=3):\n return np.hstack( a[i:1+i-width or None:stepsize] for i in range(0,width) )\n\nimport numpy as np\ntrain_x = window_stack(x_scaled, stepsize=1, width=step_size)\n\n# 最後一筆資料要放棄,因為沒有未來的答案作驗證\n\ntrain_x = train_x[:-1]\ntrain_x.shape\n\n# 請注意千萬不將每一筆(Row) 當中的最後一天資料作為 Training Data 中的 Input Data\ntrain_y = np.array([i for i in x_scaled[step_size:]]) ",
"確認產出來的 Training Data 沒有包含到 Testing Data",
"train_y.shape\n\ntrain_x[0]\n\ntrain_x[1]\n\ntrain_y[0]",
"Design Graph",
"# reshape input to be [samples, time steps, features]\ntrainX = np.reshape(train_x, (train_x.shape[0], step_size, 1))\n\nfrom keras import Sequential\nfrom keras.layers import LSTM,Dense\n# create and fit the LSTM network\nmodel = Sequential()\n# input_shape(step_size,feature_dim)\nmodel.add(LSTM(4, input_shape=(step_size,1), unroll=True))\nmodel.add(Dense(1))\nmodel.compile(loss='mean_squared_error', optimizer='adam',metrics=['accuracy'])\nmodel.summary()",
"最後30 筆資料不要看",
"validation_size = 60\n\nval_loss = []\nloss = []\nfor _ in range(400):\n history = model.fit(trainX[:-1*validation_size],\n train_y[:-1*validation_size],\n epochs=1,shuffle=False, \n validation_data=(trainX[-1*validation_size:],\n train_y[-1*validation_size:]))\n \n loss.append(history.history['loss'])\n val_loss.append(history.history['val_loss'])\n model.reset_states()",
"看一下 Error Rate 曲線",
"pyplot.figure()\npyplot.plot(loss)\npyplot.plot(val_loss)\npyplot.show()",
"看一下曲線擬合效果",
"predict_y = model.predict(trainX)\n\ntrain_y.shape\n\npyplot.figure()\npyplot.plot(scaler.inverse_transform(predict_y))\npyplot.plot(scaler.inverse_transform(train_y))\n\npyplot.show()",
"來預測最後 60 天資料預出來的結果",
"predict_y = model.predict(trainX[-1*validation_size:])\n\npredict_y = scaler.inverse_transform(predict_y)\n\npredict_y.shape\n\npyplot.figure()\npyplot.plot(x[-1*(validation_size+1):-1])\npyplot.plot(predict_y)\n\n\npyplot.show()",
"心得觀察\n\nLSTM 可以學習到 Period Pattern 是沒有問題的,但是似乎對於大幅的震盪以目前的 Model 來說無法完全的 Catch 到,但是還是有學到漲幅的趨勢預測\n至於 LSTM 要如何調整震盪的幅度有兩個想法可以實驗看看\n直接 Modified Training,將震盪幅度加大\n修改 Loss Function ,把平方改為 2.x 次方不知道是否有效果"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
matthijsvk/multimodalSR | code/Experiments/Lasagne_examples/examples/ResNets/resnet50/ImageNet Pretrained Network (ResNet-50).ipynb | mit | [
"Introduction\nThis example demonstraites how to convert Caffe pretrained ResNet-50 model from https://github.com/KaimingHe/deep-residual-networks (firstly described in http://arxiv.org/pdf/1512.03385v1.pdf) into Theano/Lasagne format.\nWe will create a set of Lasagne layers corresponding to the Caffe model specification (prototxt), then copy the parameters from the caffemodel file into our model (like <a href=\"https://github.com/Lasagne/Recipes/blob/master/examples/Using%20a%20Caffe%20Pretrained%20Network%20-%20CIFAR10.ipynb\">here</a>).\nThis notebook produce resnet50.pkl file, which contains dictionary with following foelds:\n * values: numpy array with parameters of the model\n * synset_words: labels of classes\n * mean_image: mean image which should be subtracted from each input image\nThis file can be used for initialization of weights of the model created by modelzoo/resnet50.py.\nLicense\nSame as in parent project https://github.com/KaimingHe/deep-residual-networks/blob/master/LICENSE\nRequirements\nDownload the required files\n<a href=\"https://onedrive.live.com/?authkey=%21AAFW2-FVoxeVRck&id=4006CBB8476FF777%2117887&cid=4006CBB8476FF777\">Here</a> you can find folder with caffe/proto files, we need followings to be stored in ./:\n * ResNet-50-deploy.prototxt contains architecture of ResNet-50 in proto format\n * ResNet-50-model.caffemodel is proto serialization of model parameters\n * ResNet_mean.binaryproto contains mean values\nImports\nWe need caffe to load weights and compare results",
"import caffe\n",
"We need a lot of building blocks from Lasagne to build network",
"import lasagne\nfrom lasagne.utils import floatX\nfrom lasagne.layers import InputLayer\nfrom lasagne.layers import Conv2DLayer as ConvLayer # can be replaced with dnn layers\nfrom lasagne.layers import BatchNormLayer\nfrom lasagne.layers import Pool2DLayer as PoolLayer\nfrom lasagne.layers import NonlinearityLayer\nfrom lasagne.layers import ElemwiseSumLayer\nfrom lasagne.layers import DenseLayer\nfrom lasagne.nonlinearities import rectify, softmax",
"Helper modules, some of them will help us to download images and plot them",
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = 8, 6\nimport io\nimport urllib\nimport skimage.transform\nfrom IPython.display import Image\nimport pickle",
"Build Lasagne model\nBatchNormalization issue in caffe\nCaffe doesn't have correct BN layer as described in https://arxiv.org/pdf/1502.03167.pdf:\n * it can collect datasets mean ($\\hat{\\mu}$) and variance ($\\hat{\\sigma}^2$)\n * it can't fit $\\gamma$ and $\\beta$ parameters to scale and shift standardized distribution of feature in following formula: $\\hat{x}_i = \\dfrac{x_i - \\hat{\\mu}_i}{\\sqrt{\\hat{\\sigma}_i^2 + \\epsilon}}\\cdot\\gamma + \\beta$\nTo fix this issue, <a href=\"https://github.com/KaimingHe/deep-residual-networks\">here</a> authors use such BN layer followed by Scale layer, which can fit scale and shift parameters, but can't standardize data:\n<pre>\nlayer {\n bottom: \"res2a_branch1\"\n top: \"res2a_branch1\"\n name: \"bn2a_branch1\"\n type: \"BatchNorm\"\n batch_norm_param {\n use_global_stats: true\n }\n}\n\nlayer {\n bottom: \"res2a_branch1\"\n top: \"res2a_branch1\"\n name: \"scale2a_branch1\"\n type: \"Scale\"\n scale_param {\n bias_term: true\n }\n}\n</pre>\n\nIn Lasagne we have correct BN layer, so we do not need use such a trick.\nReplicated blocks\nSimple blocks\nResNet contains a lot of similar replicated blocks, lets call them simple blocks, which have one of two architectures:\n * Convolution $\\rightarrow$ BN $\\rightarrow$ Nonlinearity\n * Convolution $\\rightarrow$ BN\nhttp://ethereon.github.io/netscope/#/gist/2f702ea9e05900300462102a33caff9c",
"Image(filename='images/head.png', width='40%')",
"We can increase, decrease or keep same dimensionality of data using such blocks. In ResNet-50 only several transformation are used.\nKeep shape with 1x1 convolution\nWe can apply nonlinearity transformation from (None, 64, 56, 56) to (None, 64, 56, 56) if we apply simple block with following parameters (look at the origin of a network after first pool layer):\n * num_filters: same as parent has\n * filter_size: 1\n * stride: 1\n * pad: 0",
"Image(filename='images/conv1x1.png', width='40%')",
"Keep shape with 3x3 convolution\nAlso we can apply nonlinearity transformation from (None, 64, 56, 56) to (None, 64, 56, 56) if we apply simple block with following parameters (look at the middle of any residual blocks):\n * num_filters: same as parent has\n * filter_size: 3x3\n * stride: 1\n * pad: 1",
"Image(filename='images/conv3x3.png', width='40%')",
"Increase shape using number of filters\nWe can nonlinearly increase shape from (None, 64, 56, 56) to (None, 256, 56, 56) if we apply simple block with following parameters (look at the last simple block of any risidual block):\n * num_filters: four times greater then parent has\n * filter_size: 1x1\n * stride: 1\n * pad: 0",
"Image(filename='images/increase_fn.png', width='40%')",
"Increase shape using number of filters\nWe can nonlinearly decrease shape from (None, 256, 56, 56) to (None, 64, 56, 56) if we apply simple block with following parameters (look at the first simple block of any risidual block without left branch):\n * num_filters: four times less then parent has\n * filter_size: 1x1\n * stride: 1\n * pad: 0",
"Image(filename='images/decrease_fn.png', width='40%')",
"Increase shape using number of filters\nWe can also nonlinearly decrease shape from (None, 256, 56, 56) to (None, 128, 28, 28) if we apply simple block with following parameters (look at the first simple block of any risidual block with left branch):\n * num_filters: two times less then parent has\n * filter_size: 1x1\n * stride: 2\n * pad: 0",
"Image(filename='images/decrease_fnstride.png', width='40%')",
"Following function creates simple block",
"def build_simple_block(incoming_layer, names,\n num_filters, filter_size, stride, pad, \n use_bias=False, nonlin=rectify):\n \"\"\"Creates stacked Lasagne layers ConvLayer -> BN -> (ReLu)\n \n Parameters:\n ----------\n incoming_layer : instance of Lasagne layer\n Parent layer\n \n names : list of string\n Names of the layers in block\n \n num_filters : int\n Number of filters in convolution layer\n \n filter_size : int\n Size of filters in convolution layer\n \n stride : int\n Stride of convolution layer\n \n pad : int\n Padding of convolution layer\n \n use_bias : bool\n Whether to use bias in conlovution layer\n \n nonlin : function\n Nonlinearity type of Nonlinearity layer\n \n Returns\n -------\n tuple: (net, last_layer_name)\n net : dict\n Dictionary with stacked layers\n last_layer_name : string\n Last layer name\n \"\"\"\n net = []\n net.append((\n names[0], \n ConvLayer(incoming_layer, num_filters, filter_size, pad, stride, \n flip_filters=False, nonlinearity=None) if use_bias \n else ConvLayer(incoming_layer, num_filters, filter_size, stride, pad, b=None, \n flip_filters=False, nonlinearity=None)\n ))\n \n net.append((\n names[1], \n BatchNormLayer(net[-1][1])\n ))\n if nonlin is not None:\n net.append((\n names[2], \n NonlinearityLayer(net[-1][1], nonlinearity=nonlin)\n ))\n \n return dict(net), net[-1][0]",
"Residual blocks\nResNet also contains several residual blockes built from simple blocks, each of them have two branches; left branch sometimes contains simple block, sometimes not. Each block ends with Elementwise sum layer followed by ReLu nonlinearity. \nhttp://ethereon.github.io/netscope/#/gist/410e7e48fa1e5a368ee7bca5eb3bf0ca",
"Image(filename='images/left_branch.png', width='40%')\n\nImage(filename='images/no_left_branch.png', width='40%')\n\nsimple_block_name_pattern = ['res%s_branch%i%s', 'bn%s_branch%i%s', 'res%s_branch%i%s_relu']\n\ndef build_residual_block(incoming_layer, ratio_n_filter=1.0, ratio_size=1.0, has_left_branch=False, \n upscale_factor=4, ix=''):\n \"\"\"Creates two-branch residual block\n \n Parameters:\n ----------\n incoming_layer : instance of Lasagne layer\n Parent layer\n \n ratio_n_filter : float\n Scale factor of filter bank at the input of residual block\n \n ratio_size : float\n Scale factor of filter size\n \n has_left_branch : bool\n if True, then left branch contains simple block\n \n upscale_factor : float\n Scale factor of filter bank at the output of residual block\n \n ix : int\n Id of residual block\n \n Returns\n -------\n tuple: (net, last_layer_name)\n net : dict\n Dictionary with stacked layers\n last_layer_name : string\n Last layer name\n \"\"\"\n net = {}\n \n # right branch\n net_tmp, last_layer_name = build_simple_block(\n incoming_layer, map(lambda s: s % (ix, 2, 'a'), simple_block_name_pattern),\n int(lasagne.layers.get_output_shape(incoming_layer)[1]*ratio_n_filter), 1, int(1.0/ratio_size), 0)\n net.update(net_tmp)\n \n net_tmp, last_layer_name = build_simple_block(\n net[last_layer_name], map(lambda s: s % (ix, 2, 'b'), simple_block_name_pattern),\n lasagne.layers.get_output_shape(net[last_layer_name])[1], 3, 1, 1)\n net.update(net_tmp)\n \n net_tmp, last_layer_name = build_simple_block(\n net[last_layer_name], map(lambda s: s % (ix, 2, 'c'), simple_block_name_pattern),\n lasagne.layers.get_output_shape(net[last_layer_name])[1]*upscale_factor, 1, 1, 0,\n nonlin=None)\n net.update(net_tmp)\n \n right_tail = net[last_layer_name]\n left_tail = incoming_layer\n \n # left branch\n if has_left_branch:\n net_tmp, last_layer_name = build_simple_block(\n incoming_layer, map(lambda s: s % (ix, 1, ''), simple_block_name_pattern),\n int(lasagne.layers.get_output_shape(incoming_layer)[1]*4*ratio_n_filter), 1, int(1.0/ratio_size), 0,\n nonlin=None)\n net.update(net_tmp)\n left_tail = net[last_layer_name]\n \n net['res%s' % ix] = ElemwiseSumLayer([left_tail, right_tail], coeffs=1)\n net['res%s_relu' % ix] = NonlinearityLayer(net['res%s' % ix], nonlinearity=rectify)\n \n return net, 'res%s_relu' % ix",
"Gathering everighting together\nCreate head of the network (everithing before first residual block)",
"net = {}\nnet['input'] = InputLayer((None, 3, 224, 224))\nsub_net, parent_layer_name = build_simple_block(\n net['input'], ['conv1', 'bn_conv1', 'conv1_relu'],\n 64, 7, 3, 2, use_bias=True)\nnet.update(sub_net)\nnet['pool1'] = PoolLayer(net[parent_layer_name], pool_size=3, stride=2, pad=0, mode='max', ignore_border=False)",
"Create four groups of residual blocks",
"block_size = list('abc')\nparent_layer_name = 'pool1'\nfor c in block_size:\n if c == 'a':\n sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1, 1, True, 4, ix='2%s' % c)\n else:\n sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/4, 1, False, 4, ix='2%s' % c)\n net.update(sub_net)\n \nblock_size = list('abcd')\nfor c in block_size:\n if c == 'a':\n sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/2, 1.0/2, True, 4, ix='3%s' % c)\n else:\n sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/4, 1, False, 4, ix='3%s' % c)\n net.update(sub_net)\n \nblock_size = list('abcdef')\nfor c in block_size:\n if c == 'a':\n sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/2, 1.0/2, True, 4, ix='4%s' % c)\n else:\n sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/4, 1, False, 4, ix='4%s' % c)\n net.update(sub_net)\n \nblock_size = list('abc')\nfor c in block_size:\n if c == 'a':\n sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/2, 1.0/2, True, 4, ix='5%s' % c)\n else:\n sub_net, parent_layer_name = build_residual_block(net[parent_layer_name], 1.0/4, 1, False, 4, ix='5%s' % c)\n net.update(sub_net)",
"Create tail of the network (everighting after last resudual block)",
"net['pool5'] = PoolLayer(net[parent_layer_name], pool_size=7, stride=1, pad=0, \n mode='average_exc_pad', ignore_border=False)\nnet['fc1000'] = DenseLayer(net['pool5'], num_units=1000, nonlinearity=None)\nnet['prob'] = NonlinearityLayer(net['fc1000'], nonlinearity=softmax)\n\nprint 'Total number of layers:', len(lasagne.layers.get_all_layers(net['prob']))",
"Transfer weights from caffe to lasagne\nLoad pretrained caffe model",
"net_caffe = caffe.Net('./ResNet-50-deploy.prototxt', './ResNet-50-model.caffemodel', caffe.TEST)\nlayers_caffe = dict(zip(list(net_caffe._layer_names), net_caffe.layers))\nprint 'Number of layers: %i' % len(layers_caffe.keys())",
"Copy weights\nThere is one more issue with BN layer: caffa stores variance $\\sigma^2$, but lasagne stores inverted standard deviation $\\dfrac{1}{\\sigma}$, so we need make simple transfommation to handle it.\nOther issue reffers to weights ofthe dense layer, in caffe it is transposed, we should handle it too.",
"for name, layer in net.items(): \n if name not in layers_caffe:\n print name, type(layer).__name__\n continue\n if isinstance(layer, BatchNormLayer):\n layer_bn_caffe = layers_caffe[name]\n layer_scale_caffe = layers_caffe['scale' + name[2:]]\n layer.gamma.set_value(layer_scale_caffe.blobs[0].data)\n layer.beta.set_value(layer_scale_caffe.blobs[1].data)\n layer.mean.set_value(layer_bn_caffe.blobs[0].data)\n layer.inv_std.set_value(1/np.sqrt(layer_bn_caffe.blobs[1].data) + 1e-4)\n continue\n if isinstance(layer, DenseLayer):\n layer.W.set_value(layers_caffe[name].blobs[0].data.T)\n layer.b.set_value(layers_caffe[name].blobs[1].data)\n continue\n if len(layers_caffe[name].blobs) > 0:\n layer.W.set_value(layers_caffe[name].blobs[0].data)\n if len(layers_caffe[name].blobs) > 1:\n layer.b.set_value(layers_caffe[name].blobs[1].data)",
"Testing\nRead ImageNet synset",
"with open('./imagenet_classes.txt', 'r') as f:\n classes = map(lambda s: s.strip(), f.readlines())",
"Download some image urls for recognition",
"index = urllib.urlopen('http://www.image-net.org/challenges/LSVRC/2012/ori_urls/indexval.html').read()\nimage_urls = index.split('<br>')\nnp.random.seed(23)\nnp.random.shuffle(image_urls)\nimage_urls = image_urls[:100]",
"Load mean values",
"blob = caffe.proto.caffe_pb2.BlobProto()\ndata = open('./ResNet_mean.binaryproto', 'rb').read()\nblob.ParseFromString(data)\nmean_values = np.array(caffe.io.blobproto_to_array(blob))[0]",
"Image loader",
"def prep_image(url, fname=None):\n if fname is None:\n ext = url.split('.')[-1]\n im = plt.imread(io.BytesIO(urllib.urlopen(url).read()), ext)\n else:\n ext = fname.split('.')[-1]\n im = plt.imread(fname, ext)\n h, w, _ = im.shape\n if h < w:\n im = skimage.transform.resize(im, (256, w*256/h), preserve_range=True)\n else:\n im = skimage.transform.resize(im, (h*256/w, 256), preserve_range=True)\n h, w, _ = im.shape\n im = im[h//2-112:h//2+112, w//2-112:w//2+112]\n rawim = np.copy(im).astype('uint8')\n im = np.swapaxes(np.swapaxes(im, 1, 2), 0, 1)\n im = im[::-1, :, :]\n im = im - mean_values\n return rawim, floatX(im[np.newaxis])",
"Lets take five images and compare prediction of Lasagne with Caffe",
"n = 5\nm = 5\ni = 0\nfor url in image_urls:\n print url\n try:\n rawim, im = prep_image(url)\n except:\n print 'Failed to download'\n continue\n\n prob_lasangne = np.array(lasagne.layers.get_output(net['prob'], im, deterministic=True).eval())[0]\n prob_caffe = net_caffe.forward_all(data=im)['prob'][0]\n\n \n print 'Lasagne:'\n res = sorted(zip(classes, prob_lasangne), key=lambda t: t[1], reverse=True)[:n]\n for c, p in res:\n print ' ', c, p\n \n print 'Caffe:'\n res = sorted(zip(classes, prob_caffe), key=lambda t: t[1], reverse=True)[:n]\n for c, p in res:\n print ' ', c, p\n \n plt.figure()\n plt.imshow(rawim.astype('uint8'))\n plt.axis('off')\n plt.show()\n \n i += 1\n if i == m:\n break\n \n print '\\n\\n'\n\nmodel = {\n 'values': lasagne.layers.get_all_param_values(net['prob']),\n 'synset_words': classes,\n 'mean_image': mean_values\n}\n\npickle.dump(model, open('./resnet50.pkl', 'wb'), protocol=-1)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ppham27/MLaPP-solutions | chap12/8.ipynb | mit | [
"Latent Semantic Indexing\nHere, we apply the technique Latent Semantic Indexing to capture the similarity of words. We are given a list of words and their frequencies in 9 documents, found on GitHub.",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn import preprocessing\n\nplt.rcParams['font.size'] = 16\n\nwords_list = list()\nwith open('lsiWords.txt') as f:\n for line in f:\n words_list.append(line.strip())\nwords = pd.Series(words_list, name=\"words\")\nword_frequencies = pd.read_csv('lsiMatrix.txt', sep=' ', index_col=False,\n header=None, names=words)\nword_frequencies.T.head(20)",
"Now as per part (a), we compute the SVD and use the first two singular values. Recall the model is that\n\\begin{equation}\n\\mathbf{x} \\sim \\mathcal{N}\\left(W\\mathbf{z},\\Psi\\right),\n\\end{equation}\nwhere $\\Psi$ is diagonal. If the SVD is $X = UDV^\\intercal,$ $W$ will be the first two columns of $V$.",
"X = word_frequencies.as_matrix().astype(np.float64)\nU, D, V = np.linalg.svd(X.T) # in matlab the matrix is read in as its transpose",
"In this way, we let $Z = UD$, so $X = ZV^\\intercal$. Now, let $\\tilde{Z}$ be the approximation from using 2 singular values, so $\\tilde{X} = \\tilde{Z}W^\\intercal$, so $\\tilde{Z} = \\tilde{U}\\tilde{D}$. For some reason, the textbook chooses not to scale by $\\tilde{D}$, so we just have $\\tilde{U}$. Recall that all the variables are messed up because we used the tranpose.",
"Z = V.T[:,:2]\nZ",
"Now, let's plot these results.",
"plt.figure(figsize=(8,8))\ndef plot_latent_variables(Z, ax=None):\n if ax == None:\n ax = plt.gca()\n ax.plot(Z[:,0], Z[:,1], 'o', markerfacecolor='none')\n for i in range(len(Z)):\n ax.text(Z[i,0] + 0.005, Z[i,1], i, \n verticalalignment='center')\n ax.set_xlabel('$z_1$')\n ax.set_ylabel('$z_2$')\n ax.set_title('PCA with $L = 2$ for Alien Documents')\n ax.grid(True)\nplot_latent_variables(Z)\nplt.show()",
"I, respectfully, disagree with the book for this reason. The optimal latent representation $Z = XW$ (observations are rows here), should be chosen such that\n\\begin{equation}\nJ(W,Z) = \\frac{1}{N}\\left\\lVert X - ZW^\\intercal\\right\\rVert^2\n\\end{equation}\nis minimized, where $W$ is orthonormal.",
"U, D, V = np.linalg.svd(X)\nV = V.T # python implementation of SVD factors X = UDV (note that V is not tranposed)",
"By section 12.2.3 of the book, $W$ is the first $2$ columns of $V$. Thus, our actual plot should be below.",
"W = V[:,:2]\nZ = np.dot(X, W)\nplt.figure(figsize=(8,8))\nax = plt.gca();\nplot_latent_variables(Z, ax=ax)\nax.set_aspect('equal')\nplt.show()",
"Note that this is very similar with the $y$-axis flipped. That part does not actually matter. What matters is the scaling by eigenvalues for computing. Before that scaling the proximity of points may not mean much if the eigenvalue is actually very large.\nNow, the second part asks us to see if we can properly identify documents related to abductions by using a document with the single word abducted as a probe.",
"probe_document = np.zeros_like(words, dtype=np.float64)\nabducted_idx = (words=='abducted').as_matrix()\nprobe_document[abducted_idx] = 1\nX[0:3,abducted_idx]",
"Note that despite the first document being about abductions, it doesn't contain the word abducted.\nLet's look at the latent variable representation. We'll use cosine similarity to account for the difference in magnitude.",
"from scipy.spatial import distance\nz = np.dot(probe_document, W)\nsimilarities = list(map(lambda i : (i, 1 - distance.cosine(z,Z[i,:])), range(len(Z))))\nsimilarities.sort(key=lambda similarity_tuple : -similarity_tuple[1])\nsimilarities",
"Indeed, we find the three alien abduction documents, $0$, $2$, and $1$ are most similar to our probe."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
YufeiZhang/Principles-of-Programming-Python-3 | Lectures/Lecture_2/Jupyter_notebook_sheets/functions.ipynb | gpl-3.0 | [
"<h1 align=\"center\">Functions</h1>\n\nScope rules",
"a = 0\nm = 0\nn = 0\nx = 0\n\nprint('PRINT 0: a =', a,\n ' m =', m, ' n =', n,\n ' x =', x)\n\ndef f_1():\n m = 1\n global n\n n = 1\n x = 1\n y = 1\n z = 1\n print('PRINT 1: a =', a,\n ' m =', m, ' n =', n,\n ' x =', x, ' y =', y, ' z =', z)\n \n def f_2():\n global m\n m = 2\n # Cannot write:\n # nonlocal n\n global n\n n = 2\n global p\n p = 2\n x = 2\n nonlocal y\n y = 2\n # Cannot write:\n # nonlocal u\n print('PRINT 2: a =', a,\n ' m =', m, ' n =', n, ' p =', p,\n ' x =', x, ' y =', y, ' z =', z)\n\n def f_3():\n nonlocal x\n x = 3\n nonlocal y\n y = 3\n nonlocal z\n z = 3\n print('PRINT 3: a =', a,\n ' m =', m, ' n =', n, ' p =', p,\n ' x =', x, ' y =', y, ' z =', z)\n\n f_3()\n print('PRINT 4: a =', a,\n ' m =', m, ' n =', n, ' p =', p,\n ' x =', x, ' y =', y, ' z =', z)\n\n f_2()\n print('PRINT 5: a =', a,\n ' m =', m, ' n =', n, ' p =', p,\n ' x =', x, ' y =', y, ' z =', z)\n\nf_1()\nprint('PRINT 6: a =', a,\n ' m =', m, ' n =', n, ' p =', p,\n ' x =', x)\n\nx = 0\n\ndef f():\n print(x)\n x = 1\n\nf()\n\ndef f():\n m = 0\n class C:\n m = 1\n def g(self):\n print(m)\n C().g()\nf()\n\ni = 1\nbad_increment = lambda x: x + i\ni = 0\nprint(bad_increment(2))\n\ni = 1\ngood_increment = lambda x, i = i: x + i\ni = 0\nprint(good_increment(2))",
"Closures (factory functions)",
"def v1_multiply_by(m):\n def multiply(n):\n return n * m\n return multiply\n\nmultiply_by_7 = v1_multiply_by(7)\nprint(multiply_by_7(4))\n\ndef v2_multiply_by(m):\n return lambda n: n * m\n\nmultiply_by_7 = v2_multiply_by(7)\nprint(multiply_by_7(4))\n\ndef multiplications_between_0_and_9():\n multiply_by = []\n for m in range(10):\n # If \"lambda n, m = m: n * m\" is replaced by \"lambda n, m: n * m\"\n # then all mulplications are by 9\n multiply_by.append(lambda n, m = m: n * m)\n return multiply_by\n\nmultiply_by = multiplications_between_0_and_9()\nmultiply_by_7 = multiply_by[7]\nprint(multiply_by_7(4))",
"Function states",
"from random import randrange\n\ndef randomly_odd_or_even_random_digit():\n odd = randrange(2)\n if odd:\n def random_odd_or_random_even_digit():\n return randrange(1, 10, 2)\n else:\n def random_odd_or_random_even_digit():\n return randrange(0, 10, 2)\n random_odd_or_random_even_digit.odd = odd\n return random_odd_or_random_even_digit\n\nfor i in range(10):\n random_odd_or_random_even_digit = randomly_odd_or_even_random_digit()\n if random_odd_or_random_even_digit.odd:\n print('Will be a random odd digit.... ', random_odd_or_random_even_digit())\n else:\n print('Will be a random even digit... ', random_odd_or_random_even_digit())",
"Function parameters:\n\nfirst parameters without default values, if any,\nthen parameters with default values, if any,\nthen, possibly,\neither a starred parameter to\ngather values and assign them to parameters of the first and second type beyond the longest initial segment of those that are otherwise assigned an argument, if any, provided none of those parameters is assigned a keyword argument,\nand to store an arbitray number of positional arguments beyond those that have been assigned to a parameter, if any,\n\n\nor only a star,\nif a starred parameter or only a star is present, then parameters for required keyword arguments (so called \"keyword-only arguments\"), if any, with or without defaults (actually the defaults make the associated keyword-only arguments not truly required and these parameters could be part of the second group),\nthen a double starred parameter to store an arbitray number of keyword arguments, if any.\n\nFunction arguments:\n\npositional arguments precede keyword arguments and double starred ones, and\nstarred arguments precede double starred ones.",
"def f1(a, b, c = 3, d = 4, e = 5, f = 6):\n print(a, b, c, d, e, f)\n\nf1(11, 12, 13, 14, 15, 16)\nf1(11, 12, 13, *(14, 15, 16))\nf1(11, *(12, 13, 14), **{'f': 16, 'e': 15})\nf1(11, 12, 13, e = 15)\nf1(11, c = 13, b = 12, e = 15)\nf1(11, c = 13, *(12,), e = 15)\nf1(11, *(12, 13), e = 15)\nf1(11, e = 15, *(12, 13))\nf1(11, f = 16, e = 15, b = 12, c = 13)\nf1(11, f = 16, **{'e': 15, 'b': 12, 'c': 13})\nf1(11, *(12, 13), e = 15, **{'f': 16, 'd': 14})\nf1(11, e = 15, *(12,), **{'f': 16, 'd': 14})\nf1(11, f = 16, *(12, 13), e = 15, **{'d': 14})\n\ndef f2(*x):\n print(x)\n\nf2()\nf2(11)\nf2(11, 12, *(13, 14, 15))\n\ndef f3(*x, a, b = -2, c):\n print(x, a, b, c)\n\nf3(c = 23, a = 21)\nf3(11, 12, a = 21, **{'b': 22, 'c': 23})\nf3(11, *(12, 13), c = 23, a = 21)\nf3(11, 12, 13, c = 23, *(14, 15), **{'a': 21})\n\ndef f4(*, a, b = -2, c):\n print(a, b, c)\n\nf4(c = 23, a = 21)\nf4(**{'a': 21, 'b': 22, 'c': 23})\nf4(c = 23, **{'a': 21})\nf4(a = 21, **{'c': 23, 'b': 22})\n\ndef f5(**x):\n print(x)\n\nf5()\nf5(a = 11, b = 12)\nf5(**{'a': 11, 'b': 12, 'c': 13})\nf5(a = 11, c = 12, e = 15, **{'b': 13, 'd': 14})\n\ndef f6(a, b, c, d = 4, e = 5, *x, m, n = -2, o, **z):\n print(a, b, c, d, e, x, m, n, o, z)\n\n# Cannot replace \"*(12,)\" by \"*(12, 21)\"\nf6(11, t = 40, e = 15, *(12,), o = 33, c = 13, m = 31, u = 41,\n **{'v': 42, 'w': 43}) \n# Cannot replace \"*(13, 14)\" by \"*(13, 14, 21)\"\nf6(11, 12, u = 41, m = 31, t = 40, e = 15, *(13, 14), o = 33,\n **{'v': 42, 'w': 43}) \nf6(11, u = 41, o = 33, *(12, 13, 14, 15, 21, 22), n = 32, t = 40, m = 31,\n **{'v': 42, 'w': 43}) \nf6(11, 12, 13, n = 32, t = 40, *(14, 15, 21, 22, 23), o = 33, u = 41, m = 31,\n **{'v': 42, 'w': 43})",
"Function annotations",
"def f(w: str, a: int, b: int = -2, x: float = -3.) -> int:\n if w == 'incorrect_return_type':\n return '0'\n return 0\n\nfrom inspect import signature\n\ndef type_check(function, *args, **kwargs):\n '''Assumes that \"function\" has nothing but variables possibly with defaults\n as arguments and has type annotations for all arguments and the returned value.\n Checks whether a combination of positional and default arguments is correct,\n and in case it is whether those arguments are of the appropriate types,\n and in case they are whether the returned value is of the appropriate type.\n '''\n good_arguments = True\n argument_type_errors = ''\n parameters = list(reversed(function.__code__.co_varnames))\n if len(args) > len(parameters):\n print('Incorrect sequence of arguments')\n return\n for argument in args:\n parameter = parameters.pop()\n if not isinstance(argument, function.__annotations__[parameter]):\n argument_type_errors += ('{} should be of type {}\\n'\n .format(parameter, function.__annotations__[parameter]))\n good_arguments = False\n for argument in kwargs:\n if not argument in parameters:\n print('Incorrect sequence of arguments')\n return\n if not isinstance(kwargs[argument], function.__annotations__[argument]):\n argument_type_errors += ('{} should be of type {}\\n'\n .format(argument, function.__annotations__[argument]))\n good_arguments = False\n parameters.remove(argument)\n # Make sure that all parameters left are given a default value.\n if any([parameter for parameter in parameters\n if signature(function).parameters[parameter].default is\n signature(function).parameters[parameter].empty]):\n print('Incorrect sequence of arguments')\n return\n if good_arguments:\n if isinstance(function(*args, **kwargs), function.__annotations__['return']):\n print('All good')\n else:\n (print('The returned value should be of type {}'\n .format(function.__annotations__['return'])))\n else:\n print(argument_type_errors, end = '')\n\nfor args, kwargs in [(('0', 1, 2, 3.), {}),\n (('0', 1, 2), {'x': 3.}),\n (('0', 1), {'b': 2, 'x': 3.}),\n (('0',), {'x': 3., 'a': 1, 'b': 2}),\n ((), {'x': 3., 'w': '0', 'a': 1}),\n (('0', 1, 2), {}),\n (('0',), {}),\n (('0'), {'x': 3.}),\n (('0', 1, 2, 3., 4), {}),\n (('incorrect_return_type', 1, 2, 3.), {'x' : 3}),\n (('incorrect_return_type', 1, 2), {'y': 3}),\n (('0', 1), {'x': 3, 'c': 2}),\n ((), {'a': 1, 'b': 2,'x': 3}),\n ((0, 1, 2, 3.), {}),\n (('0', 1., 2, 3), {'w': 'incorrect_return_type'}),\n (('incorrect_return_type', 1, 2), {'x': 3}),\n ((0, 1), {'b': 2., 'x': 3.}),\n ((0,), {'x': 3, 'a': 1., 'b': 2.}),\n ((), {'x': 3, 'w': 0, 'a': 1.}),\n (('incorrect_return_type', 1, 2, 3.), {})]:\n print('Testing {}, {}:'.format(args, kwargs))\n type_check(f, *args, **kwargs)\n print()",
"Mutable versus immutable default values",
"def append_one_v1(L = []):\n L.append(1)\n return L\n\ndef append_one_v2(L = None):\n if L == None:\n L = []\n L.append(1)\n return L\n\nfor i in range(5):\n print(append_one_v1([0]))\nprint()\nfor i in range(5):\n print(append_one_v1())\nprint()\nfor i in range(5):\n print(append_one_v2([0]))\nprint()\nfor i in range(5):\n print(append_one_v2())\n\n_nothing = object()\n\ndef f(x = _nothing):\n if x is _nothing:\n print('Nothing')\n else:\n print('Something')\n\nf(0), f(1), f([]), f([1]), f(None)\nprint()\nf()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DigitalSlideArchive/HistomicsTK | docs/examples/segmentation_masks_to_annotations.ipynb | apache-2.0 | [
"Converting masks back to annotations\nOverview:\n\nMost segmentation algorithms produce outputs in an image format. Visualizing these outputs in HistomicsUI requires conversion from mask images to an annotation document containing (x,y) coordinates in the whole-slide image coordinate frame. This notebook demonstrates this conversion process in two steps:\n\n\nConverting a mask image into contours (coordinates in the mask frame)\n\n\nPlacing contours data into a format following the annotation document schema that can be pushed to DSA for visualization in HistomicsUI.\n\n\nThis notebook is based on work described in Amgad et al, 2019:\nMohamed Amgad, Habiba Elfandy, Hagar Hussein, ..., Jonathan Beezley, Deepak R Chittajallu, David Manthey, David A Gutman, Lee A D Cooper, Structured crowdsourcing enables convolutional segmentation of histology images, Bioinformatics, 2019, btz083\nWhere to look?\n|_ histomicstk/\n |_annotations_and_masks/\n | |_masks_to_annotations_handler.py\n |_tests/\n |_test_masks_to_annotations_handler.py",
"import os\nCWD = os.getcwd()\nimport girder_client\nfrom pandas import read_csv\nfrom imageio import imread\nfrom histomicstk.annotations_and_masks.masks_to_annotations_handler import (\n get_contours_from_mask,\n get_single_annotation_document_from_contours,\n get_annotation_documents_from_contours)\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = 7, 7",
"1. Connect girder client and set parameters",
"# APIURL = 'http://demo.kitware.com/histomicstk/api/v1/'\n# SAMPLE_SLIDE_ID = '5bbdee92e629140048d01b5d'\nAPIURL = 'http://candygram.neurology.emory.edu:8080/api/v1/'\nSAMPLE_SLIDE_ID = '5d586d76bd4404c6b1f286ae'\n\n# Connect to girder client\ngc = girder_client.GirderClient(apiUrl=APIURL)\ngc.authenticate(interactive=True)\n# gc.authenticate(apiKey='kri19nTIGOkWH01TbzRqfohaaDWb6kPecRqGmemb')",
"Let's inspect the ground truth codes file\nThis contains the ground truth codes and information dataframe. This is a dataframe that is indexed by the annotation group name and has the following columns:\n\ngroup: group name of annotation (string), eg. \"mostly_tumor\"\nGT_code: int, desired ground truth code (in the mask) Pixels of this value belong to corresponding group (class)\ncolor: str, rgb format. eg. rgb(255,0,0).\n\nNOTE:\nZero pixels have special meaning and do not encode specific ground truth class. Instead, they simply mean 'Outside ROI' and should be ignored during model training or evaluation.",
"# read GTCodes dataframe\nGTCODE_PATH = os.path.join(\n CWD, '..', '..', 'tests', 'test_files', 'sample_GTcodes.csv')\nGTCodes_df = read_csv(GTCODE_PATH)\nGTCodes_df.index = GTCodes_df.loc[:, 'group']\n\nGTCodes_df.head()",
"Read and visualize mask",
"# read mask\nX_OFFSET = 59206\nY_OFFSET = 33505\nMASKNAME = \"TCGA-A2-A0YE-01Z-00-DX1.8A2E3094-5755-42BC-969D-7F0A2ECA0F39\" + \\\n \"_left-%d_top-%d_mag-BASE.png\" % (X_OFFSET, Y_OFFSET)\nMASKPATH = os.path.join(CWD, '..', '..', 'tests', 'test_files', 'annotations_and_masks', MASKNAME)\nMASK = imread(MASKPATH)\n\nplt.figure(figsize=(7,7))\nplt.imshow(MASK)\nplt.title(MASKNAME[:23])\nplt.show()",
"2. Get contours from mask\nThis function get_contours_from_mask() generates contours from a mask image. There are many parameters that can be set but most have defaults set for the most common use cases. The only required parameters you must provide are MASK and GTCodes_df, but you may want to consider setting the following parameters based on your specific needs: get_roi_contour, roi_group, discard_nonenclosed_background, background_group, that control behaviour regarding region of interest (ROI) boundary and background pixel class (e.g. stroma).",
"print(get_contours_from_mask.__doc__)",
"Extract contours",
"# Let's extract all contours from a mask, including ROI boundary. We will\n# be discarding any stromal contours that are not fully enclosed within a \n# non-stromal contour since we already know that stroma is the background\n# group. This is so things look uncluttered when posted to DSA.\ngroups_to_get = None\ncontours_df = get_contours_from_mask(\n MASK=MASK, GTCodes_df=GTCodes_df, groups_to_get=groups_to_get,\n get_roi_contour=True, roi_group='roi',\n discard_nonenclosed_background=True,\n background_group='mostly_stroma',\n MIN_SIZE=30, MAX_SIZE=None, verbose=True,\n monitorPrefix=MASKNAME[:12] + \": getting contours\")",
"Let's inspect the contours dataframe\nThe columns that really matter here are group, color, coords_x, and coords_y.",
"contours_df.head()",
"3. Get annotation documents from contours\nThis method get_annotation_documents_from_contours() generates formatted annotation documents from contours that can be posted to the DSA server.",
"print(get_annotation_documents_from_contours.__doc__)",
"As mentioned in the docs, this function wraps get_single_annotation_document_from_contours()",
"print(get_single_annotation_document_from_contours.__doc__)",
"Let's get a list of annotation documents (each is a dictionary). For the purpose of this tutorial, \nwe separate the documents by group (i.e. each document is composed of polygons from the same\nstyle/group). You could decide to allow heterogeneous groups in the same annotation document by\nsetting separate_docs_by_group to False. We place 10 polygons in each document for this demo\nfor illustration purposes. Realistically you would want each document to contain several hundred depending on their complexity. Placing too many polygons in each document can lead to performance issues when rendering in HistomicsUI.\nGet annotation documents",
"# get list of annotation documents\nannprops = {\n 'X_OFFSET': X_OFFSET,\n 'Y_OFFSET': Y_OFFSET,\n 'opacity': 0.2,\n 'lineWidth': 4.0,\n}\nannotation_docs = get_annotation_documents_from_contours(\n contours_df.copy(), separate_docs_by_group=True, annots_per_doc=10,\n docnamePrefix='demo', annprops=annprops,\n verbose=True, monitorPrefix=MASKNAME[:12] + \": annotation docs\")",
"Let's examine one of the documents.\nLimit display to the first two elements (polygons) and cap the vertices for clarity.",
"ann_doc = annotation_docs[0].copy()\nann_doc['elements'] = ann_doc['elements'][:2]\nfor i in range(2):\n ann_doc['elements'][i]['points'] = ann_doc['elements'][i]['points'][:5]\n\nann_doc",
"Post the annotation to the correct item/slide in DSA",
"# deleting existing annotations in target slide (if any)\nexisting_annotations = gc.get('/annotation/item/' + SAMPLE_SLIDE_ID)\nfor ann in existing_annotations:\n gc.delete('/annotation/%s' % ann['_id'])\n\n# post the annotation documents you created \nfor annotation_doc in annotation_docs:\n resp = gc.post(\n \"/annotation?itemId=\" + SAMPLE_SLIDE_ID, json=annotation_doc)",
"Now you can go to HistomicsUI and confirm that the posted annotations make\nsense and correspond to tissue boundaries and expected labels."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
OSGeo-live/CesiumWidget | Examples/CesiumWidget Example KML.ipynb | apache-2.0 | [
"Cesium Widget Example KML\nIf the installation of Cesiumjs is ok, it should be reachable here:\nhttp://localhost:8888/nbextensions/CesiumWidget/cesium/index.html",
"from CesiumWidget import CesiumWidget\nfrom IPython import display\nimport numpy as np",
"Create widget object",
"cesium = CesiumWidget()",
"Display the widget:",
"cesium",
"Cesium is packed with example data. Let's look at some GDP per captia data from 2008.",
"cesium.kml_url = '/nbextensions/CesiumWidget/cesium/Apps/SampleData/kml/gdpPerCapita2008.kmz'",
"Example zoomto",
"for lon in np.arange(0, 360, 0.5):\n cesium.zoom_to(lon, 0, 36000000, 0 ,-90, 0)\n\ncesium._zoomto",
"Example flyto",
"cesium.fly_to(14, 90, 20000001)\n\ncesium._flyto"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io | 0.18/_downloads/db126f84a1b5439712a1d57b1be2255c/plot_time_frequency_global_field_power.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Explore event-related dynamics for specific frequency bands\nThe objective is to show you how to explore spectrally localized\neffects. For this purpose we adapt the method described in [1]_ and use it on\nthe somato dataset. The idea is to track the band-limited temporal evolution\nof spatial patterns by using the Global Field Power (GFP).\nWe first bandpass filter the signals and then apply a Hilbert transform. To\nreveal oscillatory activity the evoked response is then subtracted from every\nsingle trial. Finally, we rectify the signals prior to averaging across trials\nby taking the magniude of the Hilbert.\nThen the GFP is computed as described in [2], using the sum of the squares\nbut without normalization by the rank.\nBaselining is subsequently applied to make the GFPs comparable between\nfrequencies.\nThe procedure is then repeated for each frequency band of interest and\nall GFPs are visualized. To estimate uncertainty, non-parametric confidence\nintervals are computed as described in [3] across channels.\nThe advantage of this method over summarizing the Space x Time x Frequency\noutput of a Morlet Wavelet in frequency bands is relative speed and, more\nimportantly, the clear-cut comparability of the spectral decomposition (the\nsame type of filter is used across all bands).\nReferences\n.. [1] Hari R. and Salmelin R. Human cortical oscillations: a neuromagnetic\n view through the skull (1997). Trends in Neuroscience 20 (1),\n pp. 44-49.\n.. [2] Engemann D. and Gramfort A. (2015) Automated model selection in\n covariance estimation and spatial whitening of MEG and EEG signals,\n vol. 108, 328-342, NeuroImage.\n.. [3] Efron B. and Hastie T. Computer Age Statistical Inference (2016).\n Cambrdige University Press, Chapter 11.2.",
"# Authors: Denis A. Engemann <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import somato\nfrom mne.baseline import rescale\nfrom mne.stats import _bootstrap_ci",
"Set parameters",
"data_path = somato.data_path()\nraw_fname = data_path + '/MEG/somato/sef_raw_sss.fif'\n\n# let's explore some frequency bands\niter_freqs = [\n ('Theta', 4, 7),\n ('Alpha', 8, 12),\n ('Beta', 13, 25),\n ('Gamma', 30, 45)\n]",
"We create average power time courses for each frequency band",
"# set epoching parameters\nevent_id, tmin, tmax = 1, -1., 3.\nbaseline = None\n\n# get the header to extract events\nraw = mne.io.read_raw_fif(raw_fname, preload=False)\nevents = mne.find_events(raw, stim_channel='STI 014')\n\nfrequency_map = list()\n\nfor band, fmin, fmax in iter_freqs:\n # (re)load the data to save memory\n raw = mne.io.read_raw_fif(raw_fname, preload=True)\n raw.pick_types(meg='grad', eog=True) # we just look at gradiometers\n\n # bandpass filter and compute Hilbert\n raw.filter(fmin, fmax, n_jobs=1, # use more jobs to speed up.\n l_trans_bandwidth=1, # make sure filter params are the same\n h_trans_bandwidth=1, # in each band and skip \"auto\" option.\n fir_design='firwin')\n raw.apply_hilbert(n_jobs=1, envelope=False)\n\n epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=baseline,\n reject=dict(grad=4000e-13, eog=350e-6), preload=True)\n # remove evoked response and get analytic signal (envelope)\n epochs.subtract_evoked() # for this we need to construct new epochs.\n epochs = mne.EpochsArray(\n data=np.abs(epochs.get_data()), info=epochs.info, tmin=epochs.tmin)\n # now average and move on\n frequency_map.append(((band, fmin, fmax), epochs.average()))",
"Now we can compute the Global Field Power\nWe can track the emergence of spatial patterns compared to baseline\nfor each frequency band, with a bootstrapped confidence interval.\nWe see dominant responses in the Alpha and Beta bands.",
"fig, axes = plt.subplots(4, 1, figsize=(10, 7), sharex=True, sharey=True)\ncolors = plt.get_cmap('winter_r')(np.linspace(0, 1, 4))\nfor ((freq_name, fmin, fmax), average), color, ax in zip(\n frequency_map, colors, axes.ravel()[::-1]):\n times = average.times * 1e3\n gfp = np.sum(average.data ** 2, axis=0)\n gfp = mne.baseline.rescale(gfp, times, baseline=(None, 0))\n ax.plot(times, gfp, label=freq_name, color=color, linewidth=2.5)\n ax.axhline(0, linestyle='--', color='grey', linewidth=2)\n ci_low, ci_up = _bootstrap_ci(average.data, random_state=0,\n stat_fun=lambda x: np.sum(x ** 2, axis=0))\n ci_low = rescale(ci_low, average.times, baseline=(None, 0))\n ci_up = rescale(ci_up, average.times, baseline=(None, 0))\n ax.fill_between(times, gfp + ci_up, gfp - ci_low, color=color, alpha=0.3)\n ax.grid(True)\n ax.set_ylabel('GFP')\n ax.annotate('%s (%d-%dHz)' % (freq_name, fmin, fmax),\n xy=(0.95, 0.8),\n horizontalalignment='right',\n xycoords='axes fraction')\n ax.set_xlim(-1000, 3000)\n\naxes.ravel()[-1].set_xlabel('Time [ms]')"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dinrker/PredictiveModeling | Session 1 - Linear_Regression.ipynb | mit | [
"",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Goals of this Lesson\n\nPresent the fundamentals of Linear Regression for Prediction\nNotation and Framework\nGradient Descent for Linear Regression\nAdvantages and Issues\n\n\nClosed form Matrix Solutions for Linear Regression\nAdvantages and Issues\n\n\n\n\nDemonstrate Python \nExploratory Plotting\nSimple plotting with pyplot from matplotlib\n\n\nCode Gradient Descent\nCode Closed Form Matrix Solution\nPerform Linear Regression in scikit-learn\n\n\n\nReferences for Linear Regression\n\nElements of Statistical Learning by Hastie, Tibshriani, Friedman - Chapter 3 \nAlex Ihler's Course Notes on Linear Models for Regression - http://sli.ics.uci.edu/Classes/2015W-273a\nscikit-learn Documentation - http://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares\nLinear Regression Analysis By Seber and Lee - http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471415405,subjectCd-ST24.html\nApplied Linear Regression by Weisberg - http://onlinelibrary.wiley.com/book/10.1002/0471704091\nWikipedia - http://en.wikipedia.org/wiki/Linear_regression\n\nLinear Regression Notation and Framework\nLinear Regression is a supervised learning technique that is interested in predicting a response or target $\\mathbf{y}$, based on a linear combination of a set $D$ predictors or features, $\\mathbf{x}= (1, x_1,\\dots, x_D)$ such that,\n\\begin{equation}\ny = \\beta_0 + \\beta_1 x_1 + \\dots + \\beta_D x_D = \\mathbf{x_i}^T\\mathbf{\\beta}\n\\end{equation}\nData We Observe\n\\begin{eqnarray}\ny &:& \\mbox{response or target variable} \\\n\\mathbf{x} &:& \\mbox{set of $D$ predictor or explanatory variables } \\mathbf{x}^T = (1, x_1, \\dots, x_D) \n\\end{eqnarray}\n What We Are Trying to Learn\n\\begin{eqnarray}\n\\beta^T = (\\beta_0, \\beta_1, \\dots, \\beta_D) : \\mbox{Parameter values for a \"best\" prediction of } y \\rightarrow \\hat y\n\\end{eqnarray}\nOutcomes We are Trying to Predict\n\\begin{eqnarray}\n\\hat y : \\mbox{Prediction for the data that we observe}\n\\end{eqnarray}\nMatrix Notation\n\\begin{equation}\n\\mathbf{Y} = \\left( \\begin{array}{ccc}\ny_1 \\\ny_2 \\\n\\vdots \\\ny_i \\\n\\vdots \\\ny_N\n\\end{array} \\right)\n\\qquad\n\\mathbf{X} = \\left( \\begin{array}{ccc}\n1 & x_{1,1} & x_{1,2} & \\dots & x_{1,D} \\\n1 & x_{2,1} & x_{2,2} & \\dots & x_{2,D} \\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\n1 & x_{i,1} & x_{i,2} & \\dots & x_{i,D} \\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\n1 & x_{N,1} & x_{N,2} & \\dots & x_{N,D} \\\n\\end{array} \\right)\n\\qquad\n\\beta = \\left( \\begin{array}{ccc}\n\\beta_0 \\\n\\beta_1 \\\n\\vdots \\\n\\beta_j \\\n\\vdots \\\n\\beta_D\n\\end{array} \\right)\n\\end{equation}\nWhy is it called Linear Regression?\nIt is often asked, why is it called linear regression if we can use polynomial terms and other transformations as the predictors. That is \n\\begin{equation}\n y = \\beta_0 + \\beta_1 x_1 + \\beta_2 x_1^2 + \\beta_3 x_1^3 + \\beta_4 \\sin(x_1)\n\\end{equation}\nis still a linear regression, though it contains polynomial and trigonometric transformations of $x_1$. This is due to the fact that the term linear applies to the learned coefficients $\\beta$ and not the input features $\\mathbf{x}$. \n How can we Learn $\\beta$? \nLinear Regression can be thought of as an optimization problem where we want to minimize some loss function of the error between the prediction $\\hat y$ and the observed data $y$. 
\n\\begin{eqnarray}\n error_i &=& y_i - \\hat y_i \\\n &=& y_i - \\mathbf{x_i^T}\\beta\n\\end{eqnarray}\nLet's see what these errors look like...\nBelow we show a simulation where the observed $y$ was generated such that $y= 1 + 0.5 x + \\epsilon$ and $\\epsilon \\sim N(0,1)$. If we assume that know the truth that $y=1 + 0.5 x$, the red lines demonstrate the error (or residuals) between the observed and the truth.",
"#############################################################\n# Demonstration - What do Residuals Look Like\n#############################################################\n\nnp.random.seed(33) # Setting a seed allows reproducability of experiments\n\nbeta0 = 1 # Creating an intercept\nbeta1 = 0.5 # Creating a slope\n\n# Randomly sampling data points\nx_example = np.random.uniform(0,5,10)\ny_example = beta0 + beta1 * x_example + np.random.normal(0,1,10)\nline1 = beta0 + beta1 * np.arange(-1, 6)\n\nf = plt.figure()\nplt.scatter(x_example,y_example) # Plotting observed data\nplt.plot(np.arange(-1,6), line1) # Plotting the true line\nfor i, xi in enumerate(x_example):\n plt.vlines(xi, beta0 + beta1 * xi, y_example[i], colors='red') # Plotting Residual Lines\nplt.annotate('Error or \"residual\"', xy = (x_example[5], 2), xytext = (-1.5,2.1),\n arrowprops=dict(width=1,headwidth=7,facecolor='black', shrink=0.01))\nf.set_size_inches(10,5)\nplt.title('Errors in Linear Regression')\nplt.show()",
"Choosing a Loss Function to Optimize\nHistorically Linear Regression has been solved using the method of Least Squares where we are interested in minimizing the mean squared error loss function of the form:\n\\begin{eqnarray}\n Loss(\\beta) = MSE &=& \\frac{1}{N} \\sum_{i=1}^{N} (y_i - \\hat y_i)^2 \\\n &=& \\frac{1}{N} \\sum_{i=1}^{N} (y_i - \\mathbf{x_i^T}\\beta)^2 \\\n\\end{eqnarray}\nWhere $N$ is the total number of observations. Other loss functions can be used, but using mean squared error (also referred to sum of the squared residuals in other text) has very nice properities for closed form solutions. We will use this loss function for both gradient descent and to create a closed form matrix solution.\nBefore We Present Solutions for Linear Regression: Introducing a Baseball Dataset\nWe'll use this dataset to investigate Linear Regression. The dataset consists of 337 observations and 18 variables from the set of Major League Baseball players who played at least one game in both the 1991 and 1992\nseasons, excluding pitchers. The dataset contains the 1992 salaries for that population, along with performance measures for each player. Four categorical variables indicate how free each player was to move to other teams.\n Reference \n\nPay for Play: Are Baseball Salaries Based on Performance?\nhttp://www.amstat.org/publications/jse/v6n2/datasets.watnik.html\n\n\n\nFilename\n\n'baseball.dat.txt'.\n\nVariables\n\nSalary: Thousands of dollars\nAVG: Batting average\nOBP: On-base percentage\nRuns: Number of runs\nHits: Number of hits\nDoubles: Number of doubles\nTriples: Number of triples\nHR: Number of home runs\nRBI: Number of runs batted in\nWalks: Number of walks\nSO: Number of strike-outs\nSB: Number of stolen bases\nErrs: Number of errors\nfree agency eligibility: Indicator of \"free agency eligibility\"\nfree agent in 1991/2: Indicator of \"free agent in 1991/2\"\narbitration eligibility: Indicator of \"arbitration eligibility\"\narbitration in 1991/2: Indicator of \"arbitration in 1991/2\"\nName: Player's name (in quotation marks)\n\n What we will try to predict \nWe will attempt to predict the players salary based upon some predictor variables such as Hits, OBP, Walks, RBIs, etc. \nLoad The Data\nLoading data in python from csv files in python can be done by a few different ways. The numpy package has a function called 'genfromtxt' that can read csv files, while the pandas library has the 'read_csv' function. Remember that we have imported numpy and pandas as np and pd respectively at the top of this notebook. An example using pandas is as follows:\npd.read_csv(filename, **args)\n\nhttp://pandas.pydata.org/pandas-docs/dev/generated/pandas.io.parsers.read_csv.html\n<span style=\"color:red\">STUDENT ACTIVITY (2 MINS)</span>\nStudent Action - Load the 'baseball.dat.txt' file into a variable called 'baseball'. Then use baseball.head() to view the first few entries",
"#######################################################################\n# Student Action - Load the file 'baseball.dat.txt' using pd.read_csv()\n#######################################################################\nbaseball = pd.read_csv('data/baseball.dat.txt')",
"Crash Course: Plotting with Matplotlib\nAt the top of this notebook we have imported the the package pyplot as plt from the matplotlib library. matplotlib is a great package for creating simple plots in Python. Below is a link to their tutorial for basic plotting.\nTutorials\n\nhttp://matplotlib.org/users/pyplot_tutorial.html\nhttps://scipy-lectures.github.io/intro/matplotlib/matplotlib.html\n\nSimple Plotting\n\nStep 0: Import the packge pyplot from matplotlib for plotting \nimport matplotlib.pyplot as plt\n\n\nStep 1: Create a variable to store a new figure object\nfig = plt.figure()\n\n\nStep 2: Create the plot of your choice\nCommon Plots\nplt.plot(x,y) - A line plot\nplt.scatter(x,y) - Scatter Plots\nplt.hist(x) - Histogram of a variable\nExample Plots: http://matplotlib.org/gallery.html\n\n\n\n\nStep 3: Create labels for your plot for better interpretability\nX Label\nplt.xlabel('String')\n\n\nY Label\nplt.ylabel('String')\n\n\nTitle\nplt.title('String')\n\n\n\n\nStep 4: Change the figure size for better viewing within the iPython Notebook\nfig.set_size_inches(width, height)\n\n\nStep 5: Show the plot\nplt.show()\nThe above command allows the plot to be shown below the cell that you're currently in. This is made possible by the magic command %matplotlib inline. \n\n\n\n\nNOTE: This may not always be the best way to create plots, but it is a quick template to get you started.\n\nTransforming Variables\nWe'll talk more about numpy later, but to perform the logarithmic transformation use the command\n\nnp.log($array$)",
"#############################################################\n# Demonstration - Plot a Histogram of Hits \n#############################################################\nf = plt.figure()\nplt.hist(baseball['Hits'], bins=15)\nplt.xlabel('Number of Hits')\nplt.ylabel('Frequency')\nplt.title('Histogram of Number of Hits')\nf.set_size_inches(10, 5)\nplt.show()",
"<span style=\"color:red\">STUDENT ACTIVITY (7 MINS)</span>\nData Exploration - Investigating Variables\nWork in pairs to import the package matplotlib.pyplot, create the following two plots. \n\nA histogram of the $log(Salary)$\nhint: np.log()\n\n\na scatterplot of $log(Salary)$ vs $Hits$.",
"#############################################################\n# Student Action - import matplotlib.pylot \n# - Plot a Histogram of log(Salaries)\n#############################################################\n\nf = plt.figure()\nplt.hist(np.log(baseball['Salary']), bins = 15)\nplt.xlabel('log(Salaries)')\nplt.ylabel('Frequency')\nplt.title('Histogram of log Salaries')\nf.set_size_inches(10, 5)\nplt.show()\n\n#############################################################\n# Studdent Action - Plot a Scatter Plot of Salarie vs. Hitting\n#############################################################\n\nf = plt.figure()\nplt.scatter(baseball['Hits'], np.log(baseball['Salary']))\nplt.xlabel('Hits')\nplt.ylabel('log(Salaries)')\nplt.title('Scatter Plot of Salarie vs. Hitting')\nf.set_size_inches(10, 5)\nplt.show()",
"Gradient Descent for Linear Regression\nIn Linear Regression we are interested in optimizing our loss function $Loss(\\beta)$ to find the optimatal $\\beta$ such that \n\\begin{eqnarray}\n\\hat \\beta &=& \\arg \\min_{\\beta} \\frac{1}{N} \\sum_{i=1}^{N} (y_i - \\mathbf{x_i^T}\\beta)^2 \\\n&=& \\arg \\min_{\\beta} \\frac{1}{N} \\mathbf{(Y - X\\beta)^T (Y - X\\beta)} \\\n\\end{eqnarray}\nOne optimization technique called 'Gradient Descent' is useful for finding an optimal solution to this problem. Gradient descent is a first order optimization technique that attempts to find a local minimum of a function by updating its position by taking steps proportional to the negative gradient of the function at its current point. The gradient at the point indicates the direction of steepest ascent and is the best guess for which direction the algorithm should go. \nIf we consider $\\theta$ to be some parameters we are interested in optimizing, $L(\\theta)$ to be our loss function, and $\\alpha$ to be our step size proportionality, then we have the following algorithm:\n\nAlgorithm - Gradient Descent\n\nInitialize $\\theta$\nUntil $\\alpha || \\nabla L(\\theta) || < tol $:\n$\\theta^{(t+1)} = \\theta^{(t)} - \\alpha \\nabla_{\\theta} L(\\theta^{(t)})$\n\n\n\n\nFor our problem at hand, we therefore need to find $\\nabla L(\\beta)$. The deriviative of $L(\\beta)$ due to the $j^{th}$ feature is:\n\\begin{eqnarray}\n \\frac{\\partial L(\\beta)}{\\partial \\beta_j} = -\\frac{2}{N}\\sum_{i=1}^{N} (y_i - \\mathbf{x_i^T}\\beta)\\cdot{x_{i,j}}\n\\end{eqnarray}\nIn matrix notation this can be written:\n\\begin{eqnarray}\nLoss(\\beta) &=& \\frac{1}{N}\\mathbf{(Y - X\\beta)^T (Y - X\\beta)} \\\n&=& \\frac{1}{N}\\mathbf{(Y^TY} - 2 \\mathbf{\\beta^T X^T Y + \\beta^T X^T X\\beta)} \\\n\\nabla_{\\beta} L(\\beta) &=& \\frac{1}{N} (-2 \\mathbf{X^T Y} + 2 \\mathbf{X^T X \\beta)} \\\n&=& -\\frac{2}{N} \\mathbf{X^T (Y - X \\beta)} \\\n\\end{eqnarray}\n<span style=\"color:red\">STUDENT ACTIVITY (7 MINS)</span>\nCreate a function that returns the gradient of $L(\\beta)$",
"###################################################################\n# Student Action - Programming the Gradient\n###################################################################\n\ndef gradient(X, y, betas):\n #****************************\n # Your code here!\n return -2.0/len(X)*np.dot(X.T, y - np.dot(X, betas))\n #****************************\n \n\n#########################################################\n# Testing your gradient function\n#########################################################\nnp.random.seed(33)\nX = pd.DataFrame({'ones':1, \n 'X1':np.random.uniform(0,1,50)})\ny = np.random.normal(0,1,50)\nbetas = np.array([-1,4])\ngrad_expected = np.array([ 2.98018138, 7.09758971])\ngrad = gradient(X,y,betas)\ntry:\n np.testing.assert_almost_equal(grad, grad_expected)\n print \"Test Passed!\"\nexcept AssertionError:\n print \"*******************************************\"\n print \"ERROR: Something isn't right... Try Again!\"\n print \"*******************************************\"\n",
"<span style=\"color:red\">STUDENT ACTIVITY (15 MINS)</span>\n Student Action - Use your Gradient Function to complete the Gradient Descent for the Baseball Dataset\nCode Gradient Descent Here\nWe have set-up the all necessary matrices and starting values. In the designated section below code the algorithm from the previous section above.",
"# Setting up our matrices \nY = np.log(baseball['Salary'])\nN = len(Y)\nX = pd.DataFrame({'ones' : np.ones(N), \n 'Hits' : baseball['Hits']})\np = len(X.columns)\n\n# Initializing the beta vector \nbetas = np.array([0.015,5.13])\n\n# Initializing Alpha\nalph = 0.00001\n\n# Setting a tolerance \ntol = 1e-8\n\n###################################################################\n# Student Action - Programming the Gradient Descent Algorithm Below\n###################################################################\n\nniter = 1.\nwhile (alph*np.linalg.norm(gradient(X,Y,betas)) > tol) and (niter < 20000):\n #****************************\n # Your code here!\n betas -= alph*gradient(X, Y, betas)\n niter += 1\n \n #****************************\n\nprint niter, betas\n\ntry:\n beta_expected = np.array([ 0.01513772, 5.13000121])\n np.testing.assert_almost_equal(betas, beta_expected)\n print \"Test Passed!\"\nexcept AssertionError:\n print \"*******************************************\"\n print \"ERROR: Something isn't right... Try Again!\"\n print \"*******************************************\"\n",
"Comments on Gradient Descent\n\nAdvantage: Very General Algorithm $\\rightarrow$ Gradient Descent and its variants are used throughout Machine Learning and Statistics\nDisadvantage: Highly Sensitive to Initial Starting Conditions\nNot gauranteed to find the global optima\n\n\nDisadvantage: How do you choose step size $\\alpha$?\nToo small $\\rightarrow$ May never find the minima\nToo large $\\rightarrow$ May step past the minima\nCan we fix it?\nAdaptive step sizes\nNewton's Method for Optimization\nhttp://en.wikipedia.org/wiki/Newton%27s_method_in_optimization\n\n\nEach correction obviously comes with it's own computational considerations.\n\n\n\n\n\nSee the Supplementary Material for any help necessary with scripting this in Python.\nVisualizing Gradient Descent to Understand its Limitations\nLet's try to find the value of $X$ that maximizes the following function:\n\\begin{equation}\n f(x) = w \\times \\frac{1}{\\sqrt{2\\pi \\sigma_1^2}} \\exp \\left( - \\frac{(x-\\mu_1)^2}{2\\sigma_1^2}\\right) + (1-w) \\times \\frac{1}{\\sqrt{2\\pi \\sigma_2^2}} \\exp \\left( - \\frac{(x-\\mu_2)^2}{2\\sigma_2^2}\\right)\n\\end{equation}\nwhere $w=0.3$, $\\mu_1 = 3, \\sigma_1^2=1$ and $\\mu_2 = -1, \\sigma_2^2=0.5$\nLet's visualize this function",
"x1 = np.arange(-10, 15, 0.05)\nmu1 = 6.5 \nvar1 = 3\nmu2 = -1\nvar2 = 10\nweight = 0.3\ndef mixed_normal_distribution(x, mu1, var1, mu2, var2):\n pdf1 = np.exp( - (x - mu1)**2 / (2*var1) ) / np.sqrt(2 * np.pi * var1)\n pdf2 = np.exp( - (x - mu2)**2 / (2*var2) ) / np.sqrt(2 * np.pi * var2)\n return weight * pdf1 + (1-weight )*pdf2\n\npdf = mixed_normal_distribution(x1, mu1, var1, mu2, var2)\nfig = plt.figure()\nplt.plot(x1, pdf)\nfig.set_size_inches([10,5])\nplt.show()",
"Now let's show visualize happens for different starting conditions and different step sizes",
"def mixed_gradient(x, mu1, var1, mu2, var2):\n grad_pdf1 = np.exp( - (x - mu1)**2 / (2*var1) ) * ((x-mu1)/var1) / np.sqrt(2 * np.pi * var1)\n grad_pdf2 = np.exp( - (x - mu2)**2 / (2*var2) ) * ((x-mu2)/var2) / np.sqrt(2 * np.pi * var2)\n return weight * grad_pdf1 + (1-weight)*grad_pdf2\n\n# Initialize X\nx = 3.25\n# Initializing Alpha\nalph = 5\n# Setting a tolerance \ntol = 1e-8\nniter = 1.\nresults = []\nwhile (alph*np.linalg.norm(mixed_gradient(x, mu1, var1, mu2, var2)) > tol) and (niter < 500000):\n #****************************\n results.append(x)\n x = x - alph * mixed_gradient(x, mu1, var1, mu2, var2)\n niter += 1\n \n #****************************\nprint x, niter\n\nif niter < 500000:\n exes = mixed_normal_distribution(np.array(results), mu1, var1, mu2, var2)\n fig = plt.figure()\n plt.plot(x1, pdf)\n plt.plot(results, exes, color='red', marker='x')\n plt.ylim([0,0.1])\n fig.set_size_inches([20,10])\n plt.show()",
"Linear Regression Matrix Solution\nFrom the last section, you may have recognized that we could actually solve for $\\beta$ directly. \n\\begin{eqnarray}\nLoss(\\beta) &=& \\frac{1}{N}\\mathbf{(Y - X\\beta)^T (Y - X\\beta)} \\\n\\nabla_{\\beta} L(\\beta) &=& \\frac{1}{N} (-2 \\mathbf{X^T Y} + 2 \\mathbf{X^T X \\beta}) \\\n\\end{eqnarray}\nSetting to zero\n\\begin{eqnarray}\n-2 \\mathbf{X^T Y} + 2 \\mathbf{X^T X} \\beta &=& 0 \\\n\\mathbf{X^T X \\beta} &=& \\mathbf{X^T Y} \\\n\\end{eqnarray}\nIf we assume that the columns $X$ are linearly independent then\n\\begin{eqnarray}\n \\hat \\beta &=& \\mathbf{(X^T X)^{-1}X^T Y} \\\n\\end{eqnarray}\nThis is called the Ordinary Least Squares (OLS) Estimator \n<span style=\"color:red\">STUDENT ACTIVITY (10 MINS)</span>\n_ Student Action - Solve for $\\hat \\beta$ directly using OLS on the Baseball Dataset - 10 mins _\n\nReview the Supplementary Materials for help with Linear Algebra",
"# Setting up our matrices \ny = np.log(baseball['Salary'])\nN = len(Y)\nX = pd.DataFrame({'ones' : np.ones(N), \n 'Hits' : baseball['Hits']})\n\n#############################################################\n# Student Action - Program a closed form solution for \n# Linear Regression. Compare with Gradient\n# Descent.\n#############################################################\n\ndef solve_linear_regression(X, y):\n #****************************\n return np.dot(np.linalg.inv(np.dot(X.T, X)), np.dot(X.T, y))\n \n #****************************\n\nbetas = solve_linear_regression(X,y)\n\ntry:\n beta_expected = np.array([ 0.01513353, 5.13051682])\n np.testing.assert_almost_equal(betas, beta_expected)\n print \"Betas: \", betas\n print \"Test Passed!\"\nexcept AssertionError:\n print \"*******************************************\"\n print \"ERROR: Something isn't right... Try Again!\"\n print \"*******************************************\"",
"Comments on solving the loss function directly \n\nAdvantage: Simple solution to code \nDisadvantage: The Design Matrix must be Full Rank to invert\nCan be corrected with a Generalized Inverse Solution\n\n\n\nDisadvantage: Inverting a Matrix can be a computational expensive operation\n\nIf we have a design matrix that has $N$ observations and $D$ predictors, then X is $(N\\times D)$ it follows then that\n\n\\begin{eqnarray}\n \\mathbf{X^TX} \\mbox{ is of size } (D \\times N) \\times (N \\times D) = (D \\times D) \\\n\\end{eqnarray}\n\nIf a matrix is of size $(D\\times D)$, the computational cost of inverting it is $O(D^3)$. \nThus inverting a matrix is directly related to the number of predictors that are included in the analysis. \n\n\n\nSci-Kit Learn Linear Regression\nAs we've shown in the previous two exercises, when coding these algorithms ourselves, we must consider many things such as selecting step sizes, considering the computational cost of inverting matrices. For many applications though, packages have been created that have taken into consideration many of these parameter selections. We now turn our attention to the Python package for Machine Learning called 'scikit-learn'. \n\nhttp://scikit-learn.org/stable/\n\nIncluded is the documentation for the scikit-learn implementation of Ordinary Least Squares from their linear models package\n\n\nGeneralized Linear Models Documentation: \n\nhttp://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares\n\n\n\nLinearRegression Class Documentation: \n\nhttp://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression\n\n\n\nFrom this we that we'll need to import the module linear_model using the following:\nfrom sklearn import linear_model\n\nLet's examine an example using the LinearRegression class from scikit-learn. We'll continue with the simulated data from the beginning of the exercise.\nExample using the variables from the Residual Example\n Notes \n\nCalling linear_model.LinearRegression() creates an object of class sklearn.linear_model.base.LinearRegression\nDefaults \nfit_intercept = True: automatically adds a column vector of ones for an intercept\nnormalize = False: defaults to not normalizing the input predictors\ncopy_X = False: defaults to not copying X\nn_jobs = 1: The number of jobs to use for the computation. If -1 all CPUs are used. This will only provide speedup for n_targets > 1 and sufficient large problems.\n\n\nExample\n`lmr = linear_model.LinearRegression()\n\n\n\n\nTo fit a model, the method .fit(X,y) can be used\nX must be a column vector for scikit-learn\nThis can be accomplished by creating a DataFrame using pd.DataFrame()\n\n\nExample\nlmr.fit(X,y)\n\n\n\n\nTo predict out of sample values, the method .predict(X) can be used\nTo see the $\\beta$ estimates use .coef_ for the coefficients for the predictors and .intercept for $\\beta_0$",
"#############################################################\n# Demonstration - scikit-learn with Regression Example\n#############################################################\n\nfrom sklearn import linear_model\n\nlmr = linear_model.LinearRegression()\nlmr.fit(pd.DataFrame(x_example), pd.DataFrame(y_example))\n\nxTest = pd.DataFrame(np.arange(-1,6))\nyHat = lmr.predict(xTest)\n\nf = plt.figure()\nplt.scatter(x_example, y_example)\np1, = plt.plot(np.arange(-1,6), line1)\np2, = plt.plot(xTest, yHat)\nplt.legend([p1, p2], ['y = 1 + 0.5x', 'OLS Estimate'], loc=2)\nf.set_size_inches(10,5)\nplt.show()\n\nprint lmr.coef_, lmr.intercept_",
"<span style=\"color:red\">STUDENT ACTIVITY (15 MINS)</span>\nFinal Student Task\nProgramming Linear Regression using the scikit-learn method. For the ambitious students, plot all results on one plot.",
"#######################################################################\n# Student Action - Use scikit-learn to calculate the beta coefficients\n#\n# Note: You no longer need the intercept column in your X matrix for \n# sci-kit Learn. It will add that column automatically.\n#######################################################################\n\nlmr2 = linear_model.LinearRegression(fit_intercept=True)\nlmr2.fit(pd.DataFrame(baseball['Hits']), np.log(baseball['Salary']))\n\nxtest = np.arange(0,200)\nytest = lmr2.intercept_ + lmr2.coef_*xtest\n\nf = plt.figure()\nplt.scatter(baseball['Hits'], np.log(baseball['Salary']))\nplt.plot(xtest, ytest, color='r', linewidth=3)\nf.set_size_inches(10,5)\nplt.show()\nprint lmr2.coef_, lmr2.intercept_",
"Linear Regression in the Real World\nIn the real world, Linear Regression for predictive modeling doesn't end once you've fit the model. Models are often fit and used to predict user behavior, used to quantify business metrics, or sometimes used to identify cats faces for internet points. In that pursuit, it isn't really interesting to fit a model and assess its performance on data that has already been observed. The real interest lies in how it predicts future observations!\nOften times then, we may be susceptible to creating a model that is perfected for our observed data, but that does not generalize well to new data. In order to assess how we perform to new data, we can score the model on both the old and new data, and compare the models performance with the hope that the it generalizes well to the new data. After lunch we'll introduce some techniques and other methods to better our chances of performing well on new data. \nBefore we break for lunch though, let's take a look at a simulated dataset to see what we mean...\nSituation\nImagine that last year a talent management company managed 400 celebrities and tracked how popular they were within the public eye, as well various predictors for that metric. The company is now interested in managing a few new celebrities, but wants to sign those stars that are above a certain 'popularity' threshold to maintain their image.\nOur job is to predict how popular each new celebrity will be over the course of the coming year so that we make that best decision about who to manage. For this analysis we'll use a function l2_error to compare our errors on a training set, and on a test set of celebrity data.\nThe variable celeb_data_old represents things we know about the previous batch of celebrities. Each row represents one celeb. Each column represents some tangible measure about them -- their age at the time, number of Twitter followers, voice squeakiness, etc. The specifics of what each column represents aren't important.\nSimilarly, popularity_score_old is a previous measure of the celebrities popularity.\nFinally, celeb_data_new represents the same information that we had from celeb_data_old but for the new batch of internet wonders that we're considering.\nHow can we predict how popular the NEW batch of celebrities will be ahead of time so that we can decide who to sign? And are these estimates stable from year to year?",
"with np.load('data/mystery_data_old.npz') as data:\n celeb_data_old = data['celeb_data_old']\n popularity_old = data['popularity_old']\n celeb_data_new = data['celeb_data_new']\n\nlmr3 = linear_model.LinearRegression()\nlmr3.fit(celeb_data_old, popularity_old)\npredicted_popularity_old = lmr3.predict(celeb_data_old)\npredicted_popularity_new = lmr3.predict(celeb_data_new)\n\ndef l2_error(y_true, y_pred):\n \"\"\"\n calculate the sum of squared errors (i.e. \"L2 error\") \n given a vector of true ys and a vector of predicted ys\n \"\"\"\n diff = (y_true-y_pred)\n return np.sqrt(np.dot(diff, diff))\n\nprint \"Predicted L2 Error:\", l2_error(popularity_old, predicted_popularity_old)",
"Checking How We Did\nAt the end of the year, we tally up the popularity numbers for each celeb and check how well we did on our predictions.",
"with np.load('data/mystery_data_new.npz') as data:\n popularity_new = data['popularity_new']\n\nprint \"Predicted L2 Error:\", l2_error(popularity_new, predicted_popularity_new)",
"Something's not right... our model seems to be performing worse on this data! Our model performed so well on last year's data, why didn't it work on the data from this year?"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
cathalmccabe/PYNQ | boards/Pynq-Z2/logictools/notebooks/pattern_generator_and_trace_analyzer.ipynb | bsd-3-clause | [
"Pattern Generator and Trace Analyzer\nThis notebook will show how to use the Pattern Generator to generate patterns on I/O pins. The pattern that will be generated is 3-bit up count performed 4 times. \nStep 1: Download the logictools overlay",
"from pynq.overlays.logictools import LogicToolsOverlay\n\nlogictools_olay = LogicToolsOverlay('logictools.bit')",
"Step 2: Create WaveJSON waveform\nThe pattern to be generated is specified in the waveJSON format \nThe pattern is applied to the Arduino interface, pins D0, D1 and D2 are set to generate a 3-bit count. \nTo check the generated pattern we loop them back to pins D19, D18 and D17 respectively and use the the trace analyzer to view the loopback signals\nThe Waveform class is used to display the specified waveform.",
"from pynq.lib.logictools import Waveform\n\nup_counter = {'signal': [\n ['stimulus',\n {'name': 'bit0', 'pin': 'D0', 'wave': 'lh' * 8},\n {'name': 'bit1', 'pin': 'D1', 'wave': 'l.h.' * 4},\n {'name': 'bit2', 'pin': 'D2', 'wave': 'l...h...' * 2}], \n \n ['analysis',\n {'name': 'bit2_loopback', 'pin': 'D17'},\n {'name': 'bit1_loopback', 'pin': 'D18'},\n {'name': 'bit0_loopback', 'pin': 'D19'}]], \n\n 'foot': {'tock': 1},\n 'head': {'text': 'up_counter'}}\n\nwaveform = Waveform(up_counter)\nwaveform.display()",
"Note: Since there are no captured samples at this moment, the analysis group will be empty.\nStep 3: Instantiate the pattern generator and trace analyzer objects\nUsers can choose whether to use the trace analyzer by calling the trace() method. \nThe analyzer can be set to trace a specific number of samples using, num_analyzer_samples argument.",
"pattern_generator = logictools_olay.pattern_generator\npattern_generator.trace(num_analyzer_samples=16)",
"Step 4: Setup the pattern generator\nThe pattern generator will work at the default frequency of 10MHz. This can be modified using a frequency argument in the setup() method.",
"pattern_generator.setup(up_counter,\n stimulus_group_name='stimulus',\n analysis_group_name='analysis')",
"Set the loopback connections using jumper wires on the Arduino Interface\n\n\nOutput pins D0, D1 and D2 are connected to pins D19, D18 and D17 respectively \nLoopback/Input pins D19, D18 and D17 are observed using the trace analyzer as shown below\nAfter setup, the pattern generator should be ready to run\n\nNote: Make sure all other pins are disconnected.\nStep 5: Run and display waveform\nThe run() method will execute all the samples, show_waveform() method is used to display the waveforms. \nAlternatively, we can also use step() method to single step the pattern.",
"pattern_generator.run()\npattern_generator.show_waveform()",
"Step 6: Stop the pattern generator\nCalling stop() will clear the logic values on output pins; however, the waveform will be recorded locally in the pattern generator instance.",
"pattern_generator.stop()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
batfish/pybatfish | jupyter_notebooks/Pandas Examples.ipynb | apache-2.0 | [
"Pandas Examples\nBatfish questions can return a huge amount of data, which you may want to filter in various ways based on your task. While most Batfish questions support basic filtering, they may not support your desired filtering criteria. Further, for performance, you may want to fetch the answer once and filter it using multiple different criteria. These scenarios are where Pandas-based filtering can help. \nBatfish answers can be easily turned into a Pandas DataFrame (using .frame()), after which you can use the full power of Pandas to filter and manipulate data. This notebook provides a few examples of common manipulations for Batfish. It is not intended as a complete guide of Pandas data manipulation.\n \nLet's first initialize a snapshot that we will use in our examples.",
"# Import packages\n%run startup.py\nbf = Session(host=\"localhost\")\n\n# Initialize a network and a snapshot\nbf.set_network(\"pandas-example\")\n\nSNAPSHOT_NAME = \"snapshot\"\nSNAPSHOT_PATH = \"networks/hybrid-cloud/\"\nbf.init_snapshot(SNAPSHOT_PATH, name=SNAPSHOT_NAME, overwrite=True)",
"Filtering initIssues\nAfter initializing the snapshot, you often want to look at the <code>initIssues</code> answer. If there are too many issues, you may want to ignore a particular class of issues. We show below how to do that.",
"# Lets get the initIssues for our snapshot\nissues = bf.q.initIssues().answer().frame()\nissues\n\n# Ignore all issues whose Line_Text contain one of these as a substring\nline_texts_to_ignore = [\"transceiver\"]\n\n\ndef has_substring(text: Optional[str], substrings: List[str]) -> bool:\n \"\"\"Returns True if 'text' is not None and contains one of the 'substrings'\"\"\"\n return text is not None and any(substr in text for substr in substrings)\n\n\nissues[\n issues.apply(\n lambda issue: not has_substring(issue[\"Line_Text\"], line_texts_to_ignore),\n axis=1,\n )\n]",
"In the code above, we are using the Pandas method <code>apply</code> to map issues to a binary array based on whether the issue has one of the substrings in line_texts_to_ignore. Passing axis=1 makes apply iterate over rows instead of columns. The helper method has_substring makes this determination. It returns True if text is not None and has any of the substrings. The Python method <code>any</code> returns True if any element of the input iterable is True. Using the binary array as a filter for issues produces rows that match our criterion. \nInstead of ignoring some issues, you may want to focus on issues that match a certain criteria. That too can be easily accomplished, as follows.",
"# Only show issues whose details match these substrings\nfocus_details = [\"Unrecognized element 'ServiceDetails' in AWS\"]\n\nissues[\n issues.apply(lambda issue: has_substring(issue[\"Details\"], focus_details), axis=1)\n]",
"The code above is similar to the one we used earlier, with the only differences being that we use the focus_details list as the argument to the has_substrings helper and we do not invert its result.\nFiltering objects",
"# Fetch interface properties and display its first five rows\ninterfaces = bf.q.interfaceProperties().answer().frame()\ninterfaces.head(5)",
"To filter based on a column, we need to know its data type. We can learn that in the Batfish documentation or by inspecting the answer we got from Batfish (e.g., using Python's type() method).\nWe show three examples of filtering based on the Interface and Active columns, which are of type <code>pybatfish.datamodel.primitives.Interface</code> and bool, respectively. The former has hostname and interface properties (which are strings).",
"# Display all interfaces on node 'exitgw'\ninterfaces[interfaces.apply(lambda row: row[\"Interface\"].hostname == \"exitgw\", axis=1)]\n\n# Display all GigabitEthernet interfaces on node 'exitgw'\ninterfaces[\n interfaces.apply(\n lambda row: row[\"Interface\"].hostname == \"exitgw\"\n and row[\"Interface\"].interface.startswith(\"GigabitEthernet\"),\n axis=1,\n )\n]\n\n# Display all active GigabitEthernet interfaces on node 'exitgw'\ninterfaces[\n interfaces.apply(\n lambda row: row[\"Interface\"].hostname == \"exitgw\"\n and row[\"Interface\"].interface.startswith(\"GigabitEthernet\")\n and row[\"Active\"],\n axis=1,\n )\n]",
"Filtering columns\nWhen viewing Batfish answers, you may want to view only some of the columns. Pandas makes that easy for both original answers and answers where some rows have been filtered, as both of them are just DataFrames.",
"# Filter interfaces to all active GigabitEthernet interfaces on node exitgw\nexitgw_gige_active_interfaces = interfaces[\n interfaces.apply(\n lambda row: row[\"Interface\"].hostname == \"exitgw\"\n and row[\"Interface\"].interface.startswith(\"GigabitEthernet\")\n and row[\"Active\"],\n axis=1,\n )\n]\n# Display only the Interface and All_Prefixes columns of the filtered DataFrame\nexitgw_gige_active_interfaces[[\"Interface\", \"All_Prefixes\"]]",
"Counting rows\nOften, you would be interested in counting the number of rows in the filtered answer. This is super easy because Python's len() method, which we use for iterables, can be used on DataFrames as well.",
"# Show the number of rows in the filtered DataFrame that we obtained above\nlen(exitgw_gige_active_interfaces)",
"Grouping rows\nFor more advanced operations than filtering rows and columns, chances are that you will find Pandas <code>groupyby</code> pretty handy. This method enables you to group rows using a custom criteria and analyze those groups. For instance, if you wanted to group interfaces by nodes, you may do the following:",
"# Get interfaces grouped by node name\nintefaces_by_hostname = interfaces.groupby(\n lambda index: interfaces.loc[index][\"Interface\"].hostname\n)",
"We obtained a Pandas DataFrameGroupBy object above. The groupby method iterates over row indexes (apply iterated over rows), calls the lambda over each, and groups rows whose indices yield the same value. In our example, the lambda first gets the row using interfaces.loc[index], then gets the interface (which is of type pybatfish.datamodel.primitives.Interface), and finally the hostname. \nDataFrameGroupBy objects offer many functions that are useful for analysis. We demonstrate two of them below.",
"# Display the rows corresponding to node 'exitgw' group\nintefaces_by_hostname.get_group(\"exitgw\")",
"Here, we used the <code>get_group</code> method to get all information for 'exitgw', thus viewing all interfaces for that node. This is possible using row filtering as well, but we can do other things that are not, such as:",
"# Display the number of interfaces per node\nintefaces_by_hostname.count()[[\"Interface\"]]",
"In this example, we used the <code>count</code> method, which counts non-null entries for each column in the group. We then filtered by the Interface column to see interfaces per node.\nSummary\nIn this notebook, we showed how you can use Pandas methods to manipulate Batfish answers, including filtering rows, filtering columns, and grouping rows. Hopefully, these examples help you get started with your analyses. Find us on Slack (link below) if you have questions.\n\nGet involved with the Batfish community\nJoin our community on Slack and GitHub."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
phobson/statsmodels | examples/notebooks/glm_formula.ipynb | bsd-3-clause | [
"Generalized Linear Models (Formula)\nThis notebook illustrates how you can use R-style formulas to fit Generalized Linear Models.\nTo begin, we load the Star98 dataset and we construct a formula and pre-process the data:",
"from __future__ import print_function\nimport statsmodels.api as sm\nimport statsmodels.formula.api as smf\nstar98 = sm.datasets.star98.load_pandas().data\nformula = 'SUCCESS ~ LOWINC + PERASIAN + PERBLACK + PERHISP + PCTCHRT + \\\n PCTYRRND + PERMINTE*AVYRSEXP*AVSALK + PERSPENK*PTRATIO*PCTAF'\ndta = star98[['NABOVE', 'NBELOW', 'LOWINC', 'PERASIAN', 'PERBLACK', 'PERHISP',\n 'PCTCHRT', 'PCTYRRND', 'PERMINTE', 'AVYRSEXP', 'AVSALK',\n 'PERSPENK', 'PTRATIO', 'PCTAF']]\nendog = dta['NABOVE'] / (dta['NABOVE'] + dta.pop('NBELOW'))\ndel dta['NABOVE']\ndta['SUCCESS'] = endog",
"Then, we fit the GLM model:",
"mod1 = smf.glm(formula=formula, data=dta, family=sm.families.Binomial()).fit()\nmod1.summary()",
"Finally, we define a function to operate customized data transformation using the formula framework:",
"def double_it(x):\n return 2 * x\nformula = 'SUCCESS ~ double_it(LOWINC) + PERASIAN + PERBLACK + PERHISP + PCTCHRT + \\\n PCTYRRND + PERMINTE*AVYRSEXP*AVSALK + PERSPENK*PTRATIO*PCTAF'\nmod2 = smf.glm(formula=formula, data=dta, family=sm.families.Binomial()).fit()\nmod2.summary()",
"As expected, the coefficient for double_it(LOWINC) in the second model is half the size of the LOWINC coefficient from the first model:",
"print(mod1.params[1])\nprint(mod2.params[1] * 2)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
numenta/nupic.research | projects/archive/dynamic_sparse/notebooks/ExperimentAnalysis-TestRestoration.ipynb | agpl-3.0 | [
"Experiment: test_restoration\nEvaluate if restoration affected existing capabilities. Comparing two approaches to calculate coactivations to see if they are getting to the same values.\nConclusion",
"%load_ext autoreload\n%autoreload 2\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport glob\nimport tabulate\nimport pprint\nimport click\nimport numpy as np\nimport pandas as pd\nfrom ray.tune.commands import *\nfrom nupic.research.frameworks.dynamic_sparse.common.browser import *\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom matplotlib import rcParams\n\n%config InlineBackend.figure_format = 'retina'\n\nimport seaborn as sns\nsns.set(style=\"whitegrid\")\nsns.set_palette(\"colorblind\")",
"Load and check data",
"exps = ['test_restoration_5']\npaths = [os.path.expanduser(\"~/nta/results/{}\".format(e)) for e in exps]\ndf = load_many(paths)\n\ndf.head(5)\n\n# replace hebbian prine\n# df['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)\n# df['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)\n\ndf.columns\n\ndf.shape\n\ndf.iloc[1]\n\ndf.groupby('model')['model'].count()",
"## Analysis\nExperiment Details",
"num_epochs=100\n\n# Did any trials failed?\ndf[df[\"epochs\"]<num_epochs][\"epochs\"].count()\n\n# Removing failed or incomplete trials\ndf_origin = df.copy()\ndf = df_origin[df_origin[\"epochs\"]>=num_epochs]\ndf.shape\n\n# which ones failed?\n# failed, or still ongoing?\ndf_origin['failed'] = df_origin[\"epochs\"]<num_epochs\ndf_origin[df_origin['failed']]['epochs']\n\n# helper functions\ndef mean_and_std(s):\n return \"{:.3f} ± {:.3f}\".format(s.mean(), s.std())\n\ndef round_mean(s):\n return \"{:.0f}\".format(round(s.mean()))\n\nstats = ['min', 'max', 'mean', 'std']\n\ndef agg(columns, filter=None, round=3):\n if filter is None:\n return (df.groupby(columns)\n .agg({'val_acc_max_epoch': round_mean,\n 'val_acc_max': stats, \n 'model': ['count']})).round(round)\n else:\n return (df[filter].groupby(columns)\n .agg({'val_acc_max_epoch': round_mean,\n 'val_acc_max': stats, \n 'model': ['count']})).round(round)\n",
"Does improved weight pruning outperforms regular SET",
"agg(['on_perc', 'network'])"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kylepjohnson/notebooks | fluent_python/Chapter 2, An Array of Sequences.ipynb | mit | [
"Generator expressions",
"symbols = '$#%^&'\n[ord(s) for s in symbols]\n\ntuple(ord(s) for s in symbols)\n\n(ord(s) for s in symbols)\n\nfor x in (ord(s) for s in symbols):\n print(x)\n\nimport array\narray.array('I', (ord(s) for s in symbols))\n\ncolors = ['black', 'white']\nsizes = ['S', 'M', 'L']\nfor tshirt in ((c, s) for c in colors for s in sizes):\n print(tshirt)\n\nfor tshirt in ('%s %s' % (c, s) for c in colors for s in sizes):\n print(tshirt)",
"Tuples as Records",
"lax_coordinates = (33.9425, -118.408056)\ncity, year, pop, chg, area = ('Tokyo', 2003, 32450, 0.66, 8014)\ntraveler_ids = [('USA', '31195855'), ('BRA', 'CE342567'), ('ESP', 'XDA205856')]\n\nfor passport in sorted(traveler_ids):\n print('%s/%s' % passport)\n\nfor country, _ in traveler_ids:\n print(country)",
"Tuple Unpacking",
"import os\n_, filename = os.path.split('/home/kyle/afile.txt')\nprint(filename)\n\na, b, *rest = range(5)\n\na, b, rest\n\na, b, *rest = range(3)\na, b, rest\n\na, b, *rest = range(2)\na, b, rest\n\na, *body, c, d = range(5)\na, body, c, d\n\n*head, b, c, d = range(5)\nhead, b, c, d\n\nmetro_areas = [('Tokyo','JP',36.933,(35.689722,139.691667)),\n ('Delhi NCR', 'IN', 21.935, (28.613889, 77.208889)),\n ('Mexico City', 'MX', 20.142, (19.433333, -99.133333)),\n ('New York-Newark', 'US', 20.104, (40.808611, -74.020386)),\n ('Sao Paulo', 'BR', 19.649, (-23.547778, -46.635833)),\n ]\n\nprint('{:15} | {:^9} | {:^9}'.format('', 'lat.', 'long.'))\n\nfmt = '{:15} | {:9.4f} | {:9.4f}'\n\nfmt\n\nfor name, cc, pop, (latitude, longitude) in metro_areas:\n if longitude <= 0:\n print(fmt.format(name, latitude, longitude))",
"Named tuples",
"from collections import namedtuple\n\nCity = namedtuple('City', 'name country population coordinates')\n\ntokyo = City('Tokyo', 'JP', 36.933, (35.689722, 139.691667))\n\ntokyo\n\ntokyo.population\n\ntokyo.name\n\ntokyo.coordinates\n\ntokyo[1]\n\n# a few useful methods on namedtuple\nCity._fields\n\nLatLong = namedtuple('LatLong', 'lat long')\ndelhi_data = ('Delhi NCR', 'IN', 21.935, LatLong(28.613889, 77.208889))\ndelhi = City._make(delhi_data) # instantiate a named tuple from an iterable\n\ndelhi._asdict()\n\nfor key, value in delhi._asdict().items():\n print(key + ':', value)",
"Slicing",
"# why slices and range exclude the last item\n\nl = [10,20,30,40,50,60]\nl[:2]\n\nl[2:]\n\n# slice objects\ns = 'bicycle'\ns[::3]\n\ns[::-1]\n\ns[::-2]\n\ninvoice = \"\"\"\n0.....6.................................40........52...55........\n1909 Pimoroni PiBrella $17.50 3 $52.50\n1489 6mm Tactile Switch x20 $4.95 2 $9.90\n1510 Panavise Jr. - PV-201 $28.00 1 $28.00\n1601 PiTFT Mini Kit 320x240 $34.95 1 $34.95\n\"\"\"\n\nSKU = slice(0,6)\nDESCRIPTION = slice(6, 40)\nUNIT_PRICE = slice(40, 52)\nQUANTITY = slice(52, 55)\nITEM_TOTAL = slice(55, None)\n\nline_items = invoice.split('\\n')[2:]\nfor item in line_items:\n print(item[UNIT_PRICE], item[DESCRIPTION])",
"Assigning to Slices",
"l = list(range(10))\nl\n\nl[2:5] = [20, 30]\nl\n\ndel l[5:7]\nl\n\nl[3::2] = [11, 22]\nl\n\nl[2:5] = 100\nl\n\nl[2:5] = [100]\nl",
"Using + and * with Sequences",
"l = [1, 2, 3]\nl * 5\n\n5 * 'abcd'",
"Building Lists of Lists",
"board = [['_'] *3 for i in range(3)]\nboard\n\nboard[1][2] = 'X'\nboard",
"Augmented Assignment with Sequences",
"l = [1, 2, 3]\nid(l)\n\nl *= 2\nid(l) # same list\n\nt=(1,2,3)\nid(t)\n\nt *= 2\nid(t) # new tuple was created",
"A += Assignment Puzzler",
"import dis\ndis.dis('s[a] += b')",
"• Putting mutable items in tuples is not a good idea.\n• Augmented assignment is not an atomic operation—we just saw it throwing an exception after doing part of its job.\n• Inspecting Python bytecode is not too difficult, and is often helpful to see what is going on under the hood.\nlist.sort and the sorted Built-In Function\nsorted() makes a new list, doesn't touch the original.\nsort() changes list in place.",
"fruits = ['grape', 'raspberry', 'apple', 'banana']\nsorted(fruits)\n\nfruits\n\nsorted(fruits, reverse=True)\n\nsorted(fruits, key=len)\n\nsorted(fruits, key=len, reverse=True)\n\nfruits\n\nfruits.sort() # note that sort() returns None\n\nfruits",
"Next: use bisect module to better search sorted lists.\nManaging Ordered Sequences with bisect",
"breakpoints=[60, 70, 80, 90]\ngrades='FDCBA'\nbisect.bisect(breakpoints, 99)\n\nbisect.bisect(breakpoints, 59)\n\nbisect.bisect(breakpoints, 75)\n\ndef grade(score, breakpoints=[60, 70, 80, 90], grades='FDCBA'):\n i = bisect.bisect(breakpoints, score)\n return grades[i]\n\n[grade(score) for score in [33, 99, 77, 70, 89, 90, 100]]\n\ngrade(4)\n\ngrade(93)",
"Inserting with bisect.insort",
"import bisect\nimport random\n\nSIZE = 7\n\nrandom.seed(1729)\n\nmy_list = []\nfor i in range(SIZE):\n new_item = random.randrange(SIZE*2)\n bisect.insort(my_list, new_item)\n print('%2d ->' % new_item, my_list)",
"Arrays",
"from array import array\nfrom random import random\n\nfloats = array('d', (random() for i in range(10**7)))\nfloats[-1]\n\nfp = open('floats.bin', 'wb')\nfloats.tofile(fp)\nfp.close()\n\nfloats2 = array('d')\nfp = open('floats.bin', 'rb')\nfloats2.fromfile(fp, 10**7)\nfp.close()\nfloats2[-1]\nfloats2 == floats",
"To sort an array, use a = array.array(a.typecode, sorted(a)). To keep it sorted while adding to it, use bisect.insort.\nMemory Views\nThe built-in memorview class is a shared-memory sequence type that lets you handle slices of arrays without copying bytes.",
"# Changing the value of an array item by poking one of its bytes\nimport array\n\nnumbers = array.array('h', [-2, -1, 0, 1, 2])\nmemv = memoryview(numbers)\nlen(memv)\n\nmemv[0]\n\nmemv_oct = memv.cast('B') # ch type of array to unsigned char\nmemv_oct.tolist()\n\nmemv_oct[5] = 4\n\nnumbers",
"NumPy and SciPy",
"import numpy\n\na = numpy.arange(12)\na\n\ntype(a)\n\na.shape\n\na.shape = 3, 4 # turn a into three units of 4\na\n\na[2]\n\na[2, 1]\n\na[:, 1]\n\na.transpose()",
"Loading, saving, and operating:\nUse numpy.loadtxt()\nDeques and Other Queues\nInserting and removing from the left of a list (the 0-index end) is costly. collections.deque is a thread-safe double-ended queue designed for fast inserting and removing from both ends.",
"from collections import deque\n\ndq = deque(range(10), maxlen=10)\ndq\n\ndq.rotate(3)\ndq\n\ndq.rotate(-4)\ndq\n\ndq.appendleft(-1)\ndq\n\ndq.extend([11, 22, 33])\ndq\n\ndq.extendleft([10, 20, 30, 40])\ndq",
"a hidden cost: removing items from the middle of a deque is not as fast\nOn using single type in list: \"we put items in a list to process them later, which implies that all items should support at least some operation in common\".",
"# but a workaround with `key`\nl = [28, 14, '28', 5, '9', '1', 0, 6, '23', 19]\nsorted(l, key=int)\n\nsorted(l, key=str)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Bio204-class/bio204-notebooks | Introduction-to-Simulation.ipynb | cc0-1.0 | [
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nmatplotlib.style.use(\"bmh\")",
"A brief note about pseudo-random numbers\nWhen carrying out simulations, it is typical to use random number generators. Most computers can not generate true random numbers -- instead we use algorithms that approximate the generation of random numbers (pseudo-random number generators). One important difference between a true random number generator and a pseudo-random number generator is that a series of pseudo-random numbers can be regenerated if you know the \"seed\" value that initialized the algorithm. We can specifically set this seed value, so that we can guarantee that two different people evaluating this notebook get the same results, even though we're using (pseudo)random numbers in our simulation.",
"# set the seed for the pseudo-random number generator\n# the seed is any 32 bit integer\n# different seeds will generate different results for the \n# simulations that follow\nnp.random.seed(20160208) ",
"Generating a population to sample from\nWe'll start by simulating our \"population of interest\" -- i.e. the population we want to make inferences about. We'll assume that our variable of interest (e.g. circulating stress hormone levels) is normally distributed with a mean of 10 nM and a standard deviation of 1 nM.",
"popn = np.random.normal(loc=10, scale=1, size=6500)\n\nplt.hist(popn,bins=50)\nplt.xlabel(\"Glucorticoid concentration (nM)\")\nplt.ylabel(\"Frequency\")\npass\n\nprint(\"Mean glucorticoid concentration:\", np.mean(popn))\nprint(\"Standard deviation of glucocorticoid concentration:\", np.std(popn))",
"Take a random sample of the population of interest\nWe'll use the np.random.choice function to take a sample from our population of interest.",
"sample1 = np.random.choice(popn, size=25)\n\nplt.hist(sample1)\nplt.xlabel(\"Glucorticoid concentration (nM)\")\nplt.ylabel(\"Frequency\")\npass\n\nnp.mean(sample1), np.std(sample1,ddof=1)",
"Take a second random sample of size 25",
"sample2 = np.random.choice(popn, size=25)\n\nnp.mean(sample2), np.std(sample2,ddof=1)",
"Compare the first and second samples",
"plt.hist(sample1)\nplt.hist(sample2,alpha=0.5)\nplt.xlabel(\"Glucorticoid concentration (nM)\")\nplt.ylabel(\"Frequency\")\npass",
"## Generate a large number of samples of size 25 \nEvery time we take a random sample from our population of interest we'll get a different estimate of the mean and standard deviation (or whatever other statistics we're interested in). To explore how well random samples of size 25 perform, generally, in terms of estimating the mean and standard deviation of the population of interest we need a large number of such samples. \nIt's tedious to take one sample at a time, so we'll generate 100 samples of size 25, and calculate the mean and standard deviation for each of those samples (storing the means and standard deviations in lists).",
"means25 = []\nstd25 = []\nfor i in range(100):\n s = np.random.choice(popn, size=25)\n means25.append(np.mean(s))\n std25.append(np.std(s,ddof=1))\n\nplt.hist(means25,bins=15)\nplt.xlabel(\"Mean glucocorticoid concentration\")\nplt.ylabel(\"Frequency\")\nplt.title(\"Distribution of estimates of the\\n mean glucocorticoid concentration\\n for 100 samples of size 25\")\nplt.vlines(np.mean(popn), 0, 18, linestyle='dashed', color='red',label=\"True Mean\")\nplt.legend(loc=\"upper right\")\npass",
"Relative Frequency Histogram\nA relative frequency histogram is like a frequency histogram, except the bin heights are given in fractions of the total sample size (relative frequency) rather than absolute frequency. This is equivalent to adding the constraint that the total height of all the bars in the histogram will add to 1.0.",
"# Relative Frequency Histogram\nplt.hist(means25, bins=15, weights=np.ones_like(means25) * (1.0/len(means25)))\nplt.xlabel(\"mean glucocorticoid concentration\")\nplt.ylabel(\"Relative Frequency\")\nplt.vlines(np.mean(popn), 0, 0.20, linestyle='dashed', color='red',label=\"True Mean\")\nplt.legend(loc=\"upper right\")\npass",
"Density histogram\nIf instead of constraining the total height of the bars, we constrain the total area of the bars to sum to one, we call this a density histogram. When comparing histograms based on different numbers of samples, with different bin width, etc. you should usually use the density histogram.\nThe argument normed=True to the pyplot.hist function will this function calculate a density histogram instead of the default frequency histogram.",
"plt.hist(means25,bins=15,normed=True)\nplt.xlabel(\"Mean glucocorticoid concentration\")\nplt.ylabel(\"Density\")\nplt.vlines(np.mean(popn), 0, 2.5, linestyle='dashed', color='red',label=\"True Mean\")\nplt.legend(loc=\"upper right\")\npass",
"How does the spread of our estimates of the mean change as sample size increases?\nWhat happens as we increase the size of our samples? Let's draw 100 random samples of size 50, 100, and 200 observations to compare.",
"means50 = []\nstd50 = []\nfor i in range(100):\n s = np.random.choice(popn, size=50)\n means50.append(np.mean(s))\n std50.append(np.std(s,ddof=1))\n \nmeans100 = []\nstd100 = []\nfor i in range(100):\n s = np.random.choice(popn, size=100)\n means100.append(np.mean(s))\n std100.append(np.std(s,ddof=1))\n \nmeans200 = []\nstd200 = []\nfor i in range(100):\n s = np.random.choice(popn, size=200)\n means200.append(np.mean(s))\n std200.append(np.std(s,ddof=1)) \n\n# the label arguments get used when we create a legend\nplt.hist(means25, normed=True, alpha=0.75, histtype=\"stepfilled\", label=\"n=25\")\nplt.hist(means50, normed=True, alpha=0.75, histtype=\"stepfilled\", label=\"n=50\")\nplt.hist(means100, normed=True, alpha=0.75, histtype=\"stepfilled\", label=\"n=100\")\nplt.hist(means200, normed=True, alpha=0.75, histtype=\"stepfilled\", label=\"n=200\")\nplt.xlabel(\"Mean glucocorticoid concentration\")\nplt.ylabel(\"Density\")\nplt.vlines(np.mean(popn), 0, 7, linestyle='dashed', color='black',label=\"True Mean\")\nplt.legend()\npass",
"Standard Error of the Mean\nWe see from the graph above that our estimates of the mean cluster more tightly about the true mean as our sample size increases. Let's quantify that by calculating the standard deviation of our mean estimates as a function of sample size.\nThe standard deviation of the sampling distribution of a statistic of interest is called the \"Standard Error\" of that statistic. Here, through simulation, we are estimating the \"Standard Error of the Mean\".",
"sm25 = np.std(means25,ddof=1)\nsm50 = np.std(means50,ddof=1)\nsm100 = np.std(means100,ddof=1)\nsm200 = np.std(means200, ddof=1)\n\nx = [25,50,100,200]\ny = [sm25,sm50,sm100,sm200]\nplt.scatter(x,y)\nplt.xlabel(\"Sample size\")\nplt.ylabel(\"Std Dev of Mean Estimates\")\npass",
"You can show mathematically for normally distributed data, that the expected Standard Error of the Mean as a function of sample size is:\n$$\n\\mbox{Standard Error of Mean} = \\frac{\\sigma}{\\sqrt{n}}\n$$\nwhere $\\sigma$ is the population standard deviation, and $n$ is the sample size.\nLet's compare that theoretical expectation to our simulated estimates.",
"x = [25,50,100,200]\ny = [sm25,sm50,sm100,sm200]\ntheory = [np.std(popn)/np.sqrt(i) for i in range(10,250)]\nplt.scatter(x,y, label=\"Simulation estimates\")\nplt.plot(range(10,250), theory, color='red', label=\"Theoretical expectation\")\nplt.xlabel(\"Sample size\")\nplt.ylabel(\"Std Error of Mean\")\nplt.legend()\nplt.xlim(0,300)\npass\n",
"Standard Errors of the Standard Deviation\nAbove we explored how the spread in our estimates of the mean changed with sample size. We can similarly explore how our estimates of the standard deviation of the population change as we vary our sample size.",
"# the label arguments get used when we create a legend\nplt.hist(std25, normed=True, alpha=0.75, histtype=\"stepfilled\", label=\"n=25\")\nplt.hist(std50, normed=True, alpha=0.75, histtype=\"stepfilled\", label=\"n=50\")\nplt.hist(std100, normed=True, alpha=0.75, histtype=\"stepfilled\", label=\"n=100\")\nplt.hist(std200, normed=True, alpha=0.75, histtype=\"stepfilled\", label=\"n=200\")\nplt.xlabel(\"Standard Deviation of Glucocorticoid Concentration\")\nplt.ylabel(\"Density\")\nplt.vlines(np.std(popn), 0, 9, linestyle='dashed', color='black',label=\"True Standard Deviation\")\n#plt.legend()\npass",
"You can show mathematically for normally distributed data, that the expected Standard Error of the Standard Deviation is approximately\n$$\n\\mbox{Standard Error of Standard Deviation} \\approx \\frac{\\sigma}{\\sqrt{2(n-1)}}\n$$\nwhere $\\sigma$ is the population standard deviation, and $n$ is the sample size.\nLet's compare that theoretical expectation to our simulated estimates.",
"x = [25,50,100,200]\ny = [ss25,ss50,ss100,ss200]\nplt.scatter(x,y, label=\"Simulation estimates\")\nplt.xlabel(\"Sample size\")\nplt.ylabel(\"Std Error of Std Dev\")\n\ntheory = [np.std(popn)/(np.sqrt(2.0*(i-1))) for i in range(10,250)]\nplt.plot(range(10,250), theory, color='red', label=\"Theoretical expectation\")\n\nplt.xlim(0,300)\nplt.legend()\npass\n"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/workshops | tfx_labs/Lab_6_Model_Analysis.ipynb | apache-2.0 | [
"Copyright © 2019 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"TensorFlow Model Analysis\nAn Example of a Key TFX Library\nThis example colab notebook illustrates how TensorFlow Model Analysis (TFMA) can be used to investigate and visualize the characteristics of a dataset and the performance of a model. We'll use a model that we trained previously, and now you get to play with the results!\nThe model we trained was for the Chicago Taxi Example, which uses the Taxi Trips dataset released by the City of Chicago.\nNote: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one’s own risk.\nRead more about the dataset in Google BigQuery. Explore the full dataset in the BigQuery UI.\nKey Point: As a modeler and developer, think about how this data is used and the potential benefits and harm a model's predictions can cause. A model like this could reinforce societal biases and disparities. Is a feature relevant to the problem you want to solve or will it introduce bias? For more information, read about <a target='_blank' href='https://developers.google.com/machine-learning/fairness-overview/'>ML fairness</a>.\nKey Point: In order to understand TFMA and how it works with Apache Beam, you'll need to know a little bit about Apache Beam itself. The <a target='_blank' href='https://beam.apache.org/documentation/programming-guide/'>Beam Programming Guide</a> is a great place to start.\nThe columns in the dataset are:\n<table>\n<tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr>\n\n<tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr>\n<tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr>\n<tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr>\n<tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr>\n<tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr>\n</table>\n\nInstall Jupyter Extensions\nNote: If running TFMA in a local Jupyter notebook, then these Jupyter extensions must be installed in the environment before running Jupyter.\nbash\njupyter nbextension enable --py widgetsnbextension\njupyter nbextension install --py --symlink tensorflow_model_analysis\njupyter nbextension enable --py tensorflow_model_analysis\nSetup\nFirst, we install the necessary packages, download data, import modules and set up paths.\nInstall TensorFlow, TensorFlow Model Analysis (TFMA) and TensorFlow Data Validation (TFDV)",
"!pip install -q -U \\\n tensorflow==2.0.0 \\\n tfx==0.15.0rc0",
"Import packages\nWe import necessary packages, including standard TFX component classes.",
"import csv\nimport io\nimport os\nimport requests\nimport tempfile\nimport zipfile\n\nfrom google.protobuf import text_format\n\nimport tensorflow as tf\n\nimport tensorflow_data_validation as tfdv\nimport tensorflow_model_analysis as tfma\nfrom tensorflow_metadata.proto.v0 import schema_pb2\n\ntf.__version__\n\ntfma.version.VERSION_STRING",
"Load The Files\nWe'll download a zip file that has everything we need. That includes:\n\nTraining and evaluation datasets\nData schema\nTraining results as EvalSavedModels\n\nNote: We are downloading with HTTPS from a Google Cloud server.",
"# Download the zip file from GCP and unzip it\nBASE_DIR = tempfile.mkdtemp()\nTFMA_DIR = os.path.join(BASE_DIR, 'eval_saved_models-2.0')\nDATA_DIR = os.path.join(TFMA_DIR, 'data')\nOUTPUT_DIR = os.path.join(TFMA_DIR, 'output')\nSCHEMA = os.path.join(TFMA_DIR, 'schema.pbtxt')\n\nresponse = requests.get('https://storage.googleapis.com/tfx-colab-datasets/eval_saved_models-2.0.zip', stream=True)\nzipfile.ZipFile(io.BytesIO(response.content)).extractall(BASE_DIR)\n\nprint(\"Here's what we downloaded:\")\n!cd {TFMA_DIR} && find .",
"Parse the Schema\nAmong the things we downloaded was a schema for our data that was created by TensorFlow Data Validation. Let's parse that now so that we can use it with TFMA.",
"schema = schema_pb2.Schema()\ncontents = tf.io.read_file(SCHEMA).numpy()\nschema = text_format.Parse(contents, schema)\n\ntfdv.display_schema(schema)",
"Use the Schema to Create TFRecords\nWe need to give TFMA access to our dataset, so let's create a TFRecords file. We can use our schema to create it, since it gives us the correct type for each feature.",
"datafile = os.path.join(DATA_DIR, 'eval', 'data.csv')\nreader = csv.DictReader(open(datafile))\nexamples = []\nfor line in reader:\n example = tf.train.Example()\n for feature in schema.feature:\n key = feature.name\n if len(line[key]) > 0:\n if feature.type == schema_pb2.FLOAT:\n example.features.feature[key].float_list.value[:] = [float(line[key])]\n elif feature.type == schema_pb2.INT:\n example.features.feature[key].int64_list.value[:] = [int(line[key])]\n elif feature.type == schema_pb2.BYTES:\n example.features.feature[key].bytes_list.value[:] = [line[key].encode('utf8')]\n else:\n if feature.type == schema_pb2.FLOAT:\n example.features.feature[key].float_list.value[:] = []\n elif feature.type == schema_pb2.INT:\n example.features.feature[key].int64_list.value[:] = []\n elif feature.type == schema_pb2.BYTES:\n example.features.feature[key].bytes_list.value[:] = []\n examples.append(example)\n\nTFRecord_file = os.path.join(BASE_DIR, 'train_data.rio')\nwith tf.io.TFRecordWriter(TFRecord_file) as writer:\n for example in examples:\n writer.write(example.SerializeToString())\n writer.flush()\n writer.close()\n\n!ls {TFRecord_file}",
"Run TFMA and Render Metrics\nNow we're ready to create a function that we'll use to run TFMA and render metrics. It requires an EvalSavedModel, a list of SliceSpecs, and an index into the SliceSpec list. It will create an EvalResult using tfma.run_model_analysis, and use it to create a SlicingMetricsViewer using tfma.view.render_slicing_metrics, which will render a visualization of our dataset using the slice we created.",
"def run_and_render(eval_model=None, slice_list=None, slice_idx=0):\n \"\"\"Runs the model analysis and renders the slicing metrics\n\n Args:\n eval_model: An instance of tf.saved_model saved with evaluation data\n slice_list: A list of tfma.slicer.SingleSliceSpec giving the slices\n slice_idx: An integer index into slice_list specifying the slice to use\n\n Returns:\n A SlicingMetricsViewer object if in Jupyter notebook; None if in Colab.\n \"\"\"\n eval_result = tfma.run_model_analysis(eval_shared_model=eval_model,\n data_location=TFRecord_file,\n file_format='tfrecords',\n slice_spec=slice_list,\n output_path='sample_data',\n extractors=None)\n return tfma.view.render_slicing_metrics(eval_result, slicing_spec=slice_list[slice_idx] if slice_list else None)",
"Slicing and Dicing\nWe previously trained a model, and now we've loaded the results. Let's take a look at our visualizations, starting with using TFMA to slice along particular features. But first we need to read in the EvalSavedModel from one of our previous training runs.\n\n\nTo define the slice you want to visualize you create a tfma.slicer.SingleSliceSpec\n\n\nTo use tfma.view.render_slicing_metrics you can either use the name of the column (by setting slicing_column) or provide a tfma.slicer.SingleSliceSpec (by setting slicing_spec)\n\nIf neither is provided, the overview will be displayed\n\nPlots are interactive:\n\nClick and drag to pan\nScroll to zoom\nRight click to reset the view\n\nSimply hover over the desired data point to see more details. Select from four different types of plots using the selections at the bottom.\nFor example, we'll be setting slicing_column to look at the trip_start_hour feature in our SliceSpec.",
"# Load the TFMA results for the first training run\n# This will take a minute\neval_model_base_dir_0 = os.path.join(TFMA_DIR, 'run_0', 'eval_model_dir')\neval_model_dir_0 = os.path.join(eval_model_base_dir_0,\n max(os.listdir(eval_model_base_dir_0)))\neval_shared_model_0 = tfma.default_eval_shared_model(\n eval_saved_model_path=eval_model_dir_0)\n\n# Slice our data by the trip_start_hour feature\nslices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_hour'])]\n\nrun_and_render(eval_model=eval_shared_model_0, slice_list=slices, slice_idx=0)",
"Slices Overview\nThe default visualization is the Slices Overview when the number of slices is small. It shows the values of metrics for each slice. Since we've selected trip_start_hour above, it's showing us metrics like accuracy and AUC for each hour, which allows us to look for issues that are specific to some hours and not others.\nIn the visualization above:\n\nTry sorting the feature column, which is our trip_start_hours feature, by clicking on the column header\nTry sorting by precision, and notice that the precision for some of the hours with examples is 0, which may indicate a problem\n\nThe chart also allows us to select and display different metrics in our slices.\n\nTry selecting different metrics from the \"Show\" menu\nTry selecting recall in the \"Show\" menu, and notice that the recall for some of the hours with examples is 0, which may indicate a problem\n\nIt is also possible to set a threshold to filter out slices with smaller numbers of examples, or \"weights\". You can type a minimum number of examples, or use the slider.\nMetrics Histogram\nThis view also supports a Metrics Histogram as an alternative visualization, which is also the default view when the number of slices is large. The results will be divided into buckets and the number of slices / total weights / both can be visualized. Columns can be sorted by clicking on the column header. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band. To reset the range, double click the band. Filtering can also be used to remove outliers in the visualization and the metrics tables. Click the gear icon to switch to a logarithmic scale instead of a linear scale.\n\nTry selecting \"Metrics Histogram\" in the Visualization menu\n\nMore Slices\nLet's create a whole list of SliceSpecs, which will allow us to select any of the slices in the list. We'll select the trip_start_day slice (days of the week) by setting the slice_idx to 1. Try changing the slice_idx to 0 or 2 and running again to examine different slices.",
"slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_hour']),\n tfma.slicer.SingleSliceSpec(columns=['trip_start_day']),\n tfma.slicer.SingleSliceSpec(columns=['trip_start_month'])]\nrun_and_render(eval_model=eval_shared_model_0, slice_list=slices, slice_idx=0)",
"You can create feature crosses to analyze combinations of features. Let's create a SliceSpec to look at a cross of trip_start_day and trip_start_hour:",
"slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_day', 'trip_start_hour'])]\nrun_and_render(eval_shared_model_0, slices, 0)",
"Crossing the two columns creates a lot of combinations! Let's narrow down our cross to only look at trips that start at noon. Then let's select accuracy from the visualization:",
"slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_day'], features=[('trip_start_hour', 12)])]\nrun_and_render(eval_shared_model_0, slices, 0)",
"Tracking Model Performance Over Time\nYour training dataset will be used for training your model, and will hopefully be representative of your test dataset and the data that will be sent to your model in production. However, while the data in inference requests may remain the same as your training data, in many cases it will start to change enough so that the performance of your model will change.\nThat means that you need to monitor and measure your model's performance on an ongoing basis, so that you can be aware of and react to changes. Let's take a look at how TFMA can help.\nMeasure Performance For New Data\nWe downloaded the results of three different training runs above, so let's load them now:",
"def get_eval_result(base_dir, run_name, data_loc, slice_spec):\n eval_model_base_dir = os.path.join(base_dir, run_name, \"eval_model_dir\")\n versions = os.listdir(eval_model_base_dir)\n eval_model_dir = os.path.join(eval_model_base_dir, max(versions))\n output_dir = os.path.join(base_dir, \"output\", run_name)\n eval_shared_model = tfma.default_eval_shared_model(eval_saved_model_path=eval_model_dir)\n\n return tfma.run_model_analysis(eval_shared_model=eval_shared_model,\n data_location=data_loc,\n file_format='tfrecords',\n slice_spec=slice_spec,\n output_path=output_dir,\n extractors=None)\n\nslices = [tfma.slicer.SingleSliceSpec()]\nresult_ts0 = get_eval_result(TFMA_DIR, 'run_0', TFRecord_file, slices)\nresult_ts1 = get_eval_result(TFMA_DIR, 'run_1', TFRecord_file, slices)\nresult_ts2 = get_eval_result(TFMA_DIR, 'run_2', TFRecord_file, slices)",
"Next, let's use TFMA to see how these runs compare using render_time_series.\nHow does it look today?\nFirst, we'll imagine that we've trained and deployed our model yesterday, and now we want to see how it's doing on the new data coming in today. We can specify particular slices to look at. Let's compare our training runs for trips that started at noon.\nNote:\n* The visualization will start by displaying accuracy. Add AUC and average loss by using the \"Add metric series\" menu.\n* Hover over the curves to see the values.\n* In the metric series charts the X axis is the model ID number of the model run that you're examining. The numbers themselves are not meaningful.",
"output_dirs = [os.path.join(TFMA_DIR, \"output\", run_name)\n for run_name in (\"run_0\", \"run_1\", \"run_2\")]\n\neval_results_from_disk = tfma.load_eval_results(\n output_dirs[:2], tfma.constants.MODEL_CENTRIC_MODE)\n\ntfma.view.render_time_series(eval_results_from_disk, slices[0])",
"Now we'll imagine that another day has passed and we want to see how it's doing on the new data coming in today, compared to the previous two days. Again add AUC and average loss by using the \"Add metric series\" menu:",
"eval_results_from_disk = tfma.load_eval_results(\n output_dirs, tfma.constants.MODEL_CENTRIC_MODE)\n\ntfma.view.render_time_series(eval_results_from_disk, slices[0])"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jbliss1234/ML | tf_kdd99.ipynb | apache-2.0 | [
"T81-558: Applications of Deep Neural Networks\nTensorFlow (SKFLOW) Meets KDD-99\n\nInstructor: Jeff Heaton, School of Engineering and Applied Science, Washington University in St. Louis\nFor more information visit the class website.\n\nThis simple example shows how to load a non-trivial dataset from CSV and train a neural network. The dataset is the\nKDD99 dataset. This dataset is used to detect between normal and malicious network activity.",
"# Imports for this Notebook\n\n# Imports\nimport pandas as pd\nfrom sklearn import preprocessing\nfrom sklearn.cross_validation import train_test_split\nimport tensorflow.contrib.learn as skflow\nfrom sklearn import metrics",
"Several Useful Functions\nThese are functions that I reuse often to encode the feature vector (FV).",
"# These are several handy functions that I use in my class:\n\n# Encode a text field to dummy variables\ndef encode_text_dummy(df,name):\n dummies = pd.get_dummies(df[name])\n for x in dummies.columns:\n dummy_name = \"{}-{}\".format(name,x)\n df[dummy_name] = dummies[x]\n df.drop(name, axis=1, inplace=True)\n \n# Encode a text field to a single index value\ndef encode_text_index(df,name): \n le = preprocessing.LabelEncoder()\n df[name] = le.fit_transform(df[name])\n return le.classes_\n \n# Encode a numeric field to Z-Scores\ndef encode_numeric_zscore(df,name,mean=None,sd=None):\n if mean is None:\n mean = df[name].mean()\n \n if sd is None:\n sd = df[name].std()\n \n df[name] = (df[name]-mean)/sd\n \n# Encode a numeric field to fill missing values with the median.\ndef missing_median(df, name):\n med = df[name].median()\n df[name] = df[name].fillna(med)\n\n# Convert a dataframe to x/y suitable for training.\ndef to_xy(df,target):\n result = []\n for x in df.columns:\n if x != target:\n result.append(x)\n return df.as_matrix(result),df[target]\n",
"Read in Raw KDD-99 Dataset",
"\n# This file is a CSV, just no CSV extension or headers\n# Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html\ndf = pd.read_csv(\"/Users/jeff/Downloads/data/kddcup.data_10_percent\", header=None)\n\nprint(\"Read {} rows.\".format(len(df)))\n# df = df.sample(frac=0.1, replace=False) # Uncomment this line to sample only 10% of the dataset\ndf.dropna(inplace=True,axis=1) # For now, just drop NA's (rows with missing values)\n\n# The CSV file has no column heads, so add them\ndf.columns = [\n 'duration',\n 'protocol_type',\n 'service',\n 'flag',\n 'src_bytes',\n 'dst_bytes',\n 'land',\n 'wrong_fragment',\n 'urgent',\n 'hot',\n 'num_failed_logins',\n 'logged_in',\n 'num_compromised',\n 'root_shell',\n 'su_attempted',\n 'num_root',\n 'num_file_creations',\n 'num_shells',\n 'num_access_files',\n 'num_outbound_cmds',\n 'is_host_login',\n 'is_guest_login',\n 'count',\n 'srv_count',\n 'serror_rate',\n 'srv_serror_rate',\n 'rerror_rate',\n 'srv_rerror_rate',\n 'same_srv_rate',\n 'diff_srv_rate',\n 'srv_diff_host_rate',\n 'dst_host_count',\n 'dst_host_srv_count',\n 'dst_host_same_srv_rate',\n 'dst_host_diff_srv_rate',\n 'dst_host_same_src_port_rate',\n 'dst_host_srv_diff_host_rate',\n 'dst_host_serror_rate',\n 'dst_host_srv_serror_rate',\n 'dst_host_rerror_rate',\n 'dst_host_srv_rerror_rate',\n 'outcome'\n]\n\n# display 5 rows\ndf[0:5]",
"Encode the feature vector\nEncode every row in the database. This is not instant!",
"# Now encode the feature vector\n\nencode_numeric_zscore(df, 'duration')\nencode_text_dummy(df, 'protocol_type')\nencode_text_dummy(df, 'service')\nencode_text_dummy(df, 'flag')\nencode_numeric_zscore(df, 'src_bytes')\nencode_numeric_zscore(df, 'dst_bytes')\nencode_text_dummy(df, 'land')\nencode_numeric_zscore(df, 'wrong_fragment')\nencode_numeric_zscore(df, 'urgent')\nencode_numeric_zscore(df, 'hot')\nencode_numeric_zscore(df, 'num_failed_logins')\nencode_text_dummy(df, 'logged_in')\nencode_numeric_zscore(df, 'num_compromised')\nencode_numeric_zscore(df, 'root_shell')\nencode_numeric_zscore(df, 'su_attempted')\nencode_numeric_zscore(df, 'num_root')\nencode_numeric_zscore(df, 'num_file_creations')\nencode_numeric_zscore(df, 'num_shells')\nencode_numeric_zscore(df, 'num_access_files')\nencode_numeric_zscore(df, 'num_outbound_cmds')\nencode_text_dummy(df, 'is_host_login')\nencode_text_dummy(df, 'is_guest_login')\nencode_numeric_zscore(df, 'count')\nencode_numeric_zscore(df, 'srv_count')\nencode_numeric_zscore(df, 'serror_rate')\nencode_numeric_zscore(df, 'srv_serror_rate')\nencode_numeric_zscore(df, 'rerror_rate')\nencode_numeric_zscore(df, 'srv_rerror_rate')\nencode_numeric_zscore(df, 'same_srv_rate')\nencode_numeric_zscore(df, 'diff_srv_rate')\nencode_numeric_zscore(df, 'srv_diff_host_rate')\nencode_numeric_zscore(df, 'dst_host_count')\nencode_numeric_zscore(df, 'dst_host_srv_count')\nencode_numeric_zscore(df, 'dst_host_same_srv_rate')\nencode_numeric_zscore(df, 'dst_host_diff_srv_rate')\nencode_numeric_zscore(df, 'dst_host_same_src_port_rate')\nencode_numeric_zscore(df, 'dst_host_srv_diff_host_rate')\nencode_numeric_zscore(df, 'dst_host_serror_rate')\nencode_numeric_zscore(df, 'dst_host_srv_serror_rate')\nencode_numeric_zscore(df, 'dst_host_rerror_rate')\nencode_numeric_zscore(df, 'dst_host_srv_rerror_rate')\noutcomes = encode_text_index(df, 'outcome')\nnum_classes = len(outcomes)\n\n# display 5 rows\n\ndf.dropna(inplace=True,axis=1)\ndf[0:5]\n# This is the numeric feature vector, as it goes to the neural net\n",
"Train the Neural Network",
"# Break into X (predictors) & y (prediction)\nx, y = to_xy(df,'outcome')\n\n# Create a test/train split. 25% test\n# Split into train/test\nx_train, x_test, y_train, y_test = train_test_split(\n x, y, test_size=0.25, random_state=42)\n\n# Create a deep neural network with 3 hidden layers of 10, 20, 10\nclassifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], \n n_classes=num_classes, steps=500)\n\n# Early stopping\nearly_stop = skflow.monitors.ValidationMonitor(x_test, y_test,\n early_stopping_rounds=200,\n n_classes=num_classes,\n print_steps=50)\n \n# Fit/train neural network\nclassifier.fit(x, y, early_stop)\n \n\n# Measure accuracy\npred = classifier.predict(x_test)\nscore = metrics.accuracy_score(y_test, pred)\nprint(\"Validation score: {}\".format(score))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
wikistat/Ateliers-Big-Data | Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb | mit | [
"Ateliers: Technologies de l'intelligence Artificielle\n<center>\n<a href=\"http://www.insa-toulouse.fr/\" ><img src=\"http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg\" style=\"float:left; max-width: 120px; display: inline\" alt=\"INSA\"/></a> \n<a href=\"http://wikistat.fr/\" ><img src=\"http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/wikistat.jpg\" width=400, style=\"max-width: 150px; display: inline\" alt=\"Wikistat\"/></a>\n<a href=\"http://www.math.univ-toulouse.fr/\" ><img src=\"http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo_imt.jpg\" width=400, style=\"float:right; display: inline\" alt=\"IMT\"/> </a>\n</center>\nTraitement Naturel du Langage (NLP) : Catégorisation de Produits Cdiscount\nIl s'agit d'une version simplifiée du concours proposé par Cdiscount et paru sur le site datascience.net. Les données d'apprentissage sont accessibles sur demande auprès de Cdiscount mais les solutions de l'échantillon test du concours ne sont pas et ne seront pas rendues publiques. Un échantillon test est donc construit pour l'usage de ce tutoriel. L'objectif est de prévoir la catégorie d'un produit à partir de son descriptif (text mining). Seule la catégorie principale (1er niveau, 47 classes) est prédite au lieu des trois niveaux demandés dans le concours. L'objectif est plutôt de comparer les performances des méthodes et technologies en fonction de la taille de la base d'apprentissage ainsi que d'illustrer sur un exemple complexe le prétraitement de données textuelles. \nLe jeux de données complet (15M produits) permet un test en vrai grandeur du passage à l'échelle volume des phases de préparation (munging), vectorisation (hashage, TF-IDF) et d'apprentissage en fonction de la technologie utilisée.\nLa synthèse des résultats obtenus est développée par Besse et al. 2016 (section 5).\nPartie 1-1 : Exploration et Nettoyage de données textuelles\nDans ce premier notebook nous verrons différent traitements généralement opérés sur des données textuelles :\n\nNettoyage : Suppression des caractères mal codés et de ponctuation, transformation des majuscules en minuscules, en remarquant que ces transformations ne seraient pas pertinentes pour un objectif de détection de pourriels.\nStopWord : Suppression des mots inutiles ou mots de liaison, articles qui n'ont a priori pas de pouvoir discriminant.\nStemming (ou Racinisation): Les mots sont réduits à leur seule racine afin de réduire la taille du dictionnaire.\n\nLibrairies",
"#Importation des librairies utilisées\nimport unicodedata \nimport time\nimport pandas as pd\nimport numpy as np\nimport random\nimport nltk\nimport re \nimport collections\nimport itertools\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nimport matplotlib.pyplot as plt\nimport seaborn as sb\nsb.set_style(\"whitegrid\")\n\nimport sklearn.cross_validation as scv",
"nltk\nSi vous utilisez la librairie nltk pour la première fois, il est nécessaire d'utiliser la commande suivante. Cette commande permet de télécharger de nombreux corpus de texte, mais également des informations grammaticales sur différentes langues. Information notamment nécessaire à l'étape de racinisation.",
"# nltk.download(\"all\")",
"Les données\nDans le dossier Cdiscount/data de ce répértoire vous trouverez les fichiers suivants :\n\ncdiscount_test.csv.zip: Fichier d'apprentissage constitué de 1.000.000 de lignes\ncdisount_test: Fichier test constitué de 50.000 lignes\n\n### Read & Split Dataset\nOn définit une fonction permettant de lire le fichier d'apprentissage et de créer deux DataFrame Pandas, un pour l'apprentissage, l'autre pour la validation.\n La fonction créée un DataFrame en lisant entièrement le fichier. Puis elle scinde ce DataFrame en deux grâce à la fonction dédiée de sklearn.",
"def split_dataset(input_path, nb_line, tauxValid):\n data_all = pd.read_csv(input_path,sep=\",\", nrows=nb_line)\n data_all = data_all.fillna(\"\")\n data_train, data_valid = scv.train_test_split(data_all, test_size = tauxValid)\n time_end = time.time()\n return data_train, data_valid",
"Bien que déjà réduit par rapport au fichier original du concours, contenant plus de 15M de lignes, le fichier cdiscount_test.csv.zip, contenant 1M de lignes est encore volumineux. \nNous allons charger en mémoire qu'une partie de ce fichier grace à l'argument nb_line afin d'éviter des temps de calcul trop couteux. \nNous allons extraire 5% de ces 1M de lignes commes échantillons de validation.",
"input_path = \"data/cdiscount_train.csv.zip\"\nnb_line=100000 # part totale extraite du fichier initial ici déjà réduit\ntauxValid = 0.05\ndata_train, data_valid = split_dataset(input_path, nb_line, tauxValid)\n# Cette ligne permet de visualiser les 5 premières lignes de la DataFrame \nN_train = data_train.shape[0]\nN_valid = data_valid.shape[0]\nprint(\"Train set : %d elements, Validation set : %d elements\" %(N_train, N_valid))",
"La commande suivante permet d'afficher les premières lignes du fichiers. \nVous pouvez observer que chaque produit possède 3 niveaux de Catégories, qui correspondent au différents niveaux de l'arborescence que vous retrouverez sur le site.\nIl y a 44 catégories de niveau 1, 428 de niveau 2 et 3170 de niveau 3. \nDans ce TP, nous nous interesserons uniquement à classer les produits dans la catégorie de niveau 1.",
"data_train.head(5)",
"La commande suivante permet d'afficher un exemple de produits pour chaque Catégorie de niveau 1.",
"data_train.groupby(\"Categorie1\").first()[[\"Description\",\"Libelle\",\"Marque\"]]",
"Distribution des classes",
"#Count occurence of each Categorie\ndata_count = data_train[\"Categorie1\"].value_counts()\n#Rename index to add percentage\nnew_index = [k+ \": %.2f%%\" %(v*100/N_train) for k,v in data_count.iteritems()]\ndata_count.index = new_index\n\nfig=plt.figure(figsize= (10,10))\nax = fig.add_subplot(1,1,1)\ndata_count.plot.barh(logx = False)\nplt.show()",
"Q Que peut-on dire sur la distribution de ces classes?\nSauvegarde des données\nOn sauvegarde dans des csv les fichiers train et validation afin que ces mêmes fichiers soit ré-utilisés plus tard dans d'autre calepin",
"data_valid.to_csv(\"data/cdiscount_valid.csv\", index=False)\ndata_train.to_csv(\"data/cdiscount_train_subset.csv\", index=False)",
"Nettoyage des données\nAfin de limiter la dimension de l'espace des variables ou features (i.e les mots présents dans le document), tout en conservant les informations essentielles, il est nécessaire de nettoyer les données en appliquant plusieurs étapes:\n\nChaque mot est écrit en minuscule.\nLes termes numériques, de ponctuation et autres symboles sont supprimés.\n155 mots-courants, et donc non informatifs, de la langue française sont supprimés (STOPWORDS). Ex: le, la, du, alors, etc...\nChaque mot est \"racinisé\", via la fonction STEMMER.stem de la librairie nltk. La racinisation transforme un mot en son radical ou sa racine. Par exemple, les mots: cheval, chevaux, chevalier, chevalerie, chevaucher sont tous remplacés par \"cheva\".\n\nExemple\nObservons dans un premier temps l'effet de ces différentes étapes sur un exemple. \nLigne Originale",
"i = 0\ndescription = data_train.Description.values[i]\nprint(\"Original Description : \" + description)",
"Suppression des posibles balises HTML dans la description\nLes descriptions produits étant parfois extraites d'autres sites commerçant, des balises HTML peuvent être incluts dans la description. \nLa librairie 'BeautifulSoup' permet de supprimer ces balises",
"from bs4 import BeautifulSoup #Nettoyage d'HTML\ntxt = BeautifulSoup(description,\"html.parser\",from_encoding='utf-8').get_text()\nprint(txt)",
"Conversion du texte en minuscule\nCertaines mots peuvent être écrits en majuscule dans les descriptions textes, cela à pour conséquence de dupliquer le nombre de features et une perte d'information.",
"txt = txt.lower()\nprint(txt)",
"Remplacement de caractères spéciaux\nCertains caractères spéciaux sont supprimés comme par exemple :\n\n\\u2026: …\n\\u00a0: NO-BREAK SPACE\n\nCette liste est non exhaustive et peut être etayée en fonction du jeu de donées étudié, de l'objectif souhaité ou encore du résultat de l'étude explorative.",
"txt = txt.replace(u'\\u2026','.') \ntxt = txt.replace(u'\\u00a0',' ')\nprint(txt)",
"Suppression des accents",
"txt = unicodedata.normalize('NFD', txt).encode('ascii', 'ignore').decode(\"utf-8\")\nprint(txt)",
"Supprime les caractères qui ne sont ne sont pas des lettres minuscules\nUne fois ces premières étapes passées, on supprime tous les caractères qui sont pas des lettres minusculres, c'est à dire les signes de ponctuation, les caractères numériques etc...",
"txt = re.sub('[^a-z_]', ' ', txt)\nprint(txt)",
"Remplace la description par une liste de mots (tokens), supprime les mots de moins de 2 lettres ainsi que les stopwords\nOn va supprimer maintenant tous les mots considérés comme \"non-informatif\". Par exemple : \"le\", \"la\", \"de\" ...\nDes listes contenants ces mots sont proposés dans des libraires tels que nltk ou encore lucène.",
"## listes de mots à supprimer dans la description des produits\n## Depuis NLTK\nnltk_stopwords = nltk.corpus.stopwords.words('french') \n## Depuis Un fichier externe.\nlucene_stopwords =open(\"data/lucene_stopwords.txt\",\"r\").read().split(\",\") #En local\n## Union des deux fichiers de stopwords \nstopwords = list(set(nltk_stopwords).union(set(lucene_stopwords)))\n\nstopwords[:10]",
"On applique également la suppression des accents à cette liste",
"stopwords = [unicodedata.normalize('NFD', sw).encode('ascii', 'ignore').decode(\"utf-8\") for sw in stopwords]\nstopwords[:10]",
"Enfin on crée des tokens, liste de mots dans la description produit, en supprimant les éléments de notre description produit qui sont présent dans la liste de stopword.",
"tokens = [w for w in txt.split() if (len(w)>2) and (w not in stopwords)]\nremove_words = [w for w in txt.split() if (len(w)<2) or (w in stopwords)]\n\nprint(tokens)\nprint(remove_words)",
"Racinisation (Stem) chaque tokens\nPour chaque mot de notre liste de token, on va ramener ce mot à sa racine au sens de l'algorithme de Snowball présent dans la librairie nltk. \nCette liste de mots néttoyé et racinisé va constitué les features de cette description produits.",
"## Fonction de setmming de stemming permettant la racinisation\nstemmer=nltk.stem.SnowballStemmer('french')\ntokens_stem = [stemmer.stem(token) for token in tokens]\nprint(tokens_stem)",
"Fonction de nettoyage de texte\nOn définit une fonction clean-txt qui prend en entrée un texte de description produit et qui retourne le texte nettoyé en appliquant successivement les étapes présentés précedemment. \nOn définit également une fonction clean_marque qui contient signifcativement moins d'étape de nettoyage.",
"# Fonction clean générale\ndef clean_txt(txt):\n ### remove html stuff\n txt = BeautifulSoup(txt,\"html.parser\",from_encoding='utf-8').get_text()\n ### lower case\n txt = txt.lower()\n ### special escaping character '...'\n txt = txt.replace(u'\\u2026','.')\n txt = txt.replace(u'\\u00a0',' ')\n ### remove accent btw\n txt = unicodedata.normalize('NFD', txt).encode('ascii', 'ignore').decode(\"utf-8\")\n ###txt = unidecode(txt)\n ### remove non alphanumeric char\n txt = re.sub('[^a-z_]', ' ', txt)\n ### remove french stop words\n tokens = [w for w in txt.split() if (len(w)>2) and (w not in stopwords)]\n ### french stemming\n tokens_stem = [stemmer.stem(token) for token in tokens]\n ### tokens = stemmer.stemWords(tokens)\n return ' '.join(tokens), \" \".join(tokens_stem)\n\ndef clean_marque(txt):\n txt = re.sub('[^a-zA-Z0-9]', '_', txt).lower()\n return txt",
"Applique le nettoyage sur toutes les lignes de la DataFrame et créé deux nouvelles Dataframe (avant et sans l'étape de racinisation).",
"\n# fonction de nettoyage du fichier(stemming et liste de mots à supprimer)\ndef clean_df(input_data, column_names= ['Description', 'Libelle', 'Marque']):\n\n nb_line = input_data.shape[0]\n print(\"Start Clean %d lines\" %nb_line)\n \n # Cleaning start for each columns\n time_start = time.time()\n clean_list=[]\n clean_stem_list=[]\n for column_name in column_names:\n column = input_data[column_name].values\n if column_name == \"Marque\":\n array_clean = np.array(list(map(clean_marque,column)))\n clean_list.append(array_clean)\n clean_stem_list.append(array_clean)\n else:\n A = np.array(list(map(clean_txt,column)))\n array_clean = A[:,0]\n array_clean_stem = A[:,1]\n clean_list.append(array_clean)\n clean_stem_list.append(array_clean_stem)\n time_end = time.time()\n print(\"Cleaning time: %d secondes\"%(time_end-time_start))\n \n #Convert list to DataFrame\n array_clean = np.array(clean_list).T\n data_clean = pd.DataFrame(array_clean, columns = column_names)\n \n array_clean_stem = np.array(clean_stem_list).T\n data_clean_stem = pd.DataFrame(array_clean_stem, columns = column_names)\n return data_clean, data_clean_stem",
"Nettoyage des DataFrames",
"# Take approximately 2 minutes fors 100.000 rows\nwarnings.filterwarnings(\"ignore\")\ndata_valid_clean, data_valid_clean_stem = clean_df(data_valid)\n\nwarnings.filterwarnings(\"ignore\")\ndata_train_clean, data_train_clean_stem = clean_df(data_train)",
"Affiche les 5 premières lignes de la DataFrame d'apprentissage après nettoyage.",
"data_train_clean.head(5)\n\ndata_train_clean_stem.head(5)",
"Taille du dictionnaire de mots pour le dataset avant et après la racinisation.",
"concatenate_text = \" \".join(data_train[\"Description\"].values)\nlist_of_word = concatenate_text.split(\" \")\nN = len(set(list_of_word))\nprint(N)\n\nconcatenate_text = \" \".join(data_train_clean[\"Description\"].values)\nlist_of_word = concatenate_text.split(\" \")\nN = len(set(list_of_word))\nprint(N)\n\nconcatenate_text = \" \".join(data_train_clean_stem[\"Description\"].values)\nlist_of_word_stem = concatenate_text.split(\" \")\nN = len(set(list_of_word_stem))\nprint(N)",
"Wordcloud\nLes représentations Wordcloud permettent des représentations de l'ensemble des mots d'un corpus de documents. Dans cette représentation plus un mot apparait de manière fréquent dans le corpus, plus sa taille sera grande dans la représentation du corpus.",
"from wordcloud import WordCloud\n\nA=WordCloud(background_color=\"black\")\nA.generate_from_text?",
"Wordcloud de l'ensemble des description à l'état brut.",
"all_descr = \" \".join(data_valid.Description.values)\nwordcloud_word = WordCloud(background_color=\"black\", collocations=False).generate_from_text(all_descr)\n\nplt.figure(figsize=(10,10))\nplt.imshow(wordcloud_word,cmap=plt.cm.Paired)\nplt.axis(\"off\")\nplt.show()",
"Wordcloud après racinisation et nettoyage",
"all_descr_clean_stem = \" \".join(data_valid_clean_stem.Description.values)\nwordcloud_word = WordCloud(background_color=\"black\", collocations=False).generate_from_text(all_descr_clean_stem)\n\nplt.figure(figsize=(10,10))\nplt.imshow(wordcloud_word,cmap=plt.cm.Paired)\nplt.axis(\"off\")\nplt.show()",
"Vous pouvez observer que les mots \"voir et \"present\" sont les plus représentés. Cela est du au fait que la pluspart des descriptions se terminent par \"Voir la présentation\". C'est deux mots ne sont donc pas informatif car présent dans beaucoup de catégorie différente. C'est une bon exemple de stopword propre à un problème spécifique.\nExercice Ajouter les mots voiret présentationà la liste des stopwords plus hauts et refaites tourner le nettoyage.\nExercice Générer les wordcloud par catégorie pour 3 catégories de votre choix.\nSauvegarde des jeux de données nettoyés dans des fichiers csv.",
"data_valid_clean.to_csv(\"data/cdiscount_valid_clean.csv\", index=False)\ndata_train_clean.to_csv(\"data/cdiscount_train_clean.csv\", index=False)\n\ndata_valid_clean_stem.to_csv(\"data/cdiscount_valid_clean_stem.csv\", index=False)\ndata_train_clean_stem.to_csv(\"data/cdiscount_train_clean_stem.csv\", index=False)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
stonebig/winpython_afterdoc | docs/Winpython_checker.ipynb | mit | [
"Winpython Default checker",
"import warnings\n#warnings.filterwarnings(\"ignore\", category=DeprecationWarning)\n#warnings.filterwarnings(\"ignore\", category=UserWarning)\n#warnings.filterwarnings(\"ignore\", category=FutureWarning)\n# warnings.filterwarnings(\"ignore\") # would silence all warnings\n\n%matplotlib inline\n# use %matplotlib widget for the adventurous",
"Compilers: Numba and Cython\nRequirement\nTo get Cython working, Winpython 3.7+ users should install \"Microsoft Visual C++ Build Tools 2017\" (visualcppbuildtools_full.exe, a 4 Go installation) at https://beta.visualstudio.com/download-visual-studio-vs/\nTo get Numba working, not-windows10 users may have to install \"Microsoft Visual C++ Redistributable pour Visual Studio 2017\" (vc_redist) at https://beta.visualstudio.com/download-visual-studio-vs/\nThanks to recent progress, Visual Studio 2017/2018/2019 are cross-compatible now\nCompiler toolchains\nNumba (a JIT Compiler)",
"# checking Numba JIT toolchain\nimport numpy as np\nimage = np.zeros((1024, 1536), dtype = np.uint8)\n\n#from pylab import imshow, show\nimport matplotlib.pyplot as plt\nfrom timeit import default_timer as timer\n\nfrom numba import jit\n\n@jit\ndef create_fractal(min_x, max_x, min_y, max_y, image, iters , mandelx):\n height = image.shape[0]\n width = image.shape[1]\n pixel_size_x = (max_x - min_x) / width\n pixel_size_y = (max_y - min_y) / height\n \n for x in range(width):\n real = min_x + x * pixel_size_x\n for y in range(height):\n imag = min_y + y * pixel_size_y\n color = mandelx(real, imag, iters)\n image[y, x] = color\n\n@jit\ndef mandel(x, y, max_iters):\n c = complex(x, y)\n z = 0.0j\n for i in range(max_iters):\n z = z*z + c\n if (z.real*z.real + z.imag*z.imag) >= 4:\n return i\n return max_iters\n\n# Numba speed\nstart = timer()\ncreate_fractal(-2.0, 1.0, -1.0, 1.0, image, 20 , mandel) \ndt = timer() - start\n\nfig = plt.figure()\nprint (\"Mandelbrot created by numba in %f s\" % dt)\nplt.imshow(image)\nplt.show()",
"Cython (a compiler for writing C extensions for the Python language)\nWinPython 3.5 and 3.6 users may not have mingwpy available, and so need \"VisualStudio C++ Community Edition 2015\" https://www.visualstudio.com/downloads/download-visual-studio-vs#d-visual-c",
"# Cython + Mingwpy compiler toolchain test\n%load_ext Cython\n\n%%cython -a\n# with %%cython -a , full C-speed lines are shown in white, slowest python-speed lines are shown in dark yellow lines \n# ==> put your cython rewrite effort on dark yellow lines\ndef create_fractal_cython(min_x, max_x, min_y, max_y, image, iters , mandelx):\n height = image.shape[0]\n width = image.shape[1]\n pixel_size_x = (max_x - min_x) / width\n pixel_size_y = (max_y - min_y) / height\n \n for x in range(width):\n real = min_x + x * pixel_size_x\n for y in range(height):\n imag = min_y + y * pixel_size_y\n color = mandelx(real, imag, iters)\n image[y, x] = color\n\ndef mandel_cython(x, y, max_iters):\n cdef int i \n cdef double cx, cy , zx, zy\n cx , cy = x, y \n zx , zy =0 ,0 \n for i in range(max_iters):\n zx , zy = zx*zx - zy*zy + cx , zx*zy*2 + cy\n if (zx*zx + zy*zy) >= 4:\n return i\n return max_iters\n\n#Cython speed\nstart = timer()\ncreate_fractal_cython(-2.0, 1.0, -1.0, 1.0, image, 20 , mandel_cython) \ndt = timer() - start\n\nfig = plt.figure()\nprint (\"Mandelbrot created by cython in %f s\" % dt)\nplt.imshow(image)",
"Graphics: Matplotlib, Pandas, Seaborn, Holoviews, Bokeh, bqplot, ipyleaflet, plotnine",
"# Matplotlib 3.4.1\n# for more examples, see: http://matplotlib.org/gallery.html\nfrom mpl_toolkits.mplot3d import axes3d\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\n\nax = plt.figure().add_subplot(projection='3d')\nX, Y, Z = axes3d.get_test_data(0.05)\n\n# Plot the 3D surface\nax.plot_surface(X, Y, Z, rstride=8, cstride=8, alpha=0.3)\n\n# Plot projections of the contours for each dimension. By choosing offsets\n# that match the appropriate axes limits, the projected contours will sit on\n# the 'walls' of the graph\ncset = ax.contourf(X, Y, Z, zdir='z', offset=-100, cmap=cm.coolwarm)\ncset = ax.contourf(X, Y, Z, zdir='x', offset=-40, cmap=cm.coolwarm)\ncset = ax.contourf(X, Y, Z, zdir='y', offset=40, cmap=cm.coolwarm)\n\nax.set_xlim(-40, 40)\nax.set_ylim(-40, 40)\nax.set_zlim(-100, 100)\n\nax.set_xlabel('X')\nax.set_ylabel('Y')\nax.set_zlabel('Z')\n\nplt.show()\n\n# Seaborn\n# for more examples, see http://stanford.edu/~mwaskom/software/seaborn/examples/index.html\nimport seaborn as sns\nsns.set()\ndf = sns.load_dataset(\"iris\")\nsns.pairplot(df, hue=\"species\", height=1.5)\n\n# altair-example \nimport altair as alt\n\nalt.Chart(df).mark_bar().encode(\n x=alt.X('sepal_length', bin=alt.Bin(maxbins=50)),\n y='count(*):Q',\n color='species:N',\n #column='species',\n).interactive()\n\n# temporary warning removal\nimport warnings\nimport matplotlib as mpl\nwarnings.filterwarnings(\"ignore\", category=mpl.cbook.MatplotlibDeprecationWarning)\n# Holoviews\n# for more example, see http://holoviews.org/Tutorials/index.html\nimport numpy as np\nimport holoviews as hv\nhv.extension('matplotlib')\ndots = np.linspace(-0.45, 0.45, 11)\nfractal = hv.Image(image)\n\nlayouts = {y: (fractal * hv.Points(fractal.sample([(i,y) for i in dots])) +\n fractal.sample(y=y) )\n for y in np.linspace(0, 0.45,11)}\n\nhv.HoloMap(layouts, kdims=['Y']).collate().cols(2)\n\n# Bokeh 0.12.5 \nimport numpy as np\nfrom six.moves import zip\nfrom bokeh.plotting import figure, show, output_notebook\nN = 4000\nx = np.random.random(size=N) * 100\ny = np.random.random(size=N) * 100\nradii = np.random.random(size=N) * 1.5\ncolors = [\"#%02x%02x%02x\" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)]\n\noutput_notebook()\nTOOLS=\"hover,crosshair,pan,wheel_zoom,box_zoom,reset,tap,save,box_select,poly_select,lasso_select\"\n\np = figure(tools=TOOLS)\np.scatter(x,y, radius=radii, fill_color=colors, fill_alpha=0.6, line_color=None)\nshow(p)\n\n# Datashader (holoviews+Bokeh)\nimport datashader as ds\nimport numpy as np\nimport holoviews as hv\n\nfrom holoviews import opts\nfrom holoviews.operation.datashader import datashade, shade, dynspread, spread, rasterize\nfrom holoviews.operation import decimate\n\nhv.extension('bokeh')\n\ndecimate.max_samples=1000\ndynspread.max_px=20\ndynspread.threshold=0.5\n\ndef random_walk(n, f=5000):\n \"\"\"Random walk in a 2D space, smoothed with a filter of length f\"\"\"\n xs = np.convolve(np.random.normal(0, 0.1, size=n), np.ones(f)/f).cumsum()\n ys = np.convolve(np.random.normal(0, 0.1, size=n), np.ones(f)/f).cumsum()\n xs += 0.1*np.sin(0.1*np.array(range(n-1+f))) # add wobble on x axis\n xs += np.random.normal(0, 0.005, size=n-1+f) # add measurement noise\n ys += np.random.normal(0, 0.005, size=n-1+f)\n return np.column_stack([xs, ys])\n\ndef random_cov():\n \"\"\"Random covariance for use in generating 2D Gaussian distributions\"\"\"\n A = np.random.randn(2,2)\n return np.dot(A, A.T)\n\n\nnp.random.seed(1)\npoints = hv.Points(np.random.multivariate_normal((0,0), [[0.1, 0.1], 
[0.1, 1.0]], (50000,)),label=\"Points\")\npaths = hv.Path([0.15*random_walk(10000) for i in range(10)], kdims=[\"u\",\"v\"], label=\"Paths\")\ndecimate(points) + rasterize(points) + rasterize(paths)\n\nropts = dict(colorbar=True, tools=[\"hover\"], width=350)\nrasterize( points).opts(cmap=\"kbc_r\", cnorm=\"linear\").relabel('rasterize()').opts(**ropts).hist() + \\\ndynspread(datashade( points, cmap=\"kbc_r\", cnorm=\"linear\").relabel(\"datashade()\"))\n\n#bqplot\nfrom IPython.display import display\nfrom bqplot import (Figure, Map, Mercator, Orthographic, ColorScale, ColorAxis,\n AlbersUSA, topo_load, Tooltip)\ndef_tt = Tooltip(fields=['id', 'name'])\nmap_mark = Map(scales={'projection': Mercator()}, tooltip=def_tt)\nmap_mark.interactions = {'click': 'select', 'hover': 'tooltip'}\nfig = Figure(marks=[map_mark], title='Interactions Example')\ndisplay(fig)\n\n# ipyleaflet (javascript library usage)\nfrom ipyleaflet import (\n Map, Marker, TileLayer, ImageOverlay, Polyline, Polygon,\n Rectangle, Circle, CircleMarker, GeoJSON, DrawControl\n)\nfrom traitlets import link\ncenter = [34.6252978589571, -77.34580993652344]\nm = Map(center=[34.6252978589571, -77.34580993652344], zoom=10)\ndc = DrawControl()\n\ndef handle_draw(self, action, geo_json):\n print(action)\n print(geo_json)\nm\nm\n\ndc.on_draw(handle_draw)\nm.add_control(dc)\n\n%matplotlib widget\n# Testing matplotlib interactions with a simple plot\nimport matplotlib.pyplot as plt\nimport numpy as np\n# warning ; you need to launch a second time %matplotlib widget, if after a %matplotlib inline \n%matplotlib widget\n\nfig = plt.figure() #plt.figure(1)\nplt.plot(np.sin(np.linspace(0, 20, 100)))\nplt.show()\n\n\n# plotnine: giving a taste of ggplot of R langage (formerly we were using ggpy)\nfrom plotnine import ggplot, aes, geom_blank, geom_point, stat_smooth, facet_wrap, theme_bw\nfrom plotnine.data import mtcars\nggplot(mtcars, aes(x='hp', y='wt', color='mpg')) + geom_point() +\\\nfacet_wrap(\"~cyl\") + theme_bw()",
"Ipython Notebook: Interactivity & other",
"import IPython;IPython.__version__\n\n# Audio Example : https://github.com/ipython/ipywidgets/blob/master/examples/Beat%20Frequencies.ipynb\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom ipywidgets import interactive\nfrom IPython.display import Audio, display\ndef beat_freq(f1=220.0, f2=224.0):\n max_time = 3\n rate = 8000\n times = np.linspace(0,max_time,rate*max_time)\n signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)\n print(f1, f2, abs(f1-f2))\n display(Audio(data=signal, rate=rate))\n try:\n plt.plot(signal); #plt.plot(v.result);\n except:\n pass\n return signal\nv = interactive(beat_freq, f1=(200.0,300.0), f2=(200.0,300.0))\ndisplay(v)\n\n# Networks graph Example : https://github.com/ipython/ipywidgets/blob/master/examples/Exploring%20Graphs.ipynb\n%matplotlib inline\nfrom ipywidgets import interact\nimport matplotlib.pyplot as plt\nimport networkx as nx\n# wrap a few graph generation functions so they have the same signature\n\ndef random_lobster(n, m, k, p):\n return nx.random_lobster(n, p, p / m)\n\ndef powerlaw_cluster(n, m, k, p):\n return nx.powerlaw_cluster_graph(n, m, p)\n\ndef erdos_renyi(n, m, k, p):\n return nx.erdos_renyi_graph(n, p)\n\ndef newman_watts_strogatz(n, m, k, p):\n return nx.newman_watts_strogatz_graph(n, k, p)\n\n@interact(n=(2,30), m=(1,10), k=(1,10), p=(0.0, 1.0, 0.001),\n generator={'lobster': random_lobster,\n 'power law': powerlaw_cluster,\n 'Newman-Watts-Strogatz': newman_watts_strogatz,\n u'Erdős-Rényi': erdos_renyi,\n })\ndef plot_random_graph(n, m, k, p, generator):\n g = generator(n, m, k, p)\n nx.draw(g)\n plt.title(generator.__name__)\n plt.show()\n ",
"Mathematical: statsmodels, lmfit,",
"# checking statsmodels\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')\nimport statsmodels.api as sm\ndata = sm.datasets.anes96.load_pandas()\nparty_ID = np.arange(7)\nlabels = [\"Strong Democrat\", \"Weak Democrat\", \"Independent-Democrat\",\n \"Independent-Independent\", \"Independent-Republican\",\n \"Weak Republican\", \"Strong Republican\"]\nplt.rcParams['figure.subplot.bottom'] = 0.23 # keep labels visible\nplt.rcParams['figure.figsize'] = (6.0, 4.0) # make plot larger in notebook\nage = [data.exog['age'][data.endog == id] for id in party_ID]\nfig = plt.figure()\nax = fig.add_subplot(111)\nplot_opts={'cutoff_val':5, 'cutoff_type':'abs',\n 'label_fontsize':'small',\n 'label_rotation':30}\nsm.graphics.beanplot(age, ax=ax, labels=labels,\n plot_opts=plot_opts)\nax.set_xlabel(\"Party identification of respondent\")\nax.set_ylabel(\"Age\")\nplt.show()\n\n# lmfit test (from http://nbviewer.ipython.org/github/lmfit/lmfit-py/blob/master/examples/lmfit-model.ipynb)\nimport numpy as np\nimport matplotlib.pyplot as plt\ndef decay(t, N, tau):\n return N*np.exp(-t/tau)\nt = np.linspace(0, 5, num=1000)\ndata = decay(t, 7, 3) + np.random.randn(*t.shape)\n\nfrom lmfit import Model\n\nmodel = Model(decay, independent_vars=['t'])\nresult = model.fit(data, t=t, N=10, tau=1)\nfig = plt.figure() # necessary to separate from previous ploot with %matplotlib widget\nplt.plot(t, data) # data\nplt.plot(t, decay(t=t, **result.values), color='orange', linewidth=5) # best-fit model",
"DataFrames: Pandas, Dask",
"#Pandas \nimport pandas as pd\nimport numpy as np\n\nidx = pd.date_range('2000', '2005', freq='d', closed='left')\ndatas = pd.DataFrame({'Color': [ 'green' if x> 1 else 'red' for x in np.random.randn(len(idx))], \n 'Measure': np.random.randn(len(idx)), 'Year': idx.year},\n index=idx.date)\ndatas.head()",
"Split / Apply / Combine\nSplit your data into multiple independent groups.\nApply some function to each group.\nCombine your groups back into a single data object.",
"datas.query('Measure > 0').groupby(['Color','Year']).size().unstack()",
"Web Scraping: Beautifulsoup",
"# checking Web Scraping: beautifulsoup and requests \nimport requests\nfrom bs4 import BeautifulSoup\n\nURL = 'http://en.wikipedia.org/wiki/Franklin,_Tennessee'\n\nreq = requests.get(URL, headers={'User-Agent' : \"Mining the Social Web\"})\nsoup = BeautifulSoup(req.text, \"lxml\")\n\ngeoTag = soup.find(True, 'geo')\n\nif geoTag and len(geoTag) > 1:\n lat = geoTag.find(True, 'latitude').string\n lon = geoTag.find(True, 'longitude').string\n print ('Location is at', lat, lon)\nelif geoTag and len(geoTag) == 1:\n (lat, lon) = geoTag.string.split(';')\n (lat, lon) = (lat.strip(), lon.strip())\n print ('Location is at', lat, lon)\nelse:\n print ('No location found')",
"Operations Research: Pulp",
"# Pulp example : minimizing the weight to carry 99 pennies\n# (from Philip I Thomas)\n# see https://www.youtube.com/watch?v=UmMn-N5w-lI#t=995\n# Import PuLP modeler functions\nfrom pulp import *\n# The prob variable is created to contain the problem data \nprob = LpProblem(\"99_pennies_Problem\",LpMinimize)\n\n# Variables represent how many of each coin we want to carry\npennies = LpVariable(\"Number_of_pennies\",0,None,LpInteger)\nnickels = LpVariable(\"Number_of_nickels\",0,None,LpInteger)\ndimes = LpVariable(\"Number_of_dimes\",0,None,LpInteger)\nquarters = LpVariable(\"Number_of_quarters\",0,None,LpInteger)\n\n# The objective function is added to 'prob' first\n\n# we want to minimize (LpMinimize) this \nprob += 2.5 * pennies + 5 * nickels + 2.268 * dimes + 5.670 * quarters, \"Total_coins_Weight\"\n\n# We want exactly 99 cents\nprob += 1 * pennies + 5 * nickels + 10 * dimes + 25 * quarters == 99, \"\"\n\n# The problem data is written to an .lp file\nprob.writeLP(\"99cents.lp\")\nprob.solve()\n\n# print (\"status\",LpStatus[prob.status] )\nprint (\"Minimal Weight to carry exactly 99 pennies is %s grams\" % value(prob.objective))\n# Each of the variables is printed with it's resolved optimum value\nfor v in prob.variables():\n print (v.name, \"=\", v.varValue)",
"Deep Learning: see tutorial-first-neural-network-python-keras\nSymbolic Calculation: sympy",
"# checking sympy \nimport sympy\na, b =sympy.symbols('a b')\ne=(a+b)**5\ne.expand()",
"SQL tools: sqlite, Ipython-sql, sqlite_bro, baresql, db.py",
"# checking Ipython-sql, sqlparse, SQLalchemy\n%load_ext sql\n\n%%sql sqlite:///.baresql.db\nDROP TABLE IF EXISTS writer;\nCREATE TABLE writer (first_name, last_name, year_of_death);\nINSERT INTO writer VALUES ('William', 'Shakespeare', 1616);\nINSERT INTO writer VALUES ('Bertold', 'Brecht', 1956);\nSELECT * , sqlite_version() as sqlite_version from Writer order by Year_of_death\n\n# checking baresql\nfrom __future__ import print_function, unicode_literals, division # line needed only if Python2.7\nfrom baresql import baresql\nbsql = baresql.baresql(connection=\"sqlite:///.baresql.db\")\nbsqldf = lambda q: bsql.df(q, dict(globals(),**locals()))\n\nusers = ['Alexander', 'Billy', 'Charles', 'Danielle', 'Esmeralda', 'Franz', 'Greg']\n# We use the python 'users' list like a SQL table\nsql = \"select 'Welcome ' || c0 || ' !' as say_hello, length(c0) as name_length from users$$ where c0 like '%a%' \"\nbsqldf(sql)\n\n# Transfering Datas to sqlite, doing transformation in sql, going back to Pandas and Matplotlib\nbsqldf('''\nselect Color, Year, count(*) as size \nfrom datas$$ \nwhere Measure > 0 \ngroup by Color, Year'''\n ).set_index(['Year', 'Color']).unstack().plot(kind='bar')\n\n# checking db.py\nfrom db import DB\ndb=DB(dbtype=\"sqlite\", filename=\".baresql.db\")\ndb.query(\"select sqlite_version() as sqlite_version ;\") \n\ndb.tables\n\n# checking sqlite_bro: this should lanch a separate non-browser window with sqlite_bro's welcome\n!cmd start cmd /C sqlite_bro\n\n# pyodbc or pypyodbc or ceODBC\ntry:\n import pyodbc\nexcept ImportError:\n import pypyodbc as pyodbc # on PyPy, there is no pyodbc currently\n \n# look for pyodbc providers\nsources = pyodbc.dataSources()\ndsns = list(sources.keys())\nsl = [' %s [%s]' % (dsn, sources[dsn]) for dsn in dsns]\nprint(\"pyodbc Providers: (beware 32/64 bit driver and python version must match)\\n\", '\\n'.join(sl))\n\n# pythonnet\nimport clr\nclr.AddReference(\"System.Data\")\nclr.AddReference('System.Data.Common')\nimport System.Data.OleDb as ADONET\nimport System.Data.Odbc as ODBCNET\nimport System.Data.Common as DATACOM\n\ntable = DATACOM.DbProviderFactories.GetFactoryClasses()\nprint(\"\\n .NET Providers: (beware 32/64 bit driver and python version must match)\")\nfor row in table.Rows:\n print(\" %s\" % row[table.Columns[0]])\n print(\" \",[row[column] for column in table.Columns if column != table.Columns[0]])",
"Qt libraries Demo\nSee Dedicated Qt Libraries Demo\nWrap-up",
"# optional scipy full test (takes up to 10 minutes)\n#!cmd /C start cmd /k python.exe -c \"import scipy;scipy.test()\"\n\n%pip list\n\n!jupyter labextension list\n\n!pip check\n\n!pipdeptree\n\n!pipdeptree -p pip"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
lee212/simpleazure | ipynb/Use Case - NIST Pedestrian and Face Detection on Simple Azure (under development).ipynb | gpl-3.0 | [
"Pedestrian and Face Detection on Simple Azure\nPedestrian and Face Detection uses OpenCV to identify people standing in a picture or a video and NIST use case in this document is built with Apache Spark and Mesos clusters on multiple compute nodes.\nSimple Azure supports deploying software stacks for the NIST Pedestrian and Face Detection use case on top of Azure compute resources with the templates.\nOriginal | Pedestrian Detected\n:-----------------------------------:|:------------------------------------------------------:\n|\nOriginal | Pedestrian and Face Detected\n:---------------------------------------:|:----------------------------------------------------------:\n|\nIntroduction\nHuman (pedestrian) detection and face detection have been studied during the last several years and models for them have improved along with Histograms of Oriented Gradients (HOG) for Human Detection [1]. OpenCV is a Computer Vision library including the SVM classifier and the HOG object detector for pedestrian detection and INRIA Person Dataset [2] is one of popular samples for both training and testing purposes. In this document, we deploy Apache Spark on Mesos clusters to train and apply detection models from OpenCV using Python API.\nAnsible Automation Tool\nAnsible is a python tool to install/configure/manage software on multiple machines with JSON files where system descriptions are defined. There are reasons why we use Ansible:\n\nExpandable: Leverages Python (default) but modules can be written in any language\nAgentless: no setup required on managed node\nSecurity: Allows deployment from user space; uses ssh for authentication\nFlexibility: only requires ssh access to privileged user\nTransparency: YAML Based script files express the steps of installing and configuring software\nModularity: Single Ansible Role (should) contain all required commands and variables to deploy software package independently\nSharing and portability: roles are available from source (github, bitbucket, gitlab, etc) or the Ansible Galaxy portal\n\nINRIA Person Dataset\nThis dataset contains positive and negative images for training and test purposes with annotation files for upright persons in each image. 288 positive test images, 453 negative test images, 614 positive training images and 1218 negative training images are included along with normalized 64x128 pixel formats. 970MB dataset is available to download [3].\nHOG with SVM model\nHistogram of Oriented Gradient (HOG) and Support Vector Machine (SVM) are used as object detectors and classifiers and built-in python libraries from OpenCV provide these models for human detection.\nDeployment by Ansible\nWhen it comes to deploy applications and build clusters for batch-processing large datasets, Ansible scripts play a big role such as installation and configuration towards available machines. Ansible provides abstractions by Playbook Roles and reusability by Include statements. We define X application in X Ansible Role, for example, and use include statements to combine with other applications e.g. Y or Z. Five Ansible roles are used in this use case to build clusters for Human and Face Detection with INRIA dataset. The main Ansible playbook runs Ansible roles in order which looks like:\n```\ninclude: sched/00-mesos.yml\ninclude: proc/01-spark.yml\ninclude: apps/02-opencv.yml\ninclude: data/03-inria-dataset.yml\nInclude: anlys/04-human-face-detection.yml\n```\nDirectory names e.g. 
sched, proc, data, or anlys indicate BDSS layers like:\n- sched: scheduler layer\n- proc: data processing layer\n- apps: application layer\n- data: dataset layer\n- anlys: analytics layer\nand two digits in the filename indicate an order of roles to be run. \nIt is assumed that virtual machines are created by virtual-cluster-libs, the command line tool to start VM instances. For example on OpenStack, vcl boot -p openstack -P $USER- command starts a set of virtual machine instances with a cluster definition file .cluster.py. The number of machines and groups for clusters e.g. namenodes and datanodes are specified in the file and Ansible inventory file, a list of target machines with groups, is generated once machines are ready to use. Ansible roles run to install applications on virtual clusters.\nMesos role is installed first with Ansible inventory groups for masters and slaves in which mesos-master runs on the masters group and mesos-slave runs on the slaves group. Apache Zookeeper is included in the mesos role so that mesos slaves find an elected mesos leader from the zookeeper. Spark, as a data processing layer, provides two options for distributed job processing, batch job processing via a cluster mode and real-time processing via a client mode. The Mesos dispatcher runs on a masters group to accept a batch job submission and Spark interactive shell, which is the client mode, provides real-time processing on any node in the cluster. Either way, Spark is installed after a scheduler layer i.e. mesos to identify a master host for a job submission. Installation of OpenCV, INRIA Person Dataset and Human and Face Detection Python applications are followed.\nSoftware Stacks\nThe following software are expected in the stacks according to the github:\n\nmesos cluster (master, worker)\nspark (with dispatcher for mesos cluster mode)\nopenCV\nzookeeper\nINRIA Person Dataset\n\nDetection Analytics in Python\n\n\n[1] Dalal, Navneet, and Bill Triggs. \"Histograms of oriented gradients for human detection.\" 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). Vol. 1. IEEE, 2005. [pdf]\n\n[2] http://pascal.inrialpes.fr/data/human/\n[3] ftp://ftp.inrialpes.fr/pub/lear/douze/data/INRIAPerson.tar\n[4] https://docs.python.org/2/library/configparser.html\n\nSimple Azure with Ansible\nSimple Azure supports Ansible to import and run Ansible scripts towards target machines i.e. Azure virtual machines. In the previous tutorial, we've learned how to deploy 3 VMs from the 101-vm-sshkey template and we are going to use the three virtual machines in this example.\nServer groups (inventory)\nWe may separate compute nodes in two groups: masters and workers therefore Mesos masters and zookeeper quorums manage job requests and leaders and workers run actual tasks. Ansible needs group definitions in their inventory therefore software installation associated with a proper part is completed. \nQuick Instructions (under development)\nLoad SimpleAzure",
"from simpleazure import SimpleAzure\nsaz = SimpleAzure()",
"IP Addresses of Compute Nodes",
"ips = saz.arm.view_info()",
"Load Ansible API with IPs",
"from simpleazure.ansible_api import AnsibleAPI\nansible_client = AnsibleAPI(ips)",
"Download Ansible Playbooks from Github\nThe ansible scripts for Pedestrian and Face Detection is here: https://github.com/futuresystems/pedestrian-and-face-detection.\nWe clone the repository using Github command line tools.",
"from simpleazure.github_cli import GithubCLI\ngit_client = GithubCLI()\ngit_client.set_repo('https://github.com/futuresystems/pedestrian-and-face-detection')\ngit_client.clone()",
"Install Software Stacks to Targeted VMs",
"ansible_client.playbook(git_client.path + \"/site.yml\")\nansible_client.run()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
adamwang0705/cross_media_affect_analysis | develop/20171019-daheng-build_shed_words_freq_dicts.ipynb | mit | [
"Build selected Hedonometer words frequency dicts for topic_news and topic_tweets docs\nLast modified: 2017-10-23\nRoadmap\n\nCheck shed words pattern-matching requiremnts\nBuild shed words freq dicts for topic docs\n\nSteps",
"\"\"\"\nInitialization\n\"\"\"\n\n'''\nStandard modules\n'''\nimport os\nimport pickle\nimport csv\nimport time\nfrom pprint import pprint\n\n'''\nAnalysis modules\n'''\nimport pandas as pd\n\n\n'''\nCustom modules\n'''\nimport config\nimport utilities\n\n'''\nMisc\n'''\nnb_name = '20171019-daheng-build_shed_words_freq_dicts'",
"Check shed words pattern-matching requiremnts\nRef:\n - Dodds, P. S., Harris, K. D., Kloumann, I. M., Bliss, C. A., & Danforth, C. M. (2011). Temporal patterns of happiness and information in a global social network: Hedonometrics and Twitter. PloS one, 6(12), e26752.\nNotes:\n - See 2.1 Algorithm for Hedonometer P3\n - See Methods P23\nBuild shed words freq dicts for topic docs",
"\"\"\"\nCheck all shed words\n\"\"\"\nif 1 == 1:\n ind_shed_word_dict = pd.read_pickle(config.IND_SHED_WORD_DICT_PKL)\n print(ind_shed_word_dict.values())",
"Build single shed words freq dict for topic_news docs\nResult single dict format (for all topic_news docs)\n{topic_ind_0: {\n news_native_id_0_0: {shed_word_0_ind: shed_word_0_freq,\n shed_word_1_ind: shed_word_1_freq,\n ...},\n news_native_id_0_1: {shed_word_0_ind: shed_word_0_freq,\n shed_word_1_ind: shed_word_1_freq,\n ...},\n ...},\ntopic_ind_1: {\n news_native_id_1_0: {shed_word_0_ind: shed_word_0_freq,\n shed_word_1_ind: shed_word_1_freq,\n ...},\n news_native_id_1_1: {shed_word_0_ind: shed_word_0_freq,\n shed_word_1_ind: shed_word_1_freq,\n ...},\n ...},\n...}\nBuild single shed words freq dict for all topic_news docs",
"%%time\n\"\"\"\nBuild single shed words freq dict for all topic_news docs\n\nRegister\n TOPICS_NEWS_SHED_WORDS_FREQ_DICT_PKL = os.path.join(DATA_DIR, 'topics_news_shed_words_freq.dict.pkl')\nin config\n\"\"\"\nif 0 == 1:\n topics_news_shed_words_freq_dict = {}\n \n for topic_ind, topic in enumerate(config.MANUALLY_SELECTED_TOPICS_LST):\n localtime = time.asctime(time.localtime(time.time()))\n print('({}/{}) processing topic: {} ... {}'.format(topic_ind+1,\n len(config.MANUALLY_SELECTED_TOPICS_LST),\n topic['name'],\n localtime))\n \n topic_shed_words_freq_dict = {}\n \n '''\n Load shed_word and shed_word_ind mapping pkls\n '''\n ind_shed_word_dict = pd.read_pickle(config.IND_SHED_WORD_DICT_PKL)\n shed_word_ind_dict = pd.read_pickle(config.SHED_WORD_IND_DICT_PKL)\n shed_words_set = set(ind_shed_word_dict.values())\n \n '''\n Load topic_news doc\n '''\n csv.register_dialect('topics_docs_line', delimiter='\\t', doublequote=True, quoting=csv.QUOTE_ALL)\n topic_news_csv_file = os.path.join(config.TOPICS_DOCS_DIR, '{}-{}.news.csv'.format(topic_ind, topic['name']))\n with open(topic_news_csv_file, 'r') as f:\n reader = csv.DictReader(f, dialect='topics_docs_line')\n '''\n Count shed words freq for each tweet\n '''\n # lazy load\n for row in reader:\n news_native_id = int(row['news_native_id'])\n news_doc = row['news_doc']\n \n news_doc_shed_words_freq_dict = utilities.count_news_doc_shed_words_freq(news_doc, ind_shed_word_dict, shed_word_ind_dict, shed_words_set)\n \n topic_shed_words_freq_dict[news_native_id] = news_doc_shed_words_freq_dict\n \n topics_news_shed_words_freq_dict[topic_ind] = topic_shed_words_freq_dict\n \n '''\n Make pkl for result single dict\n '''\n with open(config.TOPICS_NEWS_SHED_WORDS_FREQ_DICT_PKL, 'wb') as f:\n pickle.dump(topics_news_shed_words_freq_dict, f)",
"Check basic statistics",
"\"\"\"\nPrint out sample news shed_words_freq_dicts inside single topic\n\"\"\"\nif 0 == 1:\n target_topic_ind = 0\n \n with open(config.TOPICS_NEWS_SHED_WORDS_FREQ_DICT_PKL, 'rb') as f:\n topics_news_shed_words_freq_dict = pickle.load(f)\n \n count = 0\n for news_native_id, news_doc_shed_words_freq_dict in topics_news_shed_words_freq_dict[target_topic_ind].items():\n print('news_native_id: {}'.format(news_native_id))\n print('\\t{}'.format(news_doc_shed_words_freq_dict))\n news_doc_shed_words_len = sum(news_doc_shed_words_freq_dict.values())\n print('\\tLEN: {}'.format(news_doc_shed_words_len))\n count += 1\n if count >= 5:\n break\n\n\n%%time\n\"\"\"\nCheck total shed words length of this topic_news doc\n\"\"\" \nif 0 == 1:\n topic_news_shed_words_len = sum([sum(news_doc_shed_words_freq_dict.values()) for news_doc_shed_words_freq_dict in topics_news_shed_words_freq_dict[target_topic_ind].values()])\n print('Total shed words length of this topic_news doc: {}'.format(topic_news_shed_words_len))",
"Build shed words freq dicts for each topic_tweets doc separately\nResult dict format (for each given topic_tweets doc)\n{tweet_id_0_0: {shed_word_0_ind: shed_word_0_freq,\n shed_word_1_ind: shed_word_1_freq,\n ...},\ntweet_id_0_1: {shed_word_0_ind: shed_word_0_freq,\n shed_word_1_ind: shed_word_1_freq,\n ...},\n...}\nBuild shed words freq dict for each topic separately",
"%%time\n\"\"\"\nBuild shed words freq dict for each topic separately\n\nRegister\n TOPICS_TWEETS_SHED_WORDS_FREQ_DICT_PKLS_DIR = os.path.join(DATA_DIR, 'topics_tweets_shed_words_freq_dict_pkls')\nin config\n\nNote:\n - Number of tweets is large. Process each topic_tweets doc individually to avoid crash\n - Execute second time for updated topic_tweets docs\n\"\"\"\nif 0 == 1:\n for topic_ind, topic in enumerate(config.MANUALLY_SELECTED_TOPICS_LST):\n localtime = time.asctime(time.localtime(time.time()))\n print('({}/{}) processing topic: {} ... {}'.format(topic_ind+1,\n len(config.MANUALLY_SELECTED_TOPICS_LST),\n topic['name'],\n localtime))\n \n topic_shed_words_freq_dict = {}\n \n '''\n Load shed_word and shed_word_ind mapping pkls\n '''\n ind_shed_word_dict = pd.read_pickle(config.IND_SHED_WORD_DICT_PKL)\n shed_word_ind_dict = pd.read_pickle(config.SHED_WORD_IND_DICT_PKL)\n shed_words_set = set(ind_shed_word_dict.values())\n \n '''\n Load topic_tweets doc\n '''\n csv.register_dialect('topics_docs_line', delimiter='\\t', doublequote=True, quoting=csv.QUOTE_ALL)\n topic_tweets_csv_file = os.path.join(config.TOPICS_DOCS_DIR, '{}-{}.updated.tweets.csv'.format(topic_ind, topic['name']))\n with open(topic_tweets_csv_file, 'r') as f:\n reader = csv.DictReader(f, dialect='topics_docs_line')\n \n '''\n Count shed words freq for each tweet\n '''\n # lazy load\n for row in reader:\n tweet_id = int(row['tweet_id'])\n tweet_text = row['tweet_text']\n \n tweet_shed_words_freq_dict = utilities.count_tweet_shed_words_freq(tweet_text, ind_shed_word_dict, shed_word_ind_dict, shed_words_set)\n \n topic_shed_words_freq_dict[tweet_id] = tweet_shed_words_freq_dict\n \n '''\n Make pkl for result dict file\n '''\n topic_tweets_shed_words_freq_dict_pkl_file = os.path.join(config.TOPICS_TWEETS_SHED_WORDS_FREQ_DICT_PKLS_DIR,\n '{}.updated.dict.pkl'.format(topic_ind))\n with open(topic_tweets_shed_words_freq_dict_pkl_file, 'wb') as f:\n pickle.dump(topic_shed_words_freq_dict, f)",
"Check basic statistics",
"%%time\n\"\"\"\nPrint out sample tweet shed_words_freq_dicts inside single topic\n\"\"\"\nif 0 == 1:\n target_topic_ind = 0\n \n topic_tweets_shed_words_freq_dict_pkl_file = os.path.join(config.TOPICS_TWEETS_SHED_WORDS_FREQ_DICT_PKLS_DIR, '{}.updated.dict.pkl'.format(target_topic_ind))\n with open(topic_tweets_shed_words_freq_dict_pkl_file, 'rb') as f:\n topic_tweets_shed_words_freq_dict_tmp = pickle.load(f)\n \n count = 0\n for tweet_id, tweet_shed_words_freq_dict in topic_tweets_shed_words_freq_dict_tmp.items():\n print('tweet_id: {}'.format(tweet_id))\n print('\\t{}'.format(tweet_shed_words_freq_dict))\n tweet_shed_words_len = sum(tweet_shed_words_freq_dict.values())\n print('\\tLEN: {}'.format(tweet_shed_words_len))\n count += 1\n if count >= 20:\n break\n\n%%time\n\"\"\"\nCheck total shed words length of a topic_tweets doc\n\"\"\" \nif 0 == 1:\n topic_tweets_shed_words_len = sum([sum(tweet_shed_words_freq_dict.values()) for tweet_shed_words_freq_dict in topic_tweets_shed_words_freq_dict_tmp.values()])\n print('Total shed words length of this topic_tweets_doc: {}'.format(topic_tweets_shed_words_len))",
"Notes\n\nDo NOT try to merge all topic_tweets shed words freq dicts into a single huge dict. This is extremely time-consuming and would leave VM unresponsive."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_linear_model_patterns.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Linear classifier on sensor data with plot patterns and filters\nDecoding, a.k.a MVPA or supervised machine learning applied to MEG and EEG\ndata in sensor space. Fit a linear classifier with the LinearModel object\nproviding topographical patterns which are more neurophysiologically\ninterpretable [1]_ than the classifier filters (weight vectors).\nThe patterns explain how the MEG and EEG data were generated from the\ndiscriminant neural sources which are extracted by the filters.\nNote patterns/filters in MEG data are more similar than EEG data\nbecause the noise is less spatially correlated in MEG than EEG.\nReferences\n.. [1] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D.,\n Blankertz, B., & Bießmann, F. (2014). On the interpretation of\n weight vectors of linear models in multivariate neuroimaging.\n NeuroImage, 87, 96–110. doi:10.1016/j.neuroimage.2013.10.067",
"# Authors: Alexandre Gramfort <[email protected]>\n# Romain Trachel <[email protected]>\n# Jean-Remi King <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne import io, EvokedArray\nfrom mne.datasets import sample\nfrom mne.decoding import Vectorizer, get_coef\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\n\n# import a linear classifier from mne.decoding\nfrom mne.decoding import LinearModel\n\nprint(__doc__)\n\ndata_path = sample.data_path()",
"Set parameters",
"raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\ntmin, tmax = -0.1, 0.4\nevent_id = dict(aud_l=1, vis_l=3)\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname, preload=True)\nraw.filter(.5, 25)\nevents = mne.read_events(event_fname)\n\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n decim=4, baseline=None, preload=True)\n\nlabels = epochs.events[:, -1]\n\n# get MEG and EEG data\nmeg_epochs = epochs.copy().pick_types(meg=True, eeg=False)\nmeg_data = meg_epochs.get_data().reshape(len(labels), -1)",
"Decoding in sensor space using a LogisticRegression classifier",
"clf = LogisticRegression()\nscaler = StandardScaler()\n\n# create a linear model with LogisticRegression\nmodel = LinearModel(clf)\n\n# fit the classifier on MEG data\nX = scaler.fit_transform(meg_data)\nmodel.fit(X, labels)\n\n# Extract and plot spatial filters and spatial patterns\nfor name, coef in (('patterns', model.patterns_), ('filters', model.filters_)):\n # We fitted the linear model onto Z-scored data. To make the filters\n # interpretable, we must reverse this normalization step\n coef = scaler.inverse_transform([coef])[0]\n\n # The data was vectorized to fit a single model across all time points and\n # all channels. We thus reshape it:\n coef = coef.reshape(len(meg_epochs.ch_names), -1)\n\n # Plot\n evoked = EvokedArray(coef, meg_epochs.info, tmin=epochs.tmin)\n evoked.plot_topomap(title='MEG %s' % name)",
"Let's do the same on EEG data using a scikit-learn pipeline",
"X = epochs.pick_types(meg=False, eeg=True)\ny = epochs.events[:, 2]\n\n# Define a unique pipeline to sequentially:\nclf = make_pipeline(\n Vectorizer(), # 1) vectorize across time and channels\n StandardScaler(), # 2) normalize features across trials\n LinearModel(LogisticRegression())) # 3) fits a logistic regression\nclf.fit(X, y)\n\n# Extract and plot patterns and filters\nfor name in ('patterns_', 'filters_'):\n # The `inverse_transform` parameter will call this method on any estimator\n # contained in the pipeline, in reverse order.\n coef = get_coef(clf, name, inverse_transform=True)\n evoked = EvokedArray(coef, epochs.info, tmin=epochs.tmin)\n evoked.plot_topomap(title='EEG %s' % name[:-1])"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
spacecowboy/article-annriskgroups-source | DataSetStratification.ipynb | gpl-3.0 | [
"Data set stratification\nThis script randomly assigns training/test labels to each entry in a data set.\nOne quarter (1/4) of the data is assigned as test, and rest as training. The labeling\nis stratified for censoring so that both testing and training pieces have about the same\namount of censoring.",
"import datasets as ds\nimport pandas as pd\nimport numpy as np",
"This cell can be used for all data sets except colon. colon is special because it has 3 types of events instead of just 2. Just change the first line to run a different data set.",
"#data = ds._pbc\n#data = ds._lung\n#data = ds._nwtco\ndata = ds._flchain\n\ndf = pd.read_csv(data['filename'][:-4] + \"_org.csv\",\n sep=None, engine='python')\nk = 4\n\n# flchain has three guys at zero, remove them\nif 'flchain' in data['filename']:\n df = df[(df[data['timecol']] > 0)]\n\n# Need shape later\nn, d = df.shape\n\n# Random reordering\ndf = df.reindex(np.random.permutation(df.index))\ndf.sort(data['eventcol'], inplace=True)\n\nassignments = np.array((n // k + 1) * list(range(0, k)))\nassignments = assignments[:n]\n\nprint(assignments.shape)\nprint(df.shape)\n\n# Create a new column that specifies set\ndf['set'] = 1\n# 0 is testing\ndf.loc[assignments == 0, 'set'] = 'testing'\n# rest is training\ndf.loc[assignments != 0, 'set'] = 'training'\n\nprint(\"Training size:\", np.sum(df['set'] == 'training'))\nprint(\"Testing size:\", np.sum(df['set'] == 'testing'))\n\ndf = df.reindex(np.sort(df.index))",
"Print the labeled to data to a new file.",
"fname = data['filename']\nprint(fname)\ndf.to_csv(fname, na_rep='NA', index=False)",
"Colon\nIs kind of special. It has 3 events where two must be combined before stratification is possible.",
"data = ds._colon\n\ndf = pd.read_csv(data['filename'], sep=None, engine='python')\nn, d = df.shape\nk = 4\n\n# Construct lists of events, censored\nevents = []\ncensored = []\n\nfor i in df['id'].unique():\n x = ((df['id'] == i) & (df['etype'] == 1))\n if df[x]['status'].sum() < 1:\n censored.append(i)\n else:\n events.append(i)\n\n\n\ntrainingids = []\ntestingids = []\nfor d in [events, censored]:\n ids = np.random.permutation(d)\n\n n = len(ids)\n k = 4\n assignments = np.array((n // k + 1) * list(range(0, k)))\n assignments = assignments[:n]\n\n testingids.extend(ids[assignments == 0])\n trainingids.extend(ids[assignments != 0])\n \ndf['set'] = 1\n\nfor i in trainingids:\n which = (df['id'] == i)\n df.loc[which, 'set'] = 'training'\n \nfor i in testingids:\n which = (df['id'] == i)\n df.loc[which, 'set'] = 'testing'\n \nprint(\"Training size:\", np.sum(df['set'] == 'training'))\nprint(\"Testing size:\", np.sum(df['set'] == 'testing'))\ndf",
"Print data to file.",
"fname = data['filename'][:-8] + '.csv'\nprint(fname)\ndf.to_csv(fname, na_rep='NA', index=False)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dpaniukov/RulesFPC | thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb | mit | [
"Model 1 with FPC mask\nLevel 1:\nEVs:\nstimulus appication\nstimulus learning\nstimulus na\nfeedback correct\nfeedback incorrect\nfeedback na \nContrasts:\nstimulus appication>0\nstimulus learning>0\nstimulus appication>stimulus learning \nLevel 2:\ntask001 task1\ntask001 task2 \ntask001 task1>task2\ntask001 task2>task1 \nLevel 3:\npositive contrast\nnegative contrast \nFPC mask\n*Images from randomise (cluster mass with t=2.49 and v=8) are thresholded at .95 and overlaid with unthresholded t-maps.\nPrepare stuff",
"import os\nfrom IPython.display import IFrame\nfrom IPython.display import Image\n\n# This function renders interactive brain images\ndef render(name,brain_list):\n \n #prepare file paths\n brain_files = []\n for b in brain_list:\n brain_files.append(os.path.join(\"data\",b))\n \n wdata = \"\"\"\n <!DOCTYPE html>\n\n<html xmlns=\"http://www.w3.org/1999/xhtml\" lang=\"en\">\n\t<head>\n \t<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\"/>\n \n \t<!-- iOS meta tags -->\n \t<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0, user-scalable=no\"/>\n \t<meta name=\"apple-mobile-web-app-capable\" content=\"yes\">\n \t<meta name=\"apple-mobile-web-app-status-bar-style\" content=\"black-translucent\">\n \n \t<link rel=\"stylesheet\" type=\"text/css\" href=\"../papaya/papaya.css?build=1420\" />\n \t<script type=\"text/javascript\" src=\"../papaya/papaya.js?build=1422\"></script>\n \n \t<title>Papaya Viewer</title>\n \n\t<script type=\"text/javascript\">\n \n var params = [];\n params[\"worldSpace\"] = true;\n params[\"atlas\"] = \"MNI (Nearest Grey Matter)\";\n params[\"images\"] = %s;\n \n </script>\n\n\t</head>\n\n\t<body>\n\t\t\n\t\t<div class=\"papaya\" data-params=\"params\"></div>\n\t\t\n\t</body>\n</html>\n \"\"\" % str(brain_files)\n \n fname=name+\"index.html\"\n with open (fname, 'w') as f: f.write (wdata)\n\n return IFrame(fname, width=800, height=600)\n\n# variables\nl1cope=\"0\"\nl2cope=\"0\"\nl3cope=\"0\"\ndef paths():\n sliced_img = os.path.join(\"data\", \"img_\"+l1cope+\"_\"+l2cope+\"_\"+l3cope+\"_wb.png\")\n wb_img = \"WB.nii.gz\"\n cluster_corr = \"rand_\"+l1cope+\"_\"+l2cope+\"_\"+l3cope+\".nii.gz\"\n tstat_img = os.path.join(\"data\", \"imgt_\"+l1cope+\"_\"+l2cope+\"_\"+l3cope+\"_wb.png\")\n html_cl = l1cope+\"_\"+l2cope+\"_\"+l3cope\n html_t = l1cope+\"_\"+l2cope+\"_\"+l3cope+\"t\"\n return sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t",
"Model results\nRule learning and rule application in the matching task\nRule Learning > Rule Application",
"l1cope=\"3\"\nl2cope=\"1\"\nl3cope=\"2\"\nsliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()\n\nrender(html_cl,[wb_img,cluster_corr])",
"Rule Application > Rule Learning",
"l1cope=\"3\"\nl2cope=\"1\"\nl3cope=\"1\"\nsliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()\n\nrender(html_cl,[wb_img,cluster_corr])",
"Rule Learning > Baseline",
"l1cope=\"2\"\nl2cope=\"1\"\nl3cope=\"1\"\nsliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()\n\nrender(html_cl,[wb_img,cluster_corr])",
"Baseline > Rule Learning",
"l1cope=\"2\"\nl2cope=\"1\"\nl3cope=\"2\"\nsliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()\n\nrender(html_cl,[wb_img,cluster_corr])",
"Rule Application > Baseline",
"l1cope=\"1\"\nl2cope=\"1\"\nl3cope=\"1\"\nsliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()\n\nrender(html_cl,[wb_img,cluster_corr])",
"Baseline > Rule Application",
"l1cope=\"1\"\nl2cope=\"1\"\nl3cope=\"2\"\nsliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()\nImage(sliced_img)\n\nrender(html_cl,[wb_img,cluster_corr])",
"Rule learning and rule application in the classification task\nRule Learning > Rule Application",
"l1cope=\"3\"\nl2cope=\"2\"\nl3cope=\"2\"\nsliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()\n\nrender(html_cl,[wb_img,cluster_corr])",
"Rule Learning > Baseline",
"l1cope=\"2\"\nl2cope=\"2\"\nl3cope=\"1\"\nsliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()\n\nrender(html_cl,[wb_img,cluster_corr])",
"Baseline > Rule Learning",
"l1cope=\"2\"\nl2cope=\"2\"\nl3cope=\"2\"\nsliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()\n\nrender(html_cl,[wb_img,cluster_corr])",
"Rule Application > Baseline",
"l1cope=\"1\"\nl2cope=\"2\"\nl3cope=\"1\"\nsliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()\n\nrender(html_cl,[wb_img,cluster_corr])",
"Baseline > Rule Application",
"l1cope=\"1\"\nl2cope=\"2\"\nl3cope=\"2\"\nsliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()\n\nrender(html_cl,[wb_img,cluster_corr])",
"Rule learning in the matching and classification tasks\nMatching > Classification",
"l1cope=\"2\"\nl2cope=\"3\"\nl3cope=\"1\"\nsliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()\n\nrender(html_cl,[wb_img,cluster_corr])",
"Classification > Matching",
"l1cope=\"2\"\nl2cope=\"3\"\nl3cope=\"2\"\nsliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()\n\nrender(html_cl,[wb_img,cluster_corr])",
"Rule application in the matching and classification tasks\nMatching > Classification",
"l1cope=\"1\"\nl2cope=\"3\"\nl3cope=\"1\"\nsliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()\n\nrender(html_cl,[wb_img,cluster_corr])",
"Classification > Matching",
"l1cope=\"1\"\nl2cope=\"3\"\nl3cope=\"2\"\nsliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()\n\nrender(html_cl,[wb_img,cluster_corr])"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sainathadapa/fastai-courses | deeplearning1/nbs/lesson1.ipynb | apache-2.0 | [
"Using Convolutional Neural Networks\nWelcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.\nIntroduction to this week's task: 'Dogs vs Cats'\nWe're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): \"State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task\". So if we can beat 80%, then we will be at the cutting edge as of 2013!\nBasic setup\nThere isn't too much to do to get started - just a few simple configuration steps.\nThis shows plots in the web page itself - we always wants to use this when using jupyter notebook:",
"%matplotlib inline",
"Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)",
"path = \"data/dogscats/\"\n#path = \"data/dogscats/sample/\"",
"A few basic libraries that we'll need for the initial exercises:",
"from __future__ import division,print_function\n\nimport os, json\nfrom glob import glob\nimport numpy as np\nnp.set_printoptions(precision=4, linewidth=100)\nfrom matplotlib import pyplot as plt",
"We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.",
"import utils; reload(utils)\nfrom utils import plots",
"Use a pretrained VGG model with our Vgg16 class\nOur first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.\nWe have created a python class, Vgg16, which makes using the VGG 16 model very straightforward. \nThe punchline: state of the art custom model in 7 lines of code\nHere's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.",
"# As large as you can, but no larger than 64 is recommended. \n# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.\nbatch_size=64\n\n# Import our class, and instantiate\nimport vgg16; reload(vgg16)\nfrom vgg16 import Vgg16\n\nvgg = Vgg16()\n# Grab a few images at a time for training and validation.\n# NB: They must be in subdirectories named based on their category\nbatches = vgg.get_batches(path+'train', batch_size=batch_size)\nval_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)\nvgg.finetune(batches)\nvgg.fit(batches, val_batches, nb_epoch=1)",
"The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.\nLet's take a look at how this works, step by step...\nUse Vgg16 for basic image recognition\nLet's start off by using the Vgg16 class to recognise the main imagenet category for each image.\nWe won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.\nFirst, create a Vgg16 object:",
"vgg = Vgg16()",
"Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.\nLet's grab batches of data from our training folder:",
"batches = vgg.get_batches(path+'train', batch_size=4)",
"(BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)\nBatches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.",
"imgs,labels = next(batches)",
"As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array containing just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding. \nThe arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.",
"plots(imgs, titles=labels)",
"We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.",
"vgg.predict(imgs, True)",
"The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four:",
"vgg.classes[:4]",
"(Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)\nUse our Vgg16 class to finetune a Dogs vs Cats model\nTo change our model so that it outputs \"cat\" vs \"dog\", instead of one of 1,000 very specific categories, we need to use a process called \"finetuning\". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.\nHowever, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().\nWe create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.",
"batch_size=64\n\nbatches = vgg.get_batches(path+'train', batch_size=batch_size)\nval_batches = vgg.get_batches(path+'valid', batch_size=batch_size)",
"Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.",
"vgg.finetune(batches)",
"Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)",
"vgg.fit(batches, val_batches, nb_epoch=1)",
"That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.\nNext up, we'll dig one level deeper to see what's going on in the Vgg16 class.\nCreate a VGG model from scratch in Keras\nFor the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.\nModel setup\nWe need to import all the modules we'll be using from numpy, scipy, and keras:",
"from numpy.random import random, permutation\nfrom scipy import misc, ndimage\nfrom scipy.ndimage.interpolation import zoom\n\nimport keras\nfrom keras import backend as K\nfrom keras.utils.data_utils import get_file\nfrom keras.models import Sequential, Model\nfrom keras.layers.core import Flatten, Dense, Dropout, Lambda\nfrom keras.layers import Input\nfrom keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D\nfrom keras.optimizers import SGD, RMSprop\nfrom keras.preprocessing import image",
"Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.",
"FILES_PATH = 'http://files.fast.ai/models/'; CLASS_FILE='imagenet_class_index.json'\n# Keras' get_file() is a handy function that downloads files, and caches them for re-use later\nfpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')\nwith open(fpath) as f: class_dict = json.load(f)\n# Convert dictionary with string indexes into an array\nclasses = [class_dict[str(i)][1] for i in range(len(class_dict))]",
"Here's a few examples of the categories we just imported:",
"classes[:5]",
"Model creation\nCreating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.\nVGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:",
"def ConvBlock(layers, model, filters):\n for i in range(layers): \n model.add(ZeroPadding2D((1,1)))\n model.add(Convolution2D(filters, 3, 3, activation='relu'))\n model.add(MaxPooling2D((2,2), strides=(2,2)))",
"...and here's the fully-connected definition.",
"def FCBlock(model):\n model.add(Dense(4096, activation='relu'))\n model.add(Dropout(0.5))",
"When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software that expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:",
"# Mean of each channel as provided by VGG researchers\nvgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))\n\ndef vgg_preprocess(x):\n x = x - vgg_mean # subtract mean\n return x[:, ::-1] # reverse axis bgr->rgb",
"Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!",
"def VGG_16():\n model = Sequential()\n model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))\n\n ConvBlock(2, model, 64)\n ConvBlock(2, model, 128)\n ConvBlock(3, model, 256)\n ConvBlock(3, model, 512)\n ConvBlock(3, model, 512)\n\n model.add(Flatten())\n FCBlock(model)\n FCBlock(model)\n model.add(Dense(1000, activation='softmax'))\n return model",
"We'll learn about what these different blocks do later in the course. For now, it's enough to know that:\n\nConvolution layers are for finding patterns in images\nDense (fully connected) layers are for combining patterns across an image\n\nNow that we've defined the architecture, we can create the model like any python object:",
"model = VGG_16()",
"As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem. \nDownloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.",
"fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')\nmodel.load_weights(fpath)",
"Getting imagenet predictions\nThe setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.",
"batch_size = 4",
"Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:",
"def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True, \n batch_size=batch_size, class_mode='categorical'):\n return gen.flow_from_directory(path+dirname, target_size=(224,224), \n class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)",
"From here we can use exactly the same steps as before to look at predictions from the model.",
"batches = get_batches('train', batch_size=batch_size)\nval_batches = get_batches('valid', batch_size=batch_size)\nimgs,labels = next(batches)\n\n# This shows the 'ground truth'\nplots(imgs, titles=labels)",
"The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label.",
"def pred_batch(imgs):\n preds = model.predict(imgs)\n idxs = np.argmax(preds, axis=1)\n\n print('Shape: {}'.format(preds.shape))\n print('First 5 classes: {}'.format(classes[:5]))\n print('First 5 probabilities: {}\\n'.format(preds[0, :5]))\n print('Predictions prob/class: ')\n \n for i in range(len(idxs)):\n idx = idxs[i]\n print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))\n\npred_batch(imgs)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cavestruz/MLPipeline | notebooks/anomaly_detection/anomaly_detection_zhu.ipynb | mit | [
"Let us first explore an example that falls under novelty detection. Here, we train a model on data with some distribution and no outliers. The test data, has some \"novel\" subset of data that does not follow that distribution.",
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import svm\n%matplotlib inline",
"Use the np.random module to generate a normal distribution of 1,000 data points in two dimensions (e.g. x, y) - choose whatever mean and sigma^2 you like. Generate another 1,000 data points with a normal distribution in two dimensions that are well separated from the first set. You now have two \"clusters\". Concatenate them so you have 2,000 data points in two dimensions. Plot the points. This will be the training set.",
"X_train_normal = np.concatenate((np.random.randn(1000,2), 2*np.random.randn(1000,2)+8.))",
"Plot the points.",
"plt.scatter(X_train_normal[:,0],X_train_normal[:,1])",
"Generate 100 data points with the same distribution as your first random normal 2-d set, and 100 data points with the same distribution as your second random normal 2-d set. This will be the test set labeled X_test_normal.",
"X_test_normal = np.concatenate((np.random.randn(100,2), 3*np.random.randn(100,2)+10.))",
"Generate 100 data points with a random uniform distribution. This will be the test set labeled X_test_uniform.",
"X_test_uniform = np.random.rand(100,2)",
"Define a model classifier with the svm.OneClassSVM",
"model = svm.OneClassSVM()",
"Fit the model to the training data.",
"model.fit(X_train_normal)",
"Use the trained model to predict whether X_test_normal data point are in the same distributions. Calculate the fraction of \"false\" predictions.",
"model.predict(X_test_normal)",
"Use the trained model to predict whether X_test_uniform is in the same distribution. Calculate the fraction of \"false\" predictions.",
"from collections import Counter\n\nL1=model.predict(X_test_uniform)\na = Counter(L1).values()[0] \nb = Counter(L1).values()[1]\nc = float(b)/a\nprint c",
"Use the trained model to see how well it recovers the training data. (Predict on the training data, and calculate the fraction of \"false\" predictions.)",
"L2=model.predict(X_train_normal)\na2 = Counter(L2).values()[0]\nb2 = Counter(L2).values()[1]\nc2 = float(b2)/a2\nprint c2",
"Create another instance of the model classifier, but change the kwarg value for nu. Hint: Use help to figure out what the kwargs are.",
"model2 = svm.OneClassSVM(nu=.2)",
"Redo the prediction on the training set, prediction on X_test_random, and prediction on X_test.",
"model2.fit(X_train_normal)\nL3=model2.predict(X_train_normal)\nCounter(L3)\nmodel2.predict(X_test_normal)\nmodel2.predict(X_test_uniform)",
"Plot in scatter points the X_train in blue, X_test_normal in red, and X_test_uniform in black. Overplot the trained model decision function boundary for the first instance of the model classifier.",
"plt.scatter(X_train_normal[:,0],X_train_normal[:,1],color='b')\nplt.scatter(X_test_normal[:,0],X_test_normal[:,1],color='r')\nplt.scatter(X_test_uniform[:,0],X_test_uniform[:,1],color='k')\n\nxx1, yy1 = np.meshgrid(np.linspace(-5, 22, 1000), np.linspace(-5, 22,1000))\nZ1 =model.decision_function(np.c_[xx1.ravel(), yy1.ravel()])\nZ1 = Z1.reshape(xx1.shape)\nplt.contour(xx1, yy1, Z1, levels=[0],\n linewidths=2)",
"Do the same for the second instance of the model classifier.",
"plt.scatter(X_train_normal[:,0],X_train_normal[:,1],color='b')\nplt.scatter(X_test_normal[:,0],X_test_normal[:,1],color='r')\nplt.scatter(X_test_uniform[:,0],X_test_uniform[:,1],color='k')\n\nxx1, yy1 = np.meshgrid(np.linspace(-5, 22, 1000), np.linspace(-5, 22,1000))\nZ1 =model2.decision_function(np.c_[xx1.ravel(), yy1.ravel()])\nZ1 = Z1.reshape(xx1.shape)\nplt.contour(xx1, yy1, Z1, levels=[0],\n linewidths=2)\n\nfrom sklearn.covariance import EllipticEnvelope",
"Test how well EllipticEnvelope predicts the outliers when you concatenate the training data with the X_test_uniform data.",
"train_uniform=np.concatenate((X_train_normal,X_test_uniform))\nenvelope=EllipticEnvelope()\nenvelope.fit(train_uniform)\nenvelope.predict(train_uniform)",
"Compute and plot the mahanalobis distances of X_test, X_train_normal, X_train_uniform",
"plt.scatter(range(100),envelope.mahalanobis(X_test_uniform),color='black') #idk why but on the graph it's red...\nplt.scatter(range(2000),envelope.mahalanobis(X_train_normal),color='b')\nplt.scatter(range(200),envelope.mahalanobis(X_test_normal),color='r')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
feststelltaste/software-analytics | prototypes/Quantifying Graph Data.ipynb | gpl-3.0 | [
"Introduction\nThis is a simple helper notebook to quickly get some numbers about your graphs out of a Neo4j database. You just need to start your Neo4j database locally with the default values before running this notebook.\nSetup\nFirst, we fire up the connection to Neo4j that contains all the data. If needed, you could add some custom parameters like URL or port to adjust the setup to your settings.",
"import py2neo\nimport pandas as pd\ngraph = py2neo.Graph()\ngraph.dbms.kernel_version",
"Let's get some numbers!\nNodes\nNumber of all Nodes",
"graph.data(\"MATCH (n) RETURN COUNT(n) AS NumberOfAllNodes\")",
"Nodes and their Labels",
"pd.DataFrame(graph.data(\"MATCH (n) RETURN labels(n) AS Labels, COUNT(n) AS LabelCount ORDER BY LabelCount DESC\"))",
"Relationships\nNumber of all Relationships",
"graph.data(\"MATCH ()-[r]-() RETURN COUNT(r) AS NumberOfAllRelationships\")",
"Relationships and their Types",
"pd.DataFrame(graph.data(\"MATCH ()-[r]-() RETURN type(r) AS Type, COUNT(r) AS TypeCount ORDER BY TypeCount DESC\"))",
"Properties\nNumber of all properties",
"graph.data(\"MATCH (n) RETURN SUM(SIZE(KEYS(n))) as NumberOfAllProperties\")",
"Amount of specific Properties",
"pd.DataFrame(graph.data(\"\"\"\nMATCH (n) WITH KEYS(n) as keys \nUNWIND keys as properties \nRETURN properties as Property, COUNT(properties) as PropertyCount\nORDER BY PropertyCount DESC\"\"\"))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
d00d/quantNotebooks | Notebooks/quantopian_research_public/notebooks/lectures/Introduction_to_Python/notebook.ipynb | unlicense | [
"Introduction to Python\nby Maxwell Margenot\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\ngithub.com/quantopian/research_public\n\nNotebook released under the Creative Commons Attribution 4.0 License.\n\nAll of the coding that you will do on the Quantopian platform will be in Python. It is also just a good, jack-of-all-trades language to know! Here we will provide you with the basics so that you can feel confident going through our other lectures and understanding what is happening.\nCode Comments\nA comment is a note made by a programmer in the source code of a program. Its purpose is to clarify the source code and make it easier for people to follow along with what is happening. Anything in a comment is generally ignored when the code is actually run, making comments useful for including explanations and reasoning as well as removing specific lines of code that you may be unsure about. Comments in Python are created by using the pound symbol (# Insert Text Here). Including a # in a line of code will comment out anything that follows it.",
"# This is a comment\n# These lines of code will not change any values\n# Anything following the first # is not run as code",
"You may hear text enclosed in triple quotes (\"\"\" Insert Text Here \"\"\") referred to as multi-line comments, but this is not entirely accurate. This is a special type of string (a data type we will cover), called a docstring, used to explain the purpose of a function.",
"\"\"\" This is a special string \"\"\"",
"Make sure you read the comments within each code cell (if they are there). They will provide more real-time explanations of what is going on as you look at each line of code.\nVariables\nVariables provide names for values in programming. If you want to save a value for later or repeated use, you give the value a name, storing the contents in a variable. Variables in programming work in a fundamentally similar way to variables in algebra, but in Python they can take on various different data types.\nThe basic variable types that we will cover in this section are integers, floating point numbers, booleans, and strings. \nAn integer in programming is the same as in mathematics, a round number with no values after the decimal point. We use the built-in print function here to display the values of our variables as well as their types!",
"my_integer = 50\nprint my_integer, type(my_integer)",
"Variables, regardless of type, are assigned by using a single equals sign (=). Variables are case-sensitive so any changes in variation in the capitals of a variable name will reference a different variable entirely.",
"one = 1\nprint One",
"A floating point number, or a float is a fancy name for a real number (again as in mathematics). To define a float, we need to either include a decimal point or specify that the value is a float.",
"my_float = 1.0\nprint my_float, type(my_float)\nmy_float = float(1)\nprint my_float, type(my_float)",
"A variable of type float will not round the number that you store in it, while a variable of type integer will. This makes floats more suitable for mathematical calculations where you want more than just integers.\nNote that as we used the float() function to force an number to be considered a float, we can use the int() function to force a number to be considered an int.",
"my_int = int(3.14159)\nprint my_int, type(my_int)",
"The int() function will also truncate any digits that a number may have after the decimal point!\nStrings allow you to include text as a variable to operate on. They are defined using either single quotes ('') or double quotes (\"\").",
"my_string = 'This is a string with single quotes'\nprint my_string\nmy_string = \"This is a string with double quotes\"\nprint my_string",
"Both are allowed so that we can include apostrophes or quotation marks in a string if we so choose.",
"my_string = '\"Jabberwocky\", by Lewis Carroll'\nprint my_string\nmy_string = \"'Twas brillig, and the slithy toves / Did gyre and gimble in the wabe;\"\nprint my_string",
"Booleans, or bools are binary variable types. A bool can only take on one of two values, these being True or False. There is much more to this idea of truth values when it comes to programming, which we cover later in the Logical Operators of this notebook.",
"my_bool = True\nprint my_bool, type(my_bool)",
"There are many more data types that you can assign as variables in Python, but these are the basic ones! We will cover a few more later as we move through this tutorial.\nBasic Math\nPython has a number of built-in math functions. These can be extended even further by importing the math package or by including any number of other calculation-based packages.\nAll of the basic arithmetic operations are supported: +, -, /, and *. You can create exponents by using ** and modular arithmetic is introduced with the mod operator, %.",
"print 'Addition: ', 2 + 2\nprint 'Subtraction: ', 7 - 4\nprint 'Multiplication: ', 2 * 5\nprint 'Division: ', 10 / 2\nprint 'Exponentiation: ', 3**2",
"If you are not familiar with the the mod operator, it operates like a remainder function. If we type $15 \\ \\% \\ 4$, it will return the remainder after dividing $15$ by $4$.",
"print 'Modulo: ', 15 % 4",
"Mathematical functions also work on variables!",
"first_integer = 4\nsecond_integer = 5\nprint first_integer * second_integer",
"Make sure that your variables are floats if you want to have decimal points in your answer. If you perform math exclusively with integers, you get an integer. Including any float in the calculation will make the result a float.",
"first_integer = 11\nsecond_integer = 3\nprint first_integer / second_integer\n\nfirst_number = 11.0\nsecond_number = 3.0\nprint first_number / second_number",
"Python has a few built-in math functions. The most notable of these are:\n\nabs()\nround()\nmax()\nmin()\nsum()\n\nThese functions all act as you would expect, given their names. Calling abs() on a number will return its absolute value. The round() function will round a number to a specified number of the decimal points (the default is $0$). Calling max() or min() on a collection of numbers will return, respectively, the maximum or minimum value in the collection. Calling sum() on a collection of numbers will add them all up. If you're not familiar with how collections of values in Python work, don't worry! We will cover collections in-depth in the next section. \nAdditional math functionality can be added in with the math package.",
"import math",
"The math library adds a long list of new mathematical functions to Python. Feel free to check out the documentation for the full list and details. It concludes some mathematical constants",
"print 'Pi: ', math.pi\nprint \"Euler's Constant: \", math.e",
"As well as some commonly used math functions",
"print 'Cosine of pi: ', math.cos(math.pi)",
"Collections\nLists\nA list in Python is an ordered collection of objects that can contain any data type. We define a list using brackets ([]).",
"my_list = [1, 2, 3]\nprint my_list",
"We can access and index the list by using brackets as well. In order to select an individual element, simply type the list name followed by the index of the item you are looking for in braces.",
"print my_list[0]\nprint my_list[2]",
"Indexing in Python starts from $0$. If you have a list of length $n$, the first element of the list is at index $0$, the second element is at index $1$, and so on and so forth. The final element of the list will be at index $n-1$. Be careful! Trying to access a non-existent index will cause an error.",
"print 'The first, second, and third list elements: ', my_list[0], my_list[1], my_list[2]\nprint 'Accessing outside the list bounds causes an error: ', my_list[3]",
"We can see the number of elements in a list by calling the len() function.",
"print len(my_list)",
"We can update and change a list by accessing an index and assigning new value.",
"print my_list\nmy_list[0] = 42\nprint my_list",
"This is fundamentally different from how strings are handled. A list is mutable, meaning that you can change a list's elements without changing the list itself. Some data types, like strings, are immutable, meaning you cannot change them at all. Once a string or other immutable data type has been created, it cannot be directly modified without creating an entirely new object.",
"my_string = \"Strings never change\"\nmy_string[0] = 'Z'",
"As we stated before, a list can contain any data type. Thus, lists can also contain strings.",
"my_list_2 = ['one', 'two', 'three']\nprint my_list_2",
"Lists can also contain multiple different data types at once!",
"my_list_3 = [True, 'False', 42]",
"If you want to put two lists together, they can be combined with a + symbol.",
"my_list_4 = my_list + my_list_2 + my_list_3\nprint my_list_4",
"In addition to accessing individual elements of a list, we can access groups of elements through slicing.",
"my_list = ['friends', 'romans', 'countrymen', 'lend', 'me', 'your', 'ears']",
"Slicing\nWe use the colon (:) to slice lists.",
"print my_list[2:4]",
"Using : we can select a group of elements in the list starting from the first element indicated and going up to (but not including) the last element indicated.\nWe can also select everything after a certain point",
"print my_list[1:]",
"And everything before a certain point",
"print my_list[:4]",
"Using negative numbers will count from the end of the indices instead of from the beginning. For example, an index of -1 indicates the last element of the list.",
"print my_list[-1]",
"You can also add a third component to slicing. Instead of simply indicating the first and final parts of your slice, you can specify the step size that you want to take. So instead of taking every single element, you can take every other element.",
"print my_list[0:7:2]",
"Here we have selected the entire list (because 0:7 will yield elements 0 through 6) and we have selected a step size of 2. So this will spit out element 0 , element 2, element 4, and so on through the list element selected. We can skip indicated the beginning and end of our slice, only indicating the step, if we like.",
"print my_list[::2]",
"Lists implictly select the beginning and end of the list when not otherwise specified.",
"print my_list[:]",
"With a negative step size we can even reverse the list!",
"print my_list[::-1]",
"Python does not have native matrices, but with lists we can produce a working fascimile. Other packages, such as numpy, add matrices as a separate data type, but in base Python the best way to create a matrix is to use a list of lists.\nWe can also use built-in functions to generate lists. In particular we will look at range() (because we will be using it later!). Range can take several different inputs and will return a list.",
"b = 10\nmy_list = range(b)\nprint my_list",
"Similar to our list-slicing methods from before, we can define both a start and an end for our range. This will return a list that is includes the start and excludes the end, just like a slice.",
"a = 0\nb = 10\nmy_list = range(a, b)\nprint my_list",
"We can also specify a step size. This again has the same behavior as a slice.",
"a = 0\nb = 10\nstep = 2\nmy_list = range(a, b, step)\nprint my_list",
"Tuples\nA tuple is a data type similar to a list in that it can hold different kinds of data types. The key difference here is that a tuple is immutable. We define a tuple by separating the elements we want to include by commas. It is conventional to surround a tuple with parentheses.",
"my_tuple = 'I', 'have', 30, 'cats'\nprint my_tuple\n\nmy_tuple = ('I', 'have', 30, 'cats')\nprint my_tuple",
"As mentioned before, tuples are immutable. You can't change any part of them without defining a new tuple.",
"my_tuple[3] = 'dogs' # Attempts to change the 'cats' value stored in the the tuple to 'dogs'",
"You can slice tuples the same way that you slice lists!",
"print my_tuple[1:3]",
"And concatenate them the way that you would with strings!",
"my_other_tuple = ('make', 'that', 50)\nprint my_tuple + my_other_tuple",
"We can 'pack' values together, creating a tuple (as above), or we can 'unpack' values from a tuple, taking them out.",
"str_1, str_2, int_1 = my_other_tuple\nprint str_1, str_2, int_1",
"Unpacking assigns each value of the tuple in order to each variable on the left hand side of the equals sign. Some functions, including user-defined functions, may return tuples, so we can use this to directly unpack them and access the values that we want.\nSets\nA set is a collection of unordered, unique elements. It works almost exactly as you would expect a normal set of things in mathematics to work and is defined using braces ({}).",
"things_i_like = {'dogs', 7, 'the number 4', 4, 4, 4, 42, 'lizards', 'man I just LOVE the number 4'}\nprint things_i_like, type(things_i_like)",
"Note how any extra instances of the same item are removed in the final set. We can also create a set from a list, using the set() function.",
"animal_list = ['cats', 'dogs', 'dogs', 'dogs', 'lizards', 'sponges', 'cows', 'bats', 'sponges']\nanimal_set = set(animal_list)\nprint animal_set # Removes all extra instances from the list",
"Calling len() on a set will tell you how many elements are in it.",
"print len(animal_set)",
"Because a set is unordered, we can't access individual elements using an index. We can, however, easily check for membership (to see if something is contained in a set) and take the unions and intersections of sets by using the built-in set functions.",
"'cats' in animal_set # Here we check for membership using the `in` keyword.",
"Here we checked to see whether the string 'cats' was contained within our animal_set and it returned True, telling us that it is indeed in our set.\nWe can connect sets by using typical mathematical set operators, namely |, for union, and &, for intersection. Using | or & will return exactly what you would expect if you are familiar with sets in mathematics.",
"print animal_set | things_i_like # You can also write things_i_like | animal_set with no difference",
"Pairing two sets together with | combines the sets, removing any repetitions to make every set element unique.",
"print animal_set & things_i_like # You can also write things_i_like & animal_set with no difference",
"Pairing two sets together with & will calculate the intersection of both sets, returning a set that only contains what they have in common.\nIf you are interested in learning more about the built-in functions for sets, feel free to check out the documentation.\nDictionaries\nAnother essential data structure in Python is the dictionary. Dictionaries are defined with a combination of curly braces ({}) and colons (:). The braces define the beginning and end of a dictionary and the colons indicate key-value pairs. A dictionary is essentially a set of key-value pairs. The key of any entry must be an immutable data type. This makes both strings and tuples candidates. Keys can be both added and deleted.\nIn the following example, we have a dictionary composed of key-value pairs where the key is a genre of fiction (string) and the value is a list of books (list) within that genre. Since a collection is still considered a single entity, we can use one to collect multiple variables or values into one key-value pair.",
"my_dict = {\"High Fantasy\": [\"Wheel of Time\", \"Lord of the Rings\"], \n \"Sci-fi\": [\"Book of the New Sun\", \"Neuromancer\", \"Snow Crash\"],\n \"Weird Fiction\": [\"At the Mountains of Madness\", \"The House on the Borderland\"]}",
"After defining a dictionary, we can access any individual value by indicating its key in brackets.",
"print my_dict[\"Sci-fi\"]",
"We can also change the value associated with a given key",
"my_dict[\"Sci-fi\"] = \"I can't read\"\nprint my_dict[\"Sci-fi\"]",
"Adding a new key-value pair is as simple as defining it.",
"my_dict[\"Historical Fiction\"] = [\"Pillars of the Earth\"]\nprint my_dict[\"Historical Fiction\"]\n\nprint my_dict",
"String Shenanigans\nWe already know that strings are generally used for text. We can used built-in operations to combine, split, and format strings easily, depending on our needs.\nThe + symbol indicates concatenation in string language. It will combine two strings into a longer string.",
"first_string = '\"Beware the Jabberwock, my son! /The jaws that bite, the claws that catch! /'\nsecond_string = 'Beware the Jubjub bird, and shun /The frumious Bandersnatch!\"/'\nthird_string = first_string + second_string\nprint third_string",
"Strings are also indexed much in the same way that lists are.",
"my_string = 'Supercalifragilisticexpialidocious'\nprint 'The first letter is: ', my_string[0] # Uppercase S\nprint 'The last letter is: ', my_string[-1] # lowercase s\nprint 'The second to last letter is: ', my_string[-2] # lowercase u\nprint 'The first five characters are: ', my_string[0:5] # Remember: slicing doesn't include the final element!\nprint 'Reverse it!: ', my_string[::-1]",
"Built-in objects and classes often have special functions associated with them that are called methods. We access these methods by using a period ('.'). We will cover objects and their associated methods more in another lecture!\nUsing string methods we can count instances of a character or group of characters.",
"print 'Count of the letter i in Supercalifragilisticexpialidocious: ', my_string.count('i')\nprint 'Count of \"li\" in the same word: ', my_string.count('li')",
"We can also find the first instance of a character or group of characters in a string.",
"print 'The first time i appears is at index: ', my_string.find('i')",
"As well as replace characters in a string.",
"print \"All i's are now a's: \", my_string.replace('i', 'a')\n\nprint \"It's raining cats and dogs\".replace('dogs', 'more cats')",
"There are also some methods that are unique to strings. The function upper() will convert all characters in a string to uppercase, while lower() will convert all characters in a string to lowercase!",
"my_string = \"I can't hear you\"\nprint my_string.upper()\nmy_string = \"I said HELLO\"\nprint my_string.lower()",
"String Formatting\nUsing the format() method we can add in variable values and generally format our strings.",
"my_string = \"{0} {1}\".format('Marco', 'Polo')\nprint my_string\n\nmy_string = \"{1} {0}\".format('Marco', 'Polo')\nprint my_string",
"We use braces ({}) to indicate parts of the string that will be filled in later and we use the arguments of the format() function to provide the values to substitute. The numbers within the braces indicate the index of the value in the format() arguments.\nSee the format() documentation for additional examples.\nIf you need some quick and dirty formatting, you can instead use the % symbol, called the string formatting operator.",
"print 'insert %s here' % 'value'",
"The % symbol basically cues Python to create a placeholder. Whatever character follows the % (in the string) indicates what sort of type the value put into the placeholder will have. This character is called a conversion type. Once the string has been closed, we need another % that will be followed by the values to insert. In the case of one value, you can just put it there. If you are inserting more than one value, they must be enclosed in a tuple.",
"print 'There are %s cats in my %s' % (13, 'apartment')",
"In these examples, the %s indicates that Python should convert the values into strings. There are multiple conversion types that you can use to get more specific with the the formatting. See the string formatting documentation for additional examples and more complete details on use.\nLogical Operators\nBasic Logic\nLogical operators deal with boolean values, as we briefly covered before. If you recall, a bool takes on one of two values, True or False (or $1$ or $0$). The basic logical statements that we can make are defined using the built-in comparators. These are == (equal), != (not equal), < (less than), > (greater than), <= (less than or equal to), and >= (greater than or equal to).",
"print 5 == 5\n\nprint 5 > 5",
"These comparators also work in conjunction with variables.",
"m = 2\nn = 23\nprint m < n",
"We can string these comparators together to make more complex logical statements using the logical operators or, and, and not.",
"statement_1 = 10 > 2\nstatement_2 = 4 <= 6\nprint \"Statement 1 truth value: {0}\".format(statement_1)\nprint \"Statement 2 truth value: {0}\".format(statement_2)\nprint \"Statement 1 and Statement 2: {0}\".format(statement_1 and statement_2)",
"The or operator performs a logical or calculation. This is an inclusive or, so if either component paired together by or is True, the whole statement will be True. The and statement only outputs True if all components that are anded together are True. Otherwise it will output False. The not statement simply inverts the truth value of whichever statement follows it. So a True statement will be evaluated as False when a not is placed in front of it. Similarly, a False statement will become True when a not is in front of it.\nSay that we have two logical statements, or assertions, $P$ and $Q$. The truth table for the basic logical operators is as follows:\n| P | Q | not P| P and Q | P or Q|\n|:-----:|:-----:|:---:|:---:|:---:|\n| True | True | False | True | True |\n| False | True | True | False | True |\n| True | False | False | False | True |\n| False | False | True | False | False |\nWe can string multiple logical statements together using the logical operators.",
"print ((2 < 3) and (3 > 0)) or ((5 > 6) and not (4 < 2))",
"Logical statements can be as simple or complex as we like, depending on what we need to express. Evaluating the above logical statement step by step we see that we are evaluating (True and True) or (False and not False). This becomes True or (False and True), subsequently becoming True or False, ultimately being evaluated as True.\nTruthiness\nData types in Python have a fun characteristic called truthiness. What this means is that most built-in types will evaluate as either True or False when a boolean value is needed (such as with an if-statement). As a general rule, containers like strings, tuples, dictionaries, lists, and sets, will return True if they contain anything at all and False if they contain nothing.",
"# Similar to how float() and int() work, bool() forces a value to be considered a boolean!\nprint bool('')\n\nprint bool('I have character!')\n\nprint bool([])\n\nprint bool([1, 2, 3])",
"And so on, for the other collections and containers. None also evaluates as False. The number 1 is equivalent to True and the number 0 is equivalent to False as well, in a boolean context.\nIf-statements\nWe can create segments of code that only execute if a set of conditions is met. We use if-statements in conjunction with logical statements in order to create branches in our code. \nAn if block gets entered when the condition is considered to be True. If condition is evaluated as False, the if block will simply be skipped unless there is an else block to accompany it. Conditions are made using either logical operators or by using the truthiness of values in Python. An if-statement is defined with a colon and a block of indented text.",
"# This is the basic format of an if statement. This is a vacuous example. \n# The string \"Condition\" will always evaluated as True because it is a\n# non-empty string. he purpose of this code is to show the formatting of\n# an if-statement.\nif \"Condition\": \n # This block of code will execute because the string is non-empty\n # Everything on these indented lines\n print True\nelse:\n # So if the condition that we examined with if is in fact False\n # This block of code will execute INSTEAD of the first block of code\n # Everything on these indented lines\n print False\n# The else block here will never execute because \"Condition\" is a non-empty string.\n\ni = 4\nif i == 5:\n print 'The variable i has a value of 5'",
"Because in this example i = 4 and the if-statement is only looking for whether i is equal to 5, the print statement will never be executed. We can add in an else statement to create a contingency block of code in case the condition in the if-statement is not evaluated as True.",
"i = 4\nif i == 5:\n print \"All lines in this indented block are part of this block\"\n print 'The variable i has a value of 5'\nelse:\n print \"All lines in this indented block are part of this block\"\n print 'The variable i is not equal to 5'",
"We can implement other branches off of the same if-statement by using elif, an abbreviation of \"else if\". We can include as many elifs as we like until we have exhausted all the logical branches of a condition.",
"i = 1\nif i == 1:\n print 'The variable i has a value of 1'\nelif i == 2:\n print 'The variable i has a value of 2'\nelif i == 3:\n print 'The variable i has a value of 3'\nelse:\n print \"I don't care what i is\"",
"You can also nest if-statements within if-statements to check for further conditions.",
"i = 10\nif i % 2 == 0:\n if i % 3 == 0:\n print 'i is divisible by both 2 and 3! Wow!'\n elif i % 5 == 0:\n print 'i is divisible by both 2 and 5! Wow!'\n else:\n print 'i is divisible by 2, but not 3 or 5. Meh.'\nelse:\n print 'I guess that i is an odd number. Boring.'",
"Remember that we can group multiple conditions together by using the logical operators!",
"i = 5\nj = 12\nif i < 10 and j > 11:\n print '{0} is less than 10 and {1} is greater than 11! How novel and interesting!'.format(i, j)",
"You can use the logical comparators to compare strings!",
"my_string = \"Carthago delenda est\"\nif my_string == \"Carthago delenda est\":\n print 'And so it was! For the glory of Rome!'\nelse:\n print 'War elephants are TERRIFYING. I am staying home.'",
"As with other data types, == will check for whether the two things on either side of it have the same value. In this case, we compare whether the value of the strings are the same. Using > or < or any of the other comparators is not quite so intuitive, however, so we will stay from using comparators with strings in this lecture. Comparators will examine the lexicographical order of the strings, which might be a bit more in-depth than you might like.\nSome built-in functions return a boolean value, so they can be used as conditions in an if-statement. User-defined functions can also be constructed so that they return a boolean value. This will be covered later with function definition!\nThe in keyword is generally used to check membership of a value within another value. We can check memebership in the context of an if-statement and use it to output a truth value.",
"if 'a' in my_string or 'e' in my_string:\n print 'Those are my favorite vowels!'",
"Here we use in to check whether the variable my_string contains any particular letters. We will later use in to iterate through lists!\nLoop Structures\nLoop structures are one of the most important parts of programming. The for loop and the while loop provide a way to repeatedly run a block of code repeatedly. A while loop will iterate until a certain condition has been met. If at any point after an iteration that condition is no longer satisfied, the loop terminates. A for loop will iterate over a sequence of values and terminate when the sequence has ended. You can instead include conditions within the for loop to decide whether it should terminate early or you could simply let it run its course.",
"i = 5\nwhile i > 0: # We can write this as 'while i:' because 0 is False!\n i -= 1\n print 'I am looping! {0} more to go!'.format(i)",
"With while loops we need to make sure that something actually changes from iteration to iteration so that that the loop actually terminates. In this case, we use the shorthand i -= 1 (short for i = i - 1) so that the value of i gets smaller with each iteration. Eventually i will be reduced to 0, rendering the condition False and exiting the loop.\nA for loop iterates a set number of times, determined when you state the entry into the loop. In this case we are iterating over the list returned from range(). The for loop selects a value from the list, in order, and temporarily assigns the value of i to it so that operations can be performed with the value.",
"for i in range(5):\n print 'I am looping! I have looped {0} times!'.format(i + 1)",
"Note that in this for loop we use the in keyword. Use of the in keyword is not limited to checking for membership as in the if-statement example. You can iterate over any collection with a for loop by using the in keyword.\nIn this next example, we will iterate over a set because we want to check for containment and add to a new set.",
"my_list = {'cats', 'dogs', 'lizards', 'cows', 'bats', 'sponges', 'humans'} # Lists all the animals in the world\nmammal_list = {'cats', 'dogs', 'cows', 'bats', 'humans'} # Lists all the mammals in the world\nmy_new_list = set()\nfor animal in my_list:\n if animal in mammal_list:\n # This adds any animal that is both in my_list and mammal_list to my_new_list\n my_new_list.add(animal)\n \nprint my_new_list",
"There are two statements that are very helpful in dealing with both for and while loops. These are break and continue. If break is encountered at any point while a loop is executing, the loop will immediately end.",
"i = 10\nwhile True:\n if i == 14:\n break\n i += 1 # This is shorthand for i = i + 1. It increments i with each iteration.\n print i\n\nfor i in range(5):\n if i == 2:\n break\n print i",
"The continue statement will tell the loop to immediately end this iteration and continue onto the next iteration of the loop.",
"i = 0\nwhile i < 5:\n i += 1\n if i == 3:\n continue\n print i",
"This loop skips printing the number $3$ because of the continue statement that executes when we enter the if-statement. The code never sees the command to print the number $3$ because it has already moved to the next iteration. The break and continue statements are further tools to help you control the flow of your loops and, as a result, your code.\nThe variable that we use to iterate over a loop will retain its value when the loop exits. Similarly, any variables defined within the context of the loop will continue to exist outside of it.",
"for i in range(5):\n loop_string = 'I transcend the loop!'\n print 'I am eternal! I am {0} and I exist everywhere!'.format(i)\n\nprint 'I persist! My value is {0}'.format(i)\nprint loop_string",
"We can also iterate over a dictionary!",
"my_dict = {'firstname' : 'Inigo', 'lastname' : 'Montoya', 'nemesis' : 'Rugen'}\n\nfor key in my_dict:\n print key",
"If we just iterate over a dictionary without doing anything else, we will only get the keys. We can either use the keys to get the values, like so:",
"for key in my_dict:\n print my_dict[key]",
"Or we can use the iteritems() function to get both key and value at the same time.",
"for key, value in my_dict.iteritems():\n print key, ':', value",
"The iteritems() function creates a tuple of each key-value pair and the for loop stores unpacks that tuple into key, value on each separate execution of the loop!\nFunctions\nA function is a reusable block of code that you can call repeatedly to make calculations, output data, or really do anything that you want. This is one of the key aspects of using a programming language. To add to the built-in functions in Python, you can define your own!",
"def hello_world():\n \"\"\" Prints Hello, world! \"\"\"\n print 'Hello, world!'\n\nhello_world()\n\nfor i in range(5):\n hello_world()",
"Functions are defined with def, a function name, a list of parameters, and a colon. Everything indented below the colon will be included in the definition of the function.\nWe can have our functions do anything that you can do with a normal block of code. For example, our hello_world() function prints a string every time it is called. If we want to keep a value that a function calculates, we can define the function so that it will return the value we want. This is a very important feature of functions, as any variable defined purely within a function will not exist outside of it.",
"def see_the_scope():\n in_function_string = \"I'm stuck in here!\"\n\nsee_the_scope()\nprint in_function_string",
"The scope of a variable is the part of a block of code where that variable is tied to a particular value. Functions in Python have an enclosed scope, making it so that variables defined within them can only be accessed directly within them. If we pass those values to a return statement we can get them out of the function. This makes it so that the function call returns values so that you can store them in variables that have a greater scope.\nIn this case specifically,including a return statement allows us to keep the string value that we define in the function.",
"def free_the_scope():\n in_function_string = \"Anything you can do I can do better!\"\n return in_function_string\nmy_string = free_the_scope()\nprint my_string",
"Just as we can get values out of a function, we can also put values into a function. We do this by defining our function with parameters.",
"def multiply_by_five(x):\n \"\"\" Multiplies an input number by 5 \"\"\"\n return x * 5\n\nn = 4\nprint n\nprint multiply_by_five(n)",
"In this example we only had one parameter for our function, x. We can easily add more parameters, separating everything with a comma.",
"def calculate_area(length, width):\n \"\"\" Calculates the area of a rectangle \"\"\"\n return length * width\n\nl = 5\nw = 10\nprint 'Area: ', calculate_area(l, w)\nprint 'Length: ', l\nprint 'Width: ', w\n\ndef calculate_volume(length, width, depth):\n \"\"\" Calculates the volume of a rectangular prism \"\"\"\n return length * width * depth",
"If we want to, we can define a function so that it takes an arbitrary number of parameters. We tell Python that we want this by using an asterisk (*).",
"def sum_values(*args):\n sum_val = 0\n for i in args:\n sum_val += i\n return sum_val\n\nprint sum_values(1, 2, 3)\nprint sum_values(10, 20, 30, 40, 50)\nprint sum_values(4, 2, 5, 1, 10, 249, 25, 24, 13, 6, 4)",
"The time to use *args as a parameter for your function is when you do not know how many values may be passed to it, as in the case of our sum function. The asterisk in this case is the syntax that tells Python that you are going to pass an arbitrary number of parameters into your function. These parameters are stored in the form of a tuple.",
"def test_args(*args):\n print type(args)\n\ntest_args(1, 2, 3, 4, 5, 6)",
"We can put as many elements into the args tuple as we want to when we call the function. However, because args is a tuple, we cannot modify it after it has been created.\nThe args name of the variable is purely by convention. You could just as easily name your parameter *vars or *things. You can treat the args tuple like you would any other tuple, easily accessing arg's values and iterating over it, as in the above sum_values(*args) function.\nOur functions can return any data type. This makes it easy for us to create functions that check for conditions that we might want to monitor.\nHere we define a function that returns a boolean value. We can easily use this in conjunction with if-statements and other situations that require a boolean.",
"def has_a_vowel(word):\n \"\"\" \n Checks to see whether a word contains a vowel \n If it doesn't contain a conventional vowel, it\n will check for the presence of 'y' or 'w'. Does\n not check to see whether those are in the word\n in a vowel context.\n \"\"\"\n vowel_list = ['a', 'e', 'i', 'o', 'u']\n \n for vowel in vowel_list:\n if vowel in word:\n return True\n # If there is a vowel in the word, the function returns, preventing anything after this loop from running\n return False\n\nmy_word = 'catnapping'\nif has_a_vowel(my_word):\n print 'How surprising, an english word contains a vowel.'\nelse:\n print 'This is actually surprising.'\n\ndef point_maker(x, y):\n \"\"\" Groups x and y values into a point, technically a tuple \"\"\"\n return x, y",
"This above function returns an ordered pair of the input parameters, stored as a tuple.",
"a = point_maker(0, 10)\nb = point_maker(5, 3)\ndef calculate_slope(point_a, point_b):\n \"\"\" Calculates the linear slope between two points \"\"\"\n return (point_b[1] - point_a[1])/(point_b[0] - point_a[0])\nprint \"The slope between a and b is {0}\".format(calculate_slope(a, b))",
"And that one calculates the slope between two points!",
"print \"The slope-intercept form of the line between a and b, using point a, is: y - {0} = {2}(x - {1})\".format(a[1], a[0], calculate_slope(a, b))",
"With the proper syntax, you can define functions to do whatever calculations you want. This makes them an indispensible part of programming in any language.\nNext Steps\nThis was a lot of material and there is still even more to cover! Make sure you play around with the cells in each notebook to accustom yourself to the syntax featured here and to figure out any limitations. If you want to delve even deeper into the material, the documentation for Python is all available online. We are in the process of developing a second part to this Python tutorial, designed to provide you with even more programming knowledge, so keep an eye on the Quantopian Lectures Page and the forums for any new lectures.\nThis presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. (\"Quantopian\"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
grokkaine/biopycourse | day2/scicomp_numpy.ipynb | cc0-1.0 | [
"Scientific computing\n\nNumpy: advanced array operations, multidimensional arrays\nScipy: scientific computing by examples\nsingular value decomposition, with scipy.linalg\nscipy.signal and scipy.fftpack: Signal theory\nscipy.optimize: Local and global optimization, fitting and root finding\nscipy.interpolate: Cubic interpolation\nscipy.integrate: Integration and ODE solvers\nscipy.ndimage - Image processing\n\n\nSimpy: symbolic math\n\nNumpy\n\nusing numpy improves RAM space and speed\nnumpy enforces strong typing, while Python is a dynamic typed language\ntranslates in Numpy using less heap space for representing data\nmultidimensional array operations are the core of scientific computing\n\nFurther reading:\n- https://docs.scipy.org/doc/numpy/user/basics.html",
"import numpy as np\nL = range(1000)\n%timeit [i**2 for i in L]\na = np.arange(1000)\n%timeit a**2\nprint(type(L[1]))\nprint(a.dtype)\n\nimport numpy as np\n##Get help!\n#np.lookfor('create array')\n#np.array?\n#np.arr*?\n\na = np.array([0, 1, 2, 3])\nprint(\"a:\\n\", a)\nb = np.array([[0, 1, 2], [3, 4, 5]])\nprint(\"b:\\n\", b)\n\nprint(\"b.shape:\\n\", b.shape)",
"Data types\nThere are 5 basic numerical types representing booleans (bool), integers (int), unsigned integers (uint) floating point (float) and complex. Those with numbers in their name indicate the bitsize of the type (i.e. how many bits are needed to represent a single value in memory).",
"print(\"np.float32(1.0) :\", np.float32(1.0))\nprint(\"np.arange(3, dtype=np.uint8) :\", np.arange(3, dtype=np.uint8))\n\nz = np.array([1, 2, 3], dtype='f')\nprint(z)\n\nz = np.arange(3, dtype=np.uint8)\nprint(z)\nprint(z.astype(float))\nprint(z.dtype)",
"Array creation",
"# extrinsic\nx = np.array([2,3,1,0])\nprint(x)\nprint()\nx = np.array([[ 1., 2.], [ 0., 0.], [ 1., 3.]])\nprint(x)\n\n#intrinsic\nb = np.arange(1, 9, 2)\nprint(b)\nc = np.linspace(0, 1, 6)\nprint(c)\n\nprint(np.arange(35).reshape(5,7))\n\nx = np.random.rand(35).reshape(5,7)\nprint(x)\n\n%pylab inline\nimport matplotlib.pyplot as plt\n\nimage = np.random.rand(30, 30)\nplt.imshow(image, cmap=plt.cm.hot) \nplt.colorbar() ",
"Indexing, slicing and selection",
"a = np.arange(10)\nprint(a)\nprint(a[0], a[2], a[-1], a[-3])\nprint(a[2:5], a[2:], a[:-2], a[::2], a[2::2])\n\na = np.diag(np.arange(3))\na[2, 1] = 10 # !third line, !second column\n\nprint(a)\nprint(a[1, 1])\n#print(a[1])\n#print(a[:,1], a[1,:])\n#print(a[1:,1:])\n\n# array indexes\nx = np.arange(10,1,-1)\nprint(x)\nprint()\nprint(x[np.array([3,3,-3,8])])\nprint()\nprint(x[np.array([[1,1],[2,3]])])\n\n# 10 random numbers 0 - 20\na = np.random.randint(0, 20, 10)\nprint(a)\nprint(a%3==0)\nprint(a[a%3==0])\n\na[a % 3 == 0] = -1\nprint(a)",
"Task:\n- What does this do:",
"# How does it work?\n# Print the primes!\ndef get_primes():\n primes = np.ones((100,), dtype=bool)\n primes[:2] = 0\n N_max = int(np.sqrt(len(primes)))\n for j in range(2, N_max):\n primes[2*j::j] = 0\n return primes\nprint(get_primes())\n",
"Broadcasting, assignment, structured arrays",
"a = np.arange(10)\nb = a\nprint(np.may_share_memory(a, b))\n\na = np.arange(10)\nc = a.copy() # force a copy\nprint(np.may_share_memory(a, c))\n\n#Array operations\na = np.array([1, 2, 3, 4])\nprint(\"a: \", a)\nprint(\"a + 1, 2**a: \", a + 1, 2**a)\nprint (\"2**(a + 1) - a: \", 2**(a + 1) - a)\n\na = np.array([1, 2, 3, 4])\nb = np.ones(4)+1\nprint(\"a: \",a)\nprint(\"b: \",b)\nprint(\"a - b, a * b: \", a - b, a * b)\n\nc = np.ones((3, 3))\nprint(c)\nprint(2*c + 1)\n\n# matrix multiplication\na = np.ones((3, 2)) + 1\nb = np.ones((2, 3)) + 1\nc = a.dot(b)\nprint(a, b, c, sep=\"\\n\\n\")\n\na = np.arange(5)\nprint(np.sin(a))\nprint(np.log(a))\nprint(np.exp(a))\n\n# shape manipulation\nx = np.array([1, 2, 3])\ny = x[:, np.newaxis]\nz = x[np.newaxis, :]\nprint(x, y, z, sep='\\n\\n')\n\n# flatten\na = np.array([[1, 2, 3], [4, 5, 6]])\nprint(a)\nprint(a.ravel())\n\n# sorting matrices\na = np.array([[4, 3, 5], [1, 2, 1]])\nb = np.sort(a, axis=1) #sorting per row\nprint(a)\nprint(b)\n\n# sorting arguments\na = np.array([4, 3, 1, 2])\nj = np.argsort(a)\nprint(j, a[j])",
"Reductions",
"# unidimensional\nx = np.array([1, 2, 3, 4])\nprint(np.sum(x), x.sum(), x.sum(axis=0))\n\n# multidimensional\nx = np.array([[1, 1], [2, 2]])\nprint(x)\nprint(x.sum(axis=0)) # rows (first dimension)\nprint(x.sum(axis=1)) # columns (second dimension)\n\nx = np.array([1, 3, 2])\nprint(x)\nprint(x.min(), x.max(), x.argmin(), x.argmax())\n\nprint(np.all([True, True, False]), np.any([True, True, False]))\n\nx = np.array([1, 2, 3, 1])\ny = np.array([[1, 2, 3], [5, 6, 1]])\nprint(x.mean(), x.std(), np.median(x), np.median(y, axis=-1))\n\na = np.zeros((100, 100))\nprint(np.any(a != 0))\n\na = np.array([1, 2, 3, 2])\nb = np.array([2, 2, 3, 2])\nc = np.array([6, 4, 4, 5])\nprint(((a <= b) & (b <= c)).all())",
"Tricksy task:\n- Replace all values greater than 25 with 9 and all values smaller than 10 with 29.",
"a = np.random.randint(0, 50, 10)\nprint(a)",
"Sympy\nSymbolic math is sometimes important, especially if we are weak at calculus or if we need to perform automated calculus on long formulas. We are briefly going through a few test cases, to get the feel of it. Symbolic math is especially developed for Mathematica, or Sage which is an open-source equivalent.",
"import sympy\nprint sympy.sqrt(8)\nimport math\nprint math.sqrt(8)\n\nfrom sympy import symbols\nx, y, z, t = symbols('x y z t')\nexpr = x + 2*y\nprint expr\nprint x * expr\nfrom sympy import expand, factor, simplify\nexpanded_expr = expand(x*expr)\nprint expanded_expr\nprint factor(expanded_expr)\nexp = expanded_expr.subs(x, z**t)\nprint exp\nprint simplify(exp)",
"In the scipy.optimize paragraph we needed the Hessian matrix for a function f. Here is how you can obtain it in sympy:",
"import sympy\nx, y = sympy.symbols('x y')\nf = .5*(1 - x)**2 + (y - x**2)**2\nh = sympy.hessian(f, [x,y])\nprint(h)\nfrom IPython.display import Latex\nLatex(sympy.latex(h))\n\n\nfrom IPython.display import HTML\nHTML('<iframe src=http://en.wikipedia.org/wiki/Hessian_matrix width=700 height=350></iframe>')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
bosscha/alma-calibrator | notebooks/selecting_source/alma_database_selection10.ipynb | gpl-2.0 | [
"What is the most important project that we need to download first?\nWhat I did is simply counting the occurance of the word (project name) in the report file",
"from collections import Counter\n\nfilename = \"report_8_nonALMACAL_priority.txt\"\n\nwith open(filename, 'r') as ifile:\n wordcount = Counter(ifile.read().split())\n\nlist_of_project = []\n\nfor item in wordcount:\n if len(item) == 14 and item[-1] == 'S': # project_name\n list_of_project.append([item, wordcount[item]])\n\nsorted_project = sorted(list_of_project, key=lambda data: data[1])\n\nprint(\"Number of project: \", len(sorted_project))\n\nsorted_from_large = list(reversed(sorted_project))",
"due to the structure of the report this number can not be used directly as a reference\ne.g. maybe large occurance due to small integration and observed many time and also it is possible only for one object in one band (like the first project in here)\nI think the year of Cycle is more important due to number of antenna.",
"# 15 first\nfor i in sorted_from_large[0:15]:\n print(i)",
"Sorted based on year",
"sorted_project_year = sorted(list_of_project, key=lambda data: data[0])\n\nsorted_from_new = list(reversed(sorted_project_year))\n\n# 15 first\nfor i in sorted_from_new[0:15]:\n print(i)",
"There is 'A' in project name e.g. 2016.A.00011.S, 2016.A.00010.S, what does it mean?"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
DhavalThkkar/internship2017 | Challenges/MNIST with Multi-Layer Perceptron.ipynb | apache-2.0 | [
"<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nMNIST Multi-Layer Perceptron\nIn this lecture we will build out a Multi Layer Perceptron model to try to classify hand written digits using TensorFlow (a very famous example).\nKeep in mind that no single lecture (or course!) can cover the vastness that is Deep Learning, I would highly suggest reading MIT's Deep Learning textbook for more information on these topics!\nGet the Data\nWe will be using the famous MNIST data set of handwritten digits. \nThe images which we will be working with are black and white images of size 28 x 28 pixels, or 784 pixels total. Our features will be the pixel values for each pixel. Either the pixel is \"white\" (blank with a 0), or there is some pixel value. \nWe will try to correctly predict what number is written down based solely on the image data in the form of an array. This type of problem (Image Recognition) is a great use case for Deep Learning Methods!\nThis data is to Deep Learning what the iris data set is to typical machine learning algorithms. \nLet's get the data:",
"import tensorflow as tf\n\n# Import MINST data\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"/tmp/data/\", one_hot=True)",
"Data Format\nThe data is stored in a vector format, although the original data was a 2-dimensional matirx with values representing how much pigment was at a certain location. Let's explore this:",
"type(mnist)\n\ntype(mnist.train.images)\n\n#mnist.train.images[0]\nmnist.train.images[2].shape\n\nsample = mnist.train.images[2].reshape(28,28)\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.imshow(sample)",
"Parameters\nWe'll need to define 4 parameters, it is really (really) hard to know what good parameter values are on a data set for which you have no experience with, however since MNIST is pretty famous, we have some reasonable values for our data below. The parameters here are:\n\nLearning Rate - How quickly to adjust the cost function.\nTraining Epochs - How many training cycles to go through\nBatch Size - Size of the 'batches' of training data",
"# Parameters\nlearning_rate = 0.001\ntraining_epochs = 150\nbatch_size = 100",
"Network Parameters\nHere we have parameters which will directly define our Neural Network, these would be adjusted depending on what your data looked like and what kind of a net you would want to build. Basically just some numbers we will eventually use to define some variables later on in our model:",
"# Network Parameters\nn_hidden_1 = 256 # 1st layer number of features\nn_hidden_2 = 256 # 2nd layer number of features\nn_input = 784 # MNIST data input (img shape: 28*28)\nn_classes = 10 # MNIST total classes (0-9 digits)\nn_samples = mnist.train.num_examples",
"TensorFlow Graph Input",
"x = tf.placeholder(\"float\", [None, n_input])\ny = tf.placeholder(\"float\", [None, n_classes])",
"MultiLayer Model\nIt is time to create our model, let's review what we want to create here.\nFirst we receive the input data array and then to send it to the first hidden layer. Then the data will begin to have a weight attached to it between layers (remember this is initially a random value) and then sent to a node to undergo an activation function (along with a Bias as mentioned in the lecture). Then it will continue on to the next hidden layer, and so on until the final output layer. In our case, we will just use two hidden layers, the more you use the longer the model will take to run (but it has more of an opportunity to possibly be more accurate on the training data).\nOnce the transformed \"data\" has reached the output layer we need to evaluate it. Here we will use a loss function (also called a cost function) to evaluate how far off we are from the desired result. In this case, how many of the classes we got correct. \nThen we will apply an optimization function to minimize the cost (lower the error). This is done by adjusting weight values accordingly across the network. In out example, we will use the Adam Optimizer, which keep in mind, relative to other mathematical concepts, is an extremely recent development.\nWe can adjust how quickly to apply this optimization by changing our earlier learning rate parameter. The lower the rate the higher the possibility for accurate training results, but that comes at the cost of having to wait (physical time wise) for the results. Of course, after a certain point there is no benefit to lower the learning rate.\nNow we will create our model, we'll start with 2 hidden layers, which use the [RELU](https://en.wikipedia.org/wiki/Rectifier_(neural_networks) activation function, which is a very simple rectifier function which essentially either returns x or zero. For our final output layer we will use a linear activation with matrix multiplication:",
"def multilayer_perceptron(x, weights, biases):\n '''\n x : Place Holder for Data Input\n weights: Dictionary of weights\n biases: Dicitionary of biases\n '''\n \n # First Hidden layer with RELU activation\n layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])\n layer_1 = tf.nn.relu(layer_1)\n \n # Second Hidden layer with RELU activation\n layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])\n layer_2 = tf.nn.relu(layer_2)\n \n # Last Output layer with linear activation\n out_layer = tf.matmul(layer_2, weights['out']) + biases['out']\n return out_layer",
"Weights and Bias\nIn order for our tensorflow model to work we need to create two dictionaries containing our weight and bias objects for the model. We can use the tf.variable object type. This is different from a constant because TensorFlow's Graph Object becomes aware of the states of all the variables. A Variable is a modifiable tensor that lives in TensorFlow's graph of interacting operations. It can be used and even modified by the computation. We will generally have the model parameters be Variables. From the documentation string:\nA variable maintains state in the graph across calls to `run()`. You add a variable to the graph by constructing an instance of the class `Variable`.\n\nThe `Variable()` constructor requires an initial value for the variable, which can be a `Tensor` of any type and shape. The initial value defines the type and shape of the variable. After construction, the type and shape of the variable are fixed. The value can be changed using one of the assign methods.\n\nWe'll use tf's built-in random_normal method to create the random values for our weights and biases (you could also just pass ones as the initial biases).",
"weights = {\n 'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),\n 'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),\n 'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))\n}\n\nbiases = {\n 'b1': tf.Variable(tf.random_normal([n_hidden_1])),\n 'b2': tf.Variable(tf.random_normal([n_hidden_2])),\n 'out': tf.Variable(tf.random_normal([n_classes]))\n}\n\n# Construct model\npred = multilayer_perceptron(x, weights, biases)",
"Cost and Optimization Functions\nWe'll use Tensorflow's built-in functions for this part (check out the documentation for a lot more options and discussion on this):",
"# Define loss and optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=x))\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)",
"Initialization of Variables\nNow initialize all those tf.Variable objects we created earlier. This will be the first thing we run when training our model:",
"# Initializing the variables\ninit = tf.initialize_all_variables()",
"Training the Model\nnext_batch()\nBefore we get started I want to cover one more convenience function in our mnist data object called next_batch. This returns a tuple in the form (X,y) with an array of the data and a y array indicating the class in the form of a binary array. For example:",
"Xsamp,ysamp = mnist.train.next_batch(1)\n\nplt.imshow(Xsamp.reshape(28,28))\n\n# Remember indexing starts at zero!\nprint(ysamp)",
"Running the Session\nNow it is time to run our session! Pay attention to how we have two loops, the outer loop which runs the epochs, and the inner loop which runs the batches for each epoch of training. Let's breakdown each step!",
"# Launch the session\nsess = tf.InteractiveSession()\n\n# Intialize all the variables\nsess.run(init)\n\n# Training Epochs\n# Essentially the max amount of loops possible before we stop\n# May stop earlier if cost/loss limit was set\nfor epoch in range(training_epochs):\n\n # Start with cost = 0.0\n avg_cost = 0.0\n\n # Convert total number of batches to integer\n total_batch = int(n_samples/batch_size)\n\n # Loop over all batches\n for i in range(total_batch):\n\n # Grab the next batch of training data and labels\n batch_x, batch_y = mnist.train.next_batch(batch_size)\n\n # Feed dictionary for optimization and loss value\n # Returns a tuple, but we only need 'c' the cost\n # So we set an underscore as a \"throwaway\"\n _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})\n\n # Compute average loss\n avg_cost += c / total_batch\n\n print(\"Epoch: {} cost={:.4f}\".format(epoch+1,avg_cost))\n\nprint(\"Model has completed {} Epochs of Training\".format(training_epochs))",
"Model Evaluations\nTensorflow comes with some built-in functions to help evaluate our model, including tf.equal and tf.cast with tf.reduce_mean.\ntf.equal()\nThis is essentially just a check of predictions == y_test. In our case since we know the format of the labels is a 1 in an array of zeroes, we can compare argmax() location of that 1. Remember that y here is still that placeholder we created at the very beginning, we will perform a series of operations to get a Tensor that we can eventually fill in the test data for with an evaluation method. What we are currently running will still be empty of test data:",
"# Test model\ncorrect_predictions = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))\n\nprint(correct_predictions[0])",
"In order to get a numerical value for our predictions we will need to use tf.cast to cast the Tensor of booleans back into a Tensor of Floating point values in order to take the mean of it.",
"correct_predictions = tf.cast(correct_predictions, \"float\")\n\nprint(correct_predictions[0])",
"Now we use the tf.reduce_mean function in order to grab the mean of the elements across the tensor.",
"accuracy = tf.reduce_mean(correct_predictions)\n\ntype(accuracy)",
"This may seem a little strange, but this accuracy is still a Tensor object. Remember that we still need to pass in our actual test data! Now we can call the MNIST test labels and images and evaluate our accuracy!",
"mnist.test.labels\n\nmnist.test.images",
"The eval() method allows you to directly evaluates this tensor in a Session without needing to call tf.sess():mm",
"print(\"Accuracy:\", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))",
"94% not too shabby! But this actually isn't anywhere near as good as it could be. Running for more training epochs with this data (around 20,000) can produce accuracy around 99%. But we won't do that here because that will take a very long time to run!\nGreat Job!\nExtra Credit: See what happens if you try to make this model again with more layers!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ondrejiayc/StatisticalMethods | examples/XrayImage/FirstLook.ipynb | gpl-2.0 | [
"A First Look at an X-ray Image Dataset\nImages are data. They can be 2D, from cameras, or 1D, from spectrographs, or 3D, from IFUs (integral field units). In each case, the data come packaged as an array of numbers, which we can visualize, and do calculations with.\nLet's suppose we are interested in clusters of galaxies. We choose one, Abell 1835, and propose to observe it with the XMM-Newton space telescope. We are successful, we design the observations, and they are taken for us. Next: we download the data, and take a look at it.\nGetting the Data\nWe will download our images from HEASARC, the online archive where XMM data are stored.",
"from __future__ import print_function\nimport astropy.io.fits as pyfits\nimport numpy as np\nimport os\nimport urllib\nimport astropy.visualization as viz\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 10.0)",
"Download the example data files if we don't already have them.",
"targdir = 'a1835_xmm'\nif not os.path.isdir(targdir):\n os.mkdir(targdir)\n\nfilenames = ('P0098010101M2U009IMAGE_3000.FTZ', \n 'P0098010101M2U009EXPMAP3000.FTZ',\n 'P0098010101M2X000BKGMAP3000.FTZ')\n\nremotedir = 'http://heasarc.gsfc.nasa.gov/FTP/xmm/data/rev0/0098010101/PPS/'\n\nfor filename in filenames:\n path = os.path.join(targdir, filename)\n url = os.path.join(remotedir, filename)\n if not os.path.isfile(path):\n urllib.urlretrieve(url, path)\n\nimagefile, expmapfile, bkgmapfile = [os.path.join(targdir, filename) for filename in filenames]\n \nfor filename in os.listdir(targdir):\n print('{0:>10.2f} KB {1}'.format(os.path.getsize(os.path.join(targdir, filename))/1024.0, filename))",
"The XMM MOS2 image\nLet's find the \"science\" image taken with the MOS2 camera, and display it.",
"imfits = pyfits.open(imagefile)\nimfits.info()",
"imfits is a FITS object, containing multiple data structures. The image itself is an array of integer type, and size 648x648 pixels, stored in the primary \"header data unit\" or HDU. \n\nIf we need it to be floating point for some reason, we need to cast it:\nim = imfits[0].data.astype('np.float32')\nNote that this (probably?) prevents us from using the pyfits \"writeto\" method to save any changes. Assuming the integer type is ok, just get a pointer to the image data.\n\nAccessing the .data member of the FITS object returns the image data as a numpy ndarray.",
"im = imfits[0].data",
"Let's look at this with ds9.",
"!ds9 -log \"$imagefile\"",
"If you don't have the image viewing tool ds9, you should install it - it's very useful astronomical software. You can download it (later!) from this webpage.\n\nWe can also display the image in the notebook:",
"plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');",
"Exercise\nWhat is going on in this image? \nMake a list of everything that is interesting about this image with your neighbor, and we'll discuss the features you identify in about 5 minutes time.",
"index = np.unravel_index(im.argmax(), im.shape)\nprint(\"image dimensions:\",im.shape)\nprint(\"location of maximum pixel value:\",index)\nprint(\"maximum pixel value: \",im[index])",
"NB. Images read in with pyfits are indexed with eg im[y,x]: ds9 shows that the maximum pixel value is at \"image coordinates\" x=328, y=348. pyplot knows what to do, but sometimes we may need to take the transpose of the im array. What pyplot does need to be told is that in astronomy, the origin of the image is conventionally taken to be at the bottom left hand corner, not the top left hand corner. That's what the origin=lower in the plt.imshow command was about.\nWe will work in image coordinates throughout this course, for simplicity. Aligning images on the sky via a \"World Coordinate System\" is something to be learned elsewhere."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kingb12/languagemodelRNN | report_notebooks/encdec_noing_250_512_040dr.ipynb | mit | [
"Encoder-Decoder Analysis\nModel Architecture",
"report_file = '/Users/bking/IdeaProjects/LanguageModelRNN/reports/encdec_noing_250_512_040dr_2.json'\nlog_file = '/Users/bking/IdeaProjects/LanguageModelRNN/logs/encdec_noing_250_512_040dr_2.json'\n\nimport json\nimport matplotlib.pyplot as plt\nwith open(report_file) as f:\n report = json.loads(f.read())\nwith open(log_file) as f:\n logs = json.loads(f.read())\nprint'Encoder: \\n\\n', report['architecture']['encoder']\nprint'Decoder: \\n\\n', report['architecture']['decoder']",
"Perplexity on Each Dataset",
"print('Train Perplexity: ', report['train_perplexity'])\nprint('Valid Perplexity: ', report['valid_perplexity'])\nprint('Test Perplexity: ', report['test_perplexity'])",
"Loss vs. Epoch",
"%matplotlib inline\nfor k in logs.keys():\n plt.plot(logs[k][0], logs[k][1], label=str(k) + ' (train)')\n plt.plot(logs[k][0], logs[k][2], label=str(k) + ' (valid)')\nplt.title('Loss v. Epoch')\nplt.xlabel('Epoch')\nplt.ylabel('Loss')\nplt.legend()\nplt.show()",
"Perplexity vs. Epoch",
"%matplotlib inline\nfor k in logs.keys():\n plt.plot(logs[k][0], logs[k][3], label=str(k) + ' (train)')\n plt.plot(logs[k][0], logs[k][4], label=str(k) + ' (valid)')\nplt.title('Perplexity v. Epoch')\nplt.xlabel('Epoch')\nplt.ylabel('Perplexity')\nplt.legend()\nplt.show()",
"Generations",
"def print_sample(sample):\n enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])\n gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])\n print('Input: '+ enc_input + '\\n')\n print('Gend: ' + sample['generated'] + '\\n')\n print('True: ' + gold + '\\n')\n print('\\n')\n \n\nfor sample in report['train_samples']:\n print_sample(sample)\n\nfor sample in report['valid_samples']:\n print_sample(sample)\n\nfor sample in report['test_samples']:\n print_sample(sample)",
"BLEU Analysis",
"print 'Overall Score: ', report['bleu']['score'], '\\n'\nprint '1-gram Score: ', report['bleu']['components']['1']\nprint '2-gram Score: ', report['bleu']['components']['2']\nprint '3-gram Score: ', report['bleu']['components']['3']\nprint '4-gram Score: ', report['bleu']['components']['4']",
"N-pairs BLEU Analysis\nThis analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We can expect very low scores in the ground truth and high scores can expose hyper-common generations",
"npairs_generated = report['n_pairs_bleu_generated']\nnpairs_gold = report['n_pairs_bleu_gold']\nprint 'Overall Score (Generated): ', npairs_generated['score'], '\\n'\nprint '1-gram Score: ', npairs_generated['components']['1']\nprint '2-gram Score: ', npairs_generated['components']['2']\nprint '3-gram Score: ', npairs_generated['components']['3']\nprint '4-gram Score: ', npairs_generated['components']['4']\n\nprint '\\n'\n\nprint 'Overall Score: (Gold)', npairs_gold['score'], '\\n'\nprint '1-gram Score: ', npairs_gold['components']['1']\nprint '2-gram Score: ', npairs_gold['components']['2']\nprint '3-gram Score: ', npairs_gold['components']['3']\nprint '4-gram Score: ', npairs_gold['components']['4']",
"Alignment Analysis\nThis analysis computs the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores in the ground truth and hyper-common generations to raise the scores",
"print 'Average Generated Score: ', report['average_alignment_generated']\nprint 'Average Gold Score: ', report['average_alignment_gold']"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
quaquel/EMAworkbench | ema_workbench/examples/scenario_discovery_resampling.ipynb | bsd-3-clause | [
"Performing Scenario Discovery in Python\nThe purpose of example is to demonstrate how one can do scenario discovery in python. I will demonstrate how we can perform both PRIM in an interactive way, as well as briefly show how to use CART, which is also available in the exploratory modeling workbench. There is ample literature on both CART and PRIM and their relative merits for use in scenario discovery. So I won't be discussing that here in any detail.\nIn order to demonstrate the use of the exploratory modeling workbench for scenario discovery, I am using a published example. I am using the data used in the original article by Ben Bryant and Rob Lempert where they first introduced 2010. Ben Bryant kindly made this data available and allowed me to share it. The data comes as a csv file. We can import the data easily using pandas. columns 2 up to and including 10 contain the experimental design, while the classification is presented in column 15\nThis example is a slightly updated version of a blog post on https://waterprogramming.wordpress.com/2015/08/05/scenario-discovery-in-python/",
"import pandas as pd\n\ndata = pd.read_csv(\"./data/bryant et al 2010 data.csv\", index_col=False)\nx = data.iloc[:, 2:11]\ny = data.iloc[:, 15].values",
"the exploratory modeling workbench comes with a seperate analysis package. This analysis package contains prim. So let's import prim. The workbench also has its own logging functionality. We can turn this on to get some more insight into prim while it is running.",
"from ema_workbench.analysis import prim\nfrom ema_workbench.util import ema_logging\n\nema_logging.log_to_stderr(ema_logging.INFO);",
"Next, we need to instantiate the prim algorithm. To mimic the original work of Ben Bryant and Rob Lempert, we set the peeling alpha to 0.1. The peeling alpha determines how much data is peeled off in each iteration of the algorithm. The lower the value, the less data is removed in each iteration. The minimium coverage threshold that a box should meet is set to 0.8. Next, we can use the instantiated algorithm to find a first box.",
"prim_alg = prim.Prim(x, y, threshold=0.8, peel_alpha=0.1)\nbox1 = prim_alg.find_box()",
"Let's investigate this first box is some detail. A first thing to look at is the trade off between coverage and density. The box has a convenience function for this called show_tradeoff.",
"box1.show_tradeoff()\nplt.show()",
"Since we are doing this analysis in a notebook, we can take advantage of the interactivity that the browser offers. A relatively recent addition to the python ecosystem is the library altair. Altair can be used to create interactive plots for use in a browser. Altair is an optional dependency for the workbench. If available, we can create the following visual.",
"box1.inspect_tradeoff()",
"Here we can interactively explore the boxes associated with each point in the density coverage trade-off. It also offers mouse overs for the various points on the trade off curve. Given the id of each point, we can also use the workbench to manually inpect the peeling trajectory. Following Bryant & Lempert, we inspect box 21.",
"box1.resample(21)\n\nbox1.inspect(21)\nbox1.inspect(21, style=\"graph\")\nplt.show()",
"If one where to do a detailed comparison with the results reported in the original article, one would see small numerical differences. These differences arise out of subtle differences in implementation. The most important difference is that the exploratory modeling workbench uses a custom objective function inside prim which is different from the one used in the scenario discovery toolkit. Other differences have to do with details about the hill climbing optimization that is used in prim, and in particular how ties are handled in selected the next step. The differences between the two implementations are only numerical, and don't affect the overarching conclusions drawn from the analysis. \nLet's select this 21 box, and get a more detailed view of what the box looks like. Following Bryant et al., we can use scatter plots for this.",
"box1.select(21)\nfig = box1.show_pairs_scatter(21)\nplt.show()",
"Because the last restriction is not significant, we can choose to drop this restriction from the box.",
"box1.drop_restriction(\"Cellulosic cost\")\nbox1.inspect(style=\"graph\")\nplt.show()",
"We have now found a first box that explains over 75% of the cases of interest. Let's see if we can find a second box that explains the remainder of the cases.",
"box2 = prim_alg.find_box()",
"As we can see, we are unable to find a second box. The best coverage we can achieve is 0.35, which is well below the specified 0.8 threshold. Let's look at the final overal results from interactively fitting PRIM to the data. For this, we can use to convenience functions that transform the stats and boxes to pandas data frames.",
"prim_alg.stats_to_dataframe()\n\nprim_alg.boxes_to_dataframe()",
"CART\nThe way of interacting with CART is quite similar to how we setup the prim analysis. We import cart from the analysis package. We instantiate the algorithm, and next fit CART to the data. This is done via the build_tree method.",
"from ema_workbench.analysis import cart\n\ncart_alg = cart.CART(x, y, 0.05)\ncart_alg.build_tree()",
"Now that we have trained CART on the data, we can investigate its results. Just like PRIM, we can use stats_to_dataframe and boxes_to_dataframe to get an overview.",
"cart_alg.stats_to_dataframe()\n\ncart_alg.boxes_to_dataframe()",
"Alternatively, we might want to look at the classification tree directly. For this, we can use the show_tree method.",
"fig = cart_alg.show_tree()\nfig.set_size_inches((18, 12))\nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
metpy/MetPy | v0.11/_downloads/5f6dfc4b913dc349eba9f04f6161b5f1/GINI_Water_Vapor.ipynb | bsd-3-clause | [
"%matplotlib inline",
"GINI Water Vapor Imagery\nUse MetPy's support for GINI files to read in a water vapor satellite image and plot the\ndata using CartoPy.",
"import cartopy.feature as cfeature\nimport matplotlib.pyplot as plt\nimport xarray as xr\n\nfrom metpy.cbook import get_test_data\nfrom metpy.io import GiniFile\nfrom metpy.plots import add_metpy_logo, add_timestamp, colortables\n\n# Open the GINI file from the test data\nf = GiniFile(get_test_data('WEST-CONUS_4km_WV_20151208_2200.gini'))\nprint(f)",
"Get a Dataset view of the data (essentially a NetCDF-like interface to the\nunderlying data). Pull out the data and (x, y) coordinates. We use metpy.parse_cf to\nhandle parsing some netCDF Climate and Forecasting (CF) metadata to simplify working with\nprojections.",
"ds = xr.open_dataset(f)\nx = ds.variables['x'][:]\ny = ds.variables['y'][:]\ndat = ds.metpy.parse_cf('WV')",
"Plot the image. We use MetPy's xarray/cartopy integration to automatically handle parsing\nthe projection information.",
"fig = plt.figure(figsize=(10, 12))\nadd_metpy_logo(fig, 125, 145)\nax = fig.add_subplot(1, 1, 1, projection=dat.metpy.cartopy_crs)\nwv_norm, wv_cmap = colortables.get_with_range('WVCIMSS', 100, 260)\nwv_cmap.set_under('k')\nim = ax.imshow(dat[:], cmap=wv_cmap, norm=wv_norm,\n extent=(x.min(), x.max(), y.min(), y.max()), origin='upper')\nax.add_feature(cfeature.COASTLINE.with_scale('50m'))\nadd_timestamp(ax, f.prod_desc.datetime, y=0.02, high_contrast=True)\n\nplt.show()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sympy/scipy-2017-codegen-tutorial | notebooks/07-the-hard-way.ipynb | bsd-3-clause | [
"The Harder Way: C Code generation, Custom Printers, and CSE [1 hour]\nOne of the most common low level programming languages in use is C. Compiled C code can be optimized for execution speed for many different computers. Python is written in C as well as many of the vectorized operations in NumPy and numerical algorithms in SciPy. It is often necessary to translate a complex mathematical expression into C for optimal execution speeds and memory management. In this notebook you will learn how to automatically translate a complex SymPy expression into C, compile the code, and run the program.\nWe will continue examining the complex chemical kinetic reaction ordinary differential equation introduced in the previous lesson.\nLearning Objectives\nAfter this lesson you will be able to:\n\nuse a code printer class to convert a SymPy expression to compilable C code\nuse an array compatible assignment to print valid C array code\nsubclass the printer class and modify it to provide custom behavior\nutilize common sub expression elimination to simplify and speed up the code execution\n\nImport SymPy and enable mathematical printing in the Jupyter notebook.",
"import sympy as sym\n\nsym.init_printing()",
"Ordinary Differential Equations\nThe previously generated ordinary differential equations that describe chemical kinetic reactions are loaded below. These expressions describe the right hand side of this mathematical equation:\n$$\\frac{d\\mathbf{y}}{dt} = \\mathbf{f}(\\mathbf{y}(t))$$\nwhere the state vector $\\mathbf{y}(t)$ is made up of 14 states, i.e. $\\mathbf{y}(t) \\in \\mathbb{R}^{14}$.\nBelow the variable rhs_of_odes represents $\\mathbf{f}(\\mathbf{y}(t))$ and states represents $\\mathbf{y}(t)$.\nFrom now own we will simply use $\\mathbf{y}$ instead of $\\mathbf{y}(t)$ and assume an implicit function of $t$.",
"from scipy2017codegen.chem import load_large_ode\n\nrhs_of_odes, states = load_large_ode()",
"Exercise [2 min]\nDisplay the expressions (rhs_of_odes and states), inspect them, and find out their types and dimensions. What are some of the characteristics of the equations (type of mathematical expressions, linear or non-linear, etc)?\nDouble Click For Solution\n<!--\n\nrhs_of_odes\ntype(rhs_of_odes)\nrhs_of_odes.shape\n# rhs_of_odes is a 14 x 1 SymPy matrix of expressions. The expressions are\n# long multivariate polynomials.\nstates\ntype(states)\nstates.shape\n# states is a 14 x 1 SymPy matrix of symbols\n\nThe equations are nonlinear equations of the states. There are 14 equations and 14 states. The coefficients in the equations are various floating point numbers.\n\n-->",
"# write your solution here",
"Compute the Jacobian\nAs has been shown in the previous lesson the Jacobian of the right hand side of the differential equations is often very useful for computations, such as integration and optimization. With:\n$$\\frac{d\\mathbf{y}}{dt} = \\mathbf{f}(\\mathbf{y})$$\nthe Jacobian is defined as:\n$$\\mathbf{J}(\\mathbf{y}) = \\frac{\\partial\\mathbf{f}(\\mathbf{y})}{\\partial\\mathbf{y}}$$\nSymPy can compute the Jacobian of matrix objects with the Matrix.jacobian() method.\nExercise [3 min]\nLook up the Jacobian in the SymPy documentation then compute the Jacobian and store the result in the variable jac_of_odes. Inspect the resulting Jacobian for dimensionality, type, and the symbolic form.\nDouble Click For Solution\n<!--\n\njac_of_odes = rhs_of_odes.jacobian(states)\ntype(jac_of_odes)\njac_of_odes.shape\njac_of_odes\n\nThe Jacobian is a 14 x 14 SymPy matrix and contains 196 expressions which are linear functions of the state variables.\n\n-->",
"# write your answer here",
"C Code Printing\nThe two expressions are large and will likely have to be excuted many thousands of times to compute the desired numerical values, so we want them to execute as fast as possible. We can use SymPy to print these expressions as C code.\nWe will design a double precision C function that evaluates both $\\mathbf{f}(\\mathbf{y})$ and $\\mathbf{J}(\\mathbf{y})$ simultaneously given the values of the states $\\mathbf{y}$. Below is a basic template for a C program that includes such a function, evaluate_odes(). Our job is to populate the function with the C version of the SymPy expressions.\n```C\ninclude <math.h>\ninclude <stdio.h>\nvoid evaluate_odes(const double state_vals[14], double rhs_result[14], double jac_result[196])\n{\n // We need to fill in the code here using SymPy.\n}\nint main() {\n// initialize the state vector with some values\ndouble state_vals[14] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14};\n// create \"empty\" 1D arrays to hold the results of the computation\ndouble rhs_result[14];\ndouble jac_result[196];\n\n// call the function\nevaluate_odes(state_vals, rhs_result, jac_result);\n\n// print the computed values to the terminal\nint i;\n\nprintf(\"The right hand side of the equations evaluates to:\\n\");\nfor (i=0; i < 14; i++) {\n printf(\"%lf\\n\", rhs_result[i]);\n}\n\nprintf(\"\\nThe Jacobian evaluates to:\\n\");\nfor (i=0; i < 196; i++) {\n printf(\"%lf\\n\", jac_result[i]);\n}\n\nreturn 0;\n\n}\n```\nInstead of using the ccode convenience function you learned earlier let's use the underlying code printer class to do the printing. This will allow us to modify the class to for custom printing further down.",
"from sympy.printing.ccode import C99CodePrinter",
"All printing classes have to be instantiated and then the .doprint() method can be used to print SymPy expressions. Let's try to print the right hand side of the differential equations.",
"printer = C99CodePrinter()\n\nprint(printer.doprint(rhs_of_odes))",
"In this case, the C code printer does not do what we desire. It does not support printing a SymPy Matrix (see the first line of the output). In C, on possible representation of a matrix is an array type. The array type in C stores contigous values, e.g. doubles, in a chunk of memory. You can declare an array of doubles in C like:\nC\ndouble my_array[10];\nThe word double is the data type of the individual values in the array which must all be the same. The word my_array is the variable name we choose to name the array and the [10] is the syntax to declare that this array will have 10 values.\nThe array is \"empty\" when first declared and can be filled with values like so:\nC\nmy_array[0] = 5;\nmy_array[1] = 6.78;\nmy array[2] = my_array[0] * 12;\nor like:\nC\nmy_array = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};\nIt is possible to declare multidimensional arrays in C that could map more directly to the indices of our two dimensional matrix, but in this case we will map our two dimensional matrix to a one dimenasional array using C contingous row ordering.\nThe code printers are capable of dealing with this need through the assign_to keyword argument in the .doprint() method but we must define a SymPy object that is appropriate to be assigned to. In our case, since we want to assign a Matrix we need to use an appropriately sized Matrix symbol.",
"rhs_result = sym.MatrixSymbol('rhs_result', 14, 1)\n\nprint(rhs_result)\n\nprint(rhs_result[0])\n\nprint(printer.doprint(rhs_of_odes, assign_to=rhs_result))",
"Notice that we have proper array value assignment and valid lines of C code that can be used in our function.\nExcercise [5 min]\nPrint out valid C code for the Jacobian matrix.\nDouble Click For Solution\n<!---\njac_result = sym.MatrixSymbol('jac_result', 14, 14)\n\nprint(jac_result)\n\nprint(printer.doprint(jac_of_odes, assign_to=jac_result))\n-->",
"# write your answer here",
"Changing the Behavior of the Printer\nThe SymPy code printers are relatively easy to extend. They are designed such that if you want to change how a particularly SymPy object prints, for example a Symbol, then you only need to modify the _print_Symbol method of the printer. In general, the code printers have a method for every SymPy object and also many builtin types. Use tab completion with C99CodePrinter._print_ to see all of the options.\nOnce you find the method you want to modify, it is often useful to look at the existing impelementation of the print method to see how the code is written.",
"C99CodePrinter._print_Symbol??",
"Below is a simple example of overiding the Symbol printer method. Note that you should use the self._print() method instead of simply returning the string so that the proper printer, self._print_str(), is dispatched. This is most important if you are printing non-singletons, i.e. expressions that are made up of multiple singletons.",
"C99CodePrinter._print_str??\n\nclass MyCodePrinter(C99CodePrinter):\n def _print_Symbol(self, expr):\n return self._print(\"No matter what symbol you pass in I will always print:\\n\\nNi!\")\n\nmy_printer = MyCodePrinter()\n\ntheta = sym.symbols('theta')\ntheta\n\nprint(my_printer.doprint(theta))",
"Exercise [10 min]\nOne issue with our current code printer is that the expressions use the symbols y0, y1, ..., y13 instead of accessing the values directly from the arrays with state_vals[0], state_vals[1], ..., state_vals[13]. We could go back and rename our SymPy symbols to use brackets, but another way would be to override the _print_Symbol() method to print these symbols as we desire. Modify the code printer so that it prints with the proper array access in the expression.\nDouble Click For Solution: Subclassing\n<!--\n\nThe following solution examines the symbol and if it is a state variable it overrides the printer, otherwise it uses the parent class to print the symbol as a fall back.\n\nclass MyCodePrinter(C99CodePrinter):\n def _print_Symbol(self, symbol):\n if symbol in states:\n idx = list(states).index(symbol)\n return self._print('state_vals[{}]'.format(idx))\n else:\n return super()._print_Symbol(symbol)\n\nmy_printer = MyCodePrinter()\n\nprint(my_printer.doprint(rhs_of_odes, assign_to=rhs_result))\n\n-->\n\nDouble Click For Solution: Exact replacement\n<!--\nAnother option is to replace the symbols with `MatrixSymbol` elements. Notice that the C printer assumes that a 2D matrix will get mapped to a 1D C array.\n\nstate_vals = sym.MatrixSymbol('state_vals', 14, 1)\nstate_array_map = dict(zip(states, state_vals))\nprint(state_array_map)\nprint(printer.doprint(rhs_of_odes.xreplace(state_array_map), assign_to=rhs_result))\n\n-->",
"# write your answer here",
"Bonus Exercise\nDo this exercise if you finish the previous one quickly.\nIt turns out that calling pow() for low value integer exponents executes slower than simply expanding the multiplication. For example pow(x, 2) could be printed as x*x. Modify the CCodePrinter ._print_Pow method to expand the multiplication if the exponent is less than or equal to 4. You may want to have a look at the source code with printer._print_Pow??\nNote that a Pow expression has an .exp for exponent and .base for the item being raised. For example $x^2$ would have:\npython\nexpr = x**2\nexpr.base == x\nexpr.exp == 2\nDouble Click for Solution\n<!--\n\nprinter._print_Pow??\n\nclass MyCodePrinter(C99CodePrinter):\n def _print_Pow(self, expr):\n if expr.exp.is_integer and expr.exp > 0 and expr.exp <= 4:\n return '*'.join([self._print(expr.base) for i in range(expr.exp)])\n else:\n return super()._print_Pow(expr)\n\nmy_printer = MyCodePrinter()\n\nx = sym.Symbol('x')\n\nmy_printer.doprint(x)\n\nmy_printer.doprint(x**2)\n\nmy_printer.doprint(x**4)\n\nmy_printer.doprint(x**5)\n\nmy_printer.doprint(x**1.5)\n\n-->",
"# write your answer here",
"Common Subexpression Elimination\nIf you look carefully at the expressions in the two matrices you'll see repeated expressions. These are not ideal in the sense that the computer has to repeat the exact same calculation multiple times. For large expressions this can be a major issue. Compilers, such as gcc, can often eliminate common subexpressions on their own when different optimization flags are invoked but for complex expressions the algorithms in some compilers do not do a thorough job or compilation can take an extremely long time. SymPy has tools to perform common subexpression elimination which is both thorough and reasonably efficient. In particular if gcc is run with the lowest optimization setting -O0 cse can give large speedups.\nFor example if you have two expressions:\npython\na = x*y + 5\nb = x*y + 6\nyou can convert this to these three expressions:\npython\nz = x*y\na = z + 5\nb = z + 6\nand x*y only has to be computed once.\nThe cse() function in SymPy returns the subexpression, z = x*y, and the simplified expressions: a = z + 5, b = z + 6.\nHere is how it works:",
"sm.cse?\n\nsub_exprs, simplified_rhs = sym.cse(rhs_of_odes)\n\nfor var, expr in sub_exprs:\n sym.pprint(sym.Eq(var, expr))",
"cse() can return a number of simplified expressions and to do this it returns a list. In our case we have 1 simplified expression that can be accessed as the first item of the list.",
"type(simplified_rhs)\n\nlen(simplified_rhs)\n\nsimplified_rhs[0]",
"You can find common subexpressions among multiple objects also:",
"jac_of_odes = rhs_of_odes.jacobian(states)\n\nsub_exprs, simplified_exprs = sym.cse((rhs_of_odes, jac_of_odes))\n\nfor var, expr in sub_exprs:\n sym.pprint(sym.Eq(var, expr))\n\nsimplified_exprs[0]\n\nsimplified_exprs[1]",
"Exercise [15min]\nUse common subexpression elimination to print out C code for your two arrays such that:\n```C\ndouble x0 = first_sub_expression;\n...\ndouble xN = last_sub_expression;\nrhs_result[0] = expressions_containing_the_subexpressions;\n...\nrhs_result[13] = ...;\njac_result[0] = ...;\n...\njac_result[195] = ...;\n```\nThe code you create can be copied and pasted into the provided template above to make a C program. Refer back to the introduction to C code printing above.\nTo give you a bit of help we will first introduce the Assignment class. The printers know how to print variable assignments that are defined by an Assignment instance.",
"from sympy.printing.codeprinter import Assignment\n\nprint(printer.doprint(Assignment(theta, 5)))",
"The following code demonstrates a way to use cse() to simplify single matrix objects. Note that we use ImmutableDenseMatrix because all dense matrics are internally converted to this type in the printers. Check the type of your matrices to see.",
"class CMatrixPrinter(C99CodePrinter):\n def _print_ImmutableDenseMatrix(self, expr):\n sub_exprs, simplified = sym.cse(expr)\n lines = []\n for var, sub_expr in sub_exprs:\n lines.append('double ' + self._print(Assignment(var, sub_expr)))\n M = sym.MatrixSymbol('M', *expr.shape)\n return '\\n'.join(lines) + '\\n' + self._print(Assignment(M, expr))\n\np = CMatrixPrinter()\nprint(p.doprint(jac_of_odes))",
"Now create a custom printer that uses cse() on the two matrices simulatneously so that subexpressions are not repeated. Hint: think about how the list printer method, _print_list(self, list_of_exprs), might help here.\nDouble Click For Solution\n<!--\n\nclass CMatrixPrinter(C99CodePrinter):\n\n def _print_list(self, list_of_exprs):\n # NOTE : The MutableDenseMatrix is turned in an ImmutableMatrix inside here.\n if all(isinstance(x, sym.ImmutableMatrix) for x in list_of_exprs):\n sub_exprs, simplified_exprs = sym.cse(list_of_exprs)\n lines = []\n for var, sub_expr in sub_exprs:\n ass = Assignment(var, sub_expr.xreplace(state_array_map))\n lines.append('double ' + self._print(ass))\n for mat in simplified_exprs:\n lines.append(self._print(mat.xreplace(state_array_map)))\n return '\\n'.join(lines)\n else:\n return super()._print_list(list_of_exprs)\n\n def _print_ImmutableDenseMatrix(self, expr):\n if expr.shape[1] > 1:\n M = sym.MatrixSymbol('jac_result', *expr.shape)\n else:\n M = sym.MatrixSymbol('rhs_result', *expr.shape)\n return self._print(Assignment(M, expr))\n\np = CMatrixPrinter()\nprint(p.doprint([rhs_of_odes, jac_of_odes]))\n\n-->",
"# write your answer here",
"Bonus Exercise: Compile and Run the C Program\nBelow we provide you with a template for the C program described above. You can use it by passing in a string like:\npython\nc_template.format(code='the holy grail')\nUse this template and your code printer to create a file called run.c in the working directory.\nTo compile the code there are several options. The first is gcc (the GNU C Compiler). If you have Linux, Mac, or Windows (w/ mingw installed) you can use the Jupyter notebook ! command to send your command to the terminal. For example:\nipython\n!gcc run.c -lm -o run\nThis will compile run.c, link against the C math library with -lm and output, -o, to a file run (Mac/Linux) or run.exe (Windows).\nOn Mac and Linux the program can be executed with:\nipython\n!./run\nand on Windows:\nipython\n!run.exe\nOther options are using the clang compiler or Windows cl compiler command:\nipython\n!clang run.c -lm -o run\n!cl run.c -lm\nDouble Click For Solution\n<!--\n\nc_program = c_template.format(code=p.doprint([rhs_of_odes, jac_of_odes]))\nprint(c_program)\n\nwith open('run.c', 'w') as f:\n f.write(c_program)\n\n-->",
"c_template = \"\"\"\\\n#include <math.h>\n#include <stdio.h>\n\nvoid evaluate_odes(const double state_vals[14], double rhs_result[14], double jac_result[196])\n{{\n // We need to fill in the code here using SymPy.\n{code}\n}}\n\nint main() {{\n\n // initialize the state vector with some values\n double state_vals[14] = {{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14}};\n // create \"empty\" 1D arrays to hold the results of the computation\n double rhs_result[14];\n double jac_result[196];\n\n // call the function\n evaluate_odes(state_vals, rhs_result, jac_result);\n\n // print the computed values to the terminal\n int i;\n printf(\"The right hand side of the equations evaluates to:\\\\n\");\n for (i=0; i < 14; i++) {{\n printf(\"%lf\\\\n\", rhs_result[i]);\n }}\n printf(\"\\\\nThe Jacobian evaluates to:\\\\n\");\n for (i=0; i < 196; i++) {{\n printf(\"%lf\\\\n\", jac_result[i]);\n }}\n\n return 0;\n}}\\\n\"\"\"\n\n# write your answer here"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
phoebe-project/phoebe2-docs | 2.3/tutorials/gravb_bol.ipynb | gpl-3.0 | [
"Gravity Brightening/Darkening (gravb_bol)\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).",
"#!pip install -I \"phoebe>=2.3,<2.4\"",
"As always, let's do imports and initialize a logger and a new bundle.",
"import phoebe\nfrom phoebe import u # units\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()\n\nb.add_dataset('lc', dataset='lc01')\nb.add_dataset('mesh', times=[0], columns=['intensities*'])",
"Relevant Parameters\nThe 'gravb_bol' parameter corresponds to the β coefficient for gravity darkening corrections.",
"print(b['gravb_bol'])\n\nprint(b['gravb_bol@primary'])",
"If you have a logger enabled, PHOEBE will print a warning if the value of gravb_bol is outside the \"suggested\" ranges. Note that this is strictly a warning, and will never turn into an error at b.run_compute().\nYou can also manually call b.run_checks(). The first returned item tells whether the system has passed checks: True means it has, False means it has failed, and None means the tests pass but with a warning. The second argument tells the first warning/error message raised by the checks.\nThe checks use the following \"suggested\" values:\n * teff 8000+: gravb_bol >= 0.9 (suggest 1.0)\n * teff 6600-8000: gravb_bol 0.32-1.0\n * teff 6600-: grav_bol < 0.9 (suggest 0.32)",
"print(b.run_checks())\n\nb['teff@primary'] = 8500\nb['gravb_bol@primary'] = 0.8\nprint(b.run_checks())\n\nb['teff@primary'] = 7000\nb['gravb_bol@primary'] = 0.2\nprint(b.run_checks())\n\nb['teff@primary'] = 6000\nb['gravb_bol@primary'] = 1.0\nprint(b.run_checks())",
"Influence on Intensities",
"b['teff@primary'] = 6000\nb['gravb_bol@primary'] = 0.32\n\nb.run_compute(model='gravb_bol_32')\n\nafig, mplfig = b['primary@mesh01@gravb_bol_32'].plot(fc='intensities', ec='None', show=True)\n\nb['gravb_bol@primary'] = 1.0\n\nb.run_compute(model='gravb_bol_10')\n\nafig, mplfig = b['primary@mesh01@gravb_bol_10'].plot(fc='intensities', ec='None', show=True)",
"Comparing these two plots, it is essentially impossible to notice any difference between the two models. But if we compare the intensities directly, we can see that there is a subtle difference, with a maximum difference of about 3%.",
"np.nanmax((b.get_value('intensities', component='primary', model='gravb_bol_32') - b.get_value('intensities', component='primary', model='gravb_bol_10'))/b.get_value('intensities', component='primary', model='gravb_bol_10'))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
abeschneider/algorithm_notes | Heaps.ipynb | mit | [
"Run time\n\nFind min/max: $O(1)$\nInsert: $O(\\log n)$\nDelete: $O(\\log n)$",
"from __future__ import print_function\nimport numpy as np\n\nfrom IPython.display import clear_output\nfrom pjdiagram import *\nfrom ipywidgets import *\n\nfrom heap import binary_heap_allocation_example, insert_item_to_heap_example, percolate_down_example",
"Description\nA heap is a type of binary tree that allows fast insertion and fast traversal. This makes them a good candidate for sorting large amounts of data. Heaps have the following properties:\n- the root node has maximum value key\n- the key stored at a non-root is at most the value of its parent\nTherefore:\n- any path from the root node to a leaf node is in nonincreasing order\nHowever:\n- the left and right sub-trees don't have a formal relationship\nStorage\nA binary tree can be represented using an array with the following indexing:\n$$\\texttt{parent}\\left(i\\right) = (i-1)/2$$\n$$\\texttt{left}\\left(i\\right) = (2i)+1$$\n$$\\texttt{right}\\left(i\\right) = (2i)+2$$\nExample\n\nroot node index: $0$\nleft child: $1$\nleft child: $2*1+1 = 3$\nright child: $2*1+2 = 4$\n\n\nright child: $2$\nleft child: $2*2 + 1 = 5$\nright child: $2*2 + 2 = 6$\n\n\n\n\n\nThe figure below provides a visual demonstration.",
"binary_heap_allocation_example()",
"Operations\nInserting a new item into the heap\nTo insert a new item into the heap, we start by first adding it to the end. Once added, we will percolate the item up until its parent is larger than the item.",
"def parent(i): return (i-1)/2\ndef left(i): return 2*i+1\ndef right(i): return 2*i+2\n\ndef percolate_up(heap, startpos, pos):\n ppos = parent(pos)\n while pos > startpos and heap[ppos] < heap[pos]:\n # percolate value up by swapping current position with parent position\n heap[pos], heap[ppos] = heap[ppos], heap[pos]\n \n # move up one node\n pos = ppos\n ppos = parent(pos)\n \ndef heap_insert(heap, value):\n # add value to end\n heap.append(value)\n \n # move value up heap until the nodes below it are smaller\n percolate_up(heap, 0, len(heap)-1)",
"To see why this works, we can visualize the algorithm. We start with a new value of 100 (highlighted with red). That is inserted into the bottom of the heap. We percoluate 100 up (each swap is highlighted) until it gets placed into the root note. Once finished, the heap's properties are now restored, and every child will have a smaller value than its parent.\nTo get a good sense of how percolute_up works, try putting different values in for the heap. Note that, it won't work correctly if the initial value isn't a proper heap.",
"heap = [16, 14, 10, 8, 7, 9, 3, 2, 4]\nheap.append(100)\ninsert_item_to_heap_example(heap)",
"A quick example of using the code:",
"heap = []\nheap_insert(heap, 20)\nprint(\"adding 20: \", heap) # [20]\nheap_insert(heap, 5)\nprint(\"adding 5: \", heap) # [5, 20]\nheap_insert(heap, 1)\nprint(\"adding 1: \", heap) # [1, 20, 5]\nheap_insert(heap, 50)\nprint(\"adding 50: \", heap) # [1, 20, 5, 50]\nheap_insert(heap, 6)\nprint(\"adding 6: \", heap) # [1, 5, 6, 50, 20]\n\nwith Canvas(400, 150) as ctx:\n draw_binary_tree(ctx, (200, 50), heap)\n",
"Removing an item from the heap\nRemoving the root node from the heap gives the largest value. In place of the root node, the smallest (i.e. last value in the heap) can be placed at the root, and the heap properties are then restored.\nTo restore the heap properties, the function percolate_down starts at the root node, and traverses down the tree. At every node it compares the current node's value with the left and right child. If the children are smaller than the current node, because of the heap properties, we know the rest of the tree is correctly ordered. If the current node is less than the left node or right node, it is swapped with the largest value.\nTo understand why this works, consider the two possibilities:\n(1) The current node is largest. This meets the definition of a heap.",
"heap = [10, 5, 3]\nwith Canvas(400, 80) as ctx:\n draw_binary_tree(ctx, (200, 20), heap)",
"(2) The left child is largest. In the case if we swap the parent node with the child, the heap properties are restored (i.e. the top node is larger than either of its children).",
"heap1 = [5, 10, 3]\nheap2 = [10, 5, 3]\nwith Canvas(400, 80) as ctx:\n draw_binary_tree(ctx, (100, 20), heap1)\n draw_binary_tree(ctx, (300, 20), heap2)",
"We have to do this recursively down the tree, as every swap we make can potentially cause a violation of the heap below. The code for the algorithm is given below:",
"def percolate_down(heap, i, size):\n l = left(i)\n r = right(i)\n if l < size and heap[l] > heap[i]:\n max = l\n else:\n max = i\n \n if r < size and heap[r] > heap[l]:\n max = r\n \n # if left or right is greater than current index\n if max != i:\n # swap values\n heap[i], heap[max] = heap[max], heap[i] \n \n # continue downward\n percolate_down(heap, max, len(heap))",
"To see this code in action, we'll start with a well-formed heap. Next, we'll take a value off of the heap by swapping the root node with the last node. Finally, we restore the heap with a call to percolate_down. In the demo below the highlighted nodes show the two nodes that will be swapped (i.e. parent node and the largest child).",
"heap = [16, 14, 10, 8, 7, 9, 3, 2, 4]\n\n# swap root with last value (4 is now root, and 16 is at the bottom)\nheap[0], heap[-1] = heap[-1], heap[0]\n\n# remove `16` from heap, and restore the heap properties\nvalue = heap.pop()\n\npercolate_down_example(heap)",
"Finally, putting everything together, we have heap_pop:",
"def heap_pop(heap):\n # swap root with last value\n heap[0], heap[-1] = heap[-1], heap[0]\n \n # remove last value\n result = heap.pop()\n \n # restore heap properties\n for i in range(len(heap)):\n percolate_down(heap, 0, len(heap))\n \n return result",
"To see heap_pop in action:",
"heap = []\nheap_insert(heap, 1)\nheap_insert(heap, 100)\nheap_insert(heap, 20)\nheap_insert(heap, 5)\nheap_insert(heap, 3)\nprint(heap)\n\nprint(heap_pop(heap))\nprint(heap_pop(heap))\nprint(heap_pop(heap))\nprint(heap_pop(heap))\nprint(heap_pop(heap))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
robertoalotufo/ia898 | src/hadamard.ipynb | mit | [
"Function hadamard\nSynopse\nHadamard Transform.\n\nF = iahadamard(f)\nOutput:\nF: Image.\n\n\nInput:\nf: Image.\n\n\n\n\n\nFunction code",
"import numpy as np\n\ndef hadamard(f):\n import ia898.src as ia\n f = np.asarray(f).astype(np.float64)\n if len(f.shape) == 1: f = f[:, newaxis]\n (m, n) = f.shape\n A = ia.hadamardmatrix(m)\n if (n == 1):\n F = np.dot(A, f)\n else:\n B = ia.hadamardmatrix(n)\n F = np.dot(np.dot(A, f), np.transpose(B))\n return F",
"Examples\nExample 1",
"testing = (__name__ == \"__main__\")\n\nif testing:\n ! jupyter nbconvert --to python hadamard.ipynb\n import numpy as np\n import sys,os\n import matplotlib.image as mpimg\n ia898path = os.path.abspath('../../')\n if ia898path not in sys.path:\n sys.path.append(ia898path)\n import ia898.src as ia\n\n\nif testing:\n f = mpimg.imread('../data/cameraman.tif')\n F = ia.hadamard(f)\n nb = ia.nbshow(2)\n nb.nbshow(f)\n nb.nbshow(ia.normalize(np.log(abs(F)+1)))\n nb.nbshow()",
"Measuring time:",
"if testing:\n f = mpimg.imread('../data/cameraman.tif')\n print('Computational time is:')\n %timeit ia.hadamard(f)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mzszym/oedes | examples/egdm/egdm-g1-g2-g3.ipynb | agpl-3.0 | [
"Evaluation of transport models\nThis example shows how to evaluate functions of the Extended Gaussian Disorder Model without running device simulation.\nBelow, $\\hat{\\sigma}$ denotes normalized Gaussian disorder $\\frac{\\sigma}{k T}$. $c$ denotes relative charge carrier concentration $\\frac{N}{N_0}$.",
"%matplotlib inline\nimport matplotlib.pylab as plt\nimport numpy as np\nfrom oedes.functions import egdm\nfrom scipy.optimize import brentq\nfrom oedes import *",
"Enhancement factor of mobility depending on charge carrier concentration $g_1$",
"c = 10**np.linspace(-10, 0., 101)\nfor nsigma in [3, 4, 5, 6][::-1]:\n g1 = egdm.g1(nsigma, np.where(c < egdm.g1_max_c, c, egdm.g1_max_c))\n plt.plot(c, g1, label='$\\hat{\\sigma}$ = %s'%nsigma)\n testing.store(g1, rtol=1e-7)\n # solve for g1(x)=2\n c2 = brentq(lambda x: egdm.g1(nsigma, x) - 2., 1e-10, 1e1)\n plt.plot(c2, 2, 'o', color='black')\nplt.yscale('log')\nplt.xscale('log')\nplt.ylim([1., 1e6])\nplt.xlabel('carrier concentration')\nplt.ylabel('mobility enhancement $g_1$')\nplt.legend(loc=0, frameon=False);",
"Enhancement factor of mobility depending on electric field $g_2$",
"En = np.linspace(0., 2.5, 101)\n\nfor nsigma in [3, 4, 5, 6][::-1]:\n g2 = egdm.g2(nsigma, np.where(En < egdm.g2_max_En, En, egdm.g2_max_En))\n testing.store(g2, rtol=1e-7)\n plt.plot(En, g2, label='$\\hat{\\sigma}$ = %s'%nsigma)\nplt.yscale('log')\nplt.ylim([1., 1e3])\nplt.xlabel('normalized electric field, $E_n=eaF/\\sigma$')\nplt.ylabel('mobility enhancement $g_2$')\nplt.legend(loc=0, frameon=False);",
"Enhancement factor of diffusion $g_3$",
"c = 10**np.linspace(-4, 0., 1001)\nfor nsigma in [3, 4, 5, 6][::-1]:\n g3 = egdm.g3(nsigma, np.where(c < egdm.g3_max_c, c, egdm.g3_max_c))\n plt.plot(c, g3, label='$\\hat{\\sigma}$ = %s'%nsigma)\n testing.store(g3, rtol=1e-7)\n # solve for g3(x)=2\n c2 = brentq(lambda x: egdm.g3(nsigma, x) - 2., 1e-4, 0.5)\n plt.plot(c2, 2, 'o', color='black')\nplt.xscale('log')\nplt.ylim([1., 8.])\nplt.xlabel('carrier concentration')\nplt.ylabel('diffusion enhancement $g_3$')\nplt.legend(loc=0, frameon=False);",
"Reference\nS. L. M. van Mensfoort and R. Coehoorn Effect of Gaussian disorder on the voltage dependence of the current density in sandwich-type devices based on organic semiconductors, Phys Rev B 78, 085207 (2008)\n\nThis file is a part of oedes, an open source organic electronic device \nsimulator. For more information, see https://www.github.com/mzszym/oedes."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
atulsingh0/MachineLearning | python-machine-learning/Assignment 1.ipynb | gpl-3.0 | [
"You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.\n\nAssignment 1 - Introduction to Machine Learning\nFor this assignment, you will be using the Breast Cancer Wisconsin (Diagnostic) Database to create a classifier that can help diagnose patients. First, read through the description of the dataset (below).",
"import numpy as np\nimport pandas as pd\nfrom sklearn.datasets import load_breast_cancer\n\ncancer = load_breast_cancer()\n\n#print(cancer.DESCR) # Print the data set description\ncancer",
"The object returned by load_breast_cancer() is a scikit-learn Bunch object, which is similar to a dictionary.",
"cancer.keys()",
"Question 0 (Example)\nHow many features does the breast cancer dataset have?\nThis function should return an integer.",
"# You should write your whole answer within the function provided. The autograder will call\n# this function and compare the return value against the correct solution value\ndef answer_zero():\n # This function returns the number of features of the breast cancer dataset, which is an integer. \n # The assignment question description will tell you the general format the autograder is expecting\n return len(cancer['feature_names'])\n\n# You can examine what your function returns by calling it in the cell. If you have questions\n# about the assignment formats, check out the discussion forums for any FAQs\nanswer_zero() ",
"Question 1\nScikit-learn works with lists, numpy arrays, scipy-sparse matrices, and pandas DataFrames, so converting the dataset to a DataFrame is not necessary for training this model. Using a DataFrame does however help make many things easier such as munging data, so let's practice creating a classifier with a pandas DataFrame. \nConvert the sklearn.dataset cancer to a DataFrame. \n*This function should return a (569, 31) DataFrame with * \n*columns = *\n['mean radius', 'mean texture', 'mean perimeter', 'mean area',\n'mean smoothness', 'mean compactness', 'mean concavity',\n'mean concave points', 'mean symmetry', 'mean fractal dimension',\n'radius error', 'texture error', 'perimeter error', 'area error',\n'smoothness error', 'compactness error', 'concavity error',\n'concave points error', 'symmetry error', 'fractal dimension error',\n'worst radius', 'worst texture', 'worst perimeter', 'worst area',\n'worst smoothness', 'worst compactness', 'worst concavity',\n'worst concave points', 'worst symmetry', 'worst fractal dimension',\n'target']\n\n*and index = *\nRangeIndex(start=0, stop=569, step=1)",
"def answer_one():\n \n df = pd.DataFrame(data=cancer['data'], columns=cancer['feature_names'])\n df['target'] = cancer['target']\n \n return df\n\n\nanswer_one()",
"Question 2\nWhat is the class distribution? (i.e. how many instances of malignant (encoded 0) and how many benign (encoded 1)?)\nThis function should return a Series named target of length 2 with integer values and index = ['malignant', 'benign']",
"def answer_two():\n cancerdf = answer_one()\n \n malignant = (cancerdf['target']==0).sum()\n benign = (cancerdf['target']==1).sum()\n ans = [malignant, benign]\n \n return ans\n\n\nanswer_two()",
"Question 3\nSplit the DataFrame into X (the data) and y (the labels).\nThis function should return a tuple of length 2: (X, y), where \n* X has shape (569, 30)\n* y has shape (569,).",
"cancerdf = answer_one()\ncancerdf.iloc[:, :-1]\n\ndef answer_three():\n cancerdf = answer_one()\n \n X= cancerdf.iloc[:, :-1]\n y= cancerdf['target']\n \n return X, y",
"Question 4\nUsing train_test_split, split X and y into training and test sets (X_train, X_test, y_train, and y_test).\nSet the random number generator state to 0 using random_state=0 to make sure your results match the autograder!\nThis function should return a tuple of length 4: (X_train, X_test, y_train, y_test), where \n* X_train has shape (426, 30)\n* X_test has shape (143, 30)\n* y_train has shape (426,)\n* y_test has shape (143,)",
"from sklearn.model_selection import train_test_split\n\ndef answer_four():\n X, y = answer_three()\n \n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25 , random_state=0)\n \n return X_train, X_test, y_train, y_test\n\nanswer_four()",
"Question 5\nUsing KNeighborsClassifier, fit a k-nearest neighbors (knn) classifier with X_train, y_train and using one nearest neighbor (n_neighbors = 1).\n*This function should return a * sklearn.neighbors.classification.KNeighborsClassifier.",
"from sklearn.neighbors import KNeighborsClassifier\n\ndef answer_five():\n X_train, X_test, y_train, y_test = answer_four()\n \n # Your code here\n \n return # Return your answer",
"Question 6\nUsing your knn classifier, predict the class label using the mean value for each feature.\nHint: You can use cancerdf.mean()[:-1].values.reshape(1, -1) which gets the mean value for each feature, ignores the target column, and reshapes the data from 1 dimension to 2 (necessary for the precict method of KNeighborsClassifier).\nThis function should return a numpy array either array([ 0.]) or array([ 1.])",
"def answer_six():\n cancerdf = answer_one()\n means = cancerdf.mean()[:-1].values.reshape(1, -1)\n \n # Your code here\n \n return # Return your answer",
"Question 7\nUsing your knn classifier, predict the class labels for the test set X_test.\nThis function should return a numpy array with shape (143,) and values either 0.0 or 1.0.",
"def answer_seven():\n X_train, X_test, y_train, y_test = answer_four()\n knn = answer_five()\n \n # Your code here\n \n return # Return your answer",
"Question 8\nFind the score (mean accuracy) of your knn classifier using X_test and y_test.\nThis function should return a float between 0 and 1",
"def answer_eight():\n X_train, X_test, y_train, y_test = answer_four()\n knn = answer_five()\n \n # Your code here\n \n return # Return your answer",
"Optional plot\nTry using the plotting function below to visualize the differet predicition scores between training and test sets, as well as malignant and benign cells.",
"def accuracy_plot():\n import matplotlib.pyplot as plt\n\n %matplotlib notebook\n\n X_train, X_test, y_train, y_test = answer_four()\n\n # Find the training and testing accuracies by target value (i.e. malignant, benign)\n mal_train_X = X_train[y_train==0]\n mal_train_y = y_train[y_train==0]\n ben_train_X = X_train[y_train==1]\n ben_train_y = y_train[y_train==1]\n\n mal_test_X = X_test[y_test==0]\n mal_test_y = y_test[y_test==0]\n ben_test_X = X_test[y_test==1]\n ben_test_y = y_test[y_test==1]\n\n knn = answer_five()\n\n scores = [knn.score(mal_train_X, mal_train_y), knn.score(ben_train_X, ben_train_y), \n knn.score(mal_test_X, mal_test_y), knn.score(ben_test_X, ben_test_y)]\n\n\n plt.figure()\n\n # Plot the scores as a bar chart\n bars = plt.bar(np.arange(4), scores, color=['#4c72b0','#4c72b0','#55a868','#55a868'])\n\n # directly label the score onto the bars\n for bar in bars:\n height = bar.get_height()\n plt.gca().text(bar.get_x() + bar.get_width()/2, height*.90, '{0:.{1}f}'.format(height, 2), \n ha='center', color='w', fontsize=11)\n\n # remove all the ticks (both axes), and tick labels on the Y axis\n plt.tick_params(top='off', bottom='off', left='off', right='off', labelleft='off', labelbottom='on')\n\n # remove the frame of the chart\n for spine in plt.gca().spines.values():\n spine.set_visible(False)\n\n plt.xticks([0,1,2,3], ['Malignant\\nTraining', 'Benign\\nTraining', 'Malignant\\nTest', 'Benign\\nTest'], alpha=0.8);\n plt.title('Training and Test Accuracies for Malignant and Benign Cells', alpha=0.8)\n\n# Uncomment the plotting function to see the visualization, \n# Comment out the plotting function when submitting your notebook for grading\n\n#accuracy_plot() "
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jbwhit/jupyter-best-practices | notebooks/07-Some_basics.ipynb | mit | [
"from __future__ import absolute_import, division, print_function",
"Github\nhttps://github.com/jbwhit/OSCON-2015/commit/6750b962606db27f69162b802b5de4f84ac916d5\nA few Python Basics",
"# Create a [list] \ndays = ['Monday', # multiple lines \n 'Tuesday', # acceptable \n 'Wednesday',\n 'Thursday',\n 'Friday',\n 'Saturday',\n 'Sunday', # trailing comma is fine!\n ] \n\ndays\n\n# Simple for-loop\nfor day in days:\n print(day)\n\n# Double for-loop\nfor day in days:\n for letter in day:\n print(letter)\n\nprint(days)\n\nprint(*days)\n\n# Double for-loop\nfor day in days:\n for letter in day:\n print(letter)\n print()\n\nfor day in days:\n for letter in day:\n print(letter.lower())",
"List Comprehensions",
"length_of_days = [len(day) for day in days]\nlength_of_days\n\nletters = [letter for day in days\n for letter in day]\n\nprint(letters)\n\nletters = [letter for day in days for letter in day]\nprint(letters)\n\n[num for num in xrange(10) if num % 2]\n\n[num for num in xrange(10) if num % 2 else \"doesn't work\"]\n\n[num if num % 2 else \"works\" for num in xrange(10)]\n\n[num for num in xrange(10)]\n\nsorted_letters = sorted([x.lower() for x in letters])\nprint(sorted_letters)\n\nunique_sorted_letters = sorted(set(sorted_letters))\n\nprint(\"There are\", len(unique_sorted_letters), \"unique letters in the days of the week.\")\nprint(\"They are:\", ''.join(unique_sorted_letters))\n\nprint(\"They are:\", '; '.join(unique_sorted_letters))\n\ndef first_three(input_string):\n \"\"\"Takes an input string and returns the first 3 characters.\"\"\"\n return input_string[:3] \n\nimport numpy as np\n\n# tab\nnp.linspace()\n\n[first_three(day) for day in days]\n\ndef last_N(input_string, number=2):\n \"\"\"Takes an input string and returns the last N characters.\"\"\"\n return input_string[-number:] \n\n[last_N(day, 4) for day in days if len(day) > 6]\n\nfrom math import pi\n\nprint([str(round(pi, i)) for i in xrange(2, 9)])\n\nlist_of_lists = [[i, round(pi, i)] for i in xrange(2, 9)]\nprint(list_of_lists)\n\nfor sublist in list_of_lists:\n print(sublist)\n\n# Let this be a warning to you!\n\n# If you see python code like the following in your work:\n\nfor x in range(len(list_of_lists)):\n print(\"Decimals:\", list_of_lists[x][0], \"expression:\", list_of_lists[x][1])\n\nprint(list_of_lists)\n\n# Change it to look more like this: \n\nfor decimal, rounded_pi in list_of_lists:\n print(\"Decimals:\", decimal, \"expression:\", rounded_pi)\n \n\n\n# enumerate if you really need the index\n\nfor index, day in enumerate(days):\n print(index, day)\n",
"Dictionaries\nPython dictionaries are awesome. They are hash tables and have a lot of neat CS properties. Learn and use them well.",
"from IPython.display import IFrame, HTML\nHTML('<iframe src=https://en.wikipedia.org/wiki/Hash_table width=100% height=550></iframe>')\n\nfellows = [\"Jonathan\", \"Alice\", \"Bob\"]\nuniversities = [\"UCSD\", \"UCSD\", \"Vanderbilt\"]\n\nfor x, y in zip(fellows, universities):\n print(x, y)\n\n# Don't do this\n{x: y for x, y in zip(fellows, universities)}\n\n# Doesn't work like you might expect\n{zip(fellows, universities)}\n\ndict(zip(fellows, universities))\n\nfellows\n\nfellow_dict = {fellow.lower(): university \n for fellow, university in zip(fellows, universities)}\n\nfellow_dict\n\nfellow_dict['bob']\n\nrounded_pi = {i:round(pi, i) for i in xrange(2, 9)}\n\nrounded_pi[5]\n\nsum([i ** 2 for i in range(10)])\n\nsum(i ** 2 for i in range(10))\n\nhuh = (i ** 2 for i in range(10))\n\nhuh.next()",
"Participate in StackOverflow\nAn example: http://stackoverflow.com/questions/6605006/convert-pdf-to-image-with-high-resolution"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
uber/pyro | tutorial/source/csis.ipynb | apache-2.0 | [
"Compiled Sequential Importance Sampling\nCompiled sequential importance sampling [1], or inference compilation, is a technique to amortize the computational cost of inference by learning a proposal distribution for importance sampling.\nThe proposal distribution is learned to minimise the KL divergence between the model and the guide, $\\rm{KL}!\\left( p({\\bf z} | {\\bf x}) \\lVert q_{\\phi, x}({\\bf z}) \\right)$. This differs from variational inference, which would minimise $\\rm{KL}!\\left( q_{\\phi, x}({\\bf z}) \\lVert p({\\bf z} | {\\bf x}) \\right)$. Using this loss encourages the approximate proposal distribution to be broader than the true posterior (mass covering), whereas variational inference typically learns a narrower approximation (mode seeking). Guides for importance sampling are usually desired to have heavier tails than the model (see this stackexchange question). Therefore, the inference compilation loss is usually more suited to compiling a guide for importance sampling.\nAnother benefit of CSIS is that, unlike many types of variational inference, it has no requirement that the model is differentiable. This allows it to be used for inference on arbitrarily complex programs (e.g. a Captcha renderer [1]).\nThis example shows CSIS being used to speed up inference on a simple problem with a known analytic solution.",
"import torch\nimport torch.nn as nn\nimport torch.functional as F\n\nimport pyro\nimport pyro.distributions as dist\nimport pyro.infer\nimport pyro.optim\n\nimport os\nsmoke_test = ('CI' in os.environ)\nn_steps = 2 if smoke_test else 2000",
"Specify the model:\nThe model is specified in the same way as any Pyro model, except that a keyword argument, observations, must be used to input a dictionary with each observation as a key. Since inference compilation involves learning to perform inference for any observed values, it is not important what the values in the dictionary are. 0 is used here.",
"def model(prior_mean, observations={\"x1\": 0, \"x2\": 0}):\n x = pyro.sample(\"z\", dist.Normal(prior_mean, torch.tensor(5**0.5)))\n y1 = pyro.sample(\"x1\", dist.Normal(x, torch.tensor(2**0.5)), obs=observations[\"x1\"])\n y2 = pyro.sample(\"x2\", dist.Normal(x, torch.tensor(2**0.5)), obs=observations[\"x2\"])\n return x",
"And the guide:\nThe guide will be trained (a.k.a. compiled) to use the observed values to make proposal distributions for each unconditioned sample statement. In the paper [1], a neural network architecture is automatically generated for any model. However, for the implementation in Pyro the user must specify a task-specific guide program structure. As with any Pyro guide function, this should have the same call signature as the model. It must also encounter the same unobserved sample statements as the model. So that the guide program can be trained to make good proposal distributions, the distributions at sample statements should depend on the values in observations. In this example, a feed-forward neural network is used to map the observations to a proposal distribution for the latent variable.\npyro.module is called when the guide function is run so that the guide parameters can be found by the optimiser during training.",
"class Guide(nn.Module):\n def __init__(self):\n super().__init__()\n self.neural_net = nn.Sequential(\n nn.Linear(2, 10),\n nn.ReLU(),\n nn.Linear(10, 20),\n nn.ReLU(),\n nn.Linear(20, 10),\n nn.ReLU(),\n nn.Linear(10, 5),\n nn.ReLU(),\n nn.Linear(5, 2))\n\n def forward(self, prior_mean, observations={\"x1\": 0, \"x2\": 0}):\n pyro.module(\"guide\", self)\n x1 = observations[\"x1\"]\n x2 = observations[\"x2\"]\n v = torch.cat((x1.view(1, 1), x2.view(1, 1)), 1)\n v = self.neural_net(v)\n mean = v[0, 0]\n std = v[0, 1].exp()\n pyro.sample(\"z\", dist.Normal(mean, std))\n\nguide = Guide()",
"Now create a CSIS instance:\nThe object is initialised with the model; the guide; a PyTorch optimiser for training the guide; and the number of importance-weighted samples to draw when performing inference. The guide will be optimised for a particular value of the model/guide argument, prior_mean, so we use the value set here throughout training and inference.",
"optimiser = pyro.optim.Adam({'lr': 1e-3})\ncsis = pyro.infer.CSIS(model, guide, optimiser, num_inference_samples=50)\nprior_mean = torch.tensor(1.)",
"Now we 'compile' the instance to perform inference on this model:\nThe arguments given to csis.step are passed to the model and guide when they are run to evaluate the loss.",
"for step in range(n_steps):\n csis.step(prior_mean)",
"And now perform inference by importance sampling:\nThe compiled guide program should now be able to propose a distribution for z that approximates the posterior, $p(z | x_1, x_2)$, for any $x_1, x_2$. The same prior_mean is entered again, as well as the observed values inside observations.",
"posterior = csis.run(prior_mean,\n observations={\"x1\": torch.tensor(8.),\n \"x2\": torch.tensor(9.)})\nmarginal = pyro.infer.EmpiricalMarginal(posterior, \"z\")",
"We now plot the results and compare with importance sampling:\nWe observe $x_1 = 8$ and $x_2 = 9$. Inference is performed by taking 50 samples using CSIS, and 50 using importance sampling from the prior. We then plot the resulting approximations to the posterior distributions, along with the analytic posterior.",
"import numpy as np\nimport scipy.stats\nimport matplotlib.pyplot as plt\n\nwith torch.no_grad():\n # Draw samples from empirical marginal for plotting\n csis_samples = torch.stack([marginal() for _ in range(1000)])\n\n # Calculate empirical marginal with importance sampling\n is_posterior = pyro.infer.Importance(model, num_samples=50).run(\n prior_mean, observations={\"x1\": torch.tensor(8.),\n \"x2\": torch.tensor(9.)})\n is_marginal = pyro.infer.EmpiricalMarginal(is_posterior, \"z\")\n is_samples = torch.stack([is_marginal() for _ in range(1000)])\n\n# Calculate true prior and posterior over z\ntrue_posterior_z = torch.arange(-10, 10, 0.05)\ntrue_posterior_p = dist.Normal(7.25, (5/6)**0.5).log_prob(true_posterior_z).exp()\nprior_z = true_posterior_z\nprior_p = dist.Normal(1., 5**0.5).log_prob(true_posterior_z).exp()\n\nplt.rcParams['figure.figsize'] = [30, 15]\nplt.rcParams.update({'font.size': 30})\nfig, ax = plt.subplots()\nplt.plot(prior_z, prior_p, 'k--', label='Prior')\nplt.plot(true_posterior_z, true_posterior_p, color='k', label='Analytic Posterior')\nplt.hist(csis_samples.numpy(), range=(-10, 10), bins=100, color='r', density=1,\n label=\"Inference Compilation\")\nplt.hist(is_samples.numpy(), range=(-10, 10), bins=100, color='b', density=1,\n label=\"Importance Sampling\")\nplt.xlim(-8, 10)\nplt.ylim(0, 5)\nplt.xlabel(\"z\")\nplt.ylabel(\"Estimated Posterior Probability Density\")\nplt.legend()\nplt.show()",
"Using $x_1 = 8$ and $x_2 = 9$ gives a posterior far from the prior, and so using the prior as a guide for importance sampling is inefficient, giving a very small effective sample size. By first learning a suitable guide function, CSIS has a proposal distribution much more closely matched to the true posterior. This allows samples to be drawn with far better coverage of the true posterior, and greater effective sample size, as shown in the graph above.\nFor other examples of inference compilation, see [1] or https://github.com/probprog/anglican-infcomp-examples.\nReferences\n[1] Inference compilation and universal probabilistic programming,<br /> \nTuan Anh Le, Atilim Gunes Baydin, and Frank Wood"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
lilleswing/deepchem | examples/tutorials/25_Uncertainty_In_Deep_Learning.ipynb | mit | [
"Tutorial Part 25: Uncertainty in Deep Learning\nA common criticism of deep learning models is that they tend to act as black boxes. A model produces outputs, but doesn't given enough context to interpret them properly. How reliable are the model's predictions? Are some predictions more reliable than others? If a model predicts a value of 5.372 for some quantity, should you assume the true value is between 5.371 and 5.373? Or that it's between 2 and 8? In some fields this situation might be good enough, but not in science. For every value predicted by a model, we also want an estimate of the uncertainty in that value so we can know what conclusions to draw based on it.\nDeepChem makes it very easy to estimate the uncertainty of predicted outputs (at least for the models that support it—not all of them do). Let's start by seeing an example of how to generate uncertainty estimates. We load a dataset, create a model, train it on the training set, predict the output on the test set, and then derive some uncertainty estimates.\nColab\nThis tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.\n\nSetup\nTo run DeepChem within Colab, you'll need to run the following installation commands. This will take about 5 minutes to run to completion and install your environment. You can of course run this tutorial locally if you prefer. In that case, don't run these cells since they will download and install Anaconda on your local machine.",
"!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py\nimport conda_installer\nconda_installer.install()\n!/root/miniconda/bin/conda info -e\n\n!pip install --pre deepchem\nimport deepchem\ndeepchem.__version__",
"We'll use the Delaney dataset from the MoleculeNet suite to run our experiments in this tutorial. Let's load up our dataset for our experiments, and then make some uncertainty predictions.",
"import deepchem as dc\nimport numpy as np\nimport matplotlib.pyplot as plot\n\ntasks, datasets, transformers = dc.molnet.load_delaney()\ntrain_dataset, valid_dataset, test_dataset = datasets\n\nmodel = dc.models.MultitaskRegressor(len(tasks), 1024, uncertainty=True)\nmodel.fit(train_dataset, nb_epoch=20)\ny_pred, y_std = model.predict_uncertainty(test_dataset)",
"All of this looks exactly like any other example, with just two differences. First, we add the option uncertainty=True when creating the model. This instructs it to add features to the model that are needed for estimating uncertainty. Second, we call predict_uncertainty() instead of predict() to produce the output. y_pred is the predicted outputs. y_std is another array of the same shape, where each element is an estimate of the uncertainty (standard deviation) of the corresponding element in y_pred. And that's all there is to it! Simple, right?\nOf course, it isn't really that simple at all. DeepChem is doing a lot of work to come up with those uncertainties. So now let's pull back the curtain and see what is really happening. (For the full mathematical details of calculating uncertainty, see https://arxiv.org/abs/1703.04977)\nTo begin with, what does \"uncertainty\" mean? Intuitively, it is a measure of how much we can trust the predictions. More formally, we expect that the true value of whatever we are trying to predict should usually be within a few standard deviations of the predicted value. But uncertainty comes from many sources, ranging from noisy training data to bad modelling choices, and different sources behave in different ways. It turns out there are two fundamental types of uncertainty we need to take into account.\nAleatoric Uncertainty\nConsider the following graph. It shows the best fit linear regression to a set of ten data points.",
"# Generate some fake data and plot a regression line.\nx = np.linspace(0, 5, 10)\ny = 0.15*x + np.random.random(10)\nplot.scatter(x, y)\nfit = np.polyfit(x, y, 1)\nline_x = np.linspace(-1, 6, 2)\nplot.plot(line_x, np.poly1d(fit)(line_x))\nplot.show()",
"The line clearly does not do a great job of fitting the data. There are many possible reasons for this. Perhaps the measuring device used to capture the data was not very accurate. Perhaps y depends on some other factor in addition to x, and if we knew the value of that factor for each data point we could predict y more accurately. Maybe the relationship between x and y simply isn't linear, and we need a more complicated model to capture it. Regardless of the cause, the model clearly does a poor job of predicting the training data, and we need to keep that in mind. We cannot expect it to be any more accurate on test data than on training data. This is known as aleatoric uncertainty.\nHow can we estimate the size of this uncertainty? By training a model to do it, of course! At the same time it is learning to predict the outputs, it is also learning to predict how accurately each output matches the training data. For every output of the model, we add a second output that produces the corresponding uncertainty. Then we modify the loss function to make it learn both outputs at the same time.\nEpistemic Uncertainty\nNow consider these three curves. They are fit to the same data points as before, but this time we are using 10th degree polynomials.",
"plot.figure(figsize=(12, 3))\nline_x = np.linspace(0, 5, 50)\nfor i in range(3):\n plot.subplot(1, 3, i+1)\n plot.scatter(x, y)\n fit = np.polyfit(np.concatenate([x, [3]]), np.concatenate([y, [i]]), 10)\n plot.plot(line_x, np.poly1d(fit)(line_x))\nplot.show()",
"Each of them perfectly interpolates the data points, yet they clearly are different models. (In fact, there are infinitely many 10th degree polynomials that exactly interpolate any ten data points.) They make identical predictions for the data we fit them to, but for any other value of x they produce different predictions. This is called epistemic uncertainty. It means the data does not fully constrain the model. Given the training data, there are many different models we could have found, and those models make different predictions.\nThe ideal way to measure epistemic uncertainty is to train many different models, each time using a different random seed and possibly varying hyperparameters. Then use all of them for each input and see how much the predictions vary. This is very expensive to do, since it involves repeating the whole training process many times. Fortunately, we can approximate the same effect in a less expensive way: by using dropout.\nRecall that when you train a model with dropout, you are effectively training a huge ensemble of different models all at once. Each training sample is evaluated with a different dropout mask, corresponding to a different random subset of the connections in the full model. Usually we only perform dropout during training and use a single averaged mask for prediction. But instead, let's use dropout for prediction too. We can compute the output for lots of different dropout masks, then see how much the predictions vary. This turns out to give a reasonable estimate of the epistemic uncertainty in the outputs.\nUncertain Uncertainty?\nNow we can combine the two types of uncertainty to compute an overall estimate of the error in each output:\n$$\\sigma_\\text{total} = \\sqrt{\\sigma_\\text{aleatoric}^2 + \\sigma_\\text{epistemic}^2}$$\nThis is the value DeepChem reports. But how much can you trust it? Remember how I started this tutorial: deep learning models should not be used as black boxes. We want to know how reliable the outputs are. Adding uncertainty estimates does not completely eliminate the problem; it just adds a layer of indirection. Now we have estimates of how reliable the outputs are, but no guarantees that those estimates are themselves reliable.\nLet's go back to the example we started with. We trained a model on the SAMPL training set, then generated predictions and uncertainties for the test set. Since we know the correct outputs for all the test samples, we can evaluate how well we did. Here is a plot of the absolute error in the predicted output versus the predicted uncertainty.",
"abs_error = np.abs(y_pred.flatten()-test_dataset.y.flatten())\nplot.scatter(y_std.flatten(), abs_error)\nplot.xlabel('Standard Deviation')\nplot.ylabel('Absolute Error')\nplot.show()",
"The first thing we notice is that the axes have similar ranges. The model clearly has learned the overall magnitude of errors in the predictions. There also is clearly a correlation between the axes. Values with larger uncertainties tend on average to have larger errors. (Strictly speaking, we expect the absolute error to be less than the predicted uncertainty. Even a very uncertain number could still happen to be close to the correct value by chance. If the model is working well, there should be more points below the diagonal than above it.)\nNow let's see how well the values satisfy the expected distribution. If the standard deviations are correct, and if the errors are normally distributed (which is certainly not guaranteed to be true!), we expect 95% of the values to be within two standard deviations, and 99% to be within three standard deviations. Here is a histogram of errors as measured in standard deviations.",
"plot.hist(abs_error/y_std.flatten(), 20)\nplot.show()",
"All the values are in the expected range, and the distribution looks roughly Gaussian although not exactly. Perhaps this indicates the errors are not normally distributed, but it may also reflect inaccuracies in the uncertainties. This is an important reminder: the uncertainties are just estimates, not rigorous measurements. Most of them are pretty good, but you should not put too much confidence in any single value.\nCongratulations! Time to join the Community!\nCongratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:\nStar DeepChem on GitHub\nStarring DeepChem on GitHub helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.\nJoin the DeepChem Gitter\nThe DeepChem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
phoebe-project/phoebe2-docs | 2.0/tutorials/ORB.ipynb | gpl-3.0 | [
"'orb' Datasets and Options\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).",
"!pip install -I \"phoebe>=2.0,<2.1\"",
"As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.",
"%matplotlib inline\n\nimport phoebe\nfrom phoebe import u # units\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()",
"Dataset Parameters\nLet's create the ParameterSet which would be added to the Bundle when calling add_dataset. Later we'll call add_dataset, which will create and attach this ParameterSet for us.",
"ps, constraints = phoebe.dataset.orb()\nprint ps",
"times",
"print ps['times']",
"Compute Options\nLet's look at the compute options (for the default PHOEBE 2 backend) that relate to dynamics and the ORB dataset",
"ps_compute = phoebe.compute.phoebe()\nprint ps_compute",
"dynamics_method",
"print ps_compute['dynamics_method']",
"The 'dynamics_method' parameter controls how stars and components are placed in the coordinate system as a function of time and has several choices:\n * keplerian (default): Use Kepler's laws to determine positions. If the system has more than two components, then each orbit is treated independently and nested (ie there are no dynamical/tidal effects - the inner orbit is treated as a single point mass in the outer orbit).\n * more coming soon\nltte",
"print ps_compute['ltte']",
"The 'ltte' parameter sets whether light travel time effects (Roemer delay) are included. If set to False, the positions and velocities are returned as they actually are for that given object at that given time. If set to True, they are instead returned as they were or will be when their light reaches the origin of the coordinate system.\nSee the Systemic Velocity Example Script for an example of how 'ltte' and 'vgamma' (systemic velocity) interplay.\nSynthetics",
"b.add_dataset('orb', times=np.linspace(0,3,201))\n\nb.run_compute()\n\nb['orb@model'].twigs\n\nprint b['times@primary@orb01@orb@model']\n\nprint b['xs@primary@orb01@orb@model']\n\nprint b['vxs@primary@orb01@orb@model']",
"Plotting\nBy default, orb datasets plot as 'ys' vx 'xs' (plane of sky). Notice the y-scale here with inclination set to 90.",
"axs, artists = b['orb@model'].plot()",
"As always, you have access to any of the arrays for either axes, so if you want to plot 'vxs' vs 'times'",
"axs, artists = b['orb@model'].plot(x='times', y='vxs')",
"3d axes are not yet supported for orbits, but hopefully will be soon.\nOnce they are supported, they will default to x, y, and z positions plotted on their respective axes.",
"fig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\naxs, artists = b['orb@model'].plot(xlim=(-4,4), ylim=(-4,4), zlim=(-4,4))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
xR86/ml-stuff | kaggle/machine-learning-with-a-heart/Lab4.ipynb | mit | [
"Tema 4.1 <a class=\"tocSkip\">\nImports",
"import math\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport graphviz\n\nimport sklearn.tree\nimport sklearn.neighbors\nimport sklearn.naive_bayes\nimport sklearn.svm\nimport sklearn.metrics\nimport sklearn.preprocessing\nimport sklearn.model_selection",
"Data\nhttps://www.drivendata.org/competitions/54/machine-learning-with-a-heart/page/109/\n- Numeric\n - slope_of_peak_exercise_st_segment (int, semi-categorical, 1-3)\n - resting_blood_pressure (int)\n - chest_pain_type (int, semi-categorical, 1-4)\n - num_major_vessels (int, semi-categorical, 0-3)\n - resting_ekg_results (int, semi-categorical, 0-2)\n - serum_cholesterol_mg_per_dl (int)\n - oldpeak_eq_st_depression (float)\n - age (int)\n - max_heart_rate_achieved (int)\n- Categorical\n - thal\n - normal\n - fixed_defect\n - reversible_defect\n - fasting_blood_sugar_gt_120_mg_per_dl (blood sugar > 120)\n - 0\n - 1\n - sex\n - 0 (f)\n - 1 (m)\n - exercise_induced_angina \n - 0\n - 1",
"features = pd.read_csv('train_values.csv')\nlabels = pd.read_csv('train_labels.csv')\n\nfeatures.head()\n\nlabels.head()\n\nFEATURES = ['slope_of_peak_exercise_st_segment', \n 'thal',\n 'resting_blood_pressure', \n 'chest_pain_type', \n 'num_major_vessels', \n 'fasting_blood_sugar_gt_120_mg_per_dl',\n 'resting_ekg_results', \n 'serum_cholesterol_mg_per_dl', \n 'oldpeak_eq_st_depression', \n 'sex',\n 'age', \n 'max_heart_rate_achieved', \n 'exercise_induced_angina']\n\nLABEL = 'heart_disease_present'\n\nEXPLANATIONS = {'slope_of_peak_exercise_st_segment' : 'Quality of Blood Flow to the Heart',\n 'thal' : 'Thallium Stress Test Measuring Blood Flow to the Heart',\n 'resting_blood_pressure' : 'Resting Blood Pressure', \n 'chest_pain_type' : 'Chest Pain Type (1-4)',\n 'num_major_vessels' : 'Major Vessels (0-3) Colored by Flourosopy',\n 'fasting_blood_sugar_gt_120_mg_per_dl' : 'Fasting Blood Sugar > 120 mg/dl',\n 'resting_ekg_results' : 'Resting Electrocardiographic Results (0-2)',\n 'serum_cholesterol_mg_per_dl' : 'Serum Cholesterol in mg/dl',\n 'oldpeak_eq_st_depression' : 'Exercise vs. Rest\\nA Measure of Abnormality in Electrocardiograms',\n 'age' : 'Age (years)',\n 'sex' : 'Sex (m/f)',\n 'max_heart_rate_achieved' : 'Maximum Heart Rate Achieved (bpm)',\n 'exercise_induced_angina' : 'Exercise-Induced Chest Pain (yes/no)'}\n\nNUMERICAL_FEATURES = ['slope_of_peak_exercise_st_segment', \n 'resting_blood_pressure', \n 'chest_pain_type', \n 'num_major_vessels', \n 'resting_ekg_results', \n 'serum_cholesterol_mg_per_dl', \n 'oldpeak_eq_st_depression', \n 'age', \n 'max_heart_rate_achieved']\n\nCATEGORICAL_FEATURES = ['thal', \n 'fasting_blood_sugar_gt_120_mg_per_dl', \n 'sex', \n 'exercise_induced_angina']\n\nCATEGORICAL_FEATURE_VALUES = {'thal' : [[0, 1, 2], ['Normal', \n 'Fixed Defect', \n 'Reversible Defect']], \n 'fasting_blood_sugar_gt_120_mg_per_dl' : [[0, 1], ['No', 'Yes']],\n 'sex' : [[0, 1], ['F', 'M']], \n 'exercise_induced_angina' : [[0, 1], ['No', 'Yes']]}\n\nSEMI_CATEGORICAL_FEATURES = ['slope_of_peak_exercise_st_segment',\n 'chest_pain_type',\n 'num_major_vessels',\n 'resting_ekg_results']\n\nSEMI_CATEGORICAL_FEATURE_LIMITS = {'slope_of_peak_exercise_st_segment' : [1, 3],\n 'chest_pain_type' : [1, 4],\n 'num_major_vessels' : [0, 3],\n 'resting_ekg_results' : [0, 2]}\n\nLABEL_VALUES = [[0, 1], ['No', 'Yes']]\n\n\nfor feature in CATEGORICAL_FEATURES:\n if len(CATEGORICAL_FEATURE_VALUES[feature][0]) > 2:\n \n onehot_feature = pd.get_dummies(features[feature])\n \n feature_index = features.columns.get_loc(feature)\n features.drop(feature, axis=1, inplace=True)\n \n onehot_feature.columns = [f'{feature}={feature_value}' for feature_value in onehot_feature.columns]\n for colname in onehot_feature.columns[::-1]:\n features.insert(feature_index, colname, onehot_feature[colname])\n\nfeatures.head()\n\nx = features.values[:,1:].astype(int)\ny = labels.values[:,-1].astype(int)\n\nprint('x =\\n', x)\nprint('y =\\n', y)\n\nstratified_kflod_validator = sklearn.model_selection.StratifiedKFold(n_splits=5, shuffle=True)\n\nstratified_kflod_validator",
"Decision Trees",
"tree_mean_acc = 0\ntree_score_df = pd.DataFrame(columns = ['Fold', 'Accuracy', 'Precision', 'Recall'])\n\nfor fold_ind, (train_indices, test_indices) in enumerate(stratified_kflod_validator.split(x, y), 1):\n \n x_train, x_test = x[train_indices], x[test_indices]\n y_train, y_test = y[train_indices], y[test_indices]\n \n dec_tree = sklearn.tree.DecisionTreeClassifier(min_samples_split = 5)\n dec_tree.fit(x_train, y_train)\n \n acc = dec_tree.score(x_test, y_test)\n tree_mean_acc += acc\n \n y_pred = dec_tree.predict(x_test)\n precision = sklearn.metrics.precision_score(y_test, y_pred)\n recall = sklearn.metrics.recall_score(y_test, y_pred)\n \n tree_score_df.loc[fold_ind] = [f'{fold_ind}', \n f'{acc*100:.2f} %', \n f'{precision*100:.2f} %', \n f'{recall*100:.2f} %']\n \n tree_plot_data = sklearn.tree.export_graphviz(dec_tree, out_file = None,\n feature_names = features.columns[1:], \n class_names = [f'{labels.columns[1]}={label_value}' \n for label_value \n in LABEL_VALUES[1]],\n filled = True, \n rounded = True, \n special_characters = True) \n graph = graphviz.Source(tree_plot_data) \n graph.render(f'Fold {fold_ind}')\n \nnext_ind = len(tree_score_df) + 1\n\nmean_acc = tree_score_df['Accuracy'].apply(lambda n: float(n[:-2])).mean()\nmean_prec = tree_score_df['Precision'].apply(lambda n: float(n[:-2])).mean()\nmean_rec = tree_score_df['Recall'].apply(lambda n: float(n[:-2])).mean()\n\ntree_score_df.loc[next_ind] = ['Avg', f'{mean_acc:.2f} %', f'{mean_prec:.2f} %', f'{mean_rec:.2f} %']\ntree_score_df",
"KNN",
"# TODO Normalize\n\nknn_mean_score_df = pd.DataFrame(columns = ['k', 'Avg. Accuracy', 'Avg. Precision', 'Avg. Recall'])\n\nnormalized_x = sklearn.preprocessing.normalize(x) # No improvement over un-normalized data.\n\nmean_accs = []\nfor k in list(range(1, 10)) + [math.ceil(len(features) * step) for step in [0.1, 0.2, 0.3, 0.4, 0.5]]:\n \n knn_score_df = pd.DataFrame(columns = ['Fold', 'Accuracy', 'Precision', 'Recall'])\n\n mean_acc = 0\n for fold_ind, (train_indices, test_indices) in enumerate(stratified_kflod_validator.split(x, y), 1):\n\n x_train, x_test = normalized_x[train_indices], normalized_x[test_indices]\n y_train, y_test = y[train_indices], y[test_indices]\n\n knn = sklearn.neighbors.KNeighborsClassifier(n_neighbors = k)\n knn.fit(x_train, y_train)\n\n acc = knn.score(x_test, y_test)\n mean_acc += acc\n \n y_pred = knn.predict(x_test)\n precision = sklearn.metrics.precision_score(y_test, y_pred)\n recall = sklearn.metrics.recall_score(y_test, y_pred)\n\n knn_score_df.loc[fold_ind] = [f'{fold_ind}', \n f'{acc*100:.2f} %',\n f'{precision*100:.2f} %', \n f'{recall*100:.2f} %']\n\n next_ind = len(knn_score_df) + 1\n \n mean_acc = knn_score_df['Accuracy'].apply(lambda n: float(n[:-2])).mean()\n mean_prec = knn_score_df['Precision'].apply(lambda n: float(n[:-2])).mean()\n mean_rec = knn_score_df['Recall'].apply(lambda n: float(n[:-2])).mean()\n \n knn_score_df.loc[next_ind] = ['Avg', \n f'{acc*100:.2f} %',\n f'{precision*100:.2f} %', \n f'{recall*100:.2f} %']\n \n knn_mean_score_df.loc[k] = [k, \n f'{mean_acc:.2f} %', \n f'{mean_prec:.2f} %', \n f'{mean_rec:.2f} %']\n\n# print(f'k = {k}')\n# print(knn_score_df)\n# print()\n \nbest_accuracy = knn_mean_score_df.sort_values(by = ['Avg. Accuracy']).iloc[-1]\nprint('Best avg. accuracy is', best_accuracy['Avg. Accuracy'], 'for k =', best_accuracy['k'], '.')\nknn_mean_score_df.sort_values(by = ['Avg. Accuracy'])",
"Naive Bayes",
"nb_classifier_types = [sklearn.naive_bayes.GaussianNB,\n sklearn.naive_bayes.MultinomialNB,\n sklearn.naive_bayes.ComplementNB,\n sklearn.naive_bayes.BernoulliNB]\n\nnb_mean_score_df = pd.DataFrame(columns = ['Type', 'Avg. Accuracy', 'Avg. Precision', 'Avg. Recall'])\n\nfor nb_classifier_type in nb_classifier_types:\n \n nb_score_df = pd.DataFrame(columns = ['Fold', 'Accuracy', 'Precision', 'Recall'])\n\n mean_acc = 0\n for fold_ind, (train_indices, test_indices) in enumerate(stratified_kflod_validator.split(x, y), 1):\n\n x_train, x_test = x[train_indices], x[test_indices]\n y_train, y_test = y[train_indices], y[test_indices]\n\n nb = nb_classifier_type()\n nb.fit(x_train, y_train)\n\n acc = nb.score(x_test, y_test)\n mean_acc += acc\n \n y_pred = nb.predict(x_test)\n precision = sklearn.metrics.precision_score(y_test, y_pred)\n recall = sklearn.metrics.recall_score(y_test, y_pred)\n\n nb_score_df.loc[fold_ind] = [f'{fold_ind}', \n f'{acc*100:.2f} %', \n f'{precision*100:.2f} %', \n f'{recall*100:.2f} %']\n\n next_ind = len(nb_score_df) + 1\n \n mean_acc = nb_score_df['Accuracy'].apply(lambda n: float(n[:-2])).mean()\n mean_prec = nb_score_df['Precision'].apply(lambda n: float(n[:-2])).mean()\n mean_rec = nb_score_df['Recall'].apply(lambda n: float(n[:-2])).mean()\n \n nb_score_df.loc[next_ind] = ['Avg', \n f'{mean_acc:.2f} %', \n f'{mean_prec:.2f} %', \n f'{mean_rec:.2f} %']\n \n nb_mean_score_df.loc[len(nb_mean_score_df) + 1] = [nb_classifier_type.__name__, \n f'{mean_acc:.2f} %', \n f'{mean_prec:.2f} %', \n f'{mean_rec:.2f} %']\n\n print(nb_classifier_type.__name__)\n print()\n print(nb_score_df)\n print()\n \nnb_mean_score_df.sort_values(by = ['Avg. Accuracy'])",
"SVM",
"svm_classifier_type = sklearn.svm.SVC\n\n# Avg.\n# Args -> acc / prec / rec\n#\n# kernel: linear -> 78.89 % 78.31 % 73.75 %\n# kernel: linear, C: 0.1 -> 84.44 % 88.54 % 75.00 %\n#\n# * No improvement for larger C.\n#\n# kernel: poly, max_iter: 1 -> 46.67 % 34.67 % 21.25 %\n# kernel: poly, max_iter: 10 -> 57.22 % 51.27 % 66.25 %\n# kernel: poly, max_iter: 100 -> 61.67 % 60.18 % 40.00 %\n# kernel: poly, max_iter: 100, coef0: 1 -> 62.22 % 62.19 % 41.25 %\n#\n# * No improvement for more iters.\n# * No improvement for larger C.\n# * No improvement for higher degree.\n# * No improvement for different coef0.\n#\n# kernel: rbf, max_iter: 10 -> 48.89 % 46.07 % 72.50 %\n# kernel: rbf, max_iter: 100 -> 60.00 % 74.00 % 17.50 %\n# kernel: rbf, max_iter: 1000 -> 60.56 % 78.33 % 15.00 %\n\n\nargs = {'kernel': 'linear', 'C': 0.1}\n\nsvm_score_df = pd.DataFrame(columns = ['Type', 'Accuracy', 'Precision', 'Recall'])\n\n# normalized_x = sklearn.preprocessing.normalize(x)\n\nmean_acc = 0\nfor fold_ind, (train_indices, test_indices) in enumerate(stratified_kflod_validator.split(x, y), 1):\n\n x_train, x_test = x[train_indices], x[test_indices]\n y_train, y_test = y[train_indices], y[test_indices]\n\n svm = svm_classifier_type(**args, gamma = 'scale', cache_size = 256)\n svm.fit(x_train, y_train)\n\n acc = svm.score(x_test, y_test)\n mean_acc += acc\n\n y_pred = svm.predict(x_test)\n precision = sklearn.metrics.precision_score(y_test, y_pred)\n recall = sklearn.metrics.recall_score(y_test, y_pred)\n\n svm_score_df.loc[fold_ind] = [f'{fold_ind}', \n f'{acc*100:.2f} %', \n f'{precision*100:.2f} %', \n f'{recall*100:.2f} %']\n\nnext_ind = len(svm_score_df) + 1\n\nmean_acc = svm_score_df['Accuracy'].apply(lambda n: float(n[:-2])).mean()\nmean_prec = svm_score_df['Precision'].apply(lambda n: float(n[:-2])).mean()\nmean_rec = svm_score_df['Recall'].apply(lambda n: float(n[:-2])).mean()\n\nsvm_score_df.loc[next_ind] = ['Avg', \n f'{mean_acc:.2f} %', \n f'{mean_prec:.2f} %', \n f'{mean_rec:.2f} %']\n\nprint(svm_score_df)",
"Shallow Neural Nets\nImport deps",
"import pandas as pd\n\nfrom sklearn.model_selection import train_test_split\n\nimport keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Activation, Flatten\nfrom keras.layers import Conv2D, MaxPooling2D\n\n\nfrom keras.layers import Input, Dense, Conv2D, MaxPooling2D, Dropout, Flatten, BatchNormalization, LeakyReLU",
"Import data",
"features = pd.read_csv('train_values.csv')\nlabels = pd.read_csv('train_labels.csv')\n\nprint(labels.head())\nfeatures.head()\n\nFEATURES = ['slope_of_peak_exercise_st_segment', \n 'thal',\n 'resting_blood_pressure', \n 'chest_pain_type', \n 'num_major_vessels', \n 'fasting_blood_sugar_gt_120_mg_per_dl',\n 'resting_ekg_results', \n 'serum_cholesterol_mg_per_dl', \n 'oldpeak_eq_st_depression', \n 'sex',\n 'age', \n 'max_heart_rate_achieved', \n 'exercise_induced_angina']\n\nLABEL = 'heart_disease_present'\n\nEXPLANATIONS = {'slope_of_peak_exercise_st_segment' : 'Quality of Blood Flow to the Heart',\n 'thal' : 'Thallium Stress Test Measuring Blood Flow to the Heart',\n 'resting_blood_pressure' : 'Resting Blood Pressure', \n 'chest_pain_type' : 'Chest Pain Type (1-4)',\n 'num_major_vessels' : 'Major Vessels (0-3) Colored by Flourosopy',\n 'fasting_blood_sugar_gt_120_mg_per_dl' : 'Fasting Blood Sugar > 120 mg/dl',\n 'resting_ekg_results' : 'Resting Electrocardiographic Results (0-2)',\n 'serum_cholesterol_mg_per_dl' : 'Serum Cholesterol in mg/dl',\n 'oldpeak_eq_st_depression' : 'Exercise vs. Rest\\nA Measure of Abnormality in Electrocardiograms',\n 'age' : 'Age (years)',\n 'sex' : 'Sex (m/f)',\n 'max_heart_rate_achieved' : 'Maximum Heart Rate Achieved (bpm)',\n 'exercise_induced_angina' : 'Exercise-Induced Chest Pain (yes/no)'}\n\nNUMERICAL_FEATURES = ['slope_of_peak_exercise_st_segment', \n 'resting_blood_pressure', \n 'chest_pain_type', \n 'num_major_vessels', \n 'resting_ekg_results', \n 'serum_cholesterol_mg_per_dl', \n 'oldpeak_eq_st_depression', \n 'age', \n 'max_heart_rate_achieved']\n\nCATEGORICAL_FEATURES = ['thal', \n 'fasting_blood_sugar_gt_120_mg_per_dl', \n 'sex', \n 'exercise_induced_angina']\n\nCATEGORICAL_FEATURE_VALUES = {'thal' : [[0, 1, 2], ['Normal', \n 'Fixed Defect', \n 'Reversible Defect']], \n 'fasting_blood_sugar_gt_120_mg_per_dl' : [[0, 1], ['No', 'Yes']],\n 'sex' : [[0, 1], ['F', 'M']], \n 'exercise_induced_angina' : [[0, 1], ['No', 'Yes']]}\n\nSEMI_CATEGORICAL_FEATURES = ['slope_of_peak_exercise_st_segment',\n 'chest_pain_type',\n 'num_major_vessels',\n 'resting_ekg_results']\n\nSEMI_CATEGORICAL_FEATURE_LIMITS = {'slope_of_peak_exercise_st_segment' : [1, 3],\n 'chest_pain_type' : [1, 4],\n 'num_major_vessels' : [0, 3],\n 'resting_ekg_results' : [0, 2]}\n\nLABEL_VALUES = [[0, 1], ['No', 'Yes']]\n\n\nfor feature in CATEGORICAL_FEATURES:\n if len(CATEGORICAL_FEATURE_VALUES[feature][0]) > 2:\n \n onehot_feature = pd.get_dummies(features[feature])\n \n feature_index = features.columns.get_loc(feature)\n features.drop(feature, axis=1, inplace=True)\n \n onehot_feature.columns = ['%s=%s' % (feature, feature_value) for feature_value in onehot_feature.columns]\n for colname in onehot_feature.columns[::-1]:\n features.insert(feature_index, colname, onehot_feature[colname])\n\nx = features.values[:,1:].astype(int)\ny = labels.values[:,-1].astype(int)\n\nprint('x =\\n', x)\nprint('y =\\n', y)\n\n# for fold_ind, (train_indices, test_indices) in enumerate(stratified_kflod_validator.split(x, y), 1):\n\n# x_train, x_test = x[train_indices], x[test_indices]\n# y_train, y_test = y[train_indices], y[test_indices]\n\nx_train, x_test, y_train, y_test = \\\n train_test_split(x, y, test_size=0.2, random_state=42)\n\nprint(x_train.shape, x_test.shape)\nprint(y_train.shape, y_test.shape)",
"Define model",
"input_shape = (1,15)\nnum_classes = 2\n\nprint(x.shape)\nprint(y.shape)\n\nprint(x[:1])\nprint(y[:1])",
"Architecture 0 - Inflating Dense 120-225, 0.5 Dropout, Batch Norm, Sigmoid Classification",
"arch_cnt = 'arch-0-3'\n\nmodel = Sequential()\nmodel.add(\n Dense(120, input_dim=15, kernel_initializer='normal',\n # kernel_regularizer=keras.regularizers.l2(0.001), # pierd 0.2 acc\n activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(225, input_dim=15, kernel_initializer='normal', activation='relu'))\n# model.add(LeakyReLU(alpha=0.1))\nmodel.add(BatchNormalization(axis = 1))\nmodel.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))\n\n\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nmodel.summary()\n\n%%time\n\n# earlystop_cb = keras.callbacks.EarlyStopping(\n# monitor='val_loss',\n# patience=5, restore_best_weights=True,\n# verbose=1)\nreduce_lr_cb = keras.callbacks.ReduceLROnPlateau(\n monitor='val_loss', factor=0.05,\n patience=5, min_lr=0.001,\n verbose=1)\n\n# es_cb = keras.callbacks.EarlyStopping(\n# monitor='val_loss',\n# min_delta=0.1,\n# patience=7,\n# verbose=1,\n# mode='auto'\n# )\n# 'restore_best_weights' in dir(keras.callbacks.EarlyStopping()) # FALSE = library is not up-to-date\n\ntb_cb = keras.callbacks.TensorBoard(log_dir='./tensorboard/%s' % arch_cnt, histogram_freq=0, \n write_graph=True, write_images=True)\n\n\nepochs = 50\nbatch_size = 32\n\nmodel.fit(\n x_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1,\n \n shuffle=False,\n validation_data=(x_test, y_test),\n callbacks=[reduce_lr_cb, es_cb, tb_cb]\n)\n\nscore = model.evaluate(x_test, y_test, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])",
"Architecture 1 - Deflating Dense 225-112, 0.5 Dropout, Batch Norm, Sigmoid Classification",
"arch_cnt = 'arch-1'\n\nmodel = Sequential()\nmodel.add(\n Dense(225, input_dim=15, kernel_initializer='normal',\n # kernel_regularizer=keras.regularizers.l2(0.001), # pierd 0.2 acc\n activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(112, input_dim=15, kernel_initializer='normal', activation='relu'))\n# model.add(LeakyReLU(alpha=0.1))\nmodel.add(BatchNormalization(axis = 1))\nmodel.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))\n\n\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nmodel.summary()\n\n%%time\n\n# earlystop_cb = keras.callbacks.EarlyStopping(\n# monitor='val_loss',\n# patience=5, restore_best_weights=True,\n# verbose=1)\nreduce_lr_cb = keras.callbacks.ReduceLROnPlateau(\n monitor='val_loss', factor=0.05,\n patience=7, min_lr=0.001,\n verbose=1)\n\ntb_cb = keras.callbacks.TensorBoard(log_dir='./tensorboard/%s' % arch_cnt, histogram_freq=0, \n write_graph=True, write_images=True)\n\n\nepochs = 50\nbatch_size = 32\n\nmodel.fit(\n x_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1,\n \n shuffle=False,\n validation_data=(x_test, y_test),\n callbacks=[reduce_lr_cb, tb_cb]\n # callbacks=[earlystop_cb, reduce_lr_cb]\n)\n\nscore = model.evaluate(x_test, y_test, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])",
"Architecture 2 - Deflating Dense 225-112, 0.5 Dropout, Batch Norm, Sigmoid Classification, HE Initialization",
"arch_cnt = 'arch-2'\n\nmodel = Sequential()\nmodel.add(\n Dense(225, input_dim=15, kernel_initializer='he_uniform',\n kernel_regularizer=keras.regularizers.l2(0.001), # pierd 0.2 acc\n activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(112, input_dim=15, kernel_initializer='he_uniform', activation='relu'))\n# model.add(LeakyReLU(alpha=0.1))\nmodel.add(BatchNormalization(axis = 1))\nmodel.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))\n\n\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nmodel.summary()\n\n%%time\n\n# earlystop_cb = keras.callbacks.EarlyStopping(\n# monitor='val_loss',\n# patience=5, restore_best_weights=True,\n# verbose=1)\nreduce_lr_cb = keras.callbacks.ReduceLROnPlateau(\n monitor='val_loss', factor=0.05,\n patience=7, min_lr=0.001,\n verbose=1)\n\ntb_cb = keras.callbacks.TensorBoard(log_dir='./tensorboard/%s' % arch_cnt, histogram_freq=0, \n write_graph=True, write_images=True)\n\n\nepochs = 50\nbatch_size = 32\n\nmodel.fit(\n x_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1,\n \n shuffle=False,\n validation_data=(x_test, y_test),\n callbacks=[reduce_lr_cb, tb_cb]\n # callbacks=[earlystop_cb, reduce_lr_cb]\n)\n\nscore = model.evaluate(x_test, y_test, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])",
"Architecture 3 - Deflating Dense 225-112, 0.5 Dropout, Batch Norm, Sigmoid Classification, L2 = 1e^-4",
"arch_cnt = 'arch-3-4'\n\nmodel = Sequential()\nmodel.add(\n Dense(225, input_dim=15, kernel_initializer='normal',\n kernel_regularizer=keras.regularizers.l2(0.0001), # pierd 0.2 acc\n activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(\n Dense(112, input_dim=15, kernel_initializer='normal',\n kernel_regularizer=keras.regularizers.l2(0.0001), # pierd 0.2 acc\n activation='relu'))\n# model.add(LeakyReLU(alpha=0.1))\nmodel.add(BatchNormalization(axis = 1))\nmodel.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))\n\n\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nmodel.summary()\n\n%%time\n\n# earlystop_cb = keras.callbacks.EarlyStopping(\n# monitor='val_loss',\n# patience=5, restore_best_weights=True,\n# verbose=1)\nreduce_lr_cb = keras.callbacks.ReduceLROnPlateau(\n monitor='val_loss', factor=0.05,\n patience=7, min_lr=0.001,\n verbose=1)\n\ntb_cb = keras.callbacks.TensorBoard(log_dir='./tensorboard/%s' % arch_cnt, histogram_freq=0, \n write_graph=True, write_images=True)\n\n\nepochs = 50\nbatch_size = 32\n\nmodel.fit(\n x_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1,\n \n shuffle=False,\n validation_data=(x_test, y_test),\n callbacks=[reduce_lr_cb, tb_cb]\n # callbacks=[earlystop_cb, reduce_lr_cb]\n)\n\nscore = model.evaluate(x_test, y_test, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])",
"Architecture 3 - Deflating Dense 225-112, 0.5 Dropout, Batch Norm, Sigmoid Classification, L2 = 1e^-3",
"arch_cnt = 'arch-3-3'\n\nmodel = Sequential()\nmodel.add(\n Dense(225, input_dim=15, kernel_initializer='normal',\n kernel_regularizer=keras.regularizers.l2(0.001), # pierd 0.2 acc\n activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(\n Dense(112, input_dim=15, kernel_initializer='normal',\n kernel_regularizer=keras.regularizers.l2(0.001), # pierd 0.2 acc\n activation='relu'))\n# model.add(LeakyReLU(alpha=0.1))\nmodel.add(BatchNormalization(axis = 1))\nmodel.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))\n\n\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nmodel.summary()\n\n%%time\n\n# earlystop_cb = keras.callbacks.EarlyStopping(\n# monitor='val_loss',\n# patience=5, restore_best_weights=True,\n# verbose=1)\nreduce_lr_cb = keras.callbacks.ReduceLROnPlateau(\n monitor='val_loss', factor=0.05,\n patience=7, min_lr=0.001,\n verbose=1)\n\ntb_cb = keras.callbacks.TensorBoard(log_dir='./tensorboard/%s' % arch_cnt, histogram_freq=0, \n write_graph=True, write_images=True)\n\n\nepochs = 50\nbatch_size = 32\n\nmodel.fit(\n x_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1,\n \n shuffle=False,\n validation_data=(x_test, y_test),\n callbacks=[reduce_lr_cb, tb_cb]\n # callbacks=[earlystop_cb, reduce_lr_cb]\n)\n\nscore = model.evaluate(x_test, y_test, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])",
"Architecture 3 - Deflating Dense 225-112, 0.5 Dropout, Batch Norm, Sigmoid Classification, L2 = 1e^-2",
"arch_cnt = 'arch-3-2'\n\nmodel = Sequential()\nmodel.add(\n Dense(225, input_dim=15, kernel_initializer='normal',\n kernel_regularizer=keras.regularizers.l2(0.01), # pierd 0.2 acc\n activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(\n Dense(112, input_dim=15, kernel_initializer='normal',\n kernel_regularizer=keras.regularizers.l2(0.01), # pierd 0.2 acc\n activation='relu'))\n# model.add(LeakyReLU(alpha=0.1))\nmodel.add(BatchNormalization(axis = 1))\nmodel.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))\n\n\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nmodel.summary()\n\n%%time\n\n# earlystop_cb = keras.callbacks.EarlyStopping(\n# monitor='val_loss',\n# patience=5, restore_best_weights=True,\n# verbose=1)\nreduce_lr_cb = keras.callbacks.ReduceLROnPlateau(\n monitor='val_loss', factor=0.05,\n patience=7, min_lr=0.001,\n verbose=1)\n\ntb_cb = keras.callbacks.TensorBoard(log_dir='./tensorboard/%s' % arch_cnt, histogram_freq=0, \n write_graph=True, write_images=True)\n\n\nepochs = 50\nbatch_size = 32\n\nmodel.fit(\n x_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1,\n \n shuffle=False,\n validation_data=(x_test, y_test),\n callbacks=[reduce_lr_cb, tb_cb]\n # callbacks=[earlystop_cb, reduce_lr_cb]\n)\n\nscore = model.evaluate(x_test, y_test, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])",
"Architecture 3 - Deflating Dense 225-112, 0.5 Dropout, Batch Norm, Sigmoid Classification, L2 = 1e^-1",
"arch_cnt = 'arch-3-1'\n\nmodel = Sequential()\nmodel.add(\n Dense(225, input_dim=15, kernel_initializer='normal',\n kernel_regularizer=keras.regularizers.l2(0.1), # pierd 0.2 acc\n activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(\n Dense(112, input_dim=15, kernel_initializer='normal',\n kernel_regularizer=keras.regularizers.l2(0.1), # pierd 0.2 acc\n activation='relu'))\n# model.add(LeakyReLU(alpha=0.1))\nmodel.add(BatchNormalization(axis = 1))\nmodel.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))\n\n\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nmodel.summary()\n\n%%time\n\n# earlystop_cb = keras.callbacks.EarlyStopping(\n# monitor='val_loss',\n# patience=5, restore_best_weights=True,\n# verbose=1)\nreduce_lr_cb = keras.callbacks.ReduceLROnPlateau(\n monitor='val_loss', factor=0.05,\n patience=7, min_lr=0.001,\n verbose=1)\n\ntb_cb = keras.callbacks.TensorBoard(log_dir='./tensorboard/%s' % arch_cnt, histogram_freq=0, \n write_graph=True, write_images=True)\n\n\nepochs = 50\nbatch_size = 32\n\nmodel.fit(\n x_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1,\n \n shuffle=False,\n validation_data=(x_test, y_test),\n callbacks=[reduce_lr_cb, tb_cb]\n # callbacks=[earlystop_cb, reduce_lr_cb]\n)\n\nscore = model.evaluate(x_test, y_test, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])",
"Ensemble Methods",
"import matplotlib.pyplot as plt\n%matplotlib inline",
"Bagging Strategies\nRandom Forests",
"from sklearn.ensemble import RandomForestClassifier\n\n# x_train, x_test, y_train, y_test\n\n\nclf = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=0)\n\nclf.fit(x_train, y_train)\n\nprint(clf.feature_importances_)\n\nprint(clf.predict(x_test))\n\n# make predictions for test data\ny_pred = clf.predict(x_test)\npredictions = [round(value) for value in y_pred]\n\n# evaluate predictions\naccuracy = accuracy_score(y_test, predictions)\nprint(\"Accuracy: %.2f%%\" % (accuracy * 100.0))",
"ExtraTrees",
"from sklearn.ensemble import ExtraTreesClassifier\n\n# x_train, x_test, y_train, y_test\n\n\nclf = ExtraTreesClassifier(n_estimators=100, max_depth=2, random_state=0)\n\nclf.fit(x_train, y_train)\n\nprint(clf.feature_importances_)\n\nprint(clf.predict(x_test))\n\n# make predictions for test data\ny_pred = clf.predict(x_test)\npredictions = [round(value) for value in y_pred]\n\n# evaluate predictions\naccuracy = accuracy_score(y_test, predictions)\nprint(\"Accuracy: %.2f%%\" % (accuracy * 100.0))\n\nfig = plt.figure(figsize=(10,5))\n\nplot_learning_curves(x_train, y_train, x_test, y_test, clf)\n\nplt.show()",
"Stacking Strategies\nSuperLearner\nBoosting Strategies\nxgboost",
"# import xgboost as xgb\nfrom xgboost import XGBClassifier\nfrom sklearn.metrics import accuracy_score\n\n# x_train, x_test, y_train, y_test\n\nmodel = XGBClassifier()\nmodel.fit(x_train, y_train)\n\nprint(model)\n\n# make predictions for test data\ny_pred = model.predict(x_test)\npredictions = [round(value) for value in y_pred]\n\n# evaluate predictions\naccuracy = accuracy_score(y_test, predictions)\nprint(\"Accuracy: %.2f%%\" % (accuracy * 100.0))",
"Bibliography\n\n\nhttps://medium.com/@datalesdatales/why-you-should-be-plotting-learning-curves-in-your-next-machine-learning-project-221bae60c53\n\n\nhttps://slideplayer.com/slide/4684120/15/images/6/Outline+Bias%2FVariance+Tradeoff+Ensemble+methods+that+minimize+variance.jpg\n\n\nhttps://slideplayer.com/slide/4684120/\n\n\nplot confusion matrix\n\nhttp://rasbt.github.io/mlxtend/user_guide/plotting/plot_learning_curves/\nhttps://machinelearningmastery.com/roc-curves-and-precision-recall-curves-for-classification-in-python/\n\n\nhttp://docs.h2o.ai/h2o-tutorials/latest-stable/tutorials/ensembles-stacking/index.html"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dataDogma/Computer-Science | .ipynb_checkpoints/DAT208x - Week 1 - Python Basics-checkpoint.ipynb | gpl-3.0 | [
"Lecture : Hello Python!\n\n\n\n[RQ-1] : Which of the following statements is correct?\nAns: The Ipython Shell is typically used to work with Python interactively.\n\n\n[RQ-2] : Which file extension is used for Python script files?**\nAns: .py\n\n\n[RQ-3] : You need to print the result of adding 3 and 4 inside a script. Which line of code should you write in the script?\nAns: print(int x + int y)\n\n\n\nLab : Hello Python!\n\nObjective :\n\nHow to work with Ipython shell.\nWriting python scripts.\n\n\nThe Python Interface -- 100xp, Status : Earned",
"# working with print function\nprint(5 / 8)\n\n# Add another print function on new line\nprint(7 + 10)",
"When to use python? -- 50xp, Status : Earned \n\nPython is a pretty versatile language. For what applications can you use Python?\nAns: All of the above\n\nAny comments? -- 100xp, Satatus : Earned\n\n\n\nWe can add comments to python scripts.\n\n\nComments are short snippets of plain english, to help you and others understand what the code is about.\n\n\nTo add a comment, use '#'tag, insert it at the front of the text.\n\n\nComments have idle state, i.e. they don't affect the code results.\n\n\nComments are ignored by the python interpretor.",
"# Just testing division\nprint(5 / 8)\n\n# Additon works too ( added comment here )\nprint(7 + 10)",
"Python as a calculator -- 100xp, Status : Earned\n\nPython is perfectly suited to do basic calculations. Apart from addition, subtraction, multiplication and division, there is also support for more advanced operations such as:\n\n\nExponentiation:. This operator raises the number to its left to the power of the number to its right: for example 42 will give 16.\n\n\nModulo: %. It returns the remainder of the division of the number to the left by the number on its right, for example 18 % 7 equals 4.",
"\"\"\"Suppose you have $100, which you can invest with a 10% return each year. After one year, it's \n100 x 1.1 = 110 dollars, and after two years it's 100 x 1.1 x 1.1 = 121.\n\nAdd code to calculate how much money you end up with after 7 years\"\"\"\n\nprint(5 + 5)\nprint(5 - 5)\n\n# Multiplication and division\nprint(3 * 5)\nprint(10 / 2)\n\n# Exponentiation\nprint(4 ** 2)\n\n# Modulo\nprint(18 % 7)\n\n# How much is your $100 worth after 7 years?\n# first try was unsuccesful, so used the only two things * and ** operators.\nprint ( 100 * ( 1.1 ** 7 ) )\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
JackWalpole/splitwavepy | devel/Parseval.ipynb | mit | [
"Check whether the Silver and Chan (1991) coefficients for their implementation of Parseval's theorem is correct for digitised data.",
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport math\nfrom scipy import signal\nfrom scipy import stats",
"Parsevals theorem when applied to discrete Fourier Transform looks like this.\n$\\sum {n=0}^{N-1}|x[n]|^{2}={\\frac {1}{N}}\\sum {k=0}^{N-1}|X[k]|^{2}$\nSource: https://en.wikipedia.org/wiki/Parseval%27s_theorem",
"# check Parseval's theorem holds numerically \nnsamps=1000\n\n# window\nw = signal.tukey(nsamps,0.1)\n\na = np.random.normal(0,1,nsamps) * w\nA = np.fft.fft(a)\nb = (1/np.sqrt(2*np.pi))*(signal.gaussian(nsamps,10))\nB = np.fft.fft(b)\nc = np.convolve(a,b,'same')\nC = np.fft.fft(c)\n\n# signal c is convolution of Gaussian noise (a) with a Gaussian wavelet (b)\n# C is the fourier transform of c.\n\nsumt = np.sum(c**2)\nsumf = np.sum(np.abs(C)**2)/nsamps\n\nprint('time domain',sumt)\nprint('fourier domain',sumf)\nprint('difference',np.abs(sumt-sumf))\nprint('percent', (np.abs(sumt-sumf)/sumt)*100)\n\n",
"Furthermore by the convolution theorem: C = A * B.\nAnd therefore sum(C^2) = sum(A^2 * B^2)",
"AB = A * B\nab = np.fft.ifft(AB)\nplt.plot(np.roll(ab,500))\nplt.plot(c)\n\nsumAB = np.sum(np.abs(A**2*B**2))/nsamps\nprint('sum A*B',sumAB)\nprint('difference',np.abs(sumt-sumAB))\nprint('percent',(np.abs(sumt-sumAB)/sumt)*100)",
"Parsevals theorem as applied in Silver and Chan (and Walsh).\n$\\sum {n=0}^{N-1}|x[n]|^{2}={\\frac {1}{N}}\\sum {k=1}^{N-2}|X[k]|^{2}+\\frac{1}_{2}\\sum|X[0,N-1]|$\nSource: https://en.wikipedia.org/wiki/Parseval%27s_theorem",
"def ndf(y,taper=True,detrend=True):\n \"\"\"\n Uses the improvement found by Walsh et al (2013).\n By default will detrend data to ensure zero mean\n and will taper edges using a Tukey filter affecting amplitudes of 5% of data at edges\n \"\"\"\n\n if taper is True:\n y = y * signal.tukey(y.size,0.05)\n \n if detrend is True:\n # ensure no trend on the noise trace\n y = signal.detrend(y)\n\n \n Y = np.fft.fft(y)\n amp = np.absolute(Y)\n \n # estimate E2 and E4 following Walsh et al (2013)\n a = np.ones(Y.size)\n a[0] = a[-1] = 0.5\n E2 = np.sum( a * amp**2)\n E4 = (np.sum( (4 * a**2 / 3) * amp**4))\n \n ndf = 2 * ( 2 * E2**2 / E4 - 1 )\n \n return ndf\n \ndef ndf2(y,taper=True,detrend=True):\n \"\"\"\n \n \"\"\"\n\n if taper is True:\n y = y * signal.tukey(y.size,0.05)\n \n if detrend is True:\n # ensure no trend on the noise trace\n y = signal.detrend(y)\n\n \n Y = np.fft.fft(y)\n amp = np.absolute(Y)**2\n \n E2 = np.sum(amp**2)\n E4 = (np.sum( (4/3) * amp**4))\n \n ndf = 2 * ( 2 * E2**2 / E4 - 1 )\n \n return ndf\n\nprint(ndf(c))\n\nprint(ndf2(c))\n\nstats.moment(c,moment=4)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
davidgutierrez/HeartRatePatterns | Jupyter/LoadDataMimic-II.ipynb | gpl-3.0 | [
"Cargue de datos s SciDB\n1) Verificar Prerequisitos\nPython\nSciDB-Py requires Python 2.6-2.7 or 3.3",
"import sys\nsys.version_info",
"NumPy\ntested with version 1.9 (1.13.1)",
"import numpy as np\nnp.__version__",
"Requests\ntested with version 2.7 (2.18.1) Required for using the Shim interface to SciDB.",
"import requests\nrequests.__version__",
"Pandas (optional)\ntested with version 0.15. (0.20.3) Required only for importing/exporting SciDB arrays as Pandas Dataframe objects.",
"import pandas as pd\npd.__version__",
"SciPy (optional)\ntested with versions 0.10-0.12. (0.19.0) Required only for importing/exporting SciDB arrays as SciPy sparse matrices.",
"import scipy\nscipy.__version__",
"2) Importar scidbpy\npip install git+http://github.com/paradigm4/scidb-py.git@devel",
"import scidbpy\nscidbpy.__version__\n\nfrom scidbpy import connect",
"conectarse al servidor de Base de datos",
"sdb = connect('http://localhost:8080')",
"3) Leer archivo con cada una de las ondas",
"import urllib.request # urllib2 in python2 the lib that handles the url stuff\ntarget_url = \"https://www.physionet.org/physiobank/database/mimic2wdb/matched/RECORDS-waveforms\"\ndata = urllib.request.urlopen(target_url) # it's a file like object and works just like a file\n\nlines = data.readlines();\nline = str(lines[100])",
"Quitarle caracteres especiales",
"carpeta,onda = line.replace('b\\'','').replace('\\'','').replace('\\\\n','').split(\"/\")\nonda",
"4) Importar WFDB para conectarse a physionet",
"import wfdb\n\nsig, fields = wfdb.srdsamp(onda,pbdir='mimic2wdb/matched/'+carpeta) #, sampfrom=11000\n\nprint(sig)\nprint(\"signame: \" + str(fields['signame']))\nprint(\"units: \" + str(fields['units']))\nprint(\"fs: \" + str(fields['fs']))\nprint(\"comments: \" + str(fields['comments']))\nprint(\"fields: \" + str(fields))",
"Busca la ubicacion de la señal tipo II",
"signalII = None\ntry:\n signalII = fields['signame'].index(\"II\")\nexcept ValueError:\n print(\"List does not contain value\")\nif(signalII!=None):\n print(\"List contain value\")",
"Normaliza la señal y le quita los valores en null",
"array = wfdb.processing.normalize(x=sig[:, signalII], lb=-2, ub=2)\narrayNun = array[~np.isnan(array)]\narrayNun = np.trim_zeros(arrayNun)\narrayNun",
"Cambiar los guiones \"-\" por raya al piso \"_\" porque por algun motivo SciDB tiene problemas con estos caracteres\nSi el arreglo sin valores nulos no queda vacio lo sube al SciDB",
"ondaName = onda.replace(\"-\", \"_\")\nif arrayNun.size>0 :\n sdb.input(upload_data=array).store(ondaName,gc=False)\n# sdb.iquery(\"store(input(<x:int64>[i], '{fn}', 0, '{fmt}'), \"+ondaName+\")\", upload_data=array)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ds-hwang/deeplearning_udacity | udacity_notebook/3_regularization.ipynb | mit | [
"Deep Learning\nAssignment 3\nPreviously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.\nThe goal of this assignment is to explore regularization techniques.",
"# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nfrom __future__ import print_function\nimport numpy as np\nimport tensorflow as tf\nfrom six.moves import cPickle as pickle",
"First reload the data we generated in notmist.ipynb.",
"pickle_file = 'notMNIST.pickle'\n\nwith open(pickle_file, 'rb') as f:\n save = pickle.load(f)\n train_dataset = save['train_dataset']\n train_labels = save['train_labels']\n valid_dataset = save['valid_dataset']\n valid_labels = save['valid_labels']\n test_dataset = save['test_dataset']\n test_labels = save['test_labels']\n del save # hint to help gc free up memory\n print('Training set', train_dataset.shape, train_labels.shape)\n print('Validation set', valid_dataset.shape, valid_labels.shape)\n print('Test set', test_dataset.shape, test_labels.shape)",
"Reformat into a shape that's more adapted to the models we're going to train:\n- data as a flat matrix,\n- labels as float 1-hot encodings.",
"image_size = 28\nnum_labels = 10\n\ndef reformat(dataset, labels):\n dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]\n labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n return dataset, labels\ntrain_dataset, train_labels = reformat(train_dataset, train_labels)\nvalid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\ntest_dataset, test_labels = reformat(test_dataset, test_labels)\nprint('Training set', train_dataset.shape, train_labels.shape)\nprint('Validation set', valid_dataset.shape, valid_labels.shape)\nprint('Test set', test_dataset.shape, test_labels.shape)\n\ndef accuracy(predictions, labels):\n return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n / predictions.shape[0])",
"Problem 1\nIntroduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.\n\n\nProblem 2\nLet's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?\n\n\nProblem 3\nIntroduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.\nWhat happens to our extreme overfitting case?\n\n\nProblem 4\nTry to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is 97.1%.\nOne avenue you can explore is to add multiple layers.\nAnother one is to use learning rate decay:\nglobal_step = tf.Variable(0) # count the number of steps taken.\nlearning_rate = tf.train.exponential_decay(0.5, global_step, ...)\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
RealPolitiX/mpes | examples/Tutorial_01_HDF5 File Management.ipynb | mit | [
"from mpes import fprocessing as fp\n# from imp import reload\n# reload(fp)\n\nfpath = r'../data/data_20180605_131.h5'",
"1.1 Loading HDF5 files\nHDF5 files can be read using a few different classes operating on different levels. The hierarchy meaningful to the end user is in the following (from low to high),\n* mpes.fprocessing.File() -- local import of h5py.File(), a low-level Python HDF5 parser (wrapped over even lower C code).\n* mpes.fprocessing.hdf5Reader() -- built on the File() class, with the inclusion of several file structure parsing, file component readout and format conversion functions.\n* mpes.fprocessing.hdf5Splitter() -- built on the hdf5Reader() class, used for splitting large hdf5 files.\n* mpes.fprocessing.hdf5Processor() -- built on the hdf5Reader() class, with the inclusion of binning operations and io.\nThe hierarchy goes File $\\in$ hdf5Reader $\\in$ (hdf5Splitter, hdf5Processor)",
"hdff = fp.File(fpath)\nhdff\n\nhdfr = fp.hdf5Reader(fpath)\nhdfr",
"New attributes and methods in the hdf5Reader() class",
"print( list(set(dir(hdfr)) - set(dir(hdff))) )\n\nhdfp = fp.hdf5Processor(fpath)\nhdfp",
"New attributes and methods in the hdf5Processer() class",
"print( list(set(dir(hdfp)) - set(dir(hdfr))) )",
"1.2 Retrieving components from HDF5 files\nReading components can also be done at different levels, the level of hdf5Reader() or above is recommended.",
"hdfp.summarize()\n\nprint(list(hdfr.readGroup(hdfr, 'EventFormat')))",
"1.3 Converting HDF5 files\nConversion of hdf5 to Matlab (mat) format (no data processing).",
"hdfr.convert('mat', save_addr='../data/data_131')",
"Conversion to parquet format",
"hdfr.convert('parquet', save_addr='../data/data_131_parquet', pq_append=False, chunksz=1e7, \\\n compression='gzip')",
"1.4 Splitting HDF5 files",
"hdfs = fp.hdf5Splitter(fpath)\nhdfs.split(nsplit=50, save_addr=r'../data/data_114_parts/data_114_', pbar=True)",
"1.5 Retrieve binned data from stored HDF5 file\nRead binned data over 3 axes",
"fpath_binned = r'../data/binres_114.h5'\n\nbindict = fp.readBinnedhdf5(fpath_binned, combined=True)\nbindict.keys()",
"Read binned data over 4 axes",
"fpath_binned = r'../data/data_114_4axis_binned.h5'\n\nbindict = fp.readBinnedhdf5(fpath_binned, combined=True)\nbindict.keys()\n\nbindict = fp.readBinnedhdf5(fpath_binned, combined=False)\nbindict.keys()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AllenDowney/ThinkStats2 | code/chap12ex.ipynb | gpl-3.0 | [
"Chapter 12\nExamples and Exercises from Think Stats, 2nd Edition\nhttp://thinkstats2.com\nCopyright 2016 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT",
"from os.path import basename, exists\n\n\ndef download(url):\n filename = basename(url)\n if not exists(filename):\n from urllib.request import urlretrieve\n\n local, _ = urlretrieve(url, filename)\n print(\"Downloaded \" + local)\n\n\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py\")\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py\")\n\nimport numpy as np\nimport pandas as pd\n\nimport random\n\nimport thinkstats2\nimport thinkplot",
"Time series analysis\nNOTE: Some of the example in this chapter have been updated to work with more recent versions of the libraries.\nLoad the data from \"Price of Weed\".",
"download(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/mj-clean.csv\")\n\ntransactions = pd.read_csv(\"mj-clean.csv\", parse_dates=[5])\ntransactions.head()",
"The following function takes a DataFrame of transactions and compute daily averages.",
"def GroupByDay(transactions, func=np.mean):\n \"\"\"Groups transactions by day and compute the daily mean ppg.\n\n transactions: DataFrame of transactions\n\n returns: DataFrame of daily prices\n \"\"\"\n grouped = transactions[[\"date\", \"ppg\"]].groupby(\"date\")\n daily = grouped.aggregate(func)\n\n daily[\"date\"] = daily.index\n start = daily.date[0]\n one_year = np.timedelta64(1, \"Y\")\n daily[\"years\"] = (daily.date - start) / one_year\n\n return daily",
"The following function returns a map from quality name to a DataFrame of daily averages.",
"def GroupByQualityAndDay(transactions):\n \"\"\"Divides transactions by quality and computes mean daily price.\n\n transaction: DataFrame of transactions\n\n returns: map from quality to time series of ppg\n \"\"\"\n groups = transactions.groupby(\"quality\")\n dailies = {}\n for name, group in groups:\n dailies[name] = GroupByDay(group)\n\n return dailies",
"dailies is the map from quality name to DataFrame.",
"dailies = GroupByQualityAndDay(transactions)",
"The following plots the daily average price for each quality.",
"import matplotlib.pyplot as plt\n\nthinkplot.PrePlot(rows=3)\nfor i, (name, daily) in enumerate(dailies.items()):\n thinkplot.SubPlot(i + 1)\n title = \"Price per gram ($)\" if i == 0 else \"\"\n thinkplot.Config(ylim=[0, 20], title=title)\n thinkplot.Scatter(daily.ppg, s=10, label=name)\n if i == 2:\n plt.xticks(rotation=30)\n thinkplot.Config()\n else:\n thinkplot.Config(xticks=[])",
"We can use statsmodels to run a linear model of price as a function of time.",
"import statsmodels.formula.api as smf\n\n\ndef RunLinearModel(daily):\n model = smf.ols(\"ppg ~ years\", data=daily)\n results = model.fit()\n return model, results",
"Here's what the results look like.",
"from IPython.display import display\n\nfor name, daily in dailies.items():\n model, results = RunLinearModel(daily)\n print(name)\n display(results.summary())",
"Now let's plot the fitted model with the data.",
"def PlotFittedValues(model, results, label=\"\"):\n \"\"\"Plots original data and fitted values.\n\n model: StatsModel model object\n results: StatsModel results object\n \"\"\"\n years = model.exog[:, 1]\n values = model.endog\n thinkplot.Scatter(years, values, s=15, label=label)\n thinkplot.Plot(years, results.fittedvalues, label=\"model\", color=\"#ff7f00\")",
"The following function plots the original data and the fitted curve.",
"def PlotLinearModel(daily, name):\n \"\"\"Plots a linear fit to a sequence of prices, and the residuals.\n\n daily: DataFrame of daily prices\n name: string\n \"\"\"\n model, results = RunLinearModel(daily)\n PlotFittedValues(model, results, label=name)\n thinkplot.Config(\n title=\"Fitted values\",\n xlabel=\"Years\",\n xlim=[-0.1, 3.8],\n ylabel=\"Price per gram ($)\",\n )",
"Here are results for the high quality category:",
"name = \"high\"\ndaily = dailies[name]\n\nPlotLinearModel(daily, name)",
"Moving averages\nAs a simple example, I'll show the rolling average of the numbers from 1 to 10.",
"array = np.arange(10)",
"With a \"window\" of size 3, we get the average of the previous 3 elements, or nan when there are fewer than 3.",
"series = pd.Series(array)\nseries.rolling(3).mean()",
"The following function plots the rolling mean.",
"def PlotRollingMean(daily, name):\n \"\"\"Plots rolling mean.\n\n daily: DataFrame of daily prices\n \"\"\"\n dates = pd.date_range(daily.index.min(), daily.index.max())\n reindexed = daily.reindex(dates)\n\n thinkplot.Scatter(reindexed.ppg, s=15, alpha=0.2, label=name)\n roll_mean = pd.Series(reindexed.ppg).rolling(30).mean()\n thinkplot.Plot(roll_mean, label=\"rolling mean\", color=\"#ff7f00\")\n plt.xticks(rotation=30)\n thinkplot.Config(ylabel=\"price per gram ($)\")",
"Here's what it looks like for the high quality category.",
"PlotRollingMean(daily, name)",
"The exponentially-weighted moving average gives more weight to more recent points.",
"def PlotEWMA(daily, name):\n \"\"\"Plots rolling mean.\n\n daily: DataFrame of daily prices\n \"\"\"\n dates = pd.date_range(daily.index.min(), daily.index.max())\n reindexed = daily.reindex(dates)\n\n thinkplot.Scatter(reindexed.ppg, s=15, alpha=0.2, label=name)\n roll_mean = reindexed.ppg.ewm(30).mean()\n thinkplot.Plot(roll_mean, label=\"EWMA\", color=\"#ff7f00\")\n plt.xticks(rotation=30)\n thinkplot.Config(ylabel=\"price per gram ($)\")\n\nPlotEWMA(daily, name)",
"We can use resampling to generate missing values with the right amount of noise.",
"def FillMissing(daily, span=30):\n \"\"\"Fills missing values with an exponentially weighted moving average.\n\n Resulting DataFrame has new columns 'ewma' and 'resid'.\n\n daily: DataFrame of daily prices\n span: window size (sort of) passed to ewma\n\n returns: new DataFrame of daily prices\n \"\"\"\n dates = pd.date_range(daily.index.min(), daily.index.max())\n reindexed = daily.reindex(dates)\n\n ewma = pd.Series(reindexed.ppg).ewm(span=span).mean()\n\n resid = (reindexed.ppg - ewma).dropna()\n fake_data = ewma + thinkstats2.Resample(resid, len(reindexed))\n reindexed.ppg.fillna(fake_data, inplace=True)\n\n reindexed[\"ewma\"] = ewma\n reindexed[\"resid\"] = reindexed.ppg - ewma\n return reindexed\n\ndef PlotFilled(daily, name):\n \"\"\"Plots the EWMA and filled data.\n\n daily: DataFrame of daily prices\n \"\"\"\n filled = FillMissing(daily, span=30)\n thinkplot.Scatter(filled.ppg, s=15, alpha=0.2, label=name)\n thinkplot.Plot(filled.ewma, label=\"EWMA\", color=\"#ff7f00\")\n plt.xticks(rotation=30)\n thinkplot.Config(ylabel=\"Price per gram ($)\")",
"Here's what the EWMA model looks like with missing values filled.",
"PlotFilled(daily, name)",
"Serial correlation\nThe following function computes serial correlation with the given lag.",
"def SerialCorr(series, lag=1):\n xs = series[lag:]\n ys = series.shift(lag)[lag:]\n corr = thinkstats2.Corr(xs, ys)\n return corr",
"Before computing correlations, we'll fill missing values.",
"filled_dailies = {}\nfor name, daily in dailies.items():\n filled_dailies[name] = FillMissing(daily, span=30)",
"Here are the serial correlations for raw price data.",
"for name, filled in filled_dailies.items():\n corr = thinkstats2.SerialCorr(filled.ppg, lag=1)\n print(name, corr)",
"It's not surprising that there are correlations between consecutive days, because there are obvious trends in the data.\nIt is more interested to see whether there are still correlations after we subtract away the trends.",
"for name, filled in filled_dailies.items():\n corr = thinkstats2.SerialCorr(filled.resid, lag=1)\n print(name, corr)",
"Even if the correlations between consecutive days are weak, there might be correlations across intervals of one week, one month, or one year.",
"rows = []\nfor lag in [1, 7, 30, 365]:\n print(lag, end=\"\\t\")\n for name, filled in filled_dailies.items():\n corr = SerialCorr(filled.resid, lag)\n print(\"%.2g\" % corr, end=\"\\t\")\n print()",
"The strongest correlation is a weekly cycle in the medium quality category.\nAutocorrelation\nThe autocorrelation function is the serial correlation computed for all lags.\nWe can use it to replicate the results from the previous section.",
"# NOTE: acf throws a FutureWarning because we need to replace `unbiased` with `adjusted`,\n# just as soon as Colab gets updated :)\n\nimport warnings\n\nwarnings.simplefilter(action=\"ignore\", category=FutureWarning)\n\nimport statsmodels.tsa.stattools as smtsa\n\nfilled = filled_dailies[\"high\"]\nacf = smtsa.acf(filled.resid, nlags=365, unbiased=True, fft=False)\nprint(\"%0.2g, %.2g, %0.2g, %0.2g, %0.2g\" % (acf[0], acf[1], acf[7], acf[30], acf[365]))",
"To get a sense of how much autocorrelation we should expect by chance, we can resample the data (which eliminates any actual autocorrelation) and compute the ACF.",
"def SimulateAutocorrelation(daily, iters=1001, nlags=40):\n \"\"\"Resample residuals, compute autocorrelation, and plot percentiles.\n\n daily: DataFrame\n iters: number of simulations to run\n nlags: maximum lags to compute autocorrelation\n \"\"\"\n # run simulations\n t = []\n for _ in range(iters):\n filled = FillMissing(daily, span=30)\n resid = thinkstats2.Resample(filled.resid)\n acf = smtsa.acf(resid, nlags=nlags, unbiased=True, fft=False)[1:]\n t.append(np.abs(acf))\n\n high = thinkstats2.PercentileRows(t, [97.5])[0]\n low = -high\n lags = range(1, nlags + 1)\n thinkplot.FillBetween(lags, low, high, alpha=0.2, color=\"gray\")",
"The following function plots the actual autocorrelation for lags up to 40 days.\nThe flag add_weekly indicates whether we should add a simulated weekly cycle.",
"def PlotAutoCorrelation(dailies, nlags=40, add_weekly=False):\n \"\"\"Plots autocorrelation functions.\n\n dailies: map from category name to DataFrame of daily prices\n nlags: number of lags to compute\n add_weekly: boolean, whether to add a simulated weekly pattern\n \"\"\"\n thinkplot.PrePlot(3)\n daily = dailies[\"high\"]\n SimulateAutocorrelation(daily)\n\n for name, daily in dailies.items():\n\n if add_weekly:\n daily.ppg = AddWeeklySeasonality(daily)\n\n filled = FillMissing(daily, span=30)\n\n acf = smtsa.acf(filled.resid, nlags=nlags, unbiased=True, fft=False)\n lags = np.arange(len(acf))\n thinkplot.Plot(lags[1:], acf[1:], label=name)",
"To show what a strong weekly cycle would look like, we have the option of adding a price increase of 1-2 dollars on Friday and Saturdays.",
"def AddWeeklySeasonality(daily):\n \"\"\"Adds a weekly pattern.\n\n daily: DataFrame of daily prices\n\n returns: new DataFrame of daily prices\n \"\"\"\n fri_or_sat = (daily.index.dayofweek == 4) | (daily.index.dayofweek == 5)\n fake = daily.ppg.copy()\n fake[fri_or_sat] += np.random.uniform(0, 2, fri_or_sat.sum())\n return fake",
"Here's what the real ACFs look like. The gray regions indicate the levels we expect by chance.",
"axis = [0, 41, -0.2, 0.2]\n\nPlotAutoCorrelation(dailies, add_weekly=False)\nthinkplot.Config(axis=axis, loc=\"lower right\", ylabel=\"correlation\", xlabel=\"lag (day)\")",
"Here's what it would look like if there were a weekly cycle.",
"PlotAutoCorrelation(dailies, add_weekly=True)\nthinkplot.Config(axis=axis, loc=\"lower right\", xlabel=\"lag (days)\")",
"Prediction\nThe simplest way to generate predictions is to use statsmodels to fit a model to the data, then use the predict method from the results.",
"def GenerateSimplePrediction(results, years):\n \"\"\"Generates a simple prediction.\n\n results: results object\n years: sequence of times (in years) to make predictions for\n\n returns: sequence of predicted values\n \"\"\"\n n = len(years)\n inter = np.ones(n)\n d = dict(Intercept=inter, years=years, years2=years**2)\n predict_df = pd.DataFrame(d)\n predict = results.predict(predict_df)\n return predict\n\ndef PlotSimplePrediction(results, years):\n predict = GenerateSimplePrediction(results, years)\n\n thinkplot.Scatter(daily.years, daily.ppg, alpha=0.2, label=name)\n thinkplot.plot(years, predict, color=\"#ff7f00\")\n xlim = years[0] - 0.1, years[-1] + 0.1\n thinkplot.Config(\n title=\"Predictions\",\n xlabel=\"Years\",\n xlim=xlim,\n ylabel=\"Price per gram ($)\",\n loc=\"upper right\",\n )",
"Here's what the prediction looks like for the high quality category, using the linear model.",
"name = \"high\"\ndaily = dailies[name]\n\n_, results = RunLinearModel(daily)\nyears = np.linspace(0, 5, 101)\nPlotSimplePrediction(results, years)",
"When we generate predictions, we want to quatify the uncertainty in the prediction. We can do that by resampling. The following function fits a model to the data, computes residuals, then resamples from the residuals to general fake datasets. It fits the same model to each fake dataset and returns a list of results.",
"def SimulateResults(daily, iters=101, func=RunLinearModel):\n \"\"\"Run simulations based on resampling residuals.\n\n daily: DataFrame of daily prices\n iters: number of simulations\n func: function that fits a model to the data\n\n returns: list of result objects\n \"\"\"\n _, results = func(daily)\n fake = daily.copy()\n\n result_seq = []\n for _ in range(iters):\n fake.ppg = results.fittedvalues + thinkstats2.Resample(results.resid)\n _, fake_results = func(fake)\n result_seq.append(fake_results)\n\n return result_seq",
"To generate predictions, we take the list of results fitted to resampled data. For each model, we use the predict method to generate predictions, and return a sequence of predictions.\nIf add_resid is true, we add resampled residuals to the predicted values, which generates predictions that include predictive uncertainty (due to random noise) as well as modeling uncertainty (due to random sampling).",
"def GeneratePredictions(result_seq, years, add_resid=False):\n \"\"\"Generates an array of predicted values from a list of model results.\n\n When add_resid is False, predictions represent sampling error only.\n\n When add_resid is True, they also include residual error (which is\n more relevant to prediction).\n\n result_seq: list of model results\n years: sequence of times (in years) to make predictions for\n add_resid: boolean, whether to add in resampled residuals\n\n returns: sequence of predictions\n \"\"\"\n n = len(years)\n d = dict(Intercept=np.ones(n), years=years, years2=years**2)\n predict_df = pd.DataFrame(d)\n\n predict_seq = []\n for fake_results in result_seq:\n predict = fake_results.predict(predict_df)\n if add_resid:\n predict += thinkstats2.Resample(fake_results.resid, n)\n predict_seq.append(predict)\n\n return predict_seq",
"To visualize predictions, I show a darker region that quantifies modeling uncertainty and a lighter region that quantifies predictive uncertainty.",
"def PlotPredictions(daily, years, iters=101, percent=90, func=RunLinearModel):\n \"\"\"Plots predictions.\n\n daily: DataFrame of daily prices\n years: sequence of times (in years) to make predictions for\n iters: number of simulations\n percent: what percentile range to show\n func: function that fits a model to the data\n \"\"\"\n result_seq = SimulateResults(daily, iters=iters, func=func)\n p = (100 - percent) / 2\n percents = p, 100 - p\n\n predict_seq = GeneratePredictions(result_seq, years, add_resid=True)\n low, high = thinkstats2.PercentileRows(predict_seq, percents)\n thinkplot.FillBetween(years, low, high, alpha=0.3, color=\"gray\")\n\n predict_seq = GeneratePredictions(result_seq, years, add_resid=False)\n low, high = thinkstats2.PercentileRows(predict_seq, percents)\n thinkplot.FillBetween(years, low, high, alpha=0.5, color=\"gray\")",
"Here are the results for the high quality category.",
"years = np.linspace(0, 5, 101)\nthinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)\nPlotPredictions(daily, years)\nxlim = years[0] - 0.1, years[-1] + 0.1\nthinkplot.Config(\n title=\"Predictions\", xlabel=\"Years\", xlim=xlim, ylabel=\"Price per gram ($)\"\n)",
"But there is one more source of uncertainty: how much past data should we use to build the model?\nThe following function generates a sequence of models based on different amounts of past data.",
"def SimulateIntervals(daily, iters=101, func=RunLinearModel):\n \"\"\"Run simulations based on different subsets of the data.\n\n daily: DataFrame of daily prices\n iters: number of simulations\n func: function that fits a model to the data\n\n returns: list of result objects\n \"\"\"\n result_seq = []\n starts = np.linspace(0, len(daily), iters).astype(int)\n\n for start in starts[:-2]:\n subset = daily[start:]\n _, results = func(subset)\n fake = subset.copy()\n\n for _ in range(iters):\n fake.ppg = results.fittedvalues + thinkstats2.Resample(results.resid)\n _, fake_results = func(fake)\n result_seq.append(fake_results)\n\n return result_seq",
"And this function plots the results.",
"def PlotIntervals(daily, years, iters=101, percent=90, func=RunLinearModel):\n \"\"\"Plots predictions based on different intervals.\n\n daily: DataFrame of daily prices\n years: sequence of times (in years) to make predictions for\n iters: number of simulations\n percent: what percentile range to show\n func: function that fits a model to the data\n \"\"\"\n result_seq = SimulateIntervals(daily, iters=iters, func=func)\n p = (100 - percent) / 2\n percents = p, 100 - p\n\n predict_seq = GeneratePredictions(result_seq, years, add_resid=True)\n low, high = thinkstats2.PercentileRows(predict_seq, percents)\n thinkplot.FillBetween(years, low, high, alpha=0.2, color=\"gray\")",
"Here's what the high quality category looks like if we take into account uncertainty about how much past data to use.",
"name = \"high\"\ndaily = dailies[name]\n\nthinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)\nPlotIntervals(daily, years)\nPlotPredictions(daily, years)\nxlim = years[0] - 0.1, years[-1] + 0.1\nthinkplot.Config(\n title=\"Predictions\", xlabel=\"Years\", xlim=xlim, ylabel=\"Price per gram ($)\"\n)",
"Exercises\nExercise: The linear model I used in this chapter has the obvious drawback that it is linear, and there is no reason to expect prices to change linearly over time. We can add flexibility to the model by adding a quadratic term, as we did in Section 11.3.\nUse a quadratic model to fit the time series of daily prices, and use the model to generate predictions. You will have to write a version of RunLinearModel that runs that quadratic model, but after that you should be able to reuse code from the chapter to generate predictions.\nExercise: Write a definition for a class named SerialCorrelationTest that extends HypothesisTest from Section 9.2. It should take a series and a lag as data, compute the serial correlation of the series with the given lag, and then compute the p-value of the observed correlation.\nUse this class to test whether the serial correlation in raw price data is statistically significant. Also test the residuals of the linear model and (if you did the previous exercise), the quadratic model.\nBonus Example: There are several ways to extend the EWMA model to generate predictions. One of the simplest is something like this:\n\n\nCompute the EWMA of the time series and use the last point as an intercept, inter.\n\n\nCompute the EWMA of differences between successive elements in the time series and use the last point as a slope, slope.\n\n\nTo predict values at future times, compute inter + slope * dt, where dt is the difference between the time of the prediction and the time of the last observation.",
"name = \"high\"\ndaily = dailies[name]\n\nfilled = FillMissing(daily)\ndiffs = filled.ppg.diff()\n\nthinkplot.plot(diffs)\nplt.xticks(rotation=30)\nthinkplot.Config(ylabel=\"Daily change in price per gram ($)\")\n\nfilled[\"slope\"] = diffs.ewm(span=365).mean()\nthinkplot.plot(filled.slope[-365:])\nplt.xticks(rotation=30)\nthinkplot.Config(ylabel=\"EWMA of diff ($)\")\n\n# extract the last inter and the mean of the last 30 slopes\nstart = filled.index[-1]\ninter = filled.ewma[-1]\nslope = filled.slope[-30:].mean()\n\nstart, inter, slope\n\n# reindex the DataFrame, adding a year to the end\ndates = pd.date_range(filled.index.min(), filled.index.max() + np.timedelta64(365, \"D\"))\npredicted = filled.reindex(dates)\n\n# generate predicted values and add them to the end\npredicted[\"date\"] = predicted.index\none_day = np.timedelta64(1, \"D\")\npredicted[\"days\"] = (predicted.date - start) / one_day\npredict = inter + slope * predicted.days\npredicted.ewma.fillna(predict, inplace=True)\n\n# plot the actual values and predictions\nthinkplot.Scatter(daily.ppg, alpha=0.1, label=name)\nthinkplot.Plot(predicted.ewma, color=\"#ff7f00\")",
"As an exercise, run this analysis again for the other quality categories."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dwhswenson/OPSPiggybacker | examples/example_one_way_shooting.ipynb | lgpl-2.1 | [
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport openpathsampling as paths\nimport ops_piggybacker as oink",
"Create fake trajectories\nThe input trajectories for the one-way shooting version must\n\nnot include the shooting point (which is shared between the two trajectories)\nbe in forward-time order (so reversed paths, which are created as time goes backward, need to be reversed)",
"from openpathsampling.tests.test_helpers import make_1d_traj\n\ntraj1 = make_1d_traj([-0.9, 0.1, 1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1, 8.1, 9.1, 10.1])\ntraj2 = make_1d_traj([-0.8, 1.2])\ntraj3 = make_1d_traj([5.3, 8.3, 11.3])\ntraj4 = make_1d_traj([-0.6, 1.4, 3.4, 5.4, 7.4])\ntraj5 = make_1d_traj([-0.5, 0.5, 1.5, 2.5, 3.5, 4.5, 5.5])",
"Make list of move data\nThe input to the pseudo-simulator is a list of data related to the move. For one-way shooting, you need the following information with each move:\n\nthe replica this move applies to (for TPS, just use 0)\nthe single-direction trajectory (as described in the previous section)\nthe index of the shooting point from the previous full trajectory\nwhether the trajectory was accepted\nthe direction of the one-way shooting move (forward is +1, backward is -1)\n\nThe moves object below is a list of tuples of that information, in the order listed above. This is what you need to create from your previous simulation.",
"moves = [\n (0, traj2, 3, True, -1),\n (0, traj3, 4, True, +1),\n (0, traj4, 6, False, -1),\n (0, traj5, 6, True, -1)\n]",
"From here, you've already done everything that needs to be done to reshape your already-run simulation. Now you just need to create the fake OPS simulations.\nCreate OPS objects",
"# volumes\ncv = paths.FunctionCV(\"x\", lambda snap: snap.xyz[0][0])\nleft_state = paths.CVDefinedVolume(cv, float(\"-inf\"), 0.0)\nright_state = paths.CVDefinedVolume(cv, 10.0, float(\"inf\"))\n\n# network\nnetwork = paths.TPSNetwork(left_state, right_state)\nensemble = network.sampling_ensembles[0] # the only one",
"Create initial conditions",
"initial_conditions = paths.SampleSet([\n paths.Sample(replica=0,\n trajectory=traj1,\n ensemble=ensemble)\n])",
"Create OPSPiggybacker objects\nNote that the big difference here is that you use pre_joined=False. This is essential for the automated one-way shooting treatment.",
"shoot = oink.ShootingStub(ensemble, pre_joined=False)\n\nsim = oink.ShootingPseudoSimulator(storage=paths.Storage('one_way.nc', 'w'),\n initial_conditions=initial_conditions,\n mover=shoot,\n network=network)",
"Run the pseudo-simulator",
"sim.run(moves)\n\nsim.storage.close()",
"Analyze with OPS",
"analysis_file = paths.AnalysisStorage(\"one_way.nc\")\n\nscheme = analysis_file.schemes[0]\nscheme.move_summary(analysis_file.steps)\n\nimport openpathsampling.visualize as ops_vis\nfrom IPython.display import SVG\nhistory = ops_vis.PathTree(\n analysis_file.steps,\n ops_vis.ReplicaEvolution(replica=0)\n)\n# switch to the \"boxcar\" look for the trajectories\nhistory.options.movers['default']['new'] = 'single'\nhistory.options.css['horizontal_gap'] = True\nSVG(history.svg())\n\npath_lengths = [len(step.active[0].trajectory) for step in analysis_file.steps]\nplt.hist(path_lengths, alpha=0.5);\n\ncv_x = analysis_file.cvs['x']\n# load the active trajectory as storage.steps[step_num].active[replica_id]\nplt.plot(cv_x(analysis_file.steps[2].active[0]), 'o-');"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
enoordeh/StatisticalMethods | examples/XrayImage/Modeling.ipynb | gpl-2.0 | [
"Forward Modeling the X-ray Image data\nIn this notebook, we'll take a closer look at the X-ray image data products, and build a simple, generative, forward model for the observed data.",
"from __future__ import print_function\nimport astropy.io.fits as pyfits\nimport astropy.visualization as viz\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 10.0)",
"The XMM Image Data\n\n\nRecall that we downloaded some XMM data in the \"First Look\" notebook. \n\n\nWe downloaded three files, and just looked at one - the \"science\" image.",
"imfits = pyfits.open('a1835_xmm/P0098010101M2U009IMAGE_3000.FTZ')\nim = imfits[0].data",
"im is the image, our observed data, presented after some \"standard processing.\" The numbers in the pixels are counts (i.e. numbers of photoelectrons recorded by the CCD during the exposure). \n\n\nWe display the image on a log scale, which allows us to simultaneously see both the cluster of galaxies in the center, and the much fainter background and other sources in the field.",
"plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');",
"A Model for the Cluster of Galaxies\n\nWe will use a common parametric model for the surface brightness of galaxy clusters: the azimuthally symmetric beta model:\n\n$S(r) = S_0 \\left[1.0 + \\left(\\frac{r}{r_c}\\right)^2\\right]^{-3\\beta + 1/2}$,\nwhere $r$ is projected distance from the cluster center. \n\n\nThe parameters of this model are:\n\n\n$x_0$, the $x$ coordinate of the cluster center\n\n$y_0$, the $y$ coordinate of the cluster center\n$S_0$, the normalization, in surface brightness units\n$r_c$, a radial scale (called the \"core radius\")\n\n$\\beta$, which determines the slope of the profile\n\n\nNote that this model describes a 2D surface brightness distribution, since $r^2 = x^2 + y^2$\n\n\nLet's draw a cartoon of this model on the whiteboard\n\n\nPlanning an Expected Counts Map\n\n\nOur data are counts, i.e. the number of times a physical pixel in the camera was activated while pointing at the area of sky corresponding to a pixel in our image. We can think of different sky pixels as having different effective exposure times, as encoded by an exposure map, ex. \n\n\nWe expect to see counts due to a number of sources:\n\n\nX-rays from the galaxy cluster\n\nX-rays from other detected sources in the field\nX-rays from unresolved sources (the Cosmic X-ray Background)\nDiffuse X-rays from the Galactic halo and the local bubble (the local X-ray foreground)\nSoft protons from the solar wind, cosmic rays, and other undesirables (the particle background)\n\nLet's go through these in turn.\n1. Counts from the Cluster\n\n\nSince our data are counts in each pixel, our model needs to first predict the expected counts in each pixel. Physical models predict intensity (counts per second per pixel per unit effective area of the telescope). The spatial variation of the effective area relative to the aimpoint is one of the things accounted for in the exposure map, and we can leave the overall area to one side when fitting (although we would need it to turn our results into physically interesting conclusions about, e.g. the luminosity of the cluster).\n\n\nSince the X-rays from the cluster are transformed according to the exposure map, the units of $S_0$ are counts/s/pixel, and the model prediction for the expected number of counts from the cluster is CL*ex, where CL is an image with pixel values computed from $S(r)$.\n\n\n2-4. X-ray background model\n\n\nThe X-ray background will be \"vignetted\" in the same way as X-rays from the cluster. We can lump sources 2-4 together, to extend our model so that it is composed of a galaxy cluster, plus an X-ray background.\n\n\nThe simplest assumption we can make about the X-ray background is that it is spatially uniform, on average. The model must account for the varying effective exposure as a function of position, however. So the model prediction associated with this component is b*ex, where b is a single number with units of counts/s/pixel.\n\n\nWe can circumvent the problem of the other detected sources in the field by masking them out, leaving us with the assumption that any remaining counts are not due to the masked sources. This could be a source of systematic error, so we'll note it down for later.\n\n\n5. Particle background model\n\n\nThe particle background represents a flux of particles that either do not traverse the telescope optics at all, or follow a different optical path than X-rays - so the exposure map (and its vignetting correction) does not apply. 
\n\n\nInstead, we're given, from a black box, a prediction for the expected counts/pixel due to particles, so the extension to our model is simply to add this image, pb.\n\n\nFull model\n\nCombining these three components, the model (CL+b)*ex + pb gives us an expected number of counts/pixel across the field.\n\nA Look at the Other XMM Products\n\nThe \"exposure map\" and the \"particle background map\" were supplied to us by the XMM reduction pipeline, along with the science image. Let's take a look at them now.",
"pbfits = pyfits.open('a1835_xmm/P0098010101M2X000BKGMAP3000.FTZ')\npb = pbfits[0].data\nexfits = pyfits.open('a1835_xmm/P0098010101M2U009EXPMAP3000.FTZ')\nex = exfits[0].data",
"The \"Exposure Map\"\n\n\nThe ex image is in units of seconds, and represents the effective exposure time at each pixel position. \n\n\nThis is actually the product of the exposure time that the detector was exposed for, and a relative sensitivity map accounting for the vignetting of the telescope, dithering, and bad pixels whose data have been excised. \n\n\nDisplaying the exposure map on a linear scale makes the vignetting pattern and other features clear.",
"plt.imshow(ex, cmap='gray', origin='lower');\nplt.savefig(\"figures/cluster_expmap.png\")",
"The \"Particle Background Map\"\n\n\npb is not data at all, but rather a model for the expected counts/pixel in this specific observation due to the \"quiescent particle background.\"\n\n\nThis map comes out of a blackbox in the processing pipeline. Even though there are surely uncertainties in it, we have no quantitative description of them to work with. \n\n\nNote that the exposure map above does not apply to the particle backround; some particles are vignetted by the telescope optics, but not to the same degree as X-rays. The resulting spatial pattern and the total exposure time are accounted for in pb.",
"plt.imshow(pb, cmap='gray', origin='lower');\nplt.savefig(\"figures/cluster_pbmap.png\")",
"Masking out the other sources\n\n\nThere are non-cluster sources in this field. To simplify the model-building exercise, we will crudely mask them out for the moment. \n\n\nA convenient way to do this is by setting the exposure map to zero in these locations - as if a set of tiny little shutters in front of each of those pixels had not been opened. \"Not observed\" is different from \"observed zero counts.\"\n\n\nLet's read in a text file encoding a list of circular regions in the image, and set the exposure map pixels within each of those regions in to zero.",
"mask = np.loadtxt('a1835_xmm/M2ptsrc.txt')\nfor reg in mask:\n # this is inefficient but effective\n for i in np.round(reg[1]+np.arange(-np.ceil(reg[2]),np.ceil(reg[2]))):\n for j in np.round(reg[0]+np.arange(-np.ceil(reg[2]),np.ceil(reg[2]))):\n if (i-reg[1])**2 + (j-reg[0])**2 <= reg[2]**2:\n ex[np.int(i-1), np.int(j-1)] = 0.0",
"As a sanity check, let's have a look at the modified exposure map. \n\n\nCompare the location of the \"holes\" to the science image above.",
"plt.imshow(ex, cmap='gray', origin='lower');\nplt.savefig(\"figures/cluster_expmap_masked.png\")",
"A Generative Model for the X-ray Image\n\n\nAll of the discussion above was in terms of predicting the expected number of counts in each pixel, $\\mu_k$. This is not what we observe: we observe counts.\n\n\nTo be able to generate a mock dataset, we need to make an assumption about the form of the sampling distribution for the counts $N$ in each pixel, ${\\rm Pr}(N_k|\\mu_k)$.\n\n\nLet's assume that this distribution is Poisson, since we expect X-ray photon arrivals to be \"rare events.\"\n\n\n${\\rm Pr}(N_k|\\mu_k) = \\frac{{\\rm e}^{-\\mu_k} \\mu_k^{N_k}}{N_k !}$\n\nHere, $\\mu_k(\\theta)$ is the expected number of counts in the $k$th pixel:\n\n$\\mu_k(\\theta) = \\left( S(r_k;\\theta) + b \\right) \\cdot$ ex + pb\n\n\nNote that writing the sampling distribution like this contains the assumption that the pixels are independent (i.e., there is no cross-talk between the cuboids of silicon that make up the pixels in the CCD chip). (Also note that this assumption is different from the assumption that the expected numbers of counts are independent! They are explicitly not independent: we wrote down a model for a cluster surface brightness distribution that is potentially many pixels in diameter.)\n\n\nAt this point we can draw the PGM for a forward model of this dataset, using the exposure and particle background maps supplied, and some choices for the model parameters.\n\n\nThen, we can go ahead and simulate some mock data and compare with the image we have.",
"# import cluster_pgm\n# cluster_pgm.forward()\n\nfrom IPython.display import Image\nImage(filename=\"cluster_pgm_forward.png\")\n\ndef beta_model_profile(r, S0, rc, beta):\n '''\n The fabled beta model, radial profile S(r)\n '''\n return S0 * (1.0 + (r/rc)**2)**(-3.0*beta + 0.5)\n\ndef beta_model_image(x, y, x0, y0, S0, rc, beta):\n '''\n Here, x and y are arrays (\"meshgrids\" or \"ramps\") containing x and y pixel numbers, \n and the other arguments are galaxy cluster beta model parameters.\n Returns a surface brightness image of the same shape as x and y.\n '''\n r = np.sqrt((x-x0)**2 + (y-y0)**2)\n return beta_model_profile(r, S0, rc, beta)\n\ndef model_image(x, y, ex, pb, x0, y0, S0, rc, beta, b):\n '''\n Here, x, y, ex and pb are images, all of the same shape, and the other args are \n cluster model and X-ray background parameters. ex is the (constant) exposure map\n and pb is the (constant) particle background map.\n '''\n return (beta_model_image(x, y, x0, y0, S0, rc, beta) + b) * ex + pb\n\n# Set up the ramp images, to enable fast array calculations:\n\nnx,ny = ex.shape\nx = np.outer(np.ones(ny),np.arange(nx))\ny = np.outer(np.arange(ny),np.ones(nx))\n\nfig,ax = plt.subplots(nrows=1, ncols=2)\nfig.set_size_inches(15, 6)\nplt.subplots_adjust(wspace=0.2)\nleft = ax[0].imshow(x, cmap='gray', origin='lower')\nax[0].set_title('x')\nfig.colorbar(left,ax=ax[0],shrink=0.9)\nright = ax[1].imshow(y, cmap='gray', origin='lower')\nax[1].set_title('y')\nfig.colorbar(right,ax=ax[1],shrink=0.9)\n\n# Now choose parameters, compute model and plot, compared to data!\n\nx0,y0 = 328,348 # The center of the image is 328,328\nS0,b = 0.01,5e-7 # Cluster and background surface brightness, arbitrary units\nbeta = 2.0/3.0 # Canonical value is beta = 2/3\nrc = 4 # Core radius, in pixels\n\n# Realize the expected counts map for the model:\nmu = model_image(x,y,ex,pb,x0,y0,S0,rc,beta,b)\n\n# Draw a *sample image* from the Poisson sampling distribution:\nmock = np.random.poisson(mu,mu.shape)\n\n# The difference between the mock and the real data should be symmetrical noise if the model\n# is a good match...\ndiff = im - mock\n\n\n# Plot three panels:\n\nfig,ax = plt.subplots(nrows=1, ncols=3)\nfig.set_size_inches(15, 6)\nplt.subplots_adjust(wspace=0.2)\n\nleft = ax[0].imshow(viz.scale_image(mock, scale='log', max_cut=40), cmap='gray', origin='lower')\nax[0].set_title('Mock (log, rescaled)')\nfig.colorbar(left,ax=ax[0],shrink=0.6)\n\ncenter = ax[1].imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower')\nax[1].set_title('Data (log, rescaled)')\nfig.colorbar(center,ax=ax[1],shrink=0.6)\n\nright = ax[2].imshow(diff, vmin=-40, vmax=40, cmap='gray', origin='lower')\nax[2].set_title('Difference (linear)')\nfig.colorbar(right,ax=ax[2],shrink=0.6)\n\nfig.savefig(\"figures/cluster_mock-data-diff.png\")",
"Exercise: Adjust the model parameters and generate a mock that matches the data\nIf you are not following in your own notebook, you'll need to sit close to someone who is running it."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
thewtex/SimpleITK-Notebooks | 22_Transforms.ipynb | apache-2.0 | [
"<table width=\"100%\">\n<tr style=\"background-color: red;\"><td><font color=\"white\">SimpleITK conventions:</font></td></tr>\n<tr><td>\n<ul>\n<li>Points are represented by vector-like data types: Tuple, Numpy array, List.</li>\n<li>Matrices are represented by vector-like data types in row major order.</li>\n<li>Initializing the DisplacementFieldTransform using an image requires that the image's pixel type be sitk.sitkVectorFloat64</li>\n<li>Initializing the DisplacementFieldTransform using an image will \"clear out\" your image (your alias to the image will point to an empty, zero sized, image).</li>\n</ul>\n</td></tr>\n</table>\n\nSimpleITK Transformation Types\nThis notebook introduces the transformation types supported by SimpleITK and illustrates how to \"promote\" transformations from a lower to higher parameter space (e.g. 3D translation to 3D rigid). \n<table width=\"100%\">\n<tr><td><a href=\"http://www.itk.org/Doxygen/html/classitk_1_1TranslationTransform.html\">TranslationTransform</a></td><td>2D or 3D, translation</td></tr>\n <tr><td><a href=\"http://www.itk.org/Doxygen/html/classitk_1_1VersorTransform.html\">VersorTransform</a></td><td>3D, rotation represented by a versor</td></tr>\n <tr><td><a href=\"http://www.itk.org/Doxygen/html/classitk_1_1VersorRigid3DTransform.html\">VersorRigid3DTransform</a></td><td>3D, rigid transformation with rotation represented by a versor</td></tr>\n <tr><td><a href=\"http://www.itk.org/Doxygen/html/classitk_1_1Euler2DTransform.html\">Euler2DTransform</a></td><td>2D, rigid transformation with rotation represented by a Euler angle</td></tr>\n <tr><td><a href=\"http://www.itk.org/Doxygen/html/classitk_1_1Euler3DTransform.html\">Euler3DTransform</a></td><td>3D, rigid transformation with rotation represented by Euler angles</td></tr>\n <tr><td><a href=\"http://www.itk.org/Doxygen/html/classitk_1_1Similarity2DTransform.html\">Similarity2DTransform</a></td><td>2D, composition of isotropic scaling and rigid transformation with rotation represented by a Euler angle</td></tr>\n <tr><td><a href=\"http://www.itk.org/Doxygen/html/classitk_1_1Similarity3DTransform.html\">Similarity3DTransform</a></td><td>3D, composition of isotropic scaling and rigid transformation with rotation represented by a versor</td></tr>\n <tr><td><a href=\"http://www.itk.org/Doxygen/html/classitk_1_1ScaleTransform.html\">ScaleTransform</a></td><td>2D or 3D, anisotropic scaling</td></tr>\n <tr><td><a href=\"http://www.itk.org/Doxygen/html/classitk_1_1ScaleVersor3DTransform.html\">ScaleVersor3DTransform</a></td><td>3D, rigid transformation and anisotropic scale is <bf>added</bf> to the rotation matrix part (not composed as one would expect)</td></tr>\n <tr><td><a href=\"http://www.itk.org/Doxygen/html/classitk_1_1ScaleSkewVersor3DTransform.html\">ScaleSkewVersor3DTransform</a></td><td>3D, rigid transformation with anisotropic scale and skew matrices <bf>added</bf> to the rotation matrix part (not composed as one would expect)</td></tr>\n <tr><td><a href=\"http://www.itk.org/Doxygen/html/classitk_1_1AffineTransform.html\">AffineTransform</a></td><td>2D or 3D, affine transformation.</td></tr>\n <tr><td><a href=\"http://www.itk.org/Doxygen/html/classitk_1_1BSplineTransform.html\">BSplineTransform</a></td><td>2D or 3D, deformable transformation represented by a sparse regular grid of control points. 
</td></tr>\n <tr><td><a href=\"http://www.itk.org/Doxygen/html/classitk_1_1DisplacementFieldTransform.html\">DisplacementFieldTransform</a></td><td>2D or 3D, deformable transformation represented as a dense regular grid of vectors.</td></tr>\n <tr><td><a href=\"http://www.itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1Transform.html\">Transform</a></td>\n <td>A generic transformation. Can represent any of the SimpleITK transformations, and a <b>composite transformation</b> (stack of transformations concatenated via composition, last added, first applied). </td></tr>\n </table>",
"import SimpleITK as sitk\nimport numpy as np\n \nfrom __future__ import print_function\n \nimport matplotlib.pyplot as plt\n%matplotlib inline \nfrom ipywidgets import interact, fixed\n\nOUTPUT_DIR = \"Output\"\n\nprint(sitk.Version())",
"Points in SimpleITK\nUtility functions\nA number of functions that deal with point data in a uniform manner.",
"import numpy as np\n\ndef point2str(point, precision=1):\n \"\"\"\n Format a point for printing, based on specified precision with trailing zeros. Uniform printing for vector-like data \n (tuple, numpy array, list).\n \n Args:\n point (vector-like): nD point with floating point coordinates.\n precision (int): Number of digits after the decimal point.\n Return:\n String represntation of the given point \"xx.xxx yy.yyy zz.zzz...\".\n \"\"\"\n return ' '.join(format(c, '.{0}f'.format(precision)) for c in point)\n\n\ndef uniform_random_points(bounds, num_points):\n \"\"\"\n Generate random (uniform withing bounds) nD point cloud. Dimension is based on the number of pairs in the bounds input.\n \n Args:\n bounds (list(tuple-like)): list where each tuple defines the coordinate bounds.\n num_points (int): number of points to generate.\n \n Returns:\n list containing num_points numpy arrays whose coordinates are within the given bounds.\n \"\"\"\n internal_bounds = [sorted(b) for b in bounds]\n # Generate rows for each of the coordinates according to the given bounds, stack into an array, \n # and split into a list of points.\n mat = np.vstack([np.random.uniform(b[0], b[1], num_points) for b in internal_bounds])\n return list(mat[:len(bounds)].T)\n\n\ndef target_registration_errors(tx, point_list, reference_point_list):\n \"\"\"\n Distances between points transformed by the given transformation and their\n location in another coordinate system. When the points are only used to evaluate\n registration accuracy (not used in the registration) this is the target registration\n error (TRE).\n \"\"\"\n return [np.linalg.norm(np.array(tx.TransformPoint(p)) - np.array(p_ref))\n for p,p_ref in zip(point_list, reference_point_list)]\n\n\ndef print_transformation_differences(tx1, tx2):\n \"\"\"\n Check whether two transformations are \"equivalent\" in an arbitrary spatial region \n either 3D or 2D, [x=(-10,10), y=(-100,100), z=(-1000,1000)]. This is just a sanity check, \n as we are just looking at the effect of the transformations on a random set of points in\n the region.\n \"\"\"\n if tx1.GetDimension()==2 and tx2.GetDimension()==2:\n bounds = [(-10,10),(-100,100)]\n elif tx1.GetDimension()==3 and tx2.GetDimension()==3:\n bounds = [(-10,10),(-100,100), (-1000,1000)]\n else:\n raise ValueError('Transformation dimensions mismatch, or unsupported transformation dimensionality')\n num_points = 10\n point_list = uniform_random_points(bounds, num_points)\n tx1_point_list = [ tx1.TransformPoint(p) for p in point_list]\n differences = target_registration_errors(tx2, point_list, tx1_point_list)\n print(tx1.GetName()+ '-' +\n tx2.GetName()+\n ':\\tminDifference: {:.2f} maxDifference: {:.2f}'.format(min(differences), max(differences)))",
"In SimpleITK points can be represented by any vector-like data type. In Python these include Tuple, Numpy array, and List. In general Python will treat these data types differently, as illustrated by the print function below.",
"# SimpleITK points represented by vector-like data structures. \npoint_tuple = (9.0, 10.531, 11.8341)\npoint_np_array = np.array([9.0, 10.531, 11.8341])\npoint_list = [9.0, 10.531, 11.8341]\n\nprint(point_tuple)\nprint(point_np_array)\nprint(point_list)\n\n# Uniform printing with specified precision.\nprecision = 2\nprint(point2str(point_tuple, precision))\nprint(point2str(point_np_array, precision))\nprint(point2str(point_list, precision))",
"Global Transformations\nAll global transformations <i>except translation</i> are of the form:\n$$T(\\mathbf{x}) = A(\\mathbf{x}-\\mathbf{c}) + \\mathbf{t} + \\mathbf{c}$$\nIn ITK speak (when printing your transformation):\n<ul>\n<li>Matrix: the matrix $A$</li>\n<li>Center: the point $\\mathbf{c}$</li>\n<li>Translation: the vector $\\mathbf{t}$</li>\n<li>Offset: $\\mathbf{t} + \\mathbf{c} - A\\mathbf{c}$</li>\n</ul>\n\nTranslationTransform",
"# A 3D translation. Note that you need to specify the dimensionality, as the sitk TranslationTransform \n# represents both 2D and 3D translations.\ndimension = 3 \noffset =(1,2,3) # offset can be any vector-like data \ntranslation = sitk.TranslationTransform(dimension, offset)\nprint(translation)\n\n# Transform a point and use the inverse transformation to get the original back.\npoint = [10, 11, 12]\ntransformed_point = translation.TransformPoint(point)\ntranslation_inverse = translation.GetInverse()\nprint('original point: ' + point2str(point) + '\\n'\n 'transformed point: ' + point2str(transformed_point) + '\\n'\n 'back to original: ' + point2str(translation_inverse.TransformPoint(transformed_point)))",
"Euler2DTransform",
"point = [10, 11]\nrotation2D = sitk.Euler2DTransform()\nrotation2D.SetTranslation((7.2, 8.4))\nrotation2D.SetAngle(np.pi/2)\nprint('original point: ' + point2str(point) + '\\n'\n 'transformed point: ' + point2str(rotation2D.TransformPoint(point)))\n\n# Change the center of rotation so that it coincides with the point we want to\n# transform, why is this a unique configuration?\nrotation2D.SetCenter(point)\nprint('original point: ' + point2str(point) + '\\n'\n 'transformed point: ' + point2str(rotation2D.TransformPoint(point)))",
"VersorTransform",
"# Rotation only, parametrized by Versor (vector part of unit quaternion),\n# quaternion defined by rotation of theta around axis n: \n# q = [n*sin(theta/2), cos(theta/2)]\n \n# 180 degree rotation around z axis\n\n# Use a versor:\nrotation1 = sitk.VersorTransform([0,0,1,0])\n\n# Use axis-angle:\nrotation2 = sitk.VersorTransform((0,0,1), np.pi)\n\n# Use a matrix:\nrotation3 = sitk.VersorTransform()\nrotation3.SetMatrix([-1, 0, 0, 0, -1, 0, 0, 0, 1]);\n\npoint = (10, 100, 1000)\n\np1 = rotation1.TransformPoint(point)\np2 = rotation2.TransformPoint(point)\np3 = rotation3.TransformPoint(point)\n\nprint('Points after transformation:\\np1=' + str(p1) + \n '\\np2='+ str(p2) + '\\np3='+ str(p3))",
"We applied the \"same\" transformation to the same point, so why are the results slightly different for the second initialization method?\nThis is where theory meets practice. Using the axis-angle initialization method involves trigonometric functions which on a fixed precision machine lead to these slight differences. In many cases this is not an issue, but it is something to remember. From here on we will sweep it under the rug (printing with a more reasonable precision). \nTranslation to Rigid [3D]\nCopy the translational component.",
"dimension = 3 \nt =(1,2,3) \ntranslation = sitk.TranslationTransform(dimension, t)\n\n# Only need to copy the translational component.\nrigid_euler = sitk.Euler3DTransform()\nrigid_euler.SetTranslation(translation.GetOffset())\n\nrigid_versor = sitk.VersorRigid3DTransform()\nrigid_versor.SetTranslation(translation.GetOffset())\n\n# Sanity check to make sure the transformations are equivalent.\nbounds = [(-10,10),(-100,100), (-1000,1000)]\nnum_points = 10\npoint_list = uniform_random_points(bounds, num_points)\ntransformed_point_list = [translation.TransformPoint(p) for p in point_list]\n\n# Draw the original and transformed points, include the label so that we \n# can modify the plots without requiring explicit changes to the legend.\nfrom mpl_toolkits.mplot3d import Axes3D\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\norig = ax.scatter(list(np.array(point_list).T)[0],\n list(np.array(point_list).T)[1],\n list(np.array(point_list).T)[2], \n marker='o', \n color='blue',\n label='Original points')\ntransformed = ax.scatter(list(np.array(transformed_point_list).T)[0],\n list(np.array(transformed_point_list).T)[1],\n list(np.array(transformed_point_list).T)[2], \n marker='^', \n color='red', \n label='Transformed points')\nplt.legend(loc=(0.0,1.0))\n\neuler_errors = target_registration_errors(rigid_euler, point_list, transformed_point_list)\nversor_errors = target_registration_errors(rigid_versor, point_list, transformed_point_list)\n\nprint('Euler\\tminError: {:.2f} maxError: {:.2f}'.format(min(euler_errors), max(euler_errors)))\nprint('Versor\\tminError: {:.2f} maxError: {:.2f}'.format(min(versor_errors), max(versor_errors)))",
"Rotation to Rigid [3D]\nCopy the matrix or versor and <b>center of rotation</b>.",
"rotationCenter = (10, 10, 10)\nrotation = sitk.VersorTransform([0,0,1,0], rotationCenter)\n\nrigid_euler = sitk.Euler3DTransform()\nrigid_euler.SetMatrix(rotation.GetMatrix())\nrigid_euler.SetCenter(rotation.GetCenter())\n\nrigid_versor = sitk.VersorRigid3DTransform()\nrigid_versor.SetRotation(rotation.GetVersor())\n#rigid_versor.SetCenter(rotation.GetCenter()) #intentional error\n\n# Sanity check to make sure the transformations are equivalent.\nbounds = [(-10,10),(-100,100), (-1000,1000)]\nnum_points = 10\npoint_list = uniform_random_points(bounds, num_points)\ntransformed_point_list = [ rotation.TransformPoint(p) for p in point_list]\n\neuler_errors = target_registration_errors(rigid_euler, point_list, transformed_point_list)\nversor_errors = target_registration_errors(rigid_versor, point_list, transformed_point_list)\n\n# Draw the points transformed by the original transformation and after transformation\n# using the incorrect transformation, illustrate the effect of center of rotation.\nfrom mpl_toolkits.mplot3d import Axes3D\nincorrect_transformed_point_list = [ rigid_versor.TransformPoint(p) for p in point_list]\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\norig = ax.scatter(list(np.array(transformed_point_list).T)[0],\n list(np.array(transformed_point_list).T)[1],\n list(np.array(transformed_point_list).T)[2], \n marker='o', \n color='blue',\n label='Rotation around specific center')\ntransformed = ax.scatter(list(np.array(incorrect_transformed_point_list).T)[0],\n list(np.array(incorrect_transformed_point_list).T)[1],\n list(np.array(incorrect_transformed_point_list).T)[2], \n marker='^', \n color='red', \n label='Rotation around origin')\nplt.legend(loc=(0.0,1.0))\n\nprint('Euler\\tminError: {:.2f} maxError: {:.2f}'.format(min(euler_errors), max(euler_errors)))\nprint('Versor\\tminError: {:.2f} maxError: {:.2f}'.format(min(versor_errors), max(versor_errors)))",
"Similarity [2D]\nWhen the center of the similarity transformation is not at the origin the effect of the transformation is not what most of us expect. This is readily visible if we limit the transformation to scaling: $T(\\mathbf{x}) = s\\mathbf{x}-s\\mathbf{c} + \\mathbf{c}$. Changing the transformation's center results in scale + translation.",
"def display_center_effect(x, y, tx, point_list, xlim, ylim):\n tx.SetCenter((x,y))\n transformed_point_list = [ tx.TransformPoint(p) for p in point_list]\n\n plt.scatter(list(np.array(transformed_point_list).T)[0],\n list(np.array(transformed_point_list).T)[1],\n marker='^', \n color='red', label='transformed points')\n plt.scatter(list(np.array(point_list).T)[0],\n list(np.array(point_list).T)[1],\n marker='o', \n color='blue', label='original points')\n plt.xlim(xlim)\n plt.ylim(ylim)\n plt.legend(loc=(0.25,1.01))\n\n# 2D square centered on (0,0)\npoints = [np.array((-1,-1)), np.array((-1,1)), np.array((1,1)), np.array((1,-1))]\n\n# Scale by 2 \nsimilarity = sitk.Similarity2DTransform();\nsimilarity.SetScale(2)\n\ninteract(display_center_effect, x=(-10,10), y=(-10,10),tx = fixed(similarity), point_list = fixed(points), \n xlim = fixed((-10,10)),ylim = fixed((-10,10)));",
"Rigid to Similarity [3D]\nCopy the translation, center, and matrix or versor.",
"rotation_center = (100, 100, 100)\ntheta_x = 0.0\ntheta_y = 0.0\ntheta_z = np.pi/2.0\ntranslation = (1,2,3)\n\nrigid_euler = sitk.Euler3DTransform(rotation_center, theta_x, theta_y, theta_z, translation)\n\nsimilarity = sitk.Similarity3DTransform()\nsimilarity.SetMatrix(rigid_euler.GetMatrix())\nsimilarity.SetTranslation(rigid_euler.GetTranslation())\nsimilarity.SetCenter(rigid_euler.GetCenter())\n\n# Apply the transformations to the same set of random points and compare the results\n# (see utility functions at top of notebook).\nprint_transformation_differences(rigid_euler, similarity)",
"Similarity to Affine [3D]\nCopy the translation, center and matrix.",
"rotation_center = (100, 100, 100)\naxis = (0,0,1)\nangle = np.pi/2.0\ntranslation = (1,2,3)\nscale_factor = 2.0\nsimilarity = sitk.Similarity3DTransform(scale_factor, axis, angle, translation, rotation_center)\n\naffine = sitk.AffineTransform(3)\naffine.SetMatrix(similarity.GetMatrix())\naffine.SetTranslation(similarity.GetTranslation())\naffine.SetCenter(similarity.GetCenter())\n\n# Apply the transformations to the same set of random points and compare the results\n# (see utility functions at top of notebook).\nprint_transformation_differences(similarity, affine)",
"Scale Transform\nJust as the case was for the similarity transformation above, when the transformations center is not at the origin, instead of a pure anisotropic scaling we also have translation ($T(\\mathbf{x}) = \\mathbf{s}^T\\mathbf{x}-\\mathbf{s}^T\\mathbf{c} + \\mathbf{c}$).",
"# 2D square centered on (0,0).\npoints = [np.array((-1,-1)), np.array((-1,1)), np.array((1,1)), np.array((1,-1))]\n\n# Scale by half in x and 2 in y.\nscale = sitk.ScaleTransform(2, (0.5,2));\n\n# Interactively change the location of the center.\ninteract(display_center_effect, x=(-10,10), y=(-10,10),tx = fixed(scale), point_list = fixed(points), \n xlim = fixed((-10,10)),ylim = fixed((-10,10)));",
"Scale Versor\nThis is not what you would expect from the name (composition of anisotropic scaling and rigid). This is:\n$$T(x) = (R+S)(\\mathbf{x}-\\mathbf{c}) + \\mathbf{t} + \\mathbf{c},\\;\\; \\textrm{where } S= \\left[\\begin{array}{ccc} s_0-1 & 0 & 0 \\ 0 & s_1-1 & 0 \\ 0 & 0 & s_2-1 \\end{array}\\right]$$ \nThere is no natural way of \"promoting\" the similarity transformation to this transformation.",
"scales = (0.5,0.7,0.9)\ntranslation = (1,2,3)\naxis = (0,0,1)\nangle = 0.0\nscale_versor = sitk.ScaleVersor3DTransform(scales, axis, angle, translation)\nprint(scale_versor)",
"Scale Skew Versor\nAgain, not what you expect based on the name, this is not a composition of transformations. This is:\n$$T(x) = (R+S+K)(\\mathbf{x}-\\mathbf{c}) + \\mathbf{t} + \\mathbf{c},\\;\\; \\textrm{where } S = \\left[\\begin{array}{ccc} s_0-1 & 0 & 0 \\ 0 & s_1-1 & 0 \\ 0 & 0 & s_2-1 \\end{array}\\right]\\;\\; \\textrm{and } K = \\left[\\begin{array}{ccc} 0 & k_0 & k_1 \\ k_2 & 0 & k_3 \\ k_4 & k_5 & 0 \\end{array}\\right]$$ \nIn practice this is an over-parametrized version of the affine transform, 15 (scale, skew, versor, translation) vs. 12 parameters (matrix, translation).",
"scale = (2,2.1,3)\nskew = np.linspace(start=0.0, stop=1.0, num=6) #six eqaully spaced values in[0,1], an arbitrary choice\ntranslation = (1,2,3)\nversor = (0,0,0,1.0)\nscale_skew_versor = sitk.ScaleSkewVersor3DTransform(scale, skew, versor, translation)\nprint(scale_skew_versor)",
"Bounded Transformations\nSimpleITK supports two types of bounded non-rigid transformations, BSplineTransform (sparse represntation) and DisplacementFieldTransform (dense representation).\nTransforming a point that is outside the bounds will return the original point - identity transform.",
"#\n# This function displays the effects of the deformable transformation on a grid of points by scaling the\n# initial displacements (either of control points for bspline or the deformation field itself). It does\n# assume that all points are contained in the range(-2.5,-2.5), (2.5,2.5).\n#\ndef display_displacement_scaling_effect(s, original_x_mat, original_y_mat, tx, original_control_point_displacements):\n if tx.GetDimension() !=2:\n raise ValueError('display_displacement_scaling_effect only works in 2D')\n\n plt.scatter(original_x_mat,\n original_y_mat,\n marker='o', \n color='blue', label='original points')\n pointsX = []\n pointsY = []\n tx.SetParameters(s*original_control_point_displacements)\n \n for index, value in np.ndenumerate(original_x_mat):\n px,py = tx.TransformPoint((value, original_y_mat[index]))\n pointsX.append(px) \n pointsY.append(py)\n \n plt.scatter(pointsX,\n pointsY,\n marker='^', \n color='red', label='transformed points')\n plt.legend(loc=(0.25,1.01))\n plt.xlim((-2.5,2.5))\n plt.ylim((-2.5,2.5))",
"BSpline\nUsing a sparse set of control points to control a free form deformation.",
"# Create the transformation (when working with images it is easier to use the BSplineTransformInitializer function\n# or its object oriented counterpart BSplineTransformInitializerFilter).\ndimension = 2\nspline_order = 3\ndirection_matrix_row_major = [1.0,0.0,0.0,1.0] # identity, mesh is axis aligned\norigin = [-1.0,-1.0] \ndomain_physical_dimensions = [2,2]\n\nbspline = sitk.BSplineTransform(dimension, spline_order)\nbspline.SetTransformDomainOrigin(origin)\nbspline.SetTransformDomainDirection(direction_matrix_row_major)\nbspline.SetTransformDomainPhysicalDimensions(domain_physical_dimensions)\nbspline.SetTransformDomainMeshSize((4,3))\n\n# Random displacement of the control points.\noriginalControlPointDisplacements = np.random.random(len(bspline.GetParameters()))\nbspline.SetParameters(originalControlPointDisplacements)\n\n# Apply the bspline transformation to a grid of points \n# starting the point set exactly at the origin of the bspline mesh is problematic as\n# these points are considered outside the transformation's domain,\n# remove epsilon below and see what happens.\nnumSamplesX = 10\nnumSamplesY = 20\n \ncoordsX = np.linspace(origin[0]+np.finfo(float).eps, origin[0] + domain_physical_dimensions[0], numSamplesX)\ncoordsY = np.linspace(origin[1]+np.finfo(float).eps, origin[1] + domain_physical_dimensions[1], numSamplesY)\nXX, YY = np.meshgrid(coordsX, coordsY)\n\ninteract(display_displacement_scaling_effect, s= (-1.5,1.5), original_x_mat = fixed(XX), original_y_mat = fixed(YY),\n tx = fixed(bspline), original_control_point_displacements = fixed(originalControlPointDisplacements)); ",
"DisplacementField\nA dense set of vectors representing the displacment inside the given domain. The most generic representation of a transformation.",
"# Create the displacment field. \n \n# When working with images the safer thing to do is use the image based constructor,\n# sitk.DisplacementFieldTransform(my_image), all the fixed parameters will be set correctly and the displacement\n# field is initialized using the vectors stored in the image. SimpleITK requires that the image's pixel type be \n# sitk.sitkVectorFloat64.\ndisplacement = sitk.DisplacementFieldTransform(2)\nfield_size = [10,20]\nfield_origin = [-1.0,-1.0] \nfield_spacing = [2.0/9.0,2.0/19.0] \nfield_direction = [1,0,0,1] # direction cosine matrix (row major order) \n\n# Concatenate all the information into a single list\ndisplacement.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)\n# Set the interpolater, either sitkLinear which is default or nearest neighbor\ndisplacement.SetInterpolator(sitk.sitkNearestNeighbor)\n\noriginalDisplacements = np.random.random(len(displacement.GetParameters()))\ndisplacement.SetParameters(originalDisplacements)\n\ncoordsX = np.linspace(field_origin[0], field_origin[0]+(field_size[0]-1)*field_spacing[0], field_size[0])\ncoordsY = np.linspace(field_origin[1], field_origin[1]+(field_size[1]-1)*field_spacing[1], field_size[1])\nXX, YY = np.meshgrid(coordsX, coordsY)\n\ninteract(display_displacement_scaling_effect, s= (-1.5,1.5), original_x_mat = fixed(XX), original_y_mat = fixed(YY),\n tx = fixed(displacement), original_control_point_displacements = fixed(originalDisplacements)); ",
"Displacement field transform created from an image. Remember that SimpleITK will clear the image you provide, as shown in the cell below.",
"displacement_image = sitk.Image([64,64], sitk.sitkVectorFloat64)\n# The only point that has any displacement is (0,0)\ndisplacement = (0.5,0.5)\ndisplacement_image[0,0] = displacement\n\nprint('Original displacement image size: ' + point2str(displacement_image.GetSize()))\n\ndisplacement_field_transform = sitk.DisplacementFieldTransform(displacement_image)\n\nprint('After using the image to create a transform, displacement image size: ' + point2str(displacement_image.GetSize()))\n\n# Check that the displacement field transform does what we expect.\nprint('Expected result: {0}\\nActual result:{1}'.format(str(displacement), displacement_field_transform.TransformPoint((0,0))))",
"Composite transform (Transform)\nThe generic SimpleITK transform class. This class can represent both a single transformation (global, local), or a composite transformation (multiple transformations applied one after the other). This is the output typed returned by the SimpleITK registration framework. \nThe choice of whether to use a composite transformation or compose transformations on your own has subtle differences in the registration framework.\nBelow we represent the composite transformation $T_{affine}(T_{rigid}(x))$ in two ways: (1) use a composite transformation to contain the two; (2) combine the two into a single affine transformation. We can use both as initial transforms (SetInitialTransform) for the registration framework (ImageRegistrationMethod). The difference is that in the former case the optimized parameters belong to the rigid transformation and in the later they belong to the combined-affine transformation.",
"# Create a composite transformation: T_affine(T_rigid(x)).\nrigid_center = (100,100,100)\ntheta_x = 0.0\ntheta_y = 0.0\ntheta_z = np.pi/2.0\nrigid_translation = (1,2,3)\nrigid_euler = sitk.Euler3DTransform(rigid_center, theta_x, theta_y, theta_z, rigid_translation)\n\naffine_center = (20, 20, 20)\naffine_translation = (5,6,7) \n\n# Matrix is represented as a vector-like data in row major order.\naffine_matrix = np.random.random(9) \naffine = sitk.AffineTransform(affine_matrix, affine_translation, affine_center)\n\n# Using the composite transformation we just add them in (stack based, first in - last applied).\ncomposite_transform = sitk.Transform(affine)\ncomposite_transform.AddTransform(rigid_euler)\n\n# Create a single transform manually. this is a recipe for compositing any two global transformations\n# into an affine transformation, T_0(T_1(x)):\n# A = A=A0*A1\n# c = c1\n# t = A0*[t1+c1-c0] + t0+c0-c1\nA0 = np.asarray(affine.GetMatrix()).reshape(3,3)\nc0 = np.asarray(affine.GetCenter())\nt0 = np.asarray(affine.GetTranslation())\n\nA1 = np.asarray(rigid_euler.GetMatrix()).reshape(3,3)\nc1 = np.asarray(rigid_euler.GetCenter())\nt1 = np.asarray(rigid_euler.GetTranslation())\n\ncombined_mat = np.dot(A0,A1)\ncombined_center = c1\ncombined_translation = np.dot(A0, t1+c1-c0) + t0+c0-c1\ncombined_affine = sitk.AffineTransform(combined_mat.flatten(), combined_translation, combined_center)\n\n# Check if the two transformations are equivalent.\nprint('Apply the two transformations to the same point cloud:')\nprint('\\t', end='')\nprint_transformation_differences(composite_transform, combined_affine)\n\nprint('Transform parameters:')\nprint('\\tComposite transform: ' + point2str(composite_transform.GetParameters(),2))\nprint('\\tCombined affine: ' + point2str(combined_affine.GetParameters(),2))\n\nprint('Fixed parameters:')\nprint('\\tComposite transform: ' + point2str(composite_transform.GetFixedParameters(),2))\nprint('\\tCombined affine: ' + point2str(combined_affine.GetFixedParameters(),2))",
"Composite transforms enable a combination of a global transformation with multiple local/bounded transformations. This is useful if we want to apply deformations only in regions that deform while other regions are only effected by the global transformation.\nThe following code illustrates this, where the whole region is translated and subregions have different deformations.",
"# Global transformation.\ntranslation = sitk.TranslationTransform(2,(1.0,0.0))\n\n# Displacement in region 1.\ndisplacement1 = sitk.DisplacementFieldTransform(2)\nfield_size = [10,20]\nfield_origin = [-1.0,-1.0] \nfield_spacing = [2.0/9.0,2.0/19.0] \nfield_direction = [1,0,0,1] # direction cosine matrix (row major order) \n\n# Concatenate all the information into a single list.\ndisplacement1.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)\ndisplacement1.SetParameters(np.ones(len(displacement1.GetParameters())))\n\n# Displacement in region 2.\ndisplacement2 = sitk.DisplacementFieldTransform(2)\nfield_size = [10,20]\nfield_origin = [1.0,-3] \nfield_spacing = [2.0/9.0,2.0/19.0] \nfield_direction = [1,0,0,1] #direction cosine matrix (row major order) \n\n# Concatenate all the information into a single list.\ndisplacement2.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)\ndisplacement2.SetParameters(-1.0*np.ones(len(displacement2.GetParameters())))\n\n# Composite transform which applies the global and local transformations.\ncomposite = sitk.Transform(translation)\ncomposite.AddTransform(displacement1)\ncomposite.AddTransform(displacement2)\n\n# Apply the composite transformation to points in ([-1,-3],[3,1]) and \n# display the deformation using a quiver plot.\n \n# Generate points.\nnumSamplesX = 10\nnumSamplesY = 10 \ncoordsX = np.linspace(-1.0, 3.0, numSamplesX)\ncoordsY = np.linspace(-3.0, 1.0, numSamplesY)\nXX, YY = np.meshgrid(coordsX, coordsY)\n\n# Transform points and compute deformation vectors.\npointsX = np.zeros(XX.shape)\npointsY = np.zeros(XX.shape)\nfor index, value in np.ndenumerate(XX):\n px,py = composite.TransformPoint((value, YY[index]))\n pointsX[index]=px - value \n pointsY[index]=py - YY[index]\n \nplt.quiver(XX, YY, pointsX, pointsY); ",
"Writing and Reading\nThe SimpleITK.ReadTransform() returns a SimpleITK.Transform . The content of the file can be any of the SimpleITK transformations or a composite (set of tranformations).",
"import os\n\n# Create a 2D rigid transformation, write it to disk and read it back.\nbasic_transform = sitk.Euler2DTransform()\nbasic_transform.SetTranslation((1,2))\nbasic_transform.SetAngle(np.pi/2)\n\nfull_file_name = os.path.join(OUTPUT_DIR, 'euler2D.tfm')\n\nsitk.WriteTransform(basic_transform, full_file_name)\n\n# The ReadTransform function returns an sitk.Transform no matter the type of the transform \n# found in the file (global, bounded, composite).\nread_result = sitk.ReadTransform(full_file_name)\n\nprint('Different types: '+ str(type(read_result) != type(basic_transform)))\nprint_transformation_differences(basic_transform, read_result)\n\n\n# Create a composite transform then write and read.\ndisplacement = sitk.DisplacementFieldTransform(2)\nfield_size = [10,20]\nfield_origin = [-10.0,-100.0] \nfield_spacing = [20.0/(field_size[0]-1),200.0/(field_size[1]-1)] \nfield_direction = [1,0,0,1] #direction cosine matrix (row major order)\n\n# Concatenate all the information into a single list.\ndisplacement.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)\ndisplacement.SetParameters(np.random.random(len(displacement.GetParameters())))\n\ncomposite_transform = sitk.Transform(basic_transform)\ncomposite_transform.AddTransform(displacement)\n\nfull_file_name = os.path.join(OUTPUT_DIR, 'composite.tfm')\n\nsitk.WriteTransform(composite_transform, full_file_name)\nread_result = sitk.ReadTransform(full_file_name)\n\nprint_transformation_differences(composite_transform, read_result) "
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
albahnsen/PracticalMachineLearningClass | notebooks/05-SVM.ipynb | mit | [
"05 - Support Vector Machines\nby Alejandro Correa Bahnsen and Jesus Solano\nversion 1.4, January 2019\nPart of the class Practical Machine Learning\nThis notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Rick Muller, Sandia National Laboratories\nPreviously we introduced supervised machine learning.\nThere are many supervised learning algorithms available; here we'll go into brief detail one of the most powerful and interesting methods: Support Vector Machines (SVMs).",
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\n\nplt.style.use('bmh')",
"Motivating Support Vector Machines\nSupport Vector Machines (SVMs) are a powerful supervised learning algorithm used for classification or for regression. SVMs are a discriminative classifier: that is, they draw a boundary between clusters of data.\nLet's show a quick example of support vector classification. First we need to create a dataset:",
"from sklearn.datasets.samples_generator import make_blobs\nX, y = make_blobs(n_samples=50, centers=2,\n random_state=0, cluster_std=0.60)\nplt.figure(figsize=(8,8))\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50);",
"A discriminative classifier attempts to draw a line between the two sets of data. Immediately we see a problem: such a line is ill-posed! For example, we could come up with several possibilities which perfectly discriminate between the classes in this example:",
"xfit = np.linspace(-1, 3.5)\nplt.figure(figsize=(8,8))\n\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50)\n\nfor m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:\n plt.plot(xfit, m * xfit + b, '-k')\n\nplt.xlim(-1, 3.5);",
"These are three very different separaters which perfectly discriminate between these samples. Depending on which you choose, a new data point will be classified almost entirely differently!\nHow can we improve on this?\nSupport Vector Machines: Maximizing the Margin\nSupport vector machines are one way to address this.\nWhat support vector machined do is to not only draw a line, but consider a region about the line of some given width. Here's an example of what it might look like:",
"xfit = np.linspace(-1, 3.5)\nplt.figure(figsize=(8,8))\n\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50)\n\nfor m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:\n yfit = m * xfit + b\n plt.plot(xfit, yfit, '-k')\n plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none', color='#AAAAAA', alpha=0.4)\n\nplt.xlim(-1, 3.5);",
"Notice here that if we want to maximize this width, the middle fit is clearly the best.\nThis is the intuition of support vector machines, which optimize a linear discriminant model in conjunction with a margin representing the perpendicular distance between the datasets.\nFitting a Support Vector Machine\nNow we'll fit a Support Vector Machine Classifier to these points. While the mathematical details of the likelihood model are interesting, we'll let you read about those elsewhere. Instead, we'll just treat the scikit-learn algorithm as a black box which accomplishes the above task.",
"from sklearn.svm import SVC # \"Support Vector Classifier\"\nclf = SVC(kernel='linear')\nclf.fit(X, y)",
"To better visualize what's happening here, let's create a quick convenience function that will plot SVM decision boundaries for us:",
"import warnings\nwarnings.filterwarnings('ignore')\n\ndef plot_svc_decision_function(clf, ax=None):\n \"\"\"Plot the decision function for a 2D SVC\"\"\"\n if ax is None:\n ax = plt.gca()\n x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)\n y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)\n Y, X = np.meshgrid(y, x)\n P = np.zeros_like(X)\n for i, xi in enumerate(x):\n for j, yj in enumerate(y):\n P[i, j] = clf.decision_function([[xi, yj]])\n # plot the margins\n ax.contour(X, Y, P, colors='k',\n levels=[-1, 0, 1], alpha=0.5,\n linestyles=['--', '-', '--'])\n\nplt.figure(figsize=(8,8))\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50)\nplot_svc_decision_function(clf)",
"Notice that the dashed lines touch a couple of the points: these points are the pivotal pieces of this fit, and are known as the support vectors (giving the algorithm its name).\nIn scikit-learn, these are stored in the support_vectors_ attribute of the classifier:",
"plt.figure(figsize=(8,8))\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50)\nplot_svc_decision_function(clf)\nplt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],\n s=200, facecolors='none');",
"Let's use IPython's interact functionality to explore how the distribution of points affects the support vectors and the discriminative fit.\n(This is only available in IPython 2.0+, and will not work in a static view)",
"from IPython.html.widgets import interact\n\ndef plot_svm(N=10):\n X, y = make_blobs(n_samples=200, centers=2,\n random_state=0, cluster_std=0.60)\n X = X[:N]\n y = y[:N]\n clf = SVC(kernel='linear')\n clf.fit(X, y)\n plt.figure(figsize=(8,8))\n plt.scatter(X[:, 0], X[:, 1], c=y, s=50)\n plt.xlim(-1, 4)\n plt.ylim(-1, 6)\n plot_svc_decision_function(clf, plt.gca())\n plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],\n s=200, facecolors='none')\n \ninteract(plot_svm, N=[10, 200], kernel='linear');",
"Notice the unique thing about SVM is that only the support vectors matter: that is, if you moved any of the other points without letting them cross the decision boundaries, they would have no effect on the classification results!\nGoing further: Kernel Methods\nWhere SVM gets incredibly exciting is when it is used in conjunction with kernels.\nTo motivate the need for kernels, let's look at some data which is not linearly separable:",
"from sklearn.datasets.samples_generator import make_circles\nX, y = make_circles(100, factor=.1, noise=.1)\n\nclf = SVC(kernel='linear').fit(X, y)\nplt.figure(figsize=(8,8))\n\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50)\n# plot_svc_decision_function(clf);",
"Clearly, no linear discrimination will ever separate these data.\nOne way we can adjust this is to apply a kernel, which is some functional transformation of the input data.\nFor example, one simple model we could use is a radial basis function",
"r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))",
"If we plot this along with our data, we can see the effect of it:",
"from mpl_toolkits import mplot3d\n\ndef plot_3D(elev=30, azim=30):\n plt.figure(figsize=(8,8))\n ax = plt.subplot(projection='3d')\n ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50)\n ax.view_init(elev=elev, azim=azim)\n ax.set_xlabel('x')\n ax.set_ylabel('y')\n ax.set_zlabel('r')\n\ninteract(plot_3D, elev=[-90, 90], azip=(-180, 180));",
"We can see that with this additional dimension, the data becomes trivially linearly separable!\nThis is a relatively simple kernel; SVM has a more sophisticated version of this kernel built-in to the process. This is accomplished by using kernel='rbf', short for radial basis function:",
"clf = SVC(kernel='rbf')\nclf.fit(X, y)\n\nplt.figure(figsize=(8,8))\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50)\nplot_svc_decision_function(clf)\n",
"Here there are effectively $N$ basis functions: one centered at each point! Through a clever mathematical trick, this computation proceeds very efficiently using the \"Kernel Trick\", without actually constructing the matrix of kernel evaluations."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
steinam/teacher | jup_notebooks/datenbanken/SubSelects.ipynb | mit | [
"Subselects",
"%load_ext sql\n\n%sql mysql://steinam:steinam@localhost/sommer_2015",
"Sommer 2015\nDatenmodell\n\nAufgabe\nErstellen Sie eine Abfrage, mit der Sie die Daten aller Kunden, die Anzahl deren Aufträge, die Anzahl der Fahrten und die Summe der Streckenkilometer erhalten. Die Ausgabe soll nach Kunden-PLZ absteigend sortiert sein.\n\nLösung\nmysql\nselect k.kd_id, \n (select count(a.Au_ID) from auftrag a \n where a.au_kd_id = k.kd_id ) as AnzahlAuftr,\n (select count(f.`f_id`) from fahrten f, auftrag a\n where f.f_au_id = a.au_id and a.`au_kd_id` = k.`kd_id`) as AnzahlFahrt,\n (select sum(ts.ts_strecke) from teilstrecke ts, fahrten f, auftrag a\n where ts.ts_f_id = f.f_id and a.au_id = f.`f_au_id` and a.`au_kd_id` = k.`kd_id`) as SumStrecke\nfrom kunde k\norder by k.kd_plz;",
"%%sql \nselect k.kd_id, k.kd_plz, \n (select count(a.Au_ID) from auftrag a where a.au_kd_id = k.kd_id ) as AnzahlAuftr,\n (select count(f.`f_id`) from fahrten f, auftrag a \n where f.f_au_id = a.au_id and a.`au_kd_id` = k.`kd_id`) as AnzahlFahrt, \n (select sum(ts.ts_strecke) from teilstrecke ts, fahrten f, auftrag a \n where ts.ts_f_id = f.f_id and a.au_id = f.`f_au_id` and a.`au_kd_id` = k.`kd_id`) as SumStrecke \nfrom kunde k order by k.kd_plz;\n\n%sql select count(*) as AnzahlFahrten from fahrten",
"Warum geht kein Join ??\nmysql\nselect k.kd_id, k.`kd_firma`, k.`kd_plz`, \n count(a.Au_ID) as AnzAuftrag, \n count(f.f_id) as AnzFahrt, \n sum(ts.ts_strecke) as SumStrecke\nfrom kunde k left join auftrag a\n on k.`kd_id` = a.`au_kd_id`\nleft join fahrten f\n on a.`au_id` = f.`f_au_id`\nleft join teilstrecke ts\n on ts.`ts_f_id` = f.`f_id`\ngroup by k.kd_id \norder by k.`kd_plz`",
"%%sql \nselect k.kd_id, k.`kd_firma`, k.`kd_plz`, \n count(distinct a.Au_ID) as AnzAuftrag, \n count(distinct f.f_id) as AnzFahrt, \n sum(ts.ts_strecke) as SumStrecke\nfrom kunde k left join auftrag a\n on k.`kd_id` = a.`au_kd_id`\nleft join fahrten f\n on a.`au_id` = f.`f_au_id`\nleft join teilstrecke ts\n on ts.`ts_f_id` = f.`f_id`\ngroup by k.kd_id \norder by k.`kd_plz`\n",
"Der Ansatz mit Join funktioniert in dieser Form nicht, da spätestens beim 2. Join die Firma Trappo mit 2 Datensätzen aus dem 1. Join verknüpft wird. Deshalb wird auch die Anzahl der Fahren verdoppelt. Dies wiederholt sich beim 3. Join.\nDie folgende Abfrage zeigt ohne die Aggregatfunktionen das jeweilige Ausgangsergebnis\nmysql\nselect k.kd_id, k.`kd_firma`, k.`kd_plz`, a.`au_id`\nfrom kunde k left join auftrag a\n on k.`kd_id` = a.`au_kd_id`\nleft join fahrten f\n on a.`au_id` = f.`f_au_id`\nleft join teilstrecke ts\n on ts.`ts_f_id` = f.`f_id`\norder by k.`kd_plz`",
"%%sql\nSELECT kunde.Kd_ID, kunde.Kd_Firma, kunde.Kd_Strasse, kunde.Kd_PLZ, \nkunde.Kd_Ort, COUNT(distinct auftrag.Au_ID) AS AnzahlAuftr, COUNT(distinct fahrten.F_ID) AS AnzahlFahrt, SUM(teilstrecke.Ts_Strecke) AS SumStrecke\nFROM kunde\nLEFT JOIN auftrag ON auftrag.Au_Kd_ID = kunde.Kd_ID\nLEFT JOIN fahrten ON fahrten.F_Au_ID = auftrag.Au_ID \nLEFT JOIN Teilstrecke ON teilstrecke.Ts_F_ID = fahrten.F_ID \nGROUP BY kunde.Kd_ID\nORDER BY kunde.Kd_PLZ desc;",
"Winter 2015\nDatenmodell\n\nHinweis: In Rechnung gibt es zusätzlich ein Feld Rechnung.Kd_ID\nAufgabe\nErstellen Sie eine SQL-Abfrage, mit der alle Kunden wie folgt aufgelistet werden, bei denen eine Zahlungsbedingung mit einem Skontosatz größer 3 % ist, mit Ausgabe der Anzahl der hinterlegten Rechnungen aus dem Jahr 2015.\n\nLösung",
"%sql mysql://steinam:steinam@localhost/winter_2015",
"``mysql\nselect count(rechnung.Rg_ID), kunde.Kd_Namefrom rechnung inner join kunde\n onrechnung.Rg_KD_ID= kunde.Kd_IDinner joinzahlungsbedingungon kunde.Kd_Zb_ID=zahlungsbedingung.Zb_IDwherezahlungsbedingung.Zb_SkontoProzent> 3.0\n and year(rechnung.Rg_Datum) = 2015\ngroup by Kunde.Kd_Name`\n```",
"%%sql \nselect count(rechnung.`Rg_ID`), kunde.`Kd_Name` from rechnung \n inner join kunde on `rechnung`.`Rg_KD_ID` = kunde.`Kd_ID` \n inner join `zahlungsbedingung` on kunde.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID` \n where `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0 \n and year(`rechnung`.`Rg_Datum`) = 2015 group by Kunde.`Kd_Name`",
"Es geht auch mit einem Subselect\n``mysql\n select kd.Kd_Name, \n (select COUNT(*) from Rechnung as R\n where R.Rg_KD_ID= KD.Kd_IDand year(R.Rg_Datum`) = 2015)\nfrom Kunde kd inner join `zahlungsbedingung` \non kd.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`\n\nand zahlungsbedingung.Zb_SkontoProzent > 3.0\n```",
"%%sql \nselect kd.`Kd_Name`, \n(select COUNT(*) from Rechnung as R \n where R.`Rg_KD_ID` = KD.`Kd_ID` and year(R.`Rg_Datum`) = 2015) as Anzahl\nfrom Kunde kd inner join `zahlungsbedingung` \n on kd.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID` \n and `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0\n\n%%sql\n-- wortmann und prinz\nselect \n\t(select count(rechnung.rg_id) from rechnung \n\t\twhere\n\t\t\trechnung.rg_kd_id = kunde.kd_id\n\t\t\tand (select zb_skontoprozent from zahlungsbedingung where zahlungsbedingung.zb_id = kunde.kd_zb_id) > 3\n\t\t\tand YEAR(rechnung.rg_datum) = 2015\n\t) as AnzRechnungen,\n\tkunde.*\nfrom kunde;\n\n%%sql\nSELECT COUNT(r.rg_id) AS AnzRechnung, k.*\nFROM kunde AS k\nLEFT JOIN rechnung AS r ON k.kd_id = r.Rg_KD_ID\nWHERE k.kd_zb_id IN \n (SELECT zb_id FROM zahlungsbedingung WHERE zb_skontoprozent > 3) AND YEAR(r.Rg_Datum) = 2015\nGROUP BY k.Kd_ID",
"Versicherung\nZeigen Sie zu jedem Mitarbeiter der Abteilung „Vertrieb“ den ersten Vertrag (mit einigen Angaben) an, den er abgeschlossen hat. Der Mitarbeiter soll mit ID und Name/Vorname angezeigt werden.\nDatenmodell Versicherung",
"%sql -- your code goes here",
"Lösung",
"%sql mysql://steinam:steinam@localhost/versicherung_complete\n\n%%sql \nselect min(`vv`.`Abschlussdatum`) as 'Erster Abschluss', `vv`.`Mitarbeiter_ID`\nfrom `versicherungsvertrag` vv inner join mitarbeiter m \n on vv.`Mitarbeiter_ID` = m.`ID`\nwhere vv.`Mitarbeiter_ID` in ( select m.`ID` from mitarbeiter m \n inner join Abteilung a\n on m.`Abteilung_ID` = a.`ID`) \ngroup by vv.`Mitarbeiter_ID`\n\n%%sql\n-- rm\nSELECT m.ID, m.Name, m.Vorname, v.*\nFROM versicherungsvertrag AS v\nJOIN mitarbeiter AS m ON m.ID = v.Mitarbeiter_ID\nWHERE v.Abschlussdatum = (SELECT min(v.Abschlussdatum) \n FROM versicherungsvertrag AS v WHERE v.Mitarbeiter_ID = m.ID \n )\nGROUP BY v.Mitarbeiter_ID\n\n\n\n\n%%sql\n-- original\nSELECT vv.ID as VV, vv.Vertragsnummer, vv.Abschlussdatum, vv.Art,\nmi.ID as MI, mi.Name, mi.Vorname\nfrom Versicherungsvertrag vv\nright join ( select MIN(vv2.ID) as ID, vv2.Mitarbeiter_ID\nfrom Versicherungsvertrag vv2\ngroup by vv2.Mitarbeiter_id ) Temp\non Temp.ID = vv.ID\nright join Mitarbeiter mi on mi.ID = vv.Mitarbeiter_ID\nwhere mi.Abteilung_ID = ( select ID from Abteilung\nwhere Bezeichnung = 'Vertrieb' );\n\n%%sql\n-- rm\nSELECT m.ID, m.Name, m.Vorname, v.*\nFROM versicherungsvertrag AS v\nJOIN mitarbeiter AS m ON m.ID = v.Mitarbeiter_ID\nGROUP BY v.Mitarbeiter_ID\nORDER BY v.Abschlussdatum ASC\n\n%%sql\n-- ruppert_hartmann\n\nSelect mitarbeiter.ID, mitarbeiter.Name, mitarbeiter.Vorname, \n\t\t\tmitarbeiter.Personalnummer,\n\t\t\tabteilung.Bezeichnung, \n\t\t\tmin(versicherungsvertrag.abschlussdatum), \n versicherungsvertrag.vertragsnummer\nFROM mitarbeiter\nLEFT JOIN abteilung ON Abteilung_ID = Abteilung.ID\nLEFT JOIN versicherungsvertrag ON versicherungsvertrag.Mitarbeiter_ID = mitarbeiter.ID\nWHERE abteilung.Bezeichnung = 'Vertrieb'\nGROUP BY mitarbeiter.ID \n\n\nresult = _\n\nresult"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
projectmesa/mesa-examples | examples/Tutorial-Boltzmann_Wealth_Model/.ipynb_checkpoints/Introduction to Mesa Tutorial Code-checkpoint.ipynb | apache-2.0 | [
"Introduction to Mesa Tutorial Code\nThis Notebook contains code corresponding to the Intro to Mesa tutorial, which you should check out for the full explanation and documentation.",
"# Use matplotlib for inline graphing\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Simple Model\nThis section corresponds to the code in the Running Your First Model section of the tutorial.\nFirst, import the base classes we'll use",
"from mesa import Agent, Model\nfrom mesa.time import RandomActivation\nimport random",
"Next, create the agent and model classes:",
"class MoneyAgent(Agent):\n \"\"\" An agent with fixed initial wealth.\"\"\"\n def __init__(self, unique_id):\n self.unique_id = unique_id\n self.wealth = 1\n\n def step(self, model):\n if self.wealth == 0:\n return\n other_agent = random.choice(model.schedule.agents)\n other_agent.wealth += 1\n self.wealth -= 1\n\n\nclass MoneyModel(Model):\n \"\"\"A model with some number of agents.\"\"\"\n def __init__(self, N):\n self.running = True\n self.num_agents = N\n self.schedule = RandomActivation(self)\n # Create agents\n for i in range(self.num_agents):\n a = MoneyAgent(i)\n self.schedule.add(a)\n\n def step(self):\n '''Advance the model by one step.'''\n self.schedule.step()",
"Create a model and run it for 10 steps:",
"model = MoneyModel(10)\nfor i in range(10):\n model.step()",
"And display a histogram of agent wealths:",
"agent_wealth = [a.wealth for a in model.schedule.agents]\nplt.hist(agent_wealth)",
"Create and run 100 models, and visualize the wealth distribution across all of them:",
"all_wealth = []\nfor j in range(100):\n # Run the model\n model = MoneyModel(10)\n for i in range(10):\n model.step()\n # Store the results\n for agent in model.schedule.agents:\n all_wealth.append(agent.wealth)\n\nplt.hist(all_wealth, bins=range(max(all_wealth)+1))",
"Adding space\nThis section puts the agents on a grid, corresponding to the Adding Space section of the tutorial.\nFor this, we need to import the grid class:",
"from mesa.space import MultiGrid",
"Create the new model object. (Note that this overwrites the MoneyModel object created above)",
"class MoneyModel(Model):\n \"\"\"A model with some number of agents.\"\"\"\n def __init__(self, N, width, height):\n self.running = True\n self.num_agents = N\n self.grid = MultiGrid(height, width, True)\n self.schedule = RandomActivation(self)\n # Create agents\n for i in range(self.num_agents):\n a = MoneyAgent(i)\n self.schedule.add(a)\n # Add the agent to a random grid cell\n x = random.randrange(self.grid.width)\n y = random.randrange(self.grid.height)\n self.grid.place_agent(a, (x, y))\n\n def step(self):\n self.schedule.step()",
"And create the agent to go along with it:",
"class MoneyAgent(Agent):\n \"\"\" An agent with fixed initial wealth.\"\"\"\n def __init__(self, unique_id):\n self.unique_id = unique_id\n self.wealth = 1\n\n def move(self, model):\n possible_steps = model.grid.get_neighborhood(self.pos, moore=True, include_center=False)\n new_position = random.choice(possible_steps)\n model.grid.move_agent(self, new_position)\n\n def give_money(self, model):\n cellmates = model.grid.get_cell_list_contents([self.pos])\n if len(cellmates) > 1:\n other = random.choice(cellmates)\n other.wealth += 1\n self.wealth -= 1\n\n def step(self, model):\n self.move(model)\n if self.wealth > 0:\n self.give_money(model)",
"Create a model with 50 agents and a 10x10 grid, and run for 20 steps",
"model = MoneyModel(50, 10, 10)\nfor i in range(20):\n model.step()",
"Visualize the number of agents on each grid cell:",
"import numpy as np\n\nagent_counts = np.zeros((model.grid.width, model.grid.height))\nfor cell in model.grid.coord_iter():\n cell_content, x, y = cell\n agent_count = len(cell_content)\n agent_counts[x][y] = agent_count\nplt.imshow(agent_counts, interpolation='nearest')\nplt.colorbar()\n ",
"Collecting Data\nAdd a Data Collector to the model, as explained in the corresponding section of the tutorial.\nFirst, import the DataCollector",
"from mesa.datacollection import DataCollector",
"Compute the agents' Gini coefficient, measuring inequality.",
"def compute_gini(model):\n '''\n Compute the current Gini coefficient.\n \n Args:\n model: A MoneyModel instance\n Returns:\n The Gini Coefficient for the model's current step.\n '''\n agent_wealths = [agent.wealth for agent in model.schedule.agents]\n x = sorted(agent_wealths)\n N = model.num_agents\n B = sum( xi * (N-i) for i,xi in enumerate(x) ) / (N*sum(x))\n return (1 + (1/N) - 2*B)",
"This MoneyModel is identical to the one above, except for the self.datacollector = ... line at the end of the __init__ method, and the collection in step.",
"class MoneyModel(Model):\n \"\"\"A model with some number of agents.\"\"\"\n def __init__(self, N, width, height):\n self.running = True\n self.num_agents = N\n self.grid = MultiGrid(height, width, True)\n self.schedule = RandomActivation(self)\n # Create agents\n for i in range(self.num_agents):\n a = MoneyAgent(i)\n self.schedule.add(a)\n # Add the agent to a random grid cell\n x = random.randrange(self.grid.width)\n y = random.randrange(self.grid.height)\n self.grid.place_agent(a, (x, y))\n \n # New addition: add a DataCollector:\n self.datacollector = DataCollector(model_reporters={\"Gini\": compute_gini},\n agent_reporters={\"Wealth\": lambda a: a.wealth})\n\n def step(self):\n self.datacollector.collect(self) # Collect the data before the agents run.\n self.schedule.step()",
"Now instantiate a model, run it for 100 steps...",
"model = MoneyModel(50, 10, 10)\nfor i in range(100):\n model.step()",
"... And collect and plot the data it generated:",
"gini = model.datacollector.get_model_vars_dataframe()\ngini.head()\n\ngini.plot()\n\nagent_wealth = model.datacollector.get_agent_vars_dataframe()\nagent_wealth.head()\n\nend_wealth = agent_wealth.xs(99, level=\"Step\")[\"Wealth\"]\nend_wealth.hist(bins=range(agent_wealth.Wealth.max()+1))\n\none_agent_wealth = agent_wealth.xs(14, level=\"AgentID\")\none_agent_wealth.Wealth.plot()",
"Batch Run\nRun a parameter sweep, as explained in the Batch Run tutorial section.\nImport the Mesa BatchRunner:",
"from mesa.batchrunner import BatchRunner",
"Set up the batch run:",
"parameters = {\"height\": 10, \"width\": 10, \"N\": range(10, 500, 10)}\n\nbatch_run = BatchRunner(MoneyModel, parameters, iterations=5, max_steps=100,\n model_reporters={\"Gini\": compute_gini})",
"Run the parameter sweep; this step might take a while:",
"batch_run.run_all()",
"Export and plot the results:",
"run_data = batch_run.get_model_vars_dataframe()\nrun_data.head()\nplt.scatter(run_data.N, run_data.Gini)\nplt.xlabel(\"Number of agents\")\nplt.ylabel(\"Gini Coefficient\")",
"The final tutorial section, on building and running a browser-based interactive visualization, isn't intended to be run from within a Jupyter Notebook. Shut down the notebook and follow the tutorial from there!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jserenson/Python_Bootcamp | Advanced Dictionaries.ipynb | gpl-3.0 | [
"Advanced Dictionaries\nUnlike some of the other Data Structures we've worked with, most of the really useful methods available to us in Dictionaries have already been explored throughout this course. Here we will touch on just a few more for good measure:",
"d = {'k1':1,'k2':2}",
"Dictionary Comprehensions\nJust like List Comprehensions, Dictionary Data Types also support their own version of comprehension for quick creation. It is not as commonly used as List Comprehensions, but the syntax is:",
"{x:x**2 for x in range(10)}",
"One of the reasons it is not as common is the difficulty in structuring the key names that are not based off the values.\nIteration over keys,values, and items\nDictionaries can be iterated over using the iter methods available in a dictionary. For example:",
"for k in d.iterkeys():\n print k\n\nfor v in d.itervalues():\n print v\n\nfor item in d.iteritems():\n print item",
"view items,keys and values\nYou can use the view methods to view items keys and values. For example:",
"d.viewitems()\n\nd.viewkeys()\n\nd.viewvalues()",
"Great! You should now feel very comfortable using the variety of methods available to you in Dictionaries!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |