repo_name | path | license | cells | types
---|---|---|---|---|
eduardojvieira/Curso-Python-MEC-UCV | 4-matplotlib.ipynb | mit | [
"<table width=\"100%\" border=\"0\">\n <tr> \n <td><img src=\"./images/ing.png\" alt=\"\" align=\"left\" /></td>\n <td><img src=\"./images/ucv.png\" alt=\"\" align=\"center\" height=\"100\" width=\"100\" /></td>\n <td><img src=\"./images/mec.png\" alt=\"\" align=\"right\"/></td>\n </tr>\n</table>\n\n<br>\n<h1 style=\"text-align: center;\"> Curso de Python para Ingenieros Mecánicos </h1>\n<h3 style=\"text-align: center;\"> Por: Eduardo Vieira</h3>\n<br>\n<br>\n<h1 style=\"text-align: center;\"> Visualización con matplotlib </h1>\n<br>\nDespués de estudiar la sintaxis de Python y empezar a manejar datos numéricos de manera un poco más profesional, ha llegado el momento de visualizarlos. Con la biblioteca matplotlib podemos crear gráficos de muy alta calidad y altamente personalizables.\nmatplotlib es una biblioteca muy potente que requiere tiempo de práctica para dominarla. Vamos a empezar por lo más sencillo.\n¿Qué es matplotlib?\n\nEstándar de facto para visualización en Python\nPretende ser similar a las funciones de visualización de MATLAB\nDiferentes formas de usarla: interfaz pyplot y orientada a objetos\n\nLo primero que vamos a hacer es activar el modo inline - de esta manera las figuras aparecerán automáticamente incrustadas en el notebook.",
"%matplotlib inline",
"Importamos los paquetes necesarios:",
"import numpy as np\nimport matplotlib.pyplot as plt",
"La biblioteca matplotlib es gigantesca y es difícil hacerse una idea global de todas sus posibilidades en una primera toma de contacto. Es recomendable tener a mano la documentación y la galería (http://matplotlib.org/gallery.html#pylab_examples):\nInterfaz pyplot\nLa interfaz pyplot proporciona una serie de funciones que operan sobre un estado global - es decir, nosotros no especificamos sobre qué gráfica o ejes estamos actuando. Es una forma rápida y cómoda de crear gráficas pero perdemos parte del control.\nFunción plot\nEl paquete pyplot se suele importar bajo el alias plt, de modo que todas las funciones se acceden a través de plt.<funcion>. La función más básica es la función plot:",
"plt\n\nplt.plot([0.0, 0.1, 0.2, 0.7, 0.9], [1, -2, 3, 4, 1])",
"La función plot recibe una sola lista (si queremos especificar los valores y) o dos listas (si especificamos x e y). Naturalmente si especificamos dos listas ambas tienen que tener la misma longitud.\nLa tarea más habitual a la hora de trabajar con matplotlib es representar una función. Lo que tendremos que hacer es definir un dominio y evaluarla en dicho dominio. Por ejemplo:\n$$ f(x) = e^{-x^2} $$",
"def f(x):\n return np.exp(-x ** 2)",
"Definimos el dominio con la función np.linspace, que crea un vector de puntos equiespaciados:",
"x = np.linspace(-1, 3, 100)",
"Y representamos la función:",
"plt.plot(x, f(x), label=\"Función f(x)\")\nplt.xlabel(\"Eje $x$\")\nplt.ylabel(\"$f(x)$\")\nplt.legend()\nplt.title(\"Función $f(x)$\")",
"Notamos varias cosas:\n\nCon diversas llamadas a funciones dentro de plt. se actualiza el gráfico actual. Esa es la forma de trabajar con la interfaz pyplot.\nPodemos añadir etiquetas, y escribir $\\LaTeX$ en ellas. Tan solo hay que encerrarlo entre signos de dólar $$.\nAñadiendo como argumento label podemos definir una leyenda.\n\nPersonalización\nLa función plot acepta una serie de argumentos para personalizar el aspecto de la función. Con una letra podemos especificar el color, y con un símbolo el tipo de línea.",
"plt.plot(x, f(x), 'ro')\nplt.plot(x, 1 - f(x), 'g--')",
"Esto en realidad son códigos abreviados, que se corresponden con argumentos de la función plot:",
"plt.plot(x, f(x), color='red', linestyle='', marker='o')\nplt.plot(x, 1 - f(x), c='g', ls='--')",
"La lista de posibles argumentos y abreviaturas está disponible en la documentación de la función plot http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot.\nMás personalización, pero a lo loco\nDesde matplotlib 1.4 se puede manipular fácilmente la apariencia de la gráfica usando estilos. Para ver qué estilos hay disponibles, escribiríamos plt.style.available.",
"plt.style.available",
"No hay muchos pero podemos crear los nuestros. Para activar uno de ellos, usamos plt.style.use. ¡Aquí va el que uso yo! https://gist.github.com/Juanlu001/edb2bf7b583e7d56468a",
"#plt.style.use(\"ggplot\") # Afecta a todos los plots",
"<div class=\"alert alert-warning\">No he sido capaz de encontrar una manera fácil de volver a la apariencia por defecto en el notebook. A ver qué dicen los desarrolladores (https://github.com/ipython/ipython/issues/6707) ¡pero de momento si quieres volver a como estaba antes toca reiniciar el notebook!</div>\n\nPara emplear un estilo solo a una porción del código, creamos un bloque with plt.style.context(\"STYLE\"):",
"with plt.style.context('ggplot'):\n plt.plot(x, f(x))\n plt.plot(x, 1 - f(x))",
"Y hay otro tipo de personalización más loca todavía:",
"with plt.xkcd():\n plt.plot(x, f(x))\n plt.plot(x, 1 - f(x))\n plt.xlabel(\"Eje x\")",
"¡Nunca imitar a XKCD fue tan fácil! http://xkcd.com/353/\nOtros tipo de gráficas\nLa función scatter muestra una nube de puntos, con posibilidad de variar también el tamaño y el color.",
"N = 100\nx = np.random.randn(N)\ny = np.random.randn(N)\n\nplt.scatter(x, y)",
"Con s y c podemos modificar el tamaño y el color respectivamente. Para el color, a cada valor numérico se le asigna un color a través de un mapa de colores; ese mapa se puede cambiar con el argumento cmap. Esa correspondencia se puede visualizar llamando a la función colorbar.",
"s = np.abs(50 + 50 * np.random.randn(N))\nc = np.random.randn(N)\n\nplt.scatter(x, y, s=s, c=c, cmap=plt.cm.Blues)\nplt.colorbar()\n\nplt.scatter(x, y, s=s, c=c, cmap=plt.cm.Oranges)\nplt.colorbar()",
"matplotlib trae por defecto muchos mapas de colores. En las SciPy Lecture Notes dan una lista de todos ellos (http://scipy-lectures.github.io/intro/matplotlib/matplotlib.html#colormaps)\n\nLa función contour se utiliza para visualizar las curvas de nivel de funciones de dos variables y está muy ligada a la función np.meshgrid. Veamos un ejemplo:\n$$f(x) = x^2 - y^2$$",
"def f(x, y):\n return x ** 2 - y ** 2\n\nx = np.linspace(-2, 2)\ny = np.linspace(-2, 2)\nxx, yy = np.meshgrid(x, y)\nzz = f(xx, yy)\n\nplt.contour(xx, yy, zz)\nplt.colorbar()",
"La función contourf es casi idéntica pero rellena el espacio entre niveles. Podemos especificar manualmente estos niveles usando el cuarto argumento:",
"plt.contourf(xx, yy, zz, np.linspace(-4, 4, 100))\nplt.colorbar()",
"Para guardar las gráficas en archivos aparte podemos usar la función plt.savefig. matplotlib usará el tipo de archivo adecuado según la extensión que especifiquemos. Veremos esto con más detalle cuando hablemos de la interfaz orientada a objetos.\nVarias figuras\nPodemos crear figuras con varios sistemas de ejes, pasando a subplot el número de filas y de columnas.",
"x = np.linspace(-1, 7, 1000)\n\nfig = plt.figure()\nplt.subplot(211)\nplt.plot(x, np.sin(x))\nplt.grid(False)\nplt.title(\"Función seno\")\n\nplt.subplot(212)\nplt.plot(x, np.cos(x))\nplt.grid(False)\nplt.title(\"Función coseno\")",
"<div class=\"alert alert-info\">¿Cómo se ajusta el espacio entre gráficas para que no se solapen los textos? Buscamos en Google \"plt.subplot adjust\" en el primer resultado tenemos la respuesta http://stackoverflow.com/a/9827848</div>\n\nComo hemos guardado la figura en una variable, puedo recuperarla más adelate y seguir editándola.",
"fig.tight_layout()\nfig",
"<div class=\"alert alert-warning\">Si queremos manipular la figura una vez hemos abandonado la celda donde la hemos definido, tendríamos que utilizar la interfaz orientada a objetos de matplotlib. Es un poco lioso porque algunas funciones cambian de nombre, así que en este curso no la vamos a ver. Si te interesa puedes ver los notebooks de la primera edición, donde sí la introdujimos.\n\nhttps://github.com/AeroPython/Curso_AeroPython/releases/tag/v1.0</div>\n\nEjercicio\nCrear una función que represente gráficamente esta expresión:\n$$\\sin(2 \\pi f_1 t) + \\sin(2 \\pi f_2 t)$$\nSiendo $f_1$ y $f_2$ argumentos de entrada (por defecto $10$ y $100$) y $t \\in [0, 0.5]$. Además, debe mostrar:\n\nleyenda,\ntítulo \"Dos frecuencias\",\neje x \"Tiempo ($t$)\"\n\ny usar algún estilo de los disponibles.",
"def frecuencias(f1=10.0, f2=100.0):\n max_time = 0.5\n times = np.linspace(0, max_time, 1000)\n signal = np.sin(2 * np.pi * f1 * times) + np.sin(2 * np.pi * f2 * times)\n with plt.style.context(\"ggplot\"):\n plt.plot(signal, label=\"Señal\")\n plt.xlabel(\"Tiempo ($t$)\")\n plt.title(\"Dos frecuencias\")\n plt.legend()\n\nfrecuencias()",
"Ejercicio\nRepresentar las curvas de nivel de esta función:\n$$g(x, y) = \\cos{x} + \\sin^2{y}$$\nPara obtener este resultado:",
"def g(x, y):\n return np.cos(x) + np.sin(y) ** 2\n\n# Necesitamos muchos puntos en la malla, para que cuando se\n# crucen las líneas no se vean irregularidades\nx = np.linspace(-2, 3, 1000)\ny = np.linspace(-2, 3, 1000)\n\nxx, yy = np.meshgrid(x, y)\n\nzz = g(xx, yy)\n\n# Podemos ajustar el tamaño de la figura con figsize\nfig = plt.figure(figsize=(6, 6))\n\n# Ajustamos para que tenga 13 niveles y que use el colormap Spectral\n# Tenemos que asignar la salida a la variable cs para luego crear el colorbar\ncs = plt.contourf(xx, yy, zz, np.linspace(-1, 2, 13), cmap=plt.cm.Spectral)\n\n# Creamos la barra de colores\nplt.colorbar()\n\n# Con `colors='k'` dibujamos todas las líneas negras\n# Asignamos la salida a la variable cs2 para crear las etiquetas\ncs = plt.contour(xx, yy, zz, np.linspace(-1, 2, 13), colors='k')\n\n# Creamos las etiquetas sobre las líneas\nplt.clabel(cs)\n\n# Ponemos las etiquetas de los ejes\nplt.xlabel(\"Eje x\")\nplt.ylabel(\"Eje y\")\nplt.title(r\"Función $g(x, y) = \\cos{x} + \\sin^2{y}$\")",
"El truco final: componentes interactivos\nNo tenemos mucho tiempo pero vamos a ver algo interesante que se ha introducido hace poco en el notebook: componentes interactivos.",
"from IPython.html.widgets import interactive\n\ninteractive(frecuencias, f1=(10.0,200.0), f2=(10.0,200.0))",
"Referencias\n\nGuía de matplotlib para principiantes http://matplotlib.org/users/beginner.html\nTutorial de matplotlib en español http://pybonacci.org/tag/tutorial-matplotlib-pyplot/\nReferencia rápida de matplotlib http://scipy-lectures.github.io/intro/matplotlib/matplotlib.html#quick-references",
"# Esta celda da el estilo al notebook\nfrom IPython.core.display import HTML\ncss_file = './css/aeropython.css'\nHTML(open(css_file, \"r\").read())"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jpn--/larch | book/example/legacy/301_itin_mnl.ipynb | gpl-3.0 | [
"301: Itinerary Choice using MNL",
"import pandas as pd\nimport larch\nlarch.__version__",
"This example is an itinerary choice model built using the example\nitinerary choice dataset included with Larch. We'll begin by loading\nthat example data.",
"from larch.data_warehouse import example_file\nitin = pd.read_csv(example_file(\"arc\"), index_col=['id_case','id_alt'])\nd = larch.DataFrames(itin, ch='choice', crack=True, autoscale_weights=True)",
"Now let's make our model. We'll use a few variables to define our\nlinear-in-parameters utility function.",
"m = larch.Model(dataservice=d)\n\nv = [\n \"timeperiod==2\",\n \"timeperiod==3\",\n \"timeperiod==4\",\n \"timeperiod==5\",\n \"timeperiod==6\",\n \"timeperiod==7\",\n \"timeperiod==8\",\n \"timeperiod==9\",\n \"carrier==2\",\n \"carrier==3\",\n \"carrier==4\",\n \"carrier==5\",\n \"equipment==2\",\n \"fare_hy\", \n \"fare_ly\", \n \"elapsed_time\", \n \"nb_cnxs\", \n]\n",
"The larch.roles module defines a few convenient classes for declaring data and parameter.\nOne we will use here is PX which creates a linear-in-parameter term that represents one data\nelement (a column from our data, or an expression that can be evaluated on the data alone) multiplied\nby a parameter with the same name.",
"from larch.roles import PX\nm.utility_ca = sum(PX(i) for i in v)\nm.choice_ca_var = 'choice'",
"Since we are estimating just an MNL model in this example, this is all we need to do to build\nour model, and we're ready to go. To estimate the likelihood maximizing parameters, we give:",
"m.load_data()\nm.maximize_loglike()\n\n# TEST\nresult = _\nfrom pytest import approx\nassert result.loglike == approx(-777770.0688722526)\nassert result.x['carrier==2'] == approx(0.11720047917232307)\nassert result.logloss == approx(3.306873650593341)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Parisson/TimeSide | docs/ipynb/01_Timeside_API.ipynb | agpl-3.0 | [
"%pylab inline\nimport matplotlib.pylab as pylab\npylab.rcParams['figure.figsize'] = 16, 8 # that's default image size for this interactive session",
"TimeSide API\nTimeside API is based on different core processing unit called processors :\n\nDecoders (timeside.api.IDecoder) that enables to decode a giving audio source and split it up into frames for further processing\nAnalyzers (timeside.api.IAnalyzer) that provides some signal processing module to analyze incoming audio frames\nEncoders (timeside.api.IEncoder) that can encode incoming frames back into an audio object\nGraphers (timeside.api.IGrapher) that can display some representations of the signal or corresponding extracted features\n\nDecoders",
"\nimport timeside.core\n\nfrom timeside.core import list_processors\n\nlist_processors(timeside.core.api.IDecoder)",
"Analyzers",
"list_processors(timeside.core.api.IAnalyzer)",
"Encoders",
"list_processors(timeside.core.api.IEncoder)",
"Graphers",
"list_processors(timeside.core.api.IGrapher)",
"Processors pipeline\nAll these processors can be chained to form a process pipeline.\nLet first define a decoder that reads and decodes audio from a file",
"from timeside.core import get_processor\n\nfrom timeside.core.tools.test_samples import samples\nfile_decoder = get_processor('file_decoder')(samples['C4_scale.wav'])",
"And then some other processors",
"# analyzers\npitch = get_processor('aubio_pitch')()\nlevel = get_processor('level')()\n\n# Encoder\nmp3 = get_processor('mp3_encoder')('/tmp/guitar.mp3', overwrite=True)\n\n# Graphers\nspecgram = get_processor('spectrogram_lin')()\nwaveform = get_processor('waveform_simple')()",
"Let's now define a process pipeline with all these processors and run it",
"pipe = (file_decoder | pitch | level | mp3 | specgram | waveform)\npipe.run()",
"Analyzers results are available through the pipe:",
"pipe.results.keys()",
"or from the analyzer:",
"pitch.results.keys()\n\npitch.results['aubio_pitch.pitch'].keys()\n\npitch.results['aubio_pitch.pitch']",
"Grapher result can also be display or save into a file",
"imshow(specgram.render(), origin='lower')\n\nimshow(waveform.render(), origin='lower')\n\nwaveform.render('/tmp/waveform.png')",
"And TimeSide can be embedded into a web page dynamically. For example, in Telemeta:",
"from IPython.display import HTML\nHTML('<iframe width=1300 height=260 frameborder=0 scrolling=no marginheight=0 marginwidth=0 src=http://demo.telemeta.org/archives/items/6/player/1200x170></iframe>')"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/eng-edu | ml/cc/prework/zh-CN/tensorflow_programming_concepts.ipynb | apache-2.0 | [
"Copyright 2017 Google LLC.",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"# TensorFlow 编程概念\n学习目标:\n * 学习 TensorFlow 编程模型的基础知识,重点了解以下概念:\n * 张量\n * 指令\n * 图\n * 会话\n * 构建一个简单的 TensorFlow 程序,使用该程序绘制一个默认图并创建一个运行该图的会话\n注意:请仔细阅读本教程。TensorFlow 编程模型很可能与您遇到的其他模型不同,因此可能不如您期望的那样直观。\n## 概念概览\nTensorFlow 的名称源自张量,张量是任意维度的数组。借助 TensorFlow,您可以操控具有大量维度的张量。即便如此,在大多数情况下,您会使用以下一个或多个低维张量:\n\n标量是零维数组(零阶张量)。例如,\\'Howdy\\' 或 5\n矢量是一维数组(一阶张量)。例如,[2, 3, 5, 7, 11] 或 [5]\n矩阵是二维数组(二阶张量)。例如,[[3.1, 8.2, 5.9][4.3, -2.7, 6.5]]\n\nTensorFlow 指令会创建、销毁和操控张量。典型 TensorFlow 程序中的大多数代码行都是指令。\nTensorFlow 图(也称为计算图或数据流图)是一种图数据结构。很多 TensorFlow 程序由单个图构成,但是 TensorFlow 程序可以选择创建多个图。图的节点是指令;图的边是张量。张量流经图,在每个节点由一个指令操控。一个指令的输出张量通常会变成后续指令的输入张量。TensorFlow 会实现延迟执行模型,意味着系统仅会根据相关节点的需求在需要时计算节点。\n张量可以作为常量或变量存储在图中。您可能已经猜到,常量存储的是值不会发生更改的张量,而变量存储的是值会发生更改的张量。不过,您可能没有猜到的是,常量和变量都只是图中的一种指令。常量是始终会返回同一张量值的指令。变量是会返回分配给它的任何张量的指令。\n要定义常量,请使用 tf.constant 指令,并传入它的值。例如:\nx = tf.constant([5.2])\n同样,您可以创建如下变量:\ny = tf.Variable([5])\n或者,您也可以先创建变量,然后再如下所示地分配一个值(注意:您始终需要指定一个默认值):\ny = tf.Variable([0])\n y = y.assign([5])\n定义一些常量或变量后,您可以将它们与其他指令(如 tf.add)结合使用。在评估 tf.add 指令时,它会调用您的 tf.constant 或 tf.Variable 指令,以获取它们的值,然后返回一个包含这些值之和的新张量。\n图必须在 TensorFlow 会话中运行,会话存储了它所运行的图的状态:\nwith tf.Session() as sess:\n initialization = tf.global_variables_initializer()\n print(y.eval())\n在使用 tf.Variable 时,您必须在会话开始时调用 tf.global_variables_initializer,以明确初始化这些变量,如上所示。\n注意:会话可以将图分发到多个机器上执行(假设程序在某个分布式计算框架上运行)。有关详情,请参阅分布式 TensorFlow。\n总结\nTensorFlow 编程本质上是一个两步流程:\n\n将常量、变量和指令整合到一个图中。\n在一个会话中评估这些常量、变量和指令。\n\n## 创建一个简单的 TensorFlow 程序\n我们来看看如何编写一个将两个常量相加的简单 TensorFlow 程序。\n### 添加 import 语句\n与几乎所有 Python 程序一样,您首先要添加一些 import 语句。\n当然,运行 TensorFlow 程序所需的 import 语句组合取决于您的程序将要访问的功能。至少,您必须在所有 TensorFlow 程序中添加 import tensorflow 语句:",
"import tensorflow as tf",
"请勿忘记执行前面的代码块(import 语句)。\n其他常见的 import 语句包括:\nimport matplotlib.pyplot as plt # 数据集可视化。\nimport numpy as np # 低级数字 Python 库。\nimport pandas as pd # 较高级别的数字 Python 库。\nTensorFlow 提供了一个默认图。不过,我们建议您明确创建自己的 Graph,以便跟踪状态(例如,您可能希望在每个单元格中使用一个不同的 Graph)。",
"from __future__ import print_function\n\nimport tensorflow as tf\n\n# Create a graph.\ng = tf.Graph()\n\n# Establish the graph as the \"default\" graph.\nwith g.as_default():\n # Assemble a graph consisting of the following three operations:\n # * Two tf.constant operations to create the operands.\n # * One tf.add operation to add the two operands.\n x = tf.constant(8, name=\"x_const\")\n y = tf.constant(5, name=\"y_const\")\n sum = tf.add(x, y, name=\"x_y_sum\")\n\n\n # Now create a session.\n # The session will run the default graph.\n with tf.Session() as sess:\n print(sum.eval())",
"## 练习:引入第三个运算数\n修改上面的代码列表,以将三个整数(而不是两个)相加:\n\n定义第三个标量整数常量 z,并为其分配一个值 4。\n将 sum 与 z 相加,以得出一个新的和。\n\n提示:请参阅有关 tf.add() 的 API 文档,了解有关其函数签名的更多详细信息。\n\n重新运行修改后的代码块。该程序是否生成了正确的总和?\n\n### 解决方案\n点击下方,查看解决方案。",
"# Create a graph.\ng = tf.Graph()\n\n# Establish our graph as the \"default\" graph.\nwith g.as_default():\n # Assemble a graph consisting of three operations. \n # (Creating a tensor is an operation.)\n x = tf.constant(8, name=\"x_const\")\n y = tf.constant(5, name=\"y_const\")\n sum = tf.add(x, y, name=\"x_y_sum\")\n \n # Task 1: Define a third scalar integer constant z.\n z = tf.constant(4, name=\"z_const\")\n # Task 2: Add z to `sum` to yield a new sum.\n new_sum = tf.add(sum, z, name=\"x_y_z_sum\")\n\n # Now create a session.\n # The session will run the default graph.\n with tf.Session() as sess:\n # Task 3: Ensure the program yields the correct grand total.\n print(new_sum.eval())",
"## 更多信息\n要进一步探索基本 TensorFlow 图,请使用以下教程进行实验:\n\nMandelbrot 集"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jasdumas/jasdumas.github.io | post_data/final_project_jasmine_dumas.ipynb | mit | [
"Final Project\nJasmine Dumas (1523905)\nCSC 478: Programming Machine Learning Applications - Autumn 2016\nDue: Tuesday, November 22, 2016\n\nFinal Project Objective:\n\nAnalyze Lending Club's issued loans: https://www.kaggle.com/wendykan/lending-club-loan-data\n\nData Analysis Tasks:\n\nSupervised Learning: Classifier using k Nearest Neighbor of payment status (Current, Late, Fully Paid, etc.)\nExploratory Data Analysis\nPre-processing & Data Cleaning\nBuilding the Classifier\nEvaluating the model\n\n1. Load Libraries",
"## load libraries\nimport sys\nfrom numpy import *\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport operator\n%matplotlib inline\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn import preprocessing\nfrom sklearn import neighbors, tree, naive_bayes\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import classification_report\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.neighbors import KNeighborsClassifier",
"2. Load the data",
"data = pd.read_csv(\"loan.csv\", low_memory=False)",
"a. Data reduction for computation\n\nFrom previous attempts to create a model matrix below and having the kernal crash, I'm going to reduce the data set size to compute better by selecting a random sample of 20% from the original dataset",
"# 5% of the data without replacement\ndata = data.sample(frac=0.05, replace=False, random_state=123) ",
"3. Explore the data\n\nvisaully and descriptive methods",
"data.shape\n\ndata.head(n=5)\n\ndata.columns",
"The loan_status column is the target!\na. How many classes are there?",
"pd.unique(data['loan_status'].values.ravel())\n\nprint(\"Amount of Classes: \", len(pd.unique(data['loan_status'].values.ravel())))\n\nlen(pd.unique(data['zip_code'].values.ravel())) # want to make sure this was not too unique\n\nlen(pd.unique(data['url'].values.ravel())) # drop url\n\nlen(pd.unique(data['last_pymnt_d'].values.ravel()))\n\nlen(pd.unique(data['next_pymnt_d'].values.ravel()))\n\nfor col in data.select_dtypes(include=['object']).columns:\n print (\"Column {} has {} unique instances\".format( col, len(data[col].unique())) )",
"b. Are there unique customers in the data or repeats?",
"len(pd.unique(data['member_id'].values.ravel())) == data.shape[0]",
"c. Drop some of the junk variables (id, member_id, ...)\n\nReasons: High Cardinality\npre-pre-processing 😃",
"data = data.drop('id', 1) #\ndata = data.drop('member_id', 1)#\ndata = data.drop('url', 1)#\ndata = data.drop('purpose', 1)\ndata = data.drop('title', 1)#\ndata = data.drop('zip_code', 1)#\ndata = data.drop('emp_title', 1)#\ndata = data.drop('earliest_cr_line', 1)#\ndata = data.drop('term', 1)\ndata = data.drop('sub_grade', 1) #\ndata = data.drop('last_pymnt_d', 1)#\ndata = data.drop('next_pymnt_d', 1)#\ndata = data.drop('last_credit_pull_d', 1)\ndata = data.drop('issue_d', 1) ##\ndata = data.drop('desc', 1)##\ndata = data.drop('addr_state', 1)##\n\ndata.shape\n\n# yay this is better\nfor col in data.select_dtypes(include=['object']).columns:\n print (\"Column {} has {} unique instances\".format( col, len(data[col].unique())) )",
"d. Exploratory Data Analysis: What is the distribution of the loan amount?\n\nIn general the loans amount was usually under $15,000",
"data['loan_amnt'].plot(kind=\"hist\", bins=10)\n\ndata['grade'].value_counts().plot(kind='bar')\n\ndata['emp_length'].value_counts().plot(kind='bar')",
"e. What is the distribution of target class?\n\nMost of this dataset the loans are in a current state (in-payment?), or Fully paid off\nLooks like a Poisson Distribution?!",
"data['loan_status'].value_counts().plot(kind='bar')",
"f. What are the numeric columns?\n\nFor pre-processing and scaling",
"data._get_numeric_data().columns\n\n\"There are {} numeric columns in the data set\".format(len(data._get_numeric_data().columns) ) ",
"g. What are the character columns?\n\nFor one-hot encoding into a model matrix",
"data.select_dtypes(include=['object']).columns\n\n\"There are {} Character columns in the data set (minus the target)\".format(len(data.select_dtypes(include=['object']).columns) -1) ",
"4. Pre-processing the data\na. Remove the target from the entire dataset",
"X = data.drop(\"loan_status\", axis=1, inplace = False)\ny = data.loan_status\n\ny.head()",
"b. Transform the data into a model matrix with one-hot encoding\n\nisolate the variables of char class",
"def model_matrix(df , columns):\n dummified_cols = pd.get_dummies(df[columns])\n df = df.drop(columns, axis = 1, inplace=False)\n df_new = df.join(dummified_cols)\n return df_new\n\nX = model_matrix(X, ['grade', 'emp_length', 'home_ownership', 'verification_status',\n 'pymnt_plan', 'initial_list_status', 'application_type', 'verification_status_joint'])\n\n# 'issue_d' 'desc' 'addr_state'\n\nX.head()\n\nX.shape",
"c. Scale the continuous variables use min max calculation",
"# impute rows with NaN with a 0 for now\nX2 = X.fillna(value = 0)\nX2.head()\n\nfrom sklearn.preprocessing import MinMaxScaler\n\nScaler = MinMaxScaler()\n\nX2[['loan_amnt', 'funded_amnt', 'funded_amnt_inv', 'int_rate',\n 'installment', 'annual_inc', 'dti', 'delinq_2yrs', 'inq_last_6mths',\n 'mths_since_last_delinq', 'mths_since_last_record', 'open_acc',\n 'pub_rec', 'revol_bal', 'revol_util', 'total_acc', 'out_prncp',\n 'out_prncp_inv', 'total_pymnt', 'total_pymnt_inv', 'total_rec_prncp',\n 'total_rec_int', 'total_rec_late_fee', 'recoveries',\n 'collection_recovery_fee', 'last_pymnt_amnt',\n 'collections_12_mths_ex_med', 'mths_since_last_major_derog',\n 'policy_code', 'annual_inc_joint', 'dti_joint', 'acc_now_delinq',\n 'tot_coll_amt', 'tot_cur_bal', 'open_acc_6m', 'open_il_6m',\n 'open_il_12m', 'open_il_24m', 'mths_since_rcnt_il', 'total_bal_il',\n 'il_util', 'open_rv_12m', 'open_rv_24m', 'max_bal_bc', 'all_util',\n 'total_rev_hi_lim', 'inq_fi', 'total_cu_tl', 'inq_last_12m']] = Scaler.fit_transform(X2[['loan_amnt', 'funded_amnt', 'funded_amnt_inv', 'int_rate',\n 'installment', 'annual_inc', 'dti', 'delinq_2yrs', 'inq_last_6mths',\n 'mths_since_last_delinq', 'mths_since_last_record', 'open_acc',\n 'pub_rec', 'revol_bal', 'revol_util', 'total_acc', 'out_prncp',\n 'out_prncp_inv', 'total_pymnt', 'total_pymnt_inv', 'total_rec_prncp',\n 'total_rec_int', 'total_rec_late_fee', 'recoveries',\n 'collection_recovery_fee', 'last_pymnt_amnt',\n 'collections_12_mths_ex_med', 'mths_since_last_major_derog',\n 'policy_code', 'annual_inc_joint', 'dti_joint', 'acc_now_delinq',\n 'tot_coll_amt', 'tot_cur_bal', 'open_acc_6m', 'open_il_6m',\n 'open_il_12m', 'open_il_24m', 'mths_since_rcnt_il', 'total_bal_il',\n 'il_util', 'open_rv_12m', 'open_rv_24m', 'max_bal_bc', 'all_util',\n 'total_rev_hi_lim', 'inq_fi', 'total_cu_tl', 'inq_last_12m']])\n\nX2.head()",
"d. Partition the data into train and testing",
"x_train, x_test, y_train, y_test = train_test_split(X2, y, test_size=.3, random_state=123)\n\nprint(x_train.shape)\nprint(y_train.shape)\nprint(x_test.shape)\nprint(y_test.shape)",
"5. Building the k Nearest Neighbor Classifier\n\nexperiment with different values for neighbors",
"# start out with the number of classes for neighbors\ndata_knn = KNeighborsClassifier(n_neighbors = 10, metric='euclidean')\ndata_knn\n\ndata_knn.fit(x_train, y_train)",
"a. predict on the test data using the knn model created above",
"data_knn.predict(x_test)",
"b. Evaluating the classifier model using R squared",
"# R-square from training and test data\nrsquared_train = data_knn.score(x_train, y_train)\nrsquared_test = data_knn.score(x_test, y_test)\nprint ('Training data R-squared:')\nprint(rsquared_train)\nprint ('Test data R-squared:')\nprint(rsquared_test)",
"c. Confusion Matrix",
"# confusion matrix\nfrom sklearn.metrics import confusion_matrix\n\nknn_confusion_matrix = confusion_matrix(y_true = y_test, y_pred = data_knn.predict(x_test))\nprint(\"The Confusion matrix:\\n\", knn_confusion_matrix)\n\n# visualize the confusion matrix\n# http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html\nplt.matshow(knn_confusion_matrix, cmap = plt.cm.Blues)\nplt.title(\"KNN Confusion Matrix\\n\")\n#plt.xticks([0,1], ['No', 'Yes'])\n#plt.yticks([0,1], ['No', 'Yes'])\nplt.ylabel('True label')\nplt.xlabel('Predicted label')\nfor y in range(knn_confusion_matrix.shape[0]):\n for x in range(knn_confusion_matrix.shape[1]):\n plt.text(x, y, '{}'.format(knn_confusion_matrix[y, x]),\n horizontalalignment = 'center',\n verticalalignment = 'center',)\nplt.show()",
"d. Classification Report",
"#Generate the classification report\nfrom sklearn.metrics import classification_report\nknn_classify_report = classification_report(y_true = y_test, \n y_pred = data_knn.predict(x_test))\nprint(knn_classify_report)",
"fin."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rdhyee/nypl50 | rebuild_travis_on_repos.ipynb | apache-2.0 | [
"(2016.01.12) I think this notebook is about pushing ahead on getting the following in place. In one commit, I'd to be able to:\n\nadd travis.deploy.api_key.txt\nincrease patch version number\ngit commit with appropriate message\ngit tag\ngit push",
"from __future__ import print_function\n\nimport os\nimport json\nimport shutil\nimport sh\nimport yaml\nfrom pandas import DataFrame, Series\nfrom itertools import islice\n\nREPOS_LIST = \"/Users/raymondyee/C/src/gitenberg/Second-Folio/list_of_repos.txt\"\nGITENBERG_DIR = \"/Users/raymondyee/C/src/gitenberg/\"\n\nMETADATA_DIR = \"/Users/raymondyee/C/src/gitenberg-dev/giten_site/metadata\"\nCOVERS_DATA = \"/Users/raymondyee/C/src/gitenberg/Second-Folio/covers_data.json\"\n\nimport os\nimport glob\nimport sh\nimport yaml\n\nfrom gitenberg import metadata\nimport jinja2\n\nfrom second_folio import (GITENBERG_DIR, \n all_repos, \n apply_to_repos, \n travis_template, \n travis_setup_releases, \n git_pull,\n apply_travis,\n finish_travis,\n repo_is_buildable,\n has_travis_with_gitenberg_build,\n slugify,\n write_repo_token_file,\n latest_epub,\n repo_version\n )\n\nfrom github_settings import (username, password)\n\nfrom itertools import islice, izip\n\n# pick subset of repositories to calculate on\nrepos = list(islice(all_repos,0,None))\n\n# determine which repos are \"buildable\"\nrepos_statues = list(izip(repos, \n apply_to_repos(repo_is_buildable, repos=repos), \n apply_to_repos(has_travis_with_gitenberg_build, repos=repos) ))\n\n# we want to apply travis to repos that are buildable but that don't yet have .travis.yml. \n\nrepos_to_travisfy = [repo[0] for repo in repos_statues if repo[1] and not repo[2]]\nrepos_to_travisfy\n\nall_repos\n\nrepo = all_repos[2]\nrepo",
"semantic_version 2.4.2 : Python Package Index",
"list(apply_to_repos(repo_version,kwargs={'version_type':'patch'},repos=all_repos))",
"templates\ntemplate path? \nvariables to fill:\n\nepub_title\nencrypted_key\nrepo_name",
"def new_travis_template(repo, template, write_template=False):\n \"\"\"\n compute (and optionally write) .travis.yml based on the template and current metadata.yaml \n \"\"\"\n template_written = False\n \n sh.cd(os.path.join(GITENBERG_DIR, repo))\n\n metadata_path = os.path.join(GITENBERG_DIR, repo, \"metadata.yaml\")\n travis_path = os.path.join(GITENBERG_DIR, repo, \".travis.yml\")\n travis_api_key_path = os.path.join(GITENBERG_DIR, repo, \".travis.deploy.api_key.txt\") \n \n md = metadata.pandata.Pandata(metadata_path)\n epub_title = slugify(md.metadata.get(\"title\"))\n encrypted_key = open(travis_api_key_path).read().strip()\n repo_name = md.metadata.get(\"_repo\")\n \n template_vars = {\n 'epub_title': epub_title,\n 'encrypted_key': encrypted_key,\n 'repo_name': repo_name\n }\n \n template_result = template.render(**template_vars)\n \n if write_template:\n with open(travis_path, \"w\") as f:\n f.write(template_result)\n template_written = True\n \n return (template_result, template_written) \n\nfrom itertools import izip\n\ntemplate = template = travis_template()\n\nresults = list(izip(all_repos, apply_to_repos(new_travis_template,\n kwargs={'template':template},\n repos=all_repos)))\n[result for result in results if isinstance(result[1], Exception) ]\n\nimport os\nimport yaml\nimport pdb\n\ndef commit_travis_api_key_and_update_travis(repo, template, write_updates=False):\n \"\"\"\n create .travis.deploy.api_key.txt and update .travis.yml; do git commit\n \"\"\"\n sh.cd(os.path.join(GITENBERG_DIR, repo))\n\n metadata_path = os.path.join(GITENBERG_DIR, repo, \"metadata.yaml\")\n travis_path = os.path.join(GITENBERG_DIR, repo, \".travis.yml\")\n travis_api_key_path = os.path.join(GITENBERG_DIR, repo, \".travis.deploy.api_key.txt\") \n \n # git add .travis.deploy.api_key.txt\n \n if write_updates:\n sh.git.add(travis_api_key_path)\n \n # read the current metadata file and replace current_ver with next_ver\n\n (v0, v1, v_updated) = repo_version(repo, version_type='patch', write_version=write_updates)\n if v_updated:\n sh.git.add(metadata_path)\n \n # write new .travis.yml\n (new_template, template_written) = new_travis_template(repo, template, write_template=write_updates)\n if template_written:\n sh.git.add(travis_path)\n \n if write_updates:\n sh.git.commit(\"-m\", \"add .travis.deploy.api_key.txt; updated .travis.yml\")\n \n # add tag\n if v_updated:\n sh.git.tag(v1)\n sh.git.push(\"origin\", \"master\", \"--tags\")\n\n return True\n\n else:\n return False\n \n\n\nproblem_repos = ('The-Picture-of-Dorian-Gray_174',\n 'The-Hunchback-of-Notre-Dame_6539', \n 'Divine-Comedy-Longfellow-s-Translation-Hell_1001',\n 'The-Works-of-Edgar-Allan-Poe-The-Raven-EditionTable-Of-Contents-And-Index-Of-The-Five-Volumes_25525'\n )\n\n\nrepos = all_repos[36:][0:]\nrepos = [repo for repo in repos if repo not in problem_repos]\nrepos\n\ntemplate = travis_template()\n\n# I wish there would be a way to figure out variables in a template from jinja2...but I don't see a way.\n\n\nresults = list(apply_to_repos(commit_travis_api_key_and_update_travis, \n kwargs={'template':template,\n 'write_updates':True},\n repos=repos))\nresults\n\nimport requests\n\ndef url_status(url):\n r = requests.get(url, allow_redirects=True, stream=True)\n return r.status_code\n\ndef repo_epub_status(repo):\n return url_status(latest_epub(repo))\n\nlist(izip(repos, apply_to_repos(repo_epub_status, \n repos=repos)))\n\nresults = list(izip(all_repos, apply_to_repos(repo_epub_status, \n repos=all_repos)))\n\nresults\n\nok_repos = 
[result[0] for result in results if result[1] == 200 ]\nnot_ok_repos = [result[0] for result in results if result[1] <> 200 ]\nlen(ok_repos), len(not_ok_repos)\n\nfor (i, repo) in enumerate(ok_repos):\n print (i+1, \"\\t\", repo, \"\\t\", latest_epub(repo))\n\nnot_ok_repos",
"Divine Comedy\nDivine-Comedy-Longfellow-s-Translation-Hell_1001 / /Users/raymondyee/C/src/gitenberg/Divine-Comedy-Longfellow-s-Translation-Hell_1001: there is a book.asciidoc but no .travis.yml \nLet's do this by hand and document the process...\ntemplate",
"from second_folio import TRAVIS_TEMPLATE_URL\n\nrepo = \"Divine-Comedy-Longfellow-s-Translation-Hell_1001\"\n\ntitle = \"Divine Comedy, Longfellow's Translation, Hell\"\nslugify(title)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session04/Day0/TooBriefMachLearn.ipynb | mit | [
"Introduction to Machine Learning:\nExamples of Unsupervised and Supervised Machine-Learning Algorithms\nVersion 0.1\nBroadly speaking, machine-learning methods constitute a diverse collection of data-driven algorithms designed to classify/characterize/analyze sources in multi-dimensional spaces. The topics and studies that fall under the umbrella of machine learning is growing, and there is no good catch-all definition. The number (and variation) of algorithms is vast, and beyond the scope of these exercises. While we will discuss a few specific algorithms today, more importantly, we will explore the scope of the two general methods: unsupervised learning and supervised learning and introduce the powerful (and dangerous?) Python package scikit-learn.\n\nBy AA Miller\n2017 September 16",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Problem 1) Introduction to scikit-learn\nAt the most basic level, scikit-learn makes machine learning extremely easy within python. By way of example, here is a short piece of code that builds a complex, non-linear model to classify sources in the Iris data set that we learned about earlier:\nfrom sklearn import datasets\nfrom sklearn.ensemble import RandomForestClassifier\niris = datasets.load_iris()\nRFclf = RandomForestClassifier().fit(iris.data, iris.target)\n\nThose 4 lines of code have constructed a model that is superior to any system of hard cuts that we could have encoded while looking at the multidimensional space. This can be fast as well: execute the dummy code in the cell below to see how \"easy\" machine-learning is with scikit-learn.",
"# execute dummy code here\n\nfrom sklearn import datasets\nfrom sklearn.ensemble import RandomForestClassifier\niris = datasets.load_iris()\nRFclf = RandomForestClassifier().fit(iris.data, iris.target)",
"Generally speaking, the procedure for scikit-learn is uniform across all machine-learning algorithms. Models are accessed via the various modules (ensemble, SVM, neighbors, etc), with user-defined tuning parameters. The features (or data) for the models are stored in a 2D array, X, with rows representing individual sources and columns representing the corresponding feature values. [In a minority of cases, X, represents a similarity or distance matrix where each entry represents the distance to every other source in the data set.] In cases where there is a known classification or scalar value (typically supervised methods), this information is stored in a 1D array y. \nUnsupervised models are fit by calling .fit(X) and supervised models are fit by calling .fit(X, y). In both cases, predictions for new observations, Xnew, can be obtained by calling .predict(Xnew). Those are the basics and beyond that, the details are algorithm specific, but the documentation for essentially everything within scikit-learn is excellent, so read the docs.\nTo further develop our intuition, we will now explore the Iris dataset a little further.\nProblem 1a What is the pythonic type of iris?",
"# complete",
"You likely haven't encountered a scikit-learn Bunch before. It's functionality is essentially the same as a dictionary. \nProblem 1b What are the keys of iris?",
"# complete",
"Most importantly, iris contains data and target values. These are all you need for scikit-learn, though the feature and target names and description are useful.\nProblem 1c What is the shape and content of the iris data?",
"print( # complete\n# complete",
"Problem 1d What is the shape and content of the iris target?",
"print( # complete\n# complete",
"Finally, as a baseline for the exercises that follow, we will now make a simple 2D plot showing the separation of the 3 classes in the iris dataset. This plot will serve as the reference for examining the quality of the clustering algorithms. \nProblem 1e Make a scatter plot showing sepal length vs. sepal width for the iris data set. Color the points according to their respective classes.",
"print(iris.feature_names) # shows that sepal length is first feature and sepal width is second feature\n\nplt.scatter( # complete\n# complete\n# complete",
"Problem 2) Unsupervised Machine Learning\nUnsupervised machine learning, sometimes referred to as clustering or data mining, aims to group or classify sources in the multidimensional feature space. The \"unsupervised\" comes from the fact that there are no target labels provided to the algorithm, so the machine is asked to cluster the data \"on its own.\" The lack of labels means there is no (simple) method for validating the accuracy of the solution provided by the machine (though sometimes simple examination can show the results are terrible). \nFor this reason [note - this is my (AAM) opinion and there many be many others who disagree], unsupervised methods are not particularly useful for astronomy. Supposing one did find some useful clustering structure, an adversarial researcher could always claim that the current feature space does not accurately capture the physics of the system and as such the clustering result is not interesting or, worse, erroneous. The one potentially powerful exception to this broad statement is outlier detection, which can be a branch of both unsupervised and supervised learning. Finding weirdo objects is an astronomical pastime, and there are unsupervised methods that may help in that regard in the LSST era. \nTo begin today we will examine one of the most famous, and simple, clustering algorithms: $k$-means. $k$-means clustering looks to identify $k$ convex clusters, where $k$ is a user defined number. And here-in lies the rub: if we truly knew the number of clusters in advance, we likely wouldn't need to perform any clustering in the first place. This is the major downside to $k$-means. Operationally, pseudocode for the algorithm can be summarized as the following: \ninitiate search by identifying k points (i.e. the cluster centers)\nloop \n assign each point in the data set to the closest cluster center\n calculate new cluster centers based on mean position of all points within cluster\n if diff(new center - old center) < threshold:\n stop (i.e. clusters are defined)\n\nThe threshold is defined by the user, though in some cases the total number of iterations is also. An advantage of $k$-means is that the solution will always converge, though the solution may only be a local minimum. Disadvantages include the assumption of convexity, i.e. difficult to capture complex geometry, and the curse of dimensionality.\nIn scikit-learn the KMeans algorithm is implemented as part of the sklearn.cluster module. \nProblem 2a Fit two different $k$-means models to the iris data, one with 2 clusters and one with 3 clusters. Plot the resulting clusters in the sepal length-sepal width plane (same plot as above). How do the results compare to the true classifications?",
"from sklearn.cluster import KMeans\n\nKcluster = KMeans( # complete\nKcluster.fit( # complete\n\nplt.figure()\nplt.scatter( # complete\n\n# complete\n# complete\n# complete\n# complete",
"With 3 clusters the algorithm does a good job of separating the three classes. However, without the a priori knowledge that there are 3 different types of iris, the 2 cluster solution would appear to be superior. \nProblem 2b How do the results change if the 3 cluster model is called with n_init = 1 and init = 'random' options? Use rs for the random state [this allows me to cheat in service of making a point].\n*Note - the respective defaults for these two parameters are 10 and k-means++, respectively. Read the docs to see why these choices are, likely, better than those in 2b.",
"rs = 14\nKcluster1 = KMeans( # complete\n# complete\n# complete\n# complete",
"A random aside that is not particularly relevant here\n$k$-means evaluates the Euclidean distance between individual sources and cluster centers, thus, the magnitude of the individual features has a strong effect on the final clustering outcome. \nProblem 2c Calculate the mean, standard deviation, min, and max of each feature in the iris data set. Based on these summaries, which feature is most important for clustering?",
"print(\"feature\\t\\t\\tmean\\tstd\\tmin\\tmax\")\nfor featnum, feat in enumerate(iris.feature_names):\n print(\"{:s}\\t{:.2f}\\t{:.2f}\\t{:.2f}\\t{:.2f}\".format(feat, np.mean(iris.data[:,featnum]), \n np.std(iris.data[:,featnum]), np.min(iris.data[:,featnum]),\n np.max(iris.data[:,featnum])))",
"Petal length has the largest range and standard deviation, thus, it will have the most \"weight\" when determining the $k$ clusters. \nThe truth is that the iris data set is fairly small and straightfoward. Nevertheless, we will now examine the clustering results after re-scaling the features. [Some algorithms, cough Support Vector Machines cough, are notoriously sensitive to the feature scaling, so it is important to know about this step.] Imagine you are classifying stellar light curves: the data set will include contact binaries with periods of $\\sim 0.1 \\; \\mathrm{d}$ and Mira variables with periods of $\\gg 100 \\; \\mathrm{d}$. Without re-scaling, this feature that covers 4 orders of magnitude may dominate all others in the final model projections.\nThe two most common forms of re-scaling are to rescale to a guassian with mean $= 0$ and variance $= 1$, or to rescale the min and max of the feature to $[0, 1]$. The best normalization is problem dependent. The sklearn.preprocessing module makes it easy to re-scale the feature set. It is essential that the same scaling used for the training set be used for all other data run through the model. The testing, validation, and field observations cannot be re-scaled independently. This would result in meaningless final classifications/predictions. \nProblem 2d Re-scale the features to normal distributions, and perform $k$-means clustering on the iris data. How do the results compare to those obtained earlier? \nHint - you may find 'StandardScaler()' within the sklearn.preprocessing module useful.",
"from sklearn.preprocessing import StandardScaler\n\nscaler = StandardScaler().fit( # complete\n\n# complete\n# complete\n# complete\n# complete",
"These results are almost identical to those obtained without scaling. This is due to the simplicity of the iris data set. \nHow do I test the accuracy of my clusters?\nEssentially - you don't. There are some methods that are available, but they essentially compare clusters to labeled samples, and if the samples are labeled it is likely that supervised learning is more useful anyway. If you are curious, scikit-learn does provide some built-in functions for analyzing clustering, but again, it is difficult to evaluate the validity of any newly discovered clusters. \nWhat if I don't know how many clusters are present in the data?\nAn excellent question, as you will almost never know this a priori. Many algorithms, like $k$-means, do require the number of clusters to be specified, but some other methods do not. As an example DBSCAN. In brief, DBSCAN requires two parameters: minPts, the minimum number of points necessary for a cluster, and $\\epsilon$, a distance measure. Clusters are grown by identifying core points, objects that have at least minPts located within a distance $\\epsilon$. Reachable points are those within a distance $\\epsilon$ of at least one core point but less than minPts core points. Identically, these points define the outskirts of the clusters. Finally, there are also outliers which are points that are $> \\epsilon$ away from any core points. Thus, DBSCAN naturally identifies clusters, does not assume clusters are convex, and even provides a notion of outliers. The downsides to the algorithm are that the results are highly dependent on the two tuning parameters, and that clusters of highly different densities can be difficult to recover (because $\\epsilon$ and minPts is specified for all clusters. \nIn scitkit-learn the \nDBSCAN algorithm is part of the sklearn.cluster module. $\\epsilon$ and minPts are set by eps and min_samples, respectively. \nProblem 2e Cluster the iris data using DBSCAN. Play around with the tuning parameters to see how they affect the final clustering results. How does the use of DBSCAN compare to $k$-means? Can you obtain 3 clusters with DBSCAN? If not, given the knowledge that the iris dataset has 3 classes - does this invalidate DBSCAN as a viable algorithm?\nNote - DBSCAN labels outliers as $-1$, and thus, plt.scatter(), will plot all these points as the same color.",
"# execute this cell\n\nfrom sklearn.cluster import DBSCAN\n\ndbs = DBSCAN(eps = 0.7, min_samples = 7)\ndbs.fit(scaler.transform(iris.data)) # best to use re-scaled data since eps is in absolute units\n\ndbs_outliers = dbs.labels_ == -1\n\nplt.figure()\nplt.scatter(iris.data[:,0], iris.data[:,1], c = dbs.labels_, s = 30, edgecolor = \"None\", cmap = \"viridis\")\nplt.scatter(iris.data[:,0][dbs_outliers], iris.data[:,1][dbs_outliers], s = 30, c = 'k')\n\n\nplt.xlabel('sepal length')\nplt.ylabel('sepal width')",
"I was unable to obtain 3 clusters with DBSCAN. While these results are, on the surface, worse than what we got with $k$-means, my suspicion is that the 4 features do not adequately separate the 3 classes. [See - a nayseyer can always make that argument.] This is not a problem for DBSCAN as an algorithm, but rather, evidence that no single algorithm works well in all cases. \nChallenge Problem) Cluster SDSS Galaxy Data\nThe following query will select 10k likely galaxies from the SDSS database and return the results of that query into an astropy.Table object. (For now, if you are not familiar with the SDSS DB schema, don't worry about this query, just know that it returns a bunch of photometric features.)",
"from astroquery.sdss import SDSS # enables direct queries to the SDSS database\n\nGALquery = \"\"\"SELECT TOP 10000 \n p.dered_u - p.dered_g as ug, p.dered_g - p.dered_r as gr, \n p.dered_g - p.dered_i as gi, p.dered_g - p.dered_z as gz, \n p.petroRad_i, p.petroR50_i, p.deVAB_i\n FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid\n WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND p.type = 3\n \"\"\"\nSDSSgals = SDSS.query_sql(GALquery)\nSDSSgals",
"I have used my own domain knowledge to specifically choose features that may be useful when clustering galaxies. If you know a bit about SDSS and can think of other features that may be useful feel free to add them to the query. \nOne nice feature of astropy tables is that they can readily be turned into pandas DataFrames, which can in turn easily be turned into a sklearn X array with NumPy. For example: \nX = np.array(SDSSgals.to_pandas())\n\nAnd you are ready to go. \nChallenge Problem Using the SDSS dataset above, identify interesting clusters within the data [this is intentionally very open ended, if you uncover anything especially exciting you'll have a chance to share it with the group]. Feel free to use the algorithms discussed above, or any other packages available via sklearn. Can you make sense of the clusters in the context of galaxy evolution? \nHint - don't fret if you know nothing about galaxy evolution (neither do I!). Just take a critical look at the clusters that are identified",
"# complete",
"Note - I was unable to get the galaxies to clusster using DBSCAN.\nProblem 3) Supervised Machine Learning\nSupervised machine learning, on the other hand, aims to predict a target class or produce a regression result based on the location of labelled sources (i.e. the training set) in the multidimensional feature space. The \"supervised\" comes from the fact that we are specifying the allowed outputs from the model. As there are labels available for the training set, it is possible to estimate the accuracy of the model (though there are generally important caveats about generalization, which we will explore in further detail later).\nWe will begin with a simple, but nevertheless, elegant algorithm for classification and regression: $k$-nearest-neighbors ($k$NN). In brief, the classification or regression output is determined by examining the $k$ nearest neighbors in the training set, where $k$ is a user defined number. Typically, though not always, distances between sources are Euclidean, and the final classification is assigned to whichever class has a plurality within the $k$ nearest neighbors (in the case of regression, the average of the $k$ neighbors is the output from the model). We will experiment with the steps necessary to optimize $k$, and other tuning parameters, in the detailed break-out problem.\nIn scikit-learn the KNeighborsClassifer algorithm is implemented as part of the sklearn.neighbors module. \nProblem 3a \nFit two different $k$NN models to the iris data, one with 3 neighbors and one with 10 neighbors. Plot the resulting class predictions in the sepal length-sepal width plane (same plot as above). How do the results compare to the true classifications? Is there any reason to be suspect of this procedure?\nHint - after you have constructed the model, it is possible to obtain model predictions using the .predict() method, which requires a feature array, including the same features and order as the training set, as input.\nHint that isn't essential, but is worth thinking about - should the features be re-scaled in any way?",
"from sklearn.neighbors import KNeighborsClassifier\n\nKNNclf = KNeighborsClassifier( # complete\npreds = KNNclf.predict( # complete\nplt.figure()\nplt.scatter( # complete\n\n# complete\n# complete\n# complete",
"These results are almost identical to the training classifications. However, we have cheated! In this case we are evaluating the accuracy of the model (98% in this case) using the same data that defines the model. Thus, what we have really evaluated here is the training error. The relevant parameter, however, is the generalization error: how accurate are the model predictions on new data? \nWithout going into too much detail, we will test this using cross validation (CV). In brief, CV provides predictions on the training set using a subset of the data to generate a model that predicts the class of the remaining sources. Using cross_val_predict, we can get a better sense of the model accuracy. Predictions from cross_val_predict are produced in the following manner:\nfrom sklearn.cross_validation import cross_val_predict\nCVpreds = cross_val_predict(sklearn.model(), X, y)\n\nwhere sklearn.model() is the desired model, X is the feature array, and y is the label array.\nProblem 3b \nProduce cross-validation predictions for the iris dataset and a $k$NN with 5 neighbors. Plot the resulting classifications, as above, and estimate the accuracy of the model as applied to new data. How does this accuracy compare to a $k$NN with 50 neighbors?",
"from sklearn.cross_validation import cross_val_predict\n\nCVpreds = cross_val_predict( # complete\n\nplt.scatter( # complete\nprint(\"The accuracy of the kNN = 5 model is ~{:.4}\".format( # complete\n\n# complete\n# complete\n# complete",
"While it is useful to understand the overall accuracy of the model, it is even more useful to understand the nature of the misclassifications that occur. \nProblem 3c \nCalculate the accuracy for each class in the iris set, as determined via CV for the $k$NN = 50 model.",
"# complete\n# complete\n# complete",
"We just found that the classifier does a much better job classifying setosa and versicolor than it does for virginica. The main reason for this is some viginica flowers lie far outside the main virginica locus, and within predominantly versicolor \"neighborhoods\". In addition to knowing the accuracy for the individual classes, it is also useful to know class predictions for the misclassified sources, or in other words where there is \"confusion\" for the classifier. The best way to summarize this information is with a confusion matrix. In a confusion matrix, one axis shows the true class and the other shows the predicted class. For a perfect classifier all of the power will be along the diagonal, while confusion is represented by off-diagonal signal. \nLike almost everything else we have encountered during this exercise, scikit-learn makes it easy to compute a confusion matrix. This can be accomplished with the following: \nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, y_prep)\n\nProblem 3d \nCalculate the confusion matrix for the iris training set and the $k$NN = 50 model.",
"from sklearn.metrics import confusion_matrix\ncm = confusion_matrix( # complete",
"From this representation, we see right away that most of the virginica that are being misclassifed are being scattered into the versicolor class. However, this representation could still be improved: it'd be helpful to normalize each value relative to the total number of sources in each class, and better still, it'd be good to have a visual representation of the confusion matrix. This visual representation will be readily digestible. Now let's normalize the confusion matrix.\nProblem 3e \nCalculate the normalized confusion matrix. Be careful, you have to sum along one axis, and then divide along the other. \nAnti-hint: This operation is actually straightforward using some array manipulation that we have not covered up to this point. Thus, we have performed the necessary operations for you below. If you have extra time, you should try to develop an alternate way to arrive at the same normalization.",
"normalized_cm = cm.astype('float')/cm.sum(axis = 1)[:,np.newaxis]\n\nnormalized_cm",
"The normalization makes it easier to compare the classes, since each class has a different number of sources. Now we can procede with a visual representation of the confusion matrix. This is best done using imshow() within pyplot. You will also need to plot a colorbar, and labeling the axes will also be helpful. \nProblem 3f \nPlot the confusion matrix. Be sure to label each of the axeses.\nHint - you might find the sklearn confusion matrix tutorial helpful for making a nice plot.",
"# complete\n# complete\n# complete",
"Now it is straight-forward to see that virginica and versicolor flowers are the most likely to be confused, which we could intuit from the very first plot in this notebook, but this exercise becomes far more important for large data sets with many, many classes. \nThus concludes our introduction to scikit-learn and supervised and unsupervised learning."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mrphyja/bioinfo-intro-python | bioinformatics_intro_python.ipynb | mit | [
"%matplotlib inline\n\nfrom pprint import pprint\nimport random\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.display import YouTubeVideo",
"The Source Code of Life: Using Python to Explore Our DNA\n\nResearchers just found the gene responsible for mistakenly thinking we've found the gene for specific things. It's the region between the start and the end of every chromosome, plus a few segments in our mitochondria.\nEvery good presentation starts with xkcd, right?\nBioinformatics\nHINT: If you're viewing this notebook as slides, press the \"s\" key to see a bunch of extra notes.\n\nImage: http://www.sciencemag.org/sites/default/files/styles/article_main_large/public/images/13%20June%202014.jpg?itok=DPBy5nLZ\nToday we're going to talk about part off the wonderful field of study known as bioinformatics. What is bioinformatics? According to Wikipedia, its \"an interdisciplinary field that develops methods and software tools for understanding biological data.\" Here's another definition that more fits my experiences: \"The mathematical, statistical and computing methods that aim to solve biological problems using DNA and amino acid sequences and related information.\" (Fredj Tekaia, Institut Pasteur)\nWhat is DNA?\nA long string of A, C, G, and T (bases)\nACGTTCATGG <- Ten bases\n\nACGTTCATGGATGTGACCAG <- Twenty bases\n\netc...\n\nSo what is it about biology that needs such computationally intensive stuff? I thought biology was a \"squishy\" science? Well guess what, nature has been playing with big data long before it became the buzzword it is now. Let's start by talking about DNA. DNA is your \"source code,\" but we'll get to more about how that works in a minute. First we're just going to talk about what it is. It's a string of four different types of molecules with long names that we're not going to worry about right now, so we'll just call them by their abbreviations (which is what everyone uses most of the time anyway): A, C, G, and T. And instead of molecules, we'll call them bases, because reasons. So if you have just an A, that's one base, or if you have ACGTTCATGG, that is ten bases.\nWell, not really a string...\n<img src=\"images/double_helix.jpg\" alt=\"Double helix\" style=\"width: 200px;\"/>\nTwo strings stuck together? But if you know one you know the other...\nOk, when I said it was a string that was a bit of a simplification, it's actually two strings stuck together. That's why pictures of DNA look like a twisted ladder (that's the \"double helix\" you may hear about): each side of the ladder is a string, and the \"rungs\" are where they stick together. But the nice thing is, each base has only one other base it can stick to. A always sticks to T and vice versa, same with C and G. So if you know that one side of the ladder is ACGTTCATGG, then we know the other side is TGCAAGTACC. This is really nice because it cuts the amount of information we actually have to know in half.\nThe Human Genome\nIt's big. How big?\nSo now that you know everything (just kidding) about DNA, it's time for a question: How many bases long would you guess the human genome (fancy word for all the DNA) is? It ends up being about 3.2 billion base pairs long. (Remember the double helix? That's why it's pairs)\nAbout 3,200,000,000bp (bp is base pairs). 
Actual file format commonly used in bioinformatics (FASTA):\n```\n\nSequence0\nTTTCTGACTAACACTACAATTACCACTTGATGTTACCGACTAAGTGGTACGACTTGCTAGAACCGACTCTCGTACGTAT\nCGCAGACTAGTGCGCGCGCTTAGTGACTATACTAGAATATACCTGGGGCCCAAGGAGTGTCGGGCGATCGTCCTTGAAA\nTAAATATCTCAACCATCGTCATCTAGGGGGAACAGAGCGGTGGGCAGGTCCCAACCTGTTTATTTGTGTTGCTAACACT\nACGGCGCAGCTGCTCAAGTAGGTGCGATTATCGAGTAGAGGCTCCACCGGCTCTATGTGCCACGCATCTACTGAACCGA\nATTCTATCCCTGATACTCCAGAAGGTCGCAGGTTTACAGACACGTTTCAGCTCGAGAGGCCATCGATTATCTTAATATA\nCCACACTGCCGAATAGCATGCCCGTAGAATCCAAGCCACGAGATAGCGTTACTTAATGAGTACCCAACGCAAATGAGGT\nTGATTATCCCTAACCTGCAATCTAGGCCTTGTTCTGGAGGGGGTTATCCTTTATAGTTGATTACTTACACTCACCATGT\nTCGTAGTCGGAACTCACCGATTAAGACCGATTTTACTATGGGAAGGCCAGGTTACACCTGTTTCGGGGGGGCCGCGGCG\nGGTTACTTTAACCTGTCCATCCATCAGTCACTGGGCGCCAAGATTCTCCTATAGTTATATCCGCCCTTTGATTTAAACC\nTAGGCCTACCTCAACGAACTGGGCCATGGGGTTCACACAGAAACAAGGGGGATAGACAGTCTTATTGAGCGCTTCTGAA\nCAGCGTGTGTTCACGGTACGGCAATACCACCAGTAAACCGAGAACAGTGTTGAAGGTGATCGAACACGTGTTTTCTTCA\nCCGTAGGGCTTCTAGGGAGTATCGCCCCCATATAGGCAGACGAGAAGGACTGTCACGCGCGGAGATCGATAATACGTAT\nAACACAAGCACAGTAACTGCCCCGACCGGCTAAAGGACGTGGCCCAGTGTACCCAACGTACGTAATTGCAAGAGGTCTG\nTCTGTCATCCCGAGGACTGCTTCTATAACTCGTTGAGGGCACTAGGCTTGAGACAATCAGCTTCGCTCGTCACGATTTT\nACTTTTTTCCTGGAAAAGCCCCCCCACAGACTATCAGGTCGCGCTTACCATACCAGTCCTTCTTGATAAGCCAATCCGT\nATTAGGTAGATTAAGCTGACAGTCGGGGCGACTCTTTGGAAACAGTATTCCCGTTTCGGGCACCTAGGATTCAGGCTTG\nTACAACGATCATAGACGTCGCGGAAAGAAATAGCACAGTGTAGGAGCTGGTCGTGACCCGTGCTGTCAAGTTTATTGCA\nCGGCTTGCTAAAAGGTACAGTGTAACGTTTCACAAACAAGCGAGACCCATTGTTGGTCTAACGCTATCGTACTTGATAC\nCAGCCTGTGACGTCACGCGAAATCGTCTGTATAACTAGTTCTTCCCCGACTGCCACGGTATCCCAAAATTACATACTGA\nCAGGACCTCTTCCATATTCATCAGGACTCGACGAAGCGCGCCCCGTGTAGTACGCGAAAATTATACCGTCCGTAGGTAC\n\n```\nNow picture this 2 million more times. \nIf you wrapped it in PEP 8-guideline adhering lines of 79 characters, it would still be over 40 million lines (although granted DNA is about as un-Pythonic of code as you can get). And we're not even all that special when it comes to genome length. A lot of plants have us beat, actually, with paris japonica being a top contender for longest genome at 150 billion base pairs.\nSo what does all of that code even do? Well, most of it doesn't appear to \"do\" much. (How much code in your codebase is like that? Although I'd be willing to bet there's a lot of it that we just don't understand what it does yet) But we'll stick with the ~1.5% of it that we have a pretty good idea about.\n\nhttps://en.wikipedia.org/wiki/Central_dogma_of_molecular_biology\nIt's time to learn something called \"The Central Dogma of Molecular Biology,\" since that's a lot cooler-sounding than \"what DNA does.\" Don't worry, we'll keep to the very basics. There are two steps, \"transcription\" and \"translation.\" Transcription basically involves making a copy of a chunk of DNA, except that you only copy one side of it (it looks like half a strand of DNA). We call this copy RNA. Translation involves reading the pattern of the DNA into protein. What is protein? Basically most of what we're made of is either protein or is made by proteins (excluding water, of course). So that's basically how your source code works: your DNA tells your body how to make proteins, which are like parts of a machine. These little parts make little machines, which are part of bigger machines, etc. etc. until all the parts fit together to make you! (But how do the parts know how to fit together you ask? Umm...great question...that we're not going to cover today. Next slide!)\nSequencing DNA",
"YouTubeVideo('fCd6B5HRaZ8', start=135)",
"Ok, now you know something about DNA, so we can start getting into some of the fun puzzles that this leads to. So how do we find out what someone's DNA sequence is? There are several methods, including some newer ones that I'm really excited about, but we'll stick with the most popular here: Illuimina's sequencing by synthesis. It's probably so popular because it's fast and keeps getting cheaper. The reason it's so fast is because it's parallelized. It works by breaking the DNA up into little chunks and then looking at all the chunks at the same time. Basically you have a whole bunch of these fragments on a slide and then flood the slide with a whole bunch of one base (say A). The base is modified slightly so that it fluoresces a certain color when illuminated by a certain wavelength and also so that no other bases can attach after it. Then the excess bases are washed off, the slide is imaged to see which fragments got a base added, and then the fluorescent part of the new addition as well as the part that blocks the next base are removed and the process is repeated with another base (say C). Repeat until you've got most of the fragments.\nSequence Alignment\nNow things start to get fun\nThat's a nice-looking lake...\n\nUntil now!\n\nSo now you've got a whole bunch of little pieces of DNA that you have to match up to a reference sequence. A quick analogy (credit for the analogy/pics to Aaron Quinlan, who has all his slides for a course in applied computational genomics freely available). What makes this lake puzzle so hard? So much blue and white! So we need to find a way to determine how well the piece we have actually matches the picture at that point. We'll call this aligning sequences.\nFirst step: Assign scores\nIf a base matches with itself, it gets a score of one. Otherwise it gets a score of zero.\n||A|C|G|T|\n|-|-|-|-|-|\n|A|1|0|0|0|\n|C|0|1|0|0|\n|G|0|0|1|0|\n|T|0|0|0|1|\nSo these two sequences:\nAACTGTGGTAC\nACTTGTGGAAC\n10011111011\nhave an alignment score of eight.\nThis is going to seem like complete overkill at first, but you'll understand the advantages to doing it this way after a bit. Each base that aligns with itself gets a score of one, any other alignment is zero. But what if these are long segments, could we maybe get a better score by shifting one in reference to another? That's a valid question, but before we can tackle that, I need to introduce you to another unfortunate aspect of sequencing...\nWhat about gaps?\nYes, unfortunately there are going to be gaps.\nWe'll add a \"gap penalty\" of -3.\nSo the score for this alignment:\nACTACA-ACGTTGAC\nA-TAGAAACGCT-AC\n1 1101 11101 11\n-3 -3 -3\nis just one\nSequencing DNA is great, but it's also kind of messy. You may end up with extra bases or missing bases in your sequences. Also, people don't all have the same DNA (unless you're identical twins!) so you may have bases that are actually missing or extra with respect to the \"reference\" sequence. But, gaps are problematic because when they're in a place that codes for a protein (remember earlier?) they are pretty good at making the protein not work. So we want to introduce gaps only when it's a lot better than the alternative. 
For now we'll have a \"gap penalty\" of -3.\nScoring matrix\n||-|A|C|G|T|T|T|G|T|C|G|C|\n|-|\n|-|0|-3|-6|-9|-12|-15|-18|-21|-24|-27|-30|-33|\n|A|-3||||||||||||\n|C|-6||||||||||||\n|T|-9||||||||||||\n|T|-12||||||||||||\n|T|-15||||||||||||\n|C|-18||||||||||||\n|T|-21||||||||||||\n|G|-24||||||||||||\n|C|-27||||||||||||\n$$\nS_{m,n}=\\left{\n \\begin{array}{ll}\n S_{m-1,n} + gap\\\n S_{m,n-1} + gap\\\n S_{m-1,n-1} + B(a,b)\\\n \\end{array}\n\\right.\n$$\n||-|A|C|G|T|T|T|G|T|C|G|C|\n|-|\n|-|0|-3|-6|-9|-12|-15|-18|-21|-24|-27|-30|-33|\n|A|-3|1|-2|-5|-8|-11|-14|-17|-20|-23|-26|-29|\n|C|-6||||||||||||\n|T|-9||||||||||||\n|T|-12||||||||||||\n|T|-15||||||||||||\n|C|-18||||||||||||\n|T|-21||||||||||||\n|G|-24||||||||||||\n|C|-27||||||||||||\nWe need to keep track of where that score came from.\n||-|A|C|G|T|T|T|G|T|C|G|C|\n|-|\n|-|0|←-3|←-6|←-9|←-12|←-15|←-18|←-21|←-24|←-27|←-30|←-33|\n|A|↑-3|↖1|←-2|←-5|←-8|←-11|←-14|←-17|←-20|←-23|←-26|←-29|\n|C|↑-6||||||||||||\n|T|↑-9||||||||||||\n|T|↑-12||||||||||||\n|T|↑-15||||||||||||\n|C|↑-18||||||||||||\n|T|↑-21||||||||||||\n|G|↑-24||||||||||||\n|C|↑-27||||||||||||\n||-|A|C|G|T|T|T|G|T|C|G|C|\n|-|\n|-|0|←-3|←-6|←-9|←-12|←-15|←-18|←-21|←-24|←-27|←-30|←-33|\n|A|↑-3|↖1|←-2|←-5|←-8|←-11|←-14|←-17|←-20|←-23|←-26|←-29|\n|C|↑-6|↑-2|↖2|←1|←-4|←-7|←-10|←-13|←-16|↖-19|←-22|↖-25|\n|T|↑-9|↑-5|↑-1|↖2|↖0|←-3|←-6|←-9|←-12|←-15|←-18|←-21|\n|T|↑-12|↑-8|↑-4|↑-1|↖3|↖1|←-2|←-5|↖-8|←-11|←-14|←-17|\n|T|↑-15|↑-11|↑-7|↖-4|↑0|↖4|↖2|←-1|↖-4|←-7|←-10|←-13|\n|C|↑-18|↑-14|↖-10|↖-7|↑-3|↑1|↖4|↖2|↖-1|↖-3|←-6|←-9|\n|T|↑-21|↑-17|↑-13|↑-10|↑-6|↑-2|↖2|↖4|↖3|←0|↖-3|←-6|\n|G|↑-24|↑-20|↑-16|↖-12|↑-9|↑-5|↑-1|↖3|↖4|↖3|↖1|←-2|\n|C|↑-27|↑-23|↖-19|↑-15|↖-12|↑-8|↑-4|↑0|↖3|↖5|↖3|↖2|\nNow just follow the arrows from that bottom-right corner back to the top-left zero.\n```\nACGTTTGTCGC\n|| ||| | ||\nAC-TTTCT-GC\n```\nSo how do we keep track of where we actually want gaps? We will use a scoring matrix. We start out with one sequence at the top and one on the side with an extra space added at the beginning of each. The two spaces align with a score of zero and we start form there. Each cell gets filled in with whatever ends up the highest of three possibilities:\n1. The score above plus the gap penalty (remember the gap penalty is negative)\n2. The score to the left plus the gap penalty\n3. The score to the upper left plus the alignment score\nWe can fill in the top and left columns right away since they don't have a score to their upper-left, so their only possible score is the gap penalty.\nBut our choice affects the score of everything down and to the right, so besides just the score, we need to keep track of where that score came from.\nOnce it's all filled out, just follow the arrows from the bottom-right corner to the top-left zero. Every time you go straight up, it's a gap in the left sequence. Every time you go left it's a gap in the top sequence. Every time you go up-left, the two bases align.\nLet's code this!\nSee Python for Bioinformatics for the inspiration for this demo.\nHere is the \"substitution matrix\" and its corresponding \"alphabet\":",
"dna_sub_mat = np.array(\n [[ 1, 0, 0, 0],\n [ 0, 1, 0, 0],\n [ 0, 0, 1, 0],\n [ 0, 0, 0, 1]])\n\ndbet = 'ACGT'",
"Same as we defined above, with dbet being the \"dna alphabet\" used for this matrix (four-letter alphabet, could be in a different order, this is just the one we chose)\nAnd here are we calculate the scores and arrows as separate matrices",
"def nw_alignment(sub_mat, abet, seq1, seq2, gap=-8):\n # Get the lengths of the sequences\n seq1_len, seq2_len = len(seq1), len(seq2)\n # Create the scoring and arrow matrices\n score_mat = np.zeros((seq1_len+1, seq2_len+1), int)\n arrow_mat = np.zeros((seq1_len+1, seq2_len+1), int)\n # Fill first column and row of score matrix with scores based on gap penalty\n score_mat[0] = np.arange(seq2_len+1) * gap\n score_mat[:,0] = np.arange(seq1_len+1) * gap\n # Fill top row of arrow matrix with ones (left arrow)\n arrow_mat[0] = np.ones(seq2_len+1)\n for seq1_pos, seq1_letter in enumerate(seq1):\n for seq2_pos, seq2_letter in enumerate(seq2):\n f = np.zeros(3)\n # Cell above + gap penalty\n f[0] = score_mat[seq1_pos,seq2_pos+1] + gap\n # Cell to left + gap penalty\n f[1] = score_mat[seq1_pos+1,seq2_pos] + gap\n n1 = abet.index(seq1_letter)\n n2 = abet.index(seq2_letter)\n # Cell to upper-left + alignment score\n f[2] = score_mat[seq1_pos,seq2_pos] + sub_mat[n1,n2]\n score_mat[seq1_pos+1, seq2_pos+1] = f.max()\n arrow_mat[seq1_pos+1, seq2_pos+1] = f.argmax()\n return score_mat, arrow_mat",
"I'm calling this nw_align after the Needleman-Wunsch algorithm. It's hard to put score and directional information in just one matrix, so we make two matrices. We start with a matrix of zeros for each and then fill them in concurrently as we run through the possibilities.\nWhat does our result look like?",
"s1 = 'ACTTCTGC'\n\ns2 = 'ACGTTTGTCGC'\n\nscore_mat, arrow_mat = nw_alignment(dna_sub_mat, dbet, s1, s2, gap=-3)\nprint(score_mat)\nprint(arrow_mat)",
"Looks good, but not all that useful by itself.\nNow we need a way to get the sequences back out of our scoring matrix:",
"def backtrace(arrow_mat, seq1, seq2):\n align1, align2 = '', ''\n align1_pos, align2_pos = arrow_mat.shape\n align1_pos -= 1\n align2_pos -= 1\n selected = []\n while True:\n selected.append((align1_pos, align2_pos))\n if arrow_mat[align1_pos, align2_pos] == 0:\n # Up arrow, add gap to align2\n align1 += seq1[align1_pos-1]\n align2 += '-'\n align1_pos -= 1\n elif arrow_mat[align1_pos, align2_pos] == 1:\n # Left arrow, add gap to align2\n align1 += '-'\n align2 += seq2[align2_pos-1]\n align2_pos -= 1\n elif arrow_mat[align1_pos, align2_pos] == 2:\n # Up-arrow arrow, no gap\n align1 += seq1[align1_pos-1]\n align2 += seq2[align2_pos-1]\n align1_pos -= 1\n align2_pos -= 1\n if align1_pos==0 and align2_pos==0:\n break\n # reverse the strings\n return align1[::-1], align2[::-1], selected\n\na1, a2, selected = backtrace(arrow_mat, s1, s2)\nprint(a1)\nprint(a2)",
"Sometimes it's nice to see the scoring matrix, though, so here's a function to visualize it",
"def visual_scoring_matrix(seq1, seq2, score_mat, arrow_mat):\n visual_mat = []\n for i, row in enumerate(arrow_mat):\n visual_mat_row = []\n for j, col in enumerate(row):\n if col == 0:\n arrow = '↑'\n elif col == 1:\n arrow = '←'\n else:\n arrow = '↖'\n visual_mat_row.append(arrow + ' ' + str(score_mat[i,j]))\n visual_mat.append(visual_mat_row)\n visual_mat = np.array(visual_mat, object)\n\n tab = plt.table(\n cellText=visual_mat,\n rowLabels=['-'] + list(s1),\n colLabels=['-'] + list(s2),\n loc='center')\n tab.scale(2, 3)\n tab.set_fontsize(30)\n plt.axis('tight')\n plt.axis('off')\n \n align1, align2, selected = backtrace(arrow_mat, seq1, seq2)\n for pos in selected:\n y, x = pos\n tab._cells[(y+1, x)]._text.set_color('green')\n tab._cells[(y+1, x)]._text.set_weight(1000)\n plt.show()\n\nvisual_scoring_matrix(s1, s2, score_mat, arrow_mat)",
"Let's generate some sequences and see how fast this is",
"def random_dna_seq(length=1000):\n seq = [random.choice(dbet) for x in range(length)]\n return ''.join(seq)\n\ndef mutate_dna_seq(seq, chance=1/5):\n mut_seq_base = [random.choice(dbet) if random.random() < chance else x for x in seq]\n mut_seq_indel = [random.choice(('', x + random.choice(dbet))) if random.random() < chance else x for x in mut_seq_base]\n return ''.join(mut_seq_indel)\n\ns1 = random_dna_seq()\ns2 = mutate_dna_seq(s1)\nprint(s1)\nprint(s2)\na = %timeit -o nw_alignment(dna_sub_mat, dbet, s1, s2, gap=-3)\nprint('{:.1f} years for the whole genome'.format(a.average * 2300000000 / 60 / 60 / 24 / 365.25))",
"If we wanted to shift this one position at a time along the whole genome and check the alignments, how long would it take?\nThat's a long time!\nLet's make it faster! (Just because)\nSo in reality, we don't actually want to use this algorithm to align our fragments to the whole genome. It's too slow, and there's no good way to decide which alignment is \"best.\" It's still a good introduction to thinking about these types of problems, though. And since it's actually fairly easy and demonstrates how you can improve your code by understanding your algorithm, we'll do something to make it a bit faster.\n||-|A|C|G|T|T|T|G|T|C|G|C|\n|-|\n|-|0|←-3|←-6|←-9|←-12|←-15|←-18|←-21|←-24|←-27|←-30|←-33|\n|A|↑-3|↖1|←-2||||||||||\n|C|↑-6|↑-2|||||||||||\n|T|↑-9||||||||||||\n|T|↑-12||||||||||||\n|T|↑-15||||||||||||\n|C|↑-18||||||||||||\n|T|↑-21||||||||||||\n|G|↑-24||||||||||||\n|C|↑-27||||||||||||\nWe can't calculate a whole row or column at a time because the values depend on those in the same row/column. But what about diagonals?\nIf you look at the diagonals, you know the values above and to the left, so you have everything you need to calculate your score.\nWe'll \"get rid of\" our nested loop (really just abstract it into a faster numpy \"loop\")\nThis is going to take a couple more steps, but it will be worth it in the end.\nFirst we pre-calculate the \"upper-left score\" for each location.",
"def sub_values(sub_mat, abet, seq1, seq2):\n # convert the sequences to numbers\n seq1_ind = [abet.index(i) for i in seq1]\n seq2_ind = [abet.index(i) for i in seq2]\n sub_vals = np.array([[0] * (len(seq2)+1)] + [[0] + [sub_mat[y, x] for x in seq2_ind] for y in seq1_ind], int)\n return sub_vals\n\nsub_values(dna_sub_mat, dbet, 'AACGTTA', 'AAGCTTAAAAAAAA')",
"Then we get a list of all the diagonals in the matrix.",
"def diags(l1, l2):\n ys = np.array([np.arange(l1) + 1 for i in np.arange(l2)])\n xs = np.array([np.arange(l2) + 1 for i in np.arange(l1)])\n diag_ys = [np.flip(ys.diagonal(i), 0) for i in range(1-l2, l1)]\n diag_xs = [xs.diagonal(i) for i in range(1-l1, l2)]\n index_list = []\n for y, x in zip(diag_ys, diag_xs):\n index_list.append([y, x])\n return index_list\n\ndiags(6, 3)",
"And here's the actual function. It takes the same arguments and produces the same matrices.",
"def FastNW(sub_mat, abet, seq1, seq2, gap=-8):\n sub_vals = sub_values(sub_mat, abet, seq1, seq2)\n # Get the lengths of the sequences\n seq1_len, seq2_len = len(seq1), len(seq2)\n # Create the scoring and arrow matrices\n score_mat = np.zeros((seq1_len+1, seq2_len+1), int)\n arrow_mat = np.zeros((seq1_len+1, seq2_len+1), int)\n # Fill first column and row of score matrix with scores based on gap penalty\n score_mat[0] = np.arange(seq2_len+1) * gap\n score_mat[:,0] = np.arange(seq1_len+1) * gap\n # Fill top row of arrow matrix with ones (left arrow)\n arrow_mat[0] = np.ones(seq2_len+1)\n # Get the list of diagonals\n diag_list = diags(seq1_len, seq2_len)\n # fill in the matrix\n for diag in diag_list:\n # Matrix to hold all three possible scores for every element in the diagonal\n f = np.zeros((3, len(diag[0])), float)\n # Cell above + gap penalty for every cell in the diagonal\n x, y = diag[0]-1, diag[1]\n f[0] = score_mat[x, y] + gap\n # Cell to the left + gap penalty for every cell in the diagonal\n x, y = diag[0], diag[1]-1\n f[1] = score_mat[x, y] + gap\n # Cell to the upper left + alignment score for every cell in the diagonal\n x, y = diag[0]-1, diag[1]-1\n f[2] = score_mat[x,y] + sub_vals[diag]\n max_score = (f.max(0))\n max_score_pos = f.argmax(0)\n score_mat[diag] = max_score\n arrow_mat[diag] = max_score_pos\n return score_mat, arrow_mat\n\nFastNW(dna_sub_mat, dbet, s1, s2)",
"So how much faster is it?",
"s1 = random_dna_seq()\ns2 = mutate_dna_seq(s1)\na = %timeit -o nw_alignment(dna_sub_mat, dbet, s1, s2)\nprint('{:.1f} years for the whole genome'.format(a.average * 2300000000 / 60 / 60 / 24 / 365.25))\na = %timeit -o FastNW(dna_sub_mat, dbet, s1, s2)\nprint('{:.1f} years for the whole genome'.format(a.average * 2300000000 / 60 / 60 / 24 / 365.25))",
"Now why did we use the substitution matrix?\nHere's how DNA translates to protein:\nWikipedia\nSo we can align proteins, too!",
"blosum50 = np.array(\n [[ 5,-2,-1,-2,-1,-1,-1, 0,-2,-1,-2,-1,-1,-3,-1, 1, 0,-3,-2, 0],\n [-2, 7,-1,-2,-1, 1, 0,-3, 0,-4,-3, 3,-2,-3,-3,-1,-1,-3,-1,-3],\n [-1,-1, 7, 2,-2, 0, 0, 0, 1,-3,-4,-0,-2,-4,-2,-1, 0,-4,-2,-3],\n [-2,-2, 2, 8,-4, 0, 2,-1,-1,-4,-4,-1,-4,-5,-1, 0,-1,-5,-3,-4],\n [-1,-4,-2,-4,13,-3,-3,-3,-3,-2,-2,-3,-2,-2,-4,-1,-1,-5,-3,-1],\n [-1,-1, 0, 0,-3, 7, 2,-2, 1,-3,-2, 2, 0,-4,-1,-0,-1,-1,-1,-3],\n [-1, 0, 0, 2,-3, 2, 6,-3, 0,-4,-3, 1,-2,-3,-1,-1,-1,-3,-2,-3],\n [ 0,-3, 0,-1,-3,-2,-3, 8,-2,-4,-4,-2,-3,-4,-2, 0,-2,-3,-3,-4],\n [-2, 0, 1,-1,-3, 1, 0,-2,10,-4,-3, 0,-1,-1,-2,-1,-2,-3,-1, 4],\n [-1,-4,-3,-4,-2,-3,-4,-4,-4, 5, 2,-3, 2, 0,-3,-3,-1,-3,-1, 4],\n [-2,-3,-4,-4,-2,-2,-3,-4,-3, 2, 5,-3, 3, 1,-4,-3,-1,-2,-1, 1],\n [-1, 3, 0,-1,-3, 2, 1,-2, 0,-3,-3, 6,-2,-4,-1, 0,-1,-3,-2,-3],\n [-1,-2,-2,-4,-2, 0,-2,-3,-1, 2, 3,-2, 7, 0,-3,-2,-1,-1, 0, 1],\n [-3,-3,-4,-5,-2,-4,-3,-4,-1, 0, 1,-4, 0, 8,-4,-3,-2, 1, 4,-1],\n [-1,-3,-2,-1,-4,-1,-1,-2,-2,-3,-4,-1,-3,-4,10,-1,-1,-4,-3,-3],\n [ 1,-1, 1, 0,-1, 0,-1, 0,-1,-3,-3, 0,-2,-3,-1, 5, 2,-4,-2,-2],\n [ 0,-1, 0,-1,-1,-1,-1,-2,-2,-1,-1,-1,-1,-2,-1, 2, 5,-3,-2, 0],\n [-3,-3,-4,-5,-5,-1,-3,-3,-3,-3,-2,-3,-1, 1,-4,-4,-3,15, 2,-3],\n [-2,-1,-2,-3,-3,-1,-2,-3, 2,-1,-1,-2, 0, 4,-3,-2,-2, 2, 8,-1],\n [ 0,-3,-3,-4,-1,-3,-3,-4,-4, 4, 1,-3, 1,-1,-3,-2, 0,-3,-1, 5]])\npbet = 'ARNDCQEGHILKMFPSTWYV'\n\ns1 = [random.choice(pbet) for _ in range(10)]\ns2 = [random.choice(pbet) if random.random() < .25 else x for x in s1] + [random.choice(pbet) for _ in range(10)]\nscore_mat, arrow_mat = FastNW(blosum50, pbet, s1, s2)\nvisual_scoring_matrix(s1, s2, score_mat, arrow_mat)\n",
"Other things we can do with this algorithm:\n\nLocal alignment\nAffine gap penalties\n\nOther useful bioinformatics Python packages:",
"from pysam import FastaFile\nfrom os.path import getsize\n\nprint(getsize('Homo_sapiens.GRCh38.dna.primary_assembly.fasta.gz')/1024**2, 'MiB')\n\n\nwith FastaFile('Homo_sapiens.GRCh38.dna.primary_assembly.fasta.gz') as myfasta:\n chr17len = myfasta.get_reference_length('17')\n print(chr17len, 'bp')\n seq = myfasta.fetch('17', int(chr17len/2), int(chr17len/2)+500)\nprint(seq)\n\nwith FastaFile('Homo_sapiens.GRCh38.dna.primary_assembly.fasta.gz') as myfasta:\n chr17len = myfasta.get_reference_length('17')\n %timeit myfasta.fetch('17', int(chr17len/2), int(chr17len/2)+500)\n\nfrom Bio.Seq import Seq\nfrom Bio.Alphabet import IUPAC\ncoding_dna = Seq(\"ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG\", IUPAC.unambiguous_dna)\nprint(coding_dna.__repr__())\nprint(coding_dna.translate().__repr__())",
"Any questions?\nOther possible topics:\nHow do you actually get from reads to something usable?\n\nWhy does bioinformatics use flat files and not databases for everything?\nThe best answer I can give is that of Heng Li when talking about Tabix:\n\nIt is straightforward to achieve overlap queries using the standard B-tree index (with or without binning) implemented in all SQL databases, or the R-tree index in PostgreSQL and Oracle. But there are still many reasons to use tabix. Firstly, tabix directly works with a lot of widely used TAB-delimited formats such as GFF/GTF and BED. We do not need to design database schema or specialized binary formats. Data do not need to be duplicated in different formats, either. Secondly, tabix works on compressed data files while most SQL databases do not. The GenCode annotation GTF can be compressed down to 4%. Thirdly, tabix is fast. The same indexing algorithm is known to work efficiently for an alignment with a few billion short reads. SQL databases probably cannot easily handle data at this scale. Last but not the least, tabix supports remote data retrieval. One can put the data file and the index at an FTP or HTTP server, and other users or even web services will be able to get a slice without downloading the entire file.\n\nWhat things can you do in bioinformatics?\n* Look at variants - which ones matter?\n* Assemble genomes for new organisms\n* Compare genomes of different organisms\n* Biomarker studies:\n * Compare gene expression levels across case and control samples, see if you can find significantly different ones. (It's messy. Also, R is better for this than Python (shhh...))\n* Molecular interaction networks"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
IanHawke/maths-with-python | 05-classes-oop.ipynb | mit | [
"Classes and Object Oriented Programming\nWe have looked at functions which take input and return output (or do things to the input). However, sometimes it is useful to think about objects first rather than the actions applied to them.\nThink about a polynomial, such as the cubic\n$$ p(x) = 12 - 14 x + 2 x^3. $$\nThis is one of the standard forms that we would expect to see for a polynomial. We could imagine representing this in Python using a container containing the coefficients, such as:",
"p_normal = (12, -14, 0, 2)",
"The order of the polynomial is given by the number of coefficients (minus one), which is given by len(p_normal)-1.\nHowever, there are many other ways it could be written, which are useful in different contexts. For example, we are often interested in the roots of the polynomial, so would want to express it in the form\n$$ p(x) = 2 (x - 1)(x - 2)(x + 3). $$\nThis allows us to read off the roots directly. We could imagine representing this in Python using a container containing the roots, such as:",
"p_roots = (1, 2, -3)",
"combined with a single variable containing the leading term,",
"p_leading_term = 2",
"We see that the order of the polynomial is given by the number of roots (and hence by len(p_roots)). This form represents the same polynomial but requires two pieces of information (the roots and the leading coefficient).\nThe different forms are useful for different things. For example, if we want to add two polynomials the standard form makes it straightforward, but the factored form does not. Conversely, multiplying polynomials in the factored form is easy, whilst in the standard form it is not.\nBut the key point is that the object - the polynomial - is the same: the representation may appear different, but it's the object itself that we really care about. So we want to represent the object in code, and work with that object.\nClasses\nPython, and other languages that include object oriented concepts (which is most modern languages) allow you to define and manipulate your own objects. Here we will define a polynomial object step by step.",
"class Polynomial(object):\n explanation = \"I am a polynomial\"\n \n def explain(self):\n print(self.explanation)",
"We have defined a class, which is a single object that will represent a polynomial. We use the keyword class in the same way that we use the keyword def when defining a function. The definition line ends with a colon, and all the code defining the object is indented by four spaces.\nThe name of the object - the general class, or type, of the thing that we're defining - is Polynomial. The convention is that class names start with capital letters, but this convention is frequently ignored.\nThe type of object that we are building on appears in brackets after the name of the object. The most basic thing, which is used most often, is the object type as here.\nClass variables are defined in the usual way, but are only visible inside the class. Variables that are set outside of functions, such as explanation above, will be common to all class variables.\nFunctions are defined inside classes in the usual way (using the def keyword, indented by four additional spaces). They work in a special way: they are not called directly, but only when you have a member of the class. This is what the self keyword does: it takes the specific instance of the class and uses its data. Class functions are often called methods.\nLet's see how this works on a specific example:",
"p = Polynomial()\nprint(p.explanation)\np.explain()\np.explanation = \"I change the string\"\np.explain()",
"The first line, p = Polynomial(), creates an instance of the class. That is, it creates a specific Polynomial. It is assigned to the variable named p. We can access class variables using the \"dot\" notation, so the string can be printed via p.explanation. The method that prints the class variable also uses the \"dot\" notation, hence p.explain(). The self variable in the definition of the function is the instance itself, p. This is passed through automatically thanks to the dot notation.\nNote that we can change class variables in specific instances in the usual way (p.explanation = ... above). This only changes the variable for that instance. To check that, let us define two polynomials:",
"p = Polynomial()\np.explanation = \"Changed the string again\"\nq = Polynomial()\np.explanation = \"Changed the string a third time\"\np.explain()\nq.explain()",
"We can of course make the methods take additional variables. We modify the class (note that we have to completely re-define it each time):",
"class Polynomial(object):\n explanation = \"I am a polynomial\"\n \n def explain_to(self, caller):\n print(\"Hello, {}. {}.\".format(caller,self.explanation))",
"We then use this, remembering that the self variable is passed through automatically:",
"r = Polynomial()\nr.explain_to(\"Alice\")",
"At the moment the class is not doing anything interesting. To do something interesting we need to store (and manipulate) relevant variables. The first thing to do is to add those variables when the instance is actually created. We do this by adding a special function (method) which changes how the variables of type Polynomial are created:",
"class Polynomial(object):\n \"\"\"Representing a polynomial.\"\"\"\n explanation = \"I am a polynomial\"\n \n def __init__(self, roots, leading_term):\n self.roots = roots\n self.leading_term = leading_term\n self.order = len(roots)\n \n def explain_to(self, caller):\n print(\"Hello, {}. {}.\".format(caller,self.explanation))\n print(\"My roots are {}.\".format(self.roots))",
"This __init__ function is called when a variable is created. There are a number of special class functions, each of which has two underscores before and after the name. This is another Python convention that is effectively a rule: functions surrounded by two underscores have special effects, and will be called by other Python functions internally. So now we can create a variable that represents a specific polynomial by storing its roots and the leading term:",
"p = Polynomial(p_roots, p_leading_term)\np.explain_to(\"Alice\")\nq = Polynomial((1,1,0,-2), -1)\nq.explain_to(\"Bob\")",
"It is always useful to have a function that shows what the class represents, and in particular what this particular instance looks like. We can define another method that explicitly displays the Polynomial:",
"class Polynomial(object):\n \"\"\"Representing a polynomial.\"\"\"\n explanation = \"I am a polynomial\"\n \n def __init__(self, roots, leading_term):\n self.roots = roots\n self.leading_term = leading_term\n self.order = len(roots)\n \n def display(self):\n string = str(self.leading_term)\n for root in self.roots:\n if root == 0:\n string = string + \"x\"\n elif root > 0:\n string = string + \"(x - {})\".format(root)\n else:\n string = string + \"(x + {})\".format(-root)\n return string\n \n def explain_to(self, caller):\n print(\"Hello, {}. {}.\".format(caller,self.explanation))\n print(\"My roots are {}.\".format(self.roots))\n\np = Polynomial(p_roots, p_leading_term)\nprint(p.display())\nq = Polynomial((1,1,0,-2), -1)\nprint(q.display())",
"Where classes really come into their own is when we manipulate them as objects in their own right. For example, we can multiply together two polynomials to get another polynomial. We can create a method to do that:",
"class Polynomial(object):\n \"\"\"Representing a polynomial.\"\"\"\n explanation = \"I am a polynomial\"\n \n def __init__(self, roots, leading_term):\n self.roots = roots\n self.leading_term = leading_term\n self.order = len(roots)\n \n def display(self):\n string = str(self.leading_term)\n for root in self.roots:\n if root == 0:\n string = string + \"x\"\n elif root > 0:\n string = string + \"(x - {})\".format(root)\n else:\n string = string + \"(x + {})\".format(-root)\n return string\n \n def multiply(self, other):\n roots = self.roots + other.roots\n leading_term = self.leading_term * other.leading_term\n return Polynomial(roots, leading_term)\n \n def explain_to(self, caller):\n print(\"Hello, {}. {}.\".format(caller,self.explanation))\n print(\"My roots are {}.\".format(self.roots))\n\np = Polynomial(p_roots, p_leading_term)\nq = Polynomial((1,1,0,-2), -1)\nr = p.multiply(q)\nprint(r.display())",
"We now have a simple class that can represent polynomials and multiply them together, whilst printing out a simple string form representing itself. This can obviously be extended to be much more useful.\nExercise: Equivalence classes\nAn equivalence class is a relation that groups objects in a set into related subsets. For example, if we think of the integers modulo $7$, then $1$ is in the same equivalence class as $8$ (and $15$, and $22$, and so on), and $3$ is in the same equivalence class as $10$. We use the tilde $3 \\sim 10$ to denote two objects within the same equivalence class.\nHere, we are going to define the positive integers programmatically from equivalent sequences.\nExercise 1\nDefine a Python class Eqint. This should be\n\nInitialized by a sequence;\nStore the sequence;\nHave a display method that returns a string showing the integer length of the sequence;\nHave an equals method that checks if two Eqints are equal, which is True if, and only if, their sequences have the same length.\n\nExercise 2\nDefine a zero object from the empty list, and three one objects, from a single object list, tuple, and string. For example\npython\none_list = Eqint([1])\none_tuple = Eqint((1,))\none_string = Eqint('1')\nCheck that none of the one objects equal the zero object, but all equal the other one objects. Display each object to check that the representation gives the integer length.\nExercise 3\nRedefine the class by including an add method that combines the two sequences. That is, if a and b are Eqints then a.add(b) should return an Eqint defined from combining a and bs sequences.\nNote\nAdding two different types of sequences (eg, a list to a tuple) does not work, so it is better to either iterate over the sequences, or to convert to a uniform type before adding.\nExercise 4\nCheck your addition function by adding together all your previous Eqint objects (which will need re-defining, as the class has been redefined). Display the resulting object to check you get 3, and also print its internal sequence.\nExercise 5\nWe will sketch a construction of the positive integers from nothing.\n\nDefine an empty list positive_integers.\nDefine an Eqint called zero from the empty list. Append it to positive_integers.\nDefine an Eqint called next_integer from the Eqint defined by a copy of positive_integers (ie, use Eqint(list(positive_integers)). Append it to positive_integers.\nRepeat step 3 as often as needed.\n\nUse this procedure to define the Eqint equivalent to $10$. Print it, and its internal sequence, to check."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
adukic/nd101 | autoencoder/Convolutional_Autoencoder_Solution.ipynb | mit | [
"Convolutional Autoencoder\nSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.",
"%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)\n\nimg = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')",
"Network Architecture\nThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.\n\nHere our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughlt 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.\nWhat's going on with the decoder\nOkay, so the decoder has these \"Upsample\" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but it reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a deconvolutional layer. Deconvolution is often called \"transpose convolution\" which is what you'll find the TensorFlow API, with tf.nn.conv2d_transpose. \nHowever, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.\n\nExercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.",
"inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')\n\n### Encoder\nconv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)\n# Now 28x28x16\nmaxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')\n# Now 14x14x16\nconv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)\n# Now 14x14x8\nmaxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')\n# Now 7x7x8\nconv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)\n# Now 7x7x8\nencoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')\n# Now 4x4x8\n\n### Decoder\nupsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))\n# Now 7x7x8\nconv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)\n# Now 7x7x8\nupsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))\n# Now 14x14x8\nconv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)\n# Now 14x14x8\nupsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))\n# Now 28x28x8\nconv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)\n# Now 28x28x16\n\nlogits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)\n#Now 28x28x1\n\ndecoded = tf.nn.sigmoid(logits, name='decoded')\n\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(0.001).minimize(cost)",
"Training\nAs before, here wi'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.",
"sess = tf.Session()\n\nepochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n imgs = batch[0].reshape((-1, 28, 28, 1))\n batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,\n targets_: imgs})\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))\n\nfig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n\nfig.tight_layout(pad=0.1)\n\nsess.close()",
"Denoising\nAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practive. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.\n\nSince this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.\n\nExercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.",
"inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')\n\n### Encoder\nconv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)\n# Now 28x28x32\nmaxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')\n# Now 14x14x32\nconv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)\n# Now 14x14x32\nmaxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')\n# Now 7x7x32\nconv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)\n# Now 7x7x16\nencoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')\n# Now 4x4x16\n\n### Decoder\nupsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))\n# Now 7x7x16\nconv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)\n# Now 7x7x16\nupsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))\n# Now 14x14x16\nconv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)\n# Now 14x14x32\nupsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))\n# Now 28x28x32\nconv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)\n# Now 28x28x32\n\nlogits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)\n#Now 28x28x1\n\ndecoded = tf.nn.sigmoid(logits, name='decoded')\n\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(0.001).minimize(cost)\n\nsess = tf.Session()\n\nepochs = 100\nbatch_size = 200\n# Set's how much noise we're adding to the MNIST images\nnoise_factor = 0.5\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n # Get images from the batch\n imgs = batch[0].reshape((-1, 28, 28, 1))\n \n # Add random noise to the input images\n noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)\n # Clip the images to be between 0 and 1\n noisy_imgs = np.clip(noisy_imgs, 0., 1.)\n \n # Noisy images as inputs, original images as targets\n batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,\n targets_: imgs})\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))",
"Checking out the performance\nHere I'm adding noise to the test images and passing them through the autoencoder. It does a suprising great job of removing the noise, even though it's sometimes difficult to tell what the original number is.",
"fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nnoisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)\nnoisy_imgs = np.clip(noisy_imgs, 0., 1.)\n\nreconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})\n\nfor images, row in zip([noisy_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tpin3694/tpin3694.github.io | python/pandas_selecting_rows_on_conditions.ipynb | mit | [
"Title: Selecting Pandas DataFrame Rows Based On Conditions\nSlug: pandas_selecting_rows_on_conditions\nSummary: Selecting Pandas DataFrame Rows Based On Conditions\nDate: 2016-05-01 12:00\nCategory: Python\nTags: Data Wrangling\nAuthors: Chris Albon \nPreliminaries",
"# Import modules\nimport pandas as pd\nimport numpy as np\n\n# Create a dataframe\nraw_data = {'first_name': ['Jason', 'Molly', np.nan, np.nan, np.nan], \n 'nationality': ['USA', 'USA', 'France', 'UK', 'UK'], \n 'age': [42, 52, 36, 24, 70]}\ndf = pd.DataFrame(raw_data, columns = ['first_name', 'nationality', 'age'])\ndf",
"Method 1: Using Boolean Variables",
"# Create variable with TRUE if nationality is USA\namerican = df['nationality'] == \"USA\"\n\n# Create variable with TRUE if age is greater than 50\nelderly = df['age'] > 50\n\n# Select all casess where nationality is USA and age is greater than 50\ndf[american & elderly]",
"Method 2: Using variable attributes",
"# Select all cases where the first name is not missing and nationality is USA \ndf[df['first_name'].notnull() & (df['nationality'] == \"USA\")]"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DominikDitoIvosevic/Uni | STRUCE/2018/SU-2018-LAB02-0036477171.ipynb | mit | [
"Sveučilište u Zagrebu\nFakultet elektrotehnike i računarstva \nStrojno učenje 2018/2019\nhttp://www.fer.unizg.hr/predmet/su\n\nLaboratorijska vježba 2: Linearni diskriminativni modeli\nVerzija: 1.2\nZadnji put ažurirano: 26. listopada 2018.\n(c) 2015-2018 Jan Šnajder, Domagoj Alagić \nObjavljeno: 26. listopada 2018.\nRok za predaju: 5. studenog 2018. u 07:00h\n\nUpute\nPrva laboratorijska vježba sastoji se od šest zadataka. U nastavku slijedite upute navedene u ćelijama s tekstom. Rješavanje vježbe svodi se na dopunjavanje ove bilježnice: umetanja ćelije ili više njih ispod teksta zadatka, pisanja odgovarajućeg kôda te evaluiranja ćelija. \nOsigurajte da u potpunosti razumijete kôd koji ste napisali. Kod predaje vježbe, morate biti u stanju na zahtjev asistenta (ili demonstratora) preinačiti i ponovno evaluirati Vaš kôd. Nadalje, morate razumjeti teorijske osnove onoga što radite, u okvirima onoga što smo obradili na predavanju. Ispod nekih zadataka možete naći i pitanja koja služe kao smjernice za bolje razumijevanje gradiva (nemojte pisati odgovore na pitanja u bilježnicu). Stoga se nemojte ograničiti samo na to da riješite zadatak, nego slobodno eksperimentirajte. To upravo i jest svrha ovih vježbi.\nVježbe trebate raditi samostalno. Možete se konzultirati s drugima o načelnom načinu rješavanja, ali u konačnici morate sami odraditi vježbu. U protivnome vježba nema smisla.",
"# Učitaj osnovne biblioteke...\nimport sklearn\nimport mlutils\nimport matplotlib.pyplot as plt\n%pylab inline",
"Zadatci\n1. Linearna regresija kao klasifikator\nU prvoj laboratorijskoj vježbi koristili smo model linearne regresije za, naravno, regresiju. Međutim, model linearne regresije može se koristiti i za klasifikaciju. Iako zvuči pomalo kontraintuitivno, zapravo je dosta jednostavno. Naime, cilj je naučiti funkciju $f(\\mathbf{x})$ koja za negativne primjere predviđa vrijednost $1$, dok za pozitivne primjere predviđa vrijednost $0$. U tom slučaju, funkcija $f(\\mathbf{x})=0.5$ predstavlja granicu između klasa, tj. primjeri za koje vrijedi $h(\\mathbf{x})\\geq 0.5$ klasificiraju se kao pozitivni, dok se ostali klasificiraju kao negativni.\nKlasifikacija pomoću linearne regresije implementirana je u razredu RidgeClassifier. U sljedećim podzadatcima istrenirajte taj model na danim podatcima i prikažite dobivenu granicu između klasa. Pritom isključite regularizaciju ($\\alpha = 0$, odnosno alpha=0). Također i ispišite točnost vašeg klasifikacijskog modela (smijete koristiti funkciju metrics.accuracy_score). Skupove podataka vizualizirajte korištenjem pomoćne funkcije plot_clf_problem(X, y, h=None) koja je dostupna u pomoćnom paketu mlutils (datoteku mlutils.py možete preuzeti sa stranice kolegija). X i y predstavljaju ulazne primjere i oznake, dok h predstavlja funkciju predikcije modela (npr. model.predict). \nU ovom zadatku cilj je razmotriti kako se klasifikacijski model linearne regresije ponaša na linearno odvojim i neodvojivim podatcima.",
"from sklearn.linear_model import LinearRegression, RidgeClassifier\nfrom sklearn.metrics import accuracy_score",
"(a)\nPrvo, isprobajte ugrađeni model na linearno odvojivom skupu podataka seven ($N=7$).",
"seven_X = np.array([[2,1], [2,3], [1,2], [3,2], [5,2], [5,4], [6,3]])\nseven_y = np.array([1, 1, 1, 1, 0, 0, 0])\n\nclf = RidgeClassifier().fit(seven_X, seven_y)\npredicted_y = clf.predict(seven_X)\nscore = accuracy_score(y_pred=predicted_y, y_true=seven_y)\nprint(score)\n\nmlutils.plot_2d_clf_problem(X=seven_X, y=predicted_y, h=None)",
"Kako bi se uvjerili da se u isprobanoj implementaciji ne radi o ničemu doli o običnoj linearnoj regresiji, napišite kôd koji dolazi do jednakog rješenja korištenjem isključivo razreda LinearRegression. Funkciju za predikciju, koju predajete kao treći argument h funkciji plot_2d_clf_problem, možete definirati lambda-izrazom: lambda x : model.predict(x) >= 0.5.",
"\nlr = LinearRegression().fit(seven_X, seven_y)\npredicted_y_2 = lr.predict(seven_X)\n\n\nmlutils.plot_2d_clf_problem(X=seven_X, y=seven_y, h= lambda x : lr.predict(x) >= 0.5)",
"Q: Kako bi bila definirana granica između klasa ako bismo koristili oznake klasa $-1$ i $1$ umjesto $0$ i $1$?\n(b)\nProbajte isto na linearno odvojivom skupu podataka outlier ($N=8$):",
"outlier_X = np.append(seven_X, [[12,8]], axis=0)\noutlier_y = np.append(seven_y, 0)\n\nlr2 = LinearRegression().fit(outlier_X, outlier_y)\npredicted_y_2 = lr2.predict(outlier_X)\n\nmlutils.plot_2d_clf_problem(X=outlier_X, y=outlier_y, h= lambda x : lr2.predict(x) >= 0.5)",
"Q: Zašto model ne ostvaruje potpunu točnost iako su podatci linearno odvojivi?\n(c)\nZavršno, probajte isto na linearno neodvojivom skupu podataka unsep ($N=8$):",
"unsep_X = np.append(seven_X, [[2,2]], axis=0)\nunsep_y = np.append(seven_y, 0)\n\nlr3 = LinearRegression().fit(unsep_X, unsep_y)\npredicted_y_2 = lr3.predict(unsep_X)\n\nmlutils.plot_2d_clf_problem(X=unsep_X, y=unsep_y, h= lambda x : lr3.predict(x) >= 0.5)",
"Q: Očito je zašto model nije u mogućnosti postići potpunu točnost na ovom skupu podataka. Međutim, smatrate li da je problem u modelu ili u podacima? Argumentirajte svoj stav.\n2. Višeklasna klasifikacija\nPostoji više načina kako se binarni klasifikatori mogu se upotrijebiti za višeklasnu klasifikaciju. Najčešće se koristi shema tzv. jedan-naspram-ostali (engl. one-vs-rest, OVR), u kojoj se trenira po jedan klasifikator $h_j$ za svaku od $K$ klasa. Svaki klasifikator $h_j$ trenira se da razdvaja primjere klase $j$ od primjera svih drugih klasa, a primjer se klasificira u klasu $j$ za koju je $h_j(\\mathbf{x})$ maksimalan.\nPomoću funkcije datasets.make_classification generirajte slučajan dvodimenzijski skup podataka od tri klase i prikažite ga koristeći funkciju plot_2d_clf_problem. Radi jednostavnosti, pretpostavite da nema redundantnih značajki te da je svaka od klasa \"zbijena\" upravo u jednu grupu.",
"from sklearn.datasets import make_classification\n\nx, y = sklearn.datasets.make_classification(n_samples=100, n_informative=2, n_redundant=0, n_repeated=0, n_features=2, n_classes=3, n_clusters_per_class=1)\n\n#print(dataset)\nmlutils.plot_2d_clf_problem(X=x, y=y, h=None)",
"Trenirajte tri binarna klasifikatora, $h_1$, $h_2$ i $h_3$ te prikažite granice između klasa (tri grafikona). Zatim definirajte $h(\\mathbf{x})=\\mathrm{argmax}_j h_j(\\mathbf{x})$ (napišite svoju funkciju predict koja to radi) i prikažite granice između klasa za taj model. Zatim se uvjerite da biste identičan rezultat dobili izravno primjenom modela RidgeClassifier, budući da taj model za višeklasan problem zapravo interno implementira shemu jedan-naspram-ostali.\nQ: Alternativna shema jest ona zvana jedan-naspram-jedan (engl, one-vs-one, OVO). Koja je prednost sheme OVR nad shemom OVO? A obratno?",
"fig = plt.figure(figsize=(5,15))\nfig.subplots_adjust(wspace=0.2)\n\ny_ovo1 = [ 0 if i == 0 else 1 for i in y]\nlrOvo1 = LinearRegression().fit(x, y_ovo1)\nfig.add_subplot(3,1,1)\nmlutils.plot_2d_clf_problem(X=x, y=y_ovo1, h= lambda x : lrOvo1.predict(x) >= 0.5)\n\ny_ovo2 = [ 0 if i == 1 else 1 for i in y]\nlrOvo2 = LinearRegression().fit(x, y_ovo2)\nfig.add_subplot(3,1,2)\nmlutils.plot_2d_clf_problem(X=x, y=y_ovo2, h= lambda x : lrOvo2.predict(x) >= 0.5)\n\ny_ovo3 = [ 0 if i == 2 else 1 for i in y]\nlrOvo3 = LinearRegression().fit(x, y_ovo3)\nfig.add_subplot(3,1,3)\nmlutils.plot_2d_clf_problem(X=x, y=y_ovo3, h= lambda x : lrOvo3.predict(x) >= 0.5)\n\n",
"3. Logistička regresija\nOvaj zadatak bavi se probabilističkim diskriminativnim modelom, logističkom regresijom, koja je, unatoč nazivu, klasifikacijski model.\nLogistička regresija tipičan je predstavnik tzv. poopćenih linearnih modela koji su oblika: $h(\\mathbf{x})=f(\\mathbf{w}^\\intercal\\tilde{\\mathbf{x}})$. Logistička funkcija za funkciju $f$ koristi tzv. logističku (sigmoidalnu) funkciju $\\sigma (x) = \\frac{1}{1 + \\textit{exp}(-x)}$.\n(a)\nDefinirajte logističku (sigmoidalnu) funkciju $\\mathrm{sigm}(x)=\\frac{1}{1+\\exp(-\\alpha x)}$ i prikažite je za $\\alpha\\in{1,2,4}$.",
"def sigm(alpha):\n def f(x):\n return 1 / (1 + exp(-alpha*x))\n \n return f\n\nax = list(range(-10, 10))\nay1 = list(map(sigm(1), ax))\nay2 = list(map(sigm(2), ax))\nay3 = list(map(sigm(4), ax))\n\nfig = plt.figure(figsize=(5,15))\np1 = fig.add_subplot(3, 1, 1)\np1.plot(ax, ay1)\np2 = fig.add_subplot(3, 1, 2)\np2.plot(ax, ay2)\np3 = fig.add_subplot(3, 1, 3)\np3.plot(ax, ay3)\n",
"Q: Zašto je sigmoidalna funkcija prikladan izbor za aktivacijsku funkciju poopćenoga linearnog modela? \n</br>\nQ: Kakav utjecaj ima faktor $\\alpha$ na oblik sigmoide? Što to znači za model logističke regresije (tj. kako izlaz modela ovisi o normi vektora težina $\\mathbf{w}$)?\n(b)\nImplementirajte funkciju \n\nlr_train(X, y, eta=0.01, max_iter=2000, alpha=0, epsilon=0.0001, trace=False) \n\nza treniranje modela logističke regresije gradijentnim spustom (batch izvedba). Funkcija uzima označeni skup primjera za učenje (matrica primjera X i vektor oznaka y) te vraća $(n+1)$-dimenzijski vektor težina tipa ndarray. Ako je trace=True, funkcija dodatno vraća listu (ili matricu) vektora težina $\\mathbf{w}^0,\\mathbf{w}^1,\\dots,\\mathbf{w}^k$ generiranih kroz sve iteracije optimizacije, od 0 do $k$. Optimizaciju treba provoditi dok se ne dosegne max_iter iteracija, ili kada razlika u pogrešci unakrsne entropije između dviju iteracija padne ispod vrijednosti epsilon. Parametar alpha predstavlja faktor regularizacije.\nPreporučamo definiranje pomoćne funkcije lr_h(x,w) koja daje predikciju za primjer x uz zadane težine w. Također, preporučamo i funkciju cross_entropy_error(X,y,w) koja izračunava pogrešku unakrsne entropije modela na označenom skupu (X,y) uz te iste težine.\nNB: Obratite pozornost na to da je način kako su definirane oznake (${+1,-1}$ ili ${1,0}$) kompatibilan s izračunom funkcije gubitka u optimizacijskome algoritmu.",
"from sklearn.preprocessing import PolynomialFeatures as PolyFeat\nfrom sklearn.metrics import log_loss\n\ndef loss_function(h_x, y):\n return -y * np.log(h_x) - (1 - y) * np.log(1 - h_x)\n\ndef lr_h(x, w):\n Phi = PolyFeat(1).fit_transform(x.reshape(1,-1))\n return sigm(1)(Phi.dot(w))\n \ndef cross_entropy_error(X, y, w):\n Phi = PolyFeat(1).fit_transform(X)\n return log_loss(y, sigm(1)(Phi.dot(w)))\n\n\ndef lr_train(X, y, eta = 0.01, max_iter = 2000, alpha = 0, epsilon = 0.0001, trace= False):\n w = zeros(shape(X)[1] + 1)\n N = len(X)\n w_trace = [];\n error = epsilon**-1\n \n for i in range(0, max_iter):\n dw0 = 0; dw = zeros(shape(X)[1]);\n new_error = 0\n \n for j in range(0, N):\n h = lr_h(X[j], w)\n dw0 += h - y[j]\n dw += (h - y[j])*X[j]\n \n new_error += loss_function(h, y[j])\n\n if abs(error - new_error) < epsilon: \n print('stagnacija na i = ', i)\n break\n \n else: error = new_error\n \n w[0] -= eta*dw0\n w[1:] = w[1:] * (1-eta*alpha) - eta*dw\n \n w_trace.extend(w)\n \n if trace:\n return w, w_trace\n \n else: return w\n \n",
"(c)\nKoristeći funkciju lr_train, trenirajte model logističke regresije na skupu seven, prikažite dobivenu granicu između klasa te izračunajte pogrešku unakrsne entropije. \nNB: Pripazite da modelu date dovoljan broj iteracija.",
"trained = lr_train(seven_X, seven_y)\nprint(cross_entropy_error(seven_X, seven_y, trained))\nprint(trained)\n\nh3c = lambda x: lr_h(x, trained) > 0.5\n\nfigure()\nmlutils.plot_2d_clf_problem(seven_X, seven_y, h3c)",
"Q: Koji kriterij zaustavljanja je aktiviran?\nQ: Zašto dobivena pogreška unakrsne entropije nije jednaka nuli?\nQ: Kako biste utvrdili da je optimizacijski postupak doista pronašao hipotezu koja minimizira pogrešku učenja? O čemu to ovisi?\nQ: Na koji način biste preinačili kôd ako biste htjeli da se optimizacija izvodi stohastičkim gradijentnim spustom (online learning)?\n(d)\nPrikažite na jednom grafikonu pogrešku unakrsne entropije (očekivanje logističkog gubitka) i pogrešku klasifikacije (očekivanje gubitka 0-1) na skupu seven kroz iteracije optimizacijskog postupka. Koristite trag težina funkcije lr_train iz zadatka (b) (opcija trace=True). Na drugom grafikonu prikažite pogrešku unakrsne entropije kao funkciju broja iteracija za različite stope učenja, $\\eta\\in{0.005,0.01,0.05,0.1}$.",
"from sklearn.metrics import zero_one_loss\n\neta = [0.005, 0.01, 0.05, 0.1]\n[w3d, w3d_trace] = lr_train(seven_X, seven_y, trace=True)\n\n\nPhi = PolyFeat(1).fit_transform(seven_X)\nh_3d = lambda x: x >= 0.5\n\nerror_unakrs = []\nerrror_classy = []\nerrror_eta = []\n\nfor k in range(0, len(w3d_trace), 3):\n error_unakrs.append(cross_entropy_error(seven_X, seven_y, w3d_trace[k:k+3]))\n errror_classy.append(zero_one_loss(seven_y, h_3d(sigm(1)(Phi.dot(w3d_trace[k:k+3])))))\n \nfor i in eta:\n err = []\n [w3, w3_trace] = lr_train(seven_X, seven_y, i, trace=True)\n \n for j in range(0, len(w3_trace), 3):\n err.append(cross_entropy_error(seven_X, seven_y, w3_trace[j:j+3]))\n \n errror_eta.append(err)\n \nfigure(figsize(12, 15))\nsubplots_adjust(wspace=0.1)\nsubplot(2,1,1)\ngrid()\nplot(error_unakrs); plot(errror_classy);\n\nsubplot(2,1,2)\ngrid()\nfor i in range(0, len(eta)):\n plot(errror_eta[i], label = 'eta = ' + str(i))\nlegend(loc = 'best');",
"Q: Zašto je pogreška unakrsne entropije veća od pogreške klasifikacije? Je li to uvijek slučaj kod logističke regresije i zašto?\nQ: Koju stopu učenja $\\eta$ biste odabrali i zašto?\n(e)\nUpoznajte se s klasom linear_model.LogisticRegression koja implementira logističku regresiju. Usporedite rezultat modela na skupu seven s rezultatom koji dobivate pomoću vlastite implementacije algoritma.\nNB: Kako ugrađena implementacija koristi naprednije verzije optimizacije funkcije, vrlo je vjerojatno da Vam se rješenja neće poklapati, ali generalne performanse modela bi trebale. Ponovno, pripazite na broj iteracija i snagu regularizacije.",
"from sklearn.linear_model import LogisticRegression\n\nreg3e = LogisticRegression(max_iter=2000, tol=0.0001, C=0.01**-1, solver='lbfgs').fit(seven_X,seven_y)\nh3e = lambda x : reg3e.predict(x)\n\nfigure(figsize(7, 7))\nmlutils.plot_2d_clf_problem(seven_X,seven_y, h3e)",
"4. Analiza logističke regresije\n(a)\nKoristeći ugrađenu implementaciju logističke regresije, provjerite kako se logistička regresija nosi s vrijednostima koje odskaču. Iskoristite skup outlier iz prvog zadatka. Prikažite granicu između klasa.\nQ: Zašto se rezultat razlikuje od onog koji je dobio model klasifikacije linearnom regresijom iz prvog zadatka?",
" logReg4 = LogisticRegression(solver='liblinear').fit(outlier_X, outlier_y)\n mlutils.plot_2d_clf_problem(X=outlier_X, y=outlier_y, h= lambda x : logReg4.predict(x) >= 0.5)",
"(b)\nTrenirajte model logističke regresije na skupu seven te na dva odvojena grafikona prikažite, kroz iteracije optimizacijskoga algoritma, (1) izlaz modela $h(\\mathbf{x})$ za svih sedam primjera te (2) vrijednosti težina $w_0$, $w_1$, $w_2$.",
"[w4b, w4b_trace] = lr_train(seven_X, seven_y, trace = True)\n\nw0_4b = []; w1_4b = []; w2_4b = [];\n\nfor i in range(0, len(w4b_trace), 3):\n w0_4b.append(w4b_trace[i])\n w1_4b.append(w4b_trace[i+1])\n w2_4b.append(w4b_trace[i+2])\n \nh_gl = []\n\nfor i in range(0, len(seven_X)):\n h = []\n\n for j in range(0, len(w4b_trace), 3):\n h.append(lr_h(seven_X[i], w4b_trace[j:j+3]))\n \n h_gl.append(h)\n\n\nfigure(figsize(7, 14))\nsubplot(2,1,1)\ngrid()\nfor i in range(0, len(h_gl)):\n plot(h_gl[i], label = 'x' + str(i))\n\nlegend(loc = 'best') ;\n \nsubplot(2,1,2)\ngrid()\nplot(w0_4b); plot(w1_4b); plot(w2_4b);\nlegend(['w0', 'w1', 'w2'], loc = 'best');\n ",
"(c)\nPonovite eksperiment iz podzadatka (b) koristeći linearno neodvojiv skup podataka unsep iz prvog zadatka.\nQ: Usporedite grafikone za slučaj linearno odvojivih i linearno neodvojivih primjera te komentirajte razliku.",
"unsep_y = np.append(seven_y, 0)\n[w4c, w4c_trace] = lr_train(unsep_X, unsep_y, trace = True)\n\nw0_4c = []; w1_4c = []; w2_4c = [];\n\nfor i in range(0, len(w4c_trace), 3):\n w0_4c.append(w4c_trace[i])\n w1_4c.append(w4c_trace[i+1])\n w2_4c.append(w4c_trace[i+2])\n \nh_gl = []\n\nfor i in range(0, len(unsep_X)):\n h = []\n\n for j in range(0, len(w4c_trace), 3):\n h.append(lr_h(unsep_X[i], w4c_trace[j:j+3]))\n \n h_gl.append(h)\n \n\nfigure(figsize(7, 14))\nsubplots_adjust(wspace=0.1)\nsubplot(2,1,1)\ngrid()\nfor i in range(0, len(h_gl)):\n plot(h_gl[i], label = 'x' + str(i))\n\nlegend(loc = 'best') ;\n \nsubplot(2,1,2)\ngrid()\nplot(w0_4c); plot(w1_4c); plot(w2_4c);\nlegend(['w0', 'w1', 'w2'], loc = 'best');\n",
"5. Regularizirana logistička regresija\nTrenirajte model logističke regresije na skupu seven s različitim faktorima L2-regularizacije, $\\alpha\\in{0,1,10,100}$. Prikažite na dva odvojena grafikona (1) pogrešku unakrsne entropije te (2) L2-normu vektora $\\mathbf{w}$ kroz iteracije optimizacijskog algoritma.\nQ: Jesu li izgledi krivulja očekivani i zašto?\nQ: Koju biste vrijednost za $\\alpha$ odabrali i zašto?",
"from numpy.linalg import norm\n\nalpha5 = [0, 1, 10, 100]\n\nerr_gl = []; norm_gl = [];\n\nfor a in alpha5:\n [w5, w5_trace] = lr_train(seven_X, seven_y, alpha = a, trace = True)\n err = []; L2_norm = [];\n \n for k in range(0, len(w5_trace), 3):\n err.append(cross_entropy_error(seven_X, seven_y, w5_trace[k:k+3]))\n L2_norm.append(linalg.norm(w5_trace[k:k+1]))\n \n err_gl.append(err)\n norm_gl.append(L2_norm)\n \nfigure(figsize(7, 14))\nsubplot(2,1,1)\ngrid()\nfor i in range(0, len(err_gl)):\n plot(err_gl[i], label = 'alpha = ' + str(alpha5[i]) )\n \nlegend(loc = 'best') ;\n\nsubplot(2,1,2)\ngrid()\nfor i in range(0, len(err_gl)):\n plot(norm_gl[i], label = 'alpha = ' + str(alpha5[i]) )\n \nlegend(loc = 'best');\n\n\n ",
"6. Logistička regresija s funkcijom preslikavanja\nProučite funkciju datasets.make_classification. Generirajte i prikažite dvoklasan skup podataka s ukupno $N=100$ dvodimenzijskih ($n=2)$ primjera, i to sa dvije grupe po klasi (n_clusters_per_class=2). Malo je izgledno da će tako generiran skup biti linearno odvojiv, međutim to nije problem jer primjere možemo preslikati u višedimenzijski prostor značajki pomoću klase preprocessing.PolynomialFeatures, kao što smo to učinili kod linearne regresije u prvoj laboratorijskoj vježbi. Trenirajte model logističke regresije koristeći za preslikavanje u prostor značajki polinomijalnu funkciju stupnja $d=2$ i stupnja $d=3$. Prikažite dobivene granice između klasa. Možete koristiti svoju implementaciju, ali se radi brzine preporuča koristiti linear_model.LogisticRegression. Regularizacijski faktor odaberite po želji.\nNB: Kao i ranije, za prikaz granice između klasa koristite funkciju plot_2d_clf_problem. Funkciji kao argumente predajte izvorni skup podataka, a preslikavanje u prostor značajki napravite unutar poziva funkcije h koja čini predikciju, na sljedeći način:",
"from sklearn.preprocessing import PolynomialFeatures\n\n[x6, y6] = make_classification(n_samples=100, n_features=2, n_redundant=0, n_classes=2, n_clusters_per_class=2)\n\nfigure(figsize(7, 5))\nmlutils.plot_2d_clf_problem(x6, y6)\n\nd = [2,3]\nj = 1\nfigure(figsize(12, 4))\nsubplots_adjust(wspace=0.1)\nfor i in d:\n subplot(1,2,j)\n poly = PolynomialFeatures(i)\n Phi = poly.fit_transform(x6)\n\n model = LogisticRegression(solver='lbfgs')\n model.fit(Phi, y6)\n h = lambda x : model.predict(poly.transform(x))\n\n mlutils.plot_2d_clf_problem(x6, y6, h)\n title('d = ' + str(i))\n j += 1\n\n\n# Vaš kôd ovdje...",
"Q: Koji biste stupanj polinoma upotrijebili i zašto? Je li taj odabir povezan s odabirom regularizacijskog faktora $\\alpha$? Zašto?"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session04/Day0/TooBriefMLSolutions.ipynb | mit | [
"Introduction to Machine Learning:\nExamples of Unsupervised and Supervised Machine-Learning Algorithms\nVersion 0.1\nBroadly speaking, machine-learning methods constitute a diverse collection of data-driven algorithms designed to classify/characterize/analyze sources in multi-dimensional spaces. The topics and studies that fall under the umbrella of machine learning is growing, and there is no good catch-all definition. The number (and variation) of algorithms is vast, and beyond the scope of these exercises. While we will discuss a few specific algorithms today, more importantly, we will explore the scope of the two general methods: unsupervised learning and supervised learning and introduce the powerful (and dangerous?) Python package scikit-learn.\n\nBy AA Miller\n2017 September 16",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Problem 1) Introduction to scikit-learn\nAt the most basic level, scikit-learn makes machine learning extremely easy within python. By way of example, here is a short piece of code that builds a complex, non-linear model to classify sources in the Iris data set that we learned about earlier:\nfrom sklearn import datasets\nfrom sklearn.ensemble import RandomForestClassifier\niris = datasets.load_iris()\nRFclf = RandomForestClassifier().fit(iris.data, iris.target)\n\nThose 4 lines of code have constructed a model that is superior to any system of hard cuts that we could have encoded while looking at the multidimensional space. This can be fast as well: execute the dummy code in the cell below to see how \"easy\" machine-learning is with scikit-learn.",
"# execute dummy code here\n\nfrom sklearn import datasets\nfrom sklearn.ensemble import RandomForestClassifier\niris = datasets.load_iris()\nRFclf = RandomForestClassifier().fit(iris.data, iris.target)",
"Generally speaking, the procedure for scikit-learn is uniform across all machine-learning algorithms. Models are accessed via the various modules (ensemble, SVM, neighbors, etc), with user-defined tuning parameters. The features (or data) for the models are stored in a 2D array, X, with rows representing individual sources and columns representing the corresponding feature values. [In a minority of cases, X, represents a similarity or distance matrix where each entry represents the distance to every other source in the data set.] In cases where there is a known classification or scalar value (typically supervised methods), this information is stored in a 1D array y. \nUnsupervised models are fit by calling .fit(X) and supervised models are fit by calling .fit(X, y). In both cases, predictions for new observations, Xnew, can be obtained by calling .predict(Xnew). Those are the basics and beyond that, the details are algorithm specific, but the documentation for essentially everything within scikit-learn is excellent, so read the docs.\nTo further develop our intuition, we will now explore the Iris dataset a little further.\nProblem 1a What is the pythonic type of iris?",
"type(iris)",
"You likely haven't encountered a scikit-learn Bunch before. It's functionality is essentially the same as a dictionary. \nProblem 1b What are the keys of iris?",
"iris.keys()",
"Most importantly, iris contains data and target values. These are all you need for scikit-learn, though the feature and target names and description are useful.\nProblem 1c What is the shape and content of the iris data?",
"print(np.shape(iris.data))\nprint(iris.data)",
"Problem 1d What is the shape and content of the iris target?",
"print(np.shape(iris.target))\nprint(iris.target)",
"Finally, as a baseline for the exercises that follow, we will now make a simple 2D plot showing the separation of the 3 classes in the iris dataset. This plot will serve as the reference for examining the quality of the clustering algorithms. \nProblem 1e Make a scatter plot showing sepal length vs. sepal width for the iris data set. Color the points according to their respective classes.",
"print(iris.feature_names) # shows that sepal length is first feature and sepal width is second feature\n\nplt.scatter(iris.data[:,0], iris.data[:,1], c = iris.target, s = 30, edgecolor = \"None\", cmap = \"viridis\")\nplt.xlabel('sepal length')\nplt.ylabel('sepal width')",
"Problem 2) Unsupervised Machine Learning\nUnsupervised machine learning, sometimes referred to as clustering or data mining, aims to group or classify sources in the multidimensional feature space. The \"unsupervised\" comes from the fact that there are no target labels provided to the algorithm, so the machine is asked to cluster the data \"on its own.\" The lack of labels means there is no (simple) method for validating the accuracy of the solution provided by the machine (though sometimes simple examination can show the results are terrible). \nFor this reason [note - this is my (AAM) opinion and there many be many others who disagree], unsupervised methods are not particularly useful for astronomy. Supposing one did find some useful clustering structure, an adversarial researcher could always claim that the current feature space does not accurately capture the physics of the system and as such the clustering result is not interesting or, worse, erroneous. The one potentially powerful exception to this broad statement is outlier detection, which can be a branch of both unsupervised and supervised learning. Finding weirdo objects is an astronomical pastime, and there are unsupervised methods that may help in that regard in the LSST era. \nTo begin today we will examine one of the most famous, and simple, clustering algorithms: $k$-means. $k$-means clustering looks to identify $k$ convex clusters, where $k$ is a user defined number. And here-in lies the rub: if we truly knew the number of clusters in advance, we likely wouldn't need to perform any clustering in the first place. This is the major downside to $k$-means. Operationally, pseudocode for the algorithm can be summarized as the following: \ninitiate search by identifying k points (i.e. the cluster centers)\nloop \n assign each point in the data set to the closest cluster center\n calculate new cluster centers based on mean position of all points within cluster\n if diff(new center - old center) < threshold:\n stop (i.e. clusters are defined)\n\nThe threshold is defined by the user, though in some cases the total number of iterations is also. An advantage of $k$-means is that the solution will always converge, though the solution may only be a local minimum. Disadvantages include the assumption of convexity, i.e. difficult to capture complex geometry, and the curse of dimensionality.\nIn scikit-learn the KMeans algorithm is implemented as part of the sklearn.cluster module. \nProblem 2a Fit two different $k$-means models to the iris data, one with 2 clusters and one with 3 clusters. Plot the resulting clusters in the sepal length-sepal width plane (same plot as above). How do the results compare to the true classifications?",
"from sklearn.cluster import KMeans\n\nKcluster = KMeans(n_clusters = 2)\nKcluster.fit(iris.data)\n\nplt.figure()\nplt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster.labels_, s = 30, edgecolor = \"None\", cmap = \"viridis\")\nplt.xlabel('sepal length')\nplt.ylabel('sepal width')\n\nKcluster = KMeans(n_clusters = 3)\nKcluster.fit(iris.data)\n\nplt.figure()\nplt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster.labels_, s = 30, edgecolor = \"None\", cmap = \"viridis\")\nplt.xlabel('sepal length')\nplt.ylabel('sepal width')",
"With 3 clusters the algorithm does a good job of separating the three classes. However, without the a priori knowledge that there are 3 different types of iris, the 2 cluster solution would appear to be superior. \nProblem 2b How do the results change if the 3 cluster model is called with n_init = 1 and init = 'random' options? Use rs for the random state [this allows me to cheat in service of making a point].\n*Note - the respective defaults for these two parameters are 10 and k-means++, respectively. Read the docs to see why these choices are, likely, better than those in 2b.",
"rs = 14\nKcluster1 = KMeans(n_clusters = 3, n_init = 1, init = 'random', random_state = rs)\nKcluster1.fit(iris.data)\n\nplt.figure()\nplt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster1.labels_, s = 30, edgecolor = \"None\", cmap = \"viridis\")\nplt.xlabel('sepal length')\nplt.ylabel('sepal width')",
"A random aside that is not particularly relevant here\n$k$-means evaluates the Euclidean distance between individual sources and cluster centers, thus, the magnitude of the individual features has a strong effect on the final clustering outcome. \nProblem 2c Calculate the mean, standard deviation, min, and max of each feature in the iris data set. Based on these summaries, which feature is most important for clustering?",
"print(\"feature\\t\\t\\tmean\\tstd\\tmin\\tmax\")\nfor featnum, feat in enumerate(iris.feature_names):\n print(\"{:s}\\t{:.2f}\\t{:.2f}\\t{:.2f}\\t{:.2f}\".format(feat, np.mean(iris.data[:,featnum]), \n np.std(iris.data[:,featnum]), np.min(iris.data[:,featnum]),\n np.max(iris.data[:,featnum])))",
"Petal length has the largest range and standard deviation, thus, it will have the most \"weight\" when determining the $k$ clusters. \nThe truth is that the iris data set is fairly small and straightfoward. Nevertheless, we will now examine the clustering results after re-scaling the features. [Some algorithms, cough Support Vector Machines cough, are notoriously sensitive to the feature scaling, so it is important to know about this step.] Imagine you are classifying stellar light curves: the data set will include contact binaries with periods of $\\sim 0.1 \\; \\mathrm{d}$ and Mira variables with periods of $\\gg 100 \\; \\mathrm{d}$. Without re-scaling, this feature that covers 4 orders of magnitude may dominate all others in the final model projections.\nThe two most common forms of re-scaling are to rescale to a guassian with mean $= 0$ and variance $= 1$, or to rescale the min and max of the feature to $[0, 1]$. The best normalization is problem dependent. The sklearn.preprocessing module makes it easy to re-scale the feature set. It is essential that the same scaling used for the training set be used for all other data run through the model. The testing, validation, and field observations cannot be re-scaled independently. This would result in meaningless final classifications/predictions. \nProblem 2d Re-scale the features to normal distributions, and perform $k$-means clustering on the iris data. How do the results compare to those obtained earlier? \nHint - you may find 'StandardScaler()' within the sklearn.preprocessing module useful.",
"from sklearn.preprocessing import StandardScaler\n\nscaler = StandardScaler().fit(iris.data)\n\nKcluster = KMeans(n_clusters = 3)\nKcluster.fit(scaler.transform(iris.data))\n\nplt.figure()\nplt.scatter(iris.data[:,0], iris.data[:,1], c = Kcluster.labels_, s = 30, edgecolor = \"None\", cmap = \"viridis\")\nplt.xlabel('sepal length')\nplt.ylabel('sepal width')",
"These results are almost identical to those obtained without scaling. This is due to the simplicity of the iris data set. \nHow do I test the accuracy of my clusters?\nEssentially - you don't. There are some methods that are available, but they essentially compare clusters to labeled samples, and if the samples are labeled it is likely that supervised learning is more useful anyway. If you are curious, scikit-learn does provide some built-in functions for analyzing clustering, but again, it is difficult to evaluate the validity of any newly discovered clusters. \nWhat if I don't know how many clusters are present in the data?\nAn excellent question, as you will almost never know this a priori. Many algorithms, like $k$-means, do require the number of clusters to be specified, but some other methods do not. As an example DBSCAN. In brief, DBSCAN requires two parameters: minPts, the minimum number of points necessary for a cluster, and $\\epsilon$, a distance measure. Clusters are grown by identifying core points, objects that have at least minPts located within a distance $\\epsilon$. Reachable points are those within a distance $\\epsilon$ of at least one core point but less than minPts core points. Identically, these points define the outskirts of the clusters. Finally, there are also outliers which are points that are $> \\epsilon$ away from any core points. Thus, DBSCAN naturally identifies clusters, does not assume clusters are convex, and even provides a notion of outliers. The downsides to the algorithm are that the results are highly dependent on the two tuning parameters, and that clusters of highly different densities can be difficult to recover (because $\\epsilon$ and minPts is specified for all clusters. \nIn scitkit-learn the \nDBSCAN algorithm is part of the sklearn.cluster module. $\\epsilon$ and minPts are set by eps and min_samples, respectively. \nProblem 2e Cluster the iris data using DBSCAN. Play around with the tuning parameters to see how they affect the final clustering results. How does the use of DBSCAN compare to $k$-means? Can you obtain 3 clusters with DBSCAN? If not, given the knowledge that the iris dataset has 3 classes - does this invalidate DBSCAN as a viable algorithm?\nNote - DBSCAN labels outliers as $-1$, and thus, plt.scatter(), will plot all these points as the same color.",
"# execute this cell\n\nfrom sklearn.cluster import DBSCAN\n\ndbs = DBSCAN(eps = 0.7, min_samples = 7)\ndbs.fit(scaler.transform(iris.data)) # best to use re-scaled data since eps is in absolute units\n\ndbs_outliers = dbs.labels_ == -1\n\nplt.figure()\nplt.scatter(iris.data[:,0], iris.data[:,1], c = dbs.labels_, s = 30, edgecolor = \"None\", cmap = \"viridis\")\nplt.scatter(iris.data[:,0][dbs_outliers], iris.data[:,1][dbs_outliers], s = 30, c = 'k')\n\n\nplt.xlabel('sepal length')\nplt.ylabel('sepal width')",
"I was unable to obtain 3 clusters with DBSCAN. While these results are, on the surface, worse than what we got with $k$-means, my suspicion is that the 4 features do not adequately separate the 3 classes. [See - a nayseyer can always make that argument.] This is not a problem for DBSCAN as an algorithm, but rather, evidence that no single algorithm works well in all cases. \nChallenge Problem) Cluster SDSS Galaxy Data\nThe following query will select 10k likely galaxies from the SDSS database and return the results of that query into an astropy.Table object. (For now, if you are not familiar with the SDSS DB schema, don't worry about this query, just know that it returns a bunch of photometric features.)",
"from astroquery.sdss import SDSS # enables direct queries to the SDSS database\n\nGALquery = \"\"\"SELECT TOP 10000 \n p.dered_u - p.dered_g as ug, p.dered_g - p.dered_r as gr, \n p.dered_g - p.dered_i as gi, p.dered_g - p.dered_z as gz, \n p.petroRad_i, p.petroR50_i, p.deVAB_i\n FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid\n WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND p.type = 3\n \"\"\"\nSDSSgals = SDSS.query_sql(GALquery)\nSDSSgals",
"I have used my own domain knowledge to specifically choose features that may be useful when clustering galaxies. If you know a bit about SDSS and can think of other features that may be useful feel free to add them to the query. \nOne nice feature of astropy tables is that they can readily be turned into pandas DataFrames, which can in turn easily be turned into a sklearn X array with NumPy. For example: \nX = np.array(SDSSgals.to_pandas())\n\nAnd you are ready to go. \nChallenge Problem Using the SDSS dataset above, identify interesting clusters within the data [this is intentionally very open ended, if you uncover anything especially exciting you'll have a chance to share it with the group]. Feel free to use the algorithms discussed above, or any other packages available via sklearn. Can you make sense of the clusters in the context of galaxy evolution? \nHint - don't fret if you know nothing about galaxy evolution (neither do I!). Just take a critical look at the clusters that are identified",
"Xgal = np.array(SDSSgals.to_pandas())\n\ngalScaler = StandardScaler().fit(Xgal)\n\ndbs = DBSCAN(eps = .25, min_samples=55)\n\ndbs.fit(galScaler.transform(Xgal))\n\ncluster_members = dbs.labels_ != -1\noutliers = dbs.labels_ == -1\n\nplt.figure(figsize = (10,8))\nplt.scatter(Xgal[:,0][outliers], Xgal[:,3][outliers], \n c = \"k\", \n s = 4, alpha = 0.1)\nplt.scatter(Xgal[:,0][cluster_members], Xgal[:,3][cluster_members], \n c = dbs.labels_[cluster_members], \n alpha = 0.4, edgecolor = \"None\", cmap = \"viridis\")\n\nplt.xlim(-1,5)\nplt.ylim(-0,3.5)",
"Note - I was unable to get the galaxies to clusster using DBSCAN.\nProblem 3) Supervised Machine Learning\nSupervised machine learning, on the other hand, aims to predict a target class or produce a regression result based on the location of labelled sources (i.e. the training set) in the multidimensional feature space. The \"supervised\" comes from the fact that we are specifying the allowed outputs from the model. As there are labels available for the training set, it is possible to estimate the accuracy of the model (though there are generally important caveats about generalization, which we will explore in further detail later).\nWe will begin with a simple, but nevertheless, elegant algorithm for classification and regression: $k$-nearest-neighbors ($k$NN). In brief, the classification or regression output is determined by examining the $k$ nearest neighbors in the training set, where $k$ is a user defined number. Typically, though not always, distances between sources are Euclidean, and the final classification is assigned to whichever class has a plurality within the $k$ nearest neighbors (in the case of regression, the average of the $k$ neighbors is the output from the model). We will experiment with the steps necessary to optimize $k$, and other tuning parameters, in the detailed break-out problem.\nIn scikit-learn the KNeighborsClassifer algorithm is implemented as part of the sklearn.neighbors module. \nProblem 3a \nFit two different $k$NN models to the iris data, one with 3 neighbors and one with 10 neighbors. Plot the resulting class predictions in the sepal length-sepal width plane (same plot as above). How do the results compare to the true classifications? Is there any reason to be suspect of this procedure?\nHint - after you have constructed the model, it is possible to obtain model predictions using the .predict() method, which requires a feature array, including the same features and order as the training set, as input.\nHint that isn't essential, but is worth thinking about - should the features be re-scaled in any way?",
"from sklearn.neighbors import KNeighborsClassifier\n\nKNNclf = KNeighborsClassifier(n_neighbors = 3).fit(iris.data, iris.target)\npreds = KNNclf.predict(iris.data)\nplt.figure()\nplt.scatter(iris.data[:,0], iris.data[:,1], \n c = preds, cmap = \"viridis\", s = 30, edgecolor = \"None\")\n\nKNNclf = KNeighborsClassifier(n_neighbors = 10).fit(iris.data, iris.target)\npreds = KNNclf.predict(iris.data)\nplt.figure()\nplt.scatter(iris.data[:,0], iris.data[:,1], \n c = preds, cmap = \"viridis\", s = 30, edgecolor = \"None\")",
"These results are almost identical to the training classifications. However, we have cheated! In this case we are evaluating the accuracy of the model (98% in this case) using the same data that defines the model. Thus, what we have really evaluated here is the training error. The relevant parameter, however, is the generalization error: how accurate are the model predictions on new data? \nWithout going into too much detail, we will test this using cross validation (CV). In brief, CV provides predictions on the training set using a subset of the data to generate a model that predicts the class of the remaining sources. Using cross_val_predict, we can get a better sense of the model accuracy. Predictions from cross_val_predict are produced in the following manner:\nfrom sklearn.cross_validation import cross_val_predict\nCVpreds = cross_val_predict(sklearn.model(), X, y)\n\nwhere sklearn.model() is the desired model, X is the feature array, and y is the label array.\nProblem 3b \nProduce cross-validation predictions for the iris dataset and a $k$NN with 5 neighbors. Plot the resulting classifications, as above, and estimate the accuracy of the model as applied to new data. How does this accuracy compare to a $k$NN with 50 neighbors?",
"from sklearn.cross_validation import cross_val_predict\n\nCVpreds = cross_val_predict(KNeighborsClassifier(n_neighbors=5), iris.data, iris.target)\nplt.figure()\nplt.scatter(iris.data[:,0], iris.data[:,1], \n c = preds, cmap = \"viridis\", s = 30, edgecolor = \"None\")\nprint(\"The accuracy of the kNN = 5 model is ~{:.4}\".format( sum(CVpreds == iris.target)/len(CVpreds) ))\n\nCVpreds50 = cross_val_predict(KNeighborsClassifier(n_neighbors=50), iris.data, iris.target)\n\nprint(\"The accuracy of the kNN = 50 model is ~{:.4}\".format( sum(CVpreds50 == iris.target)/len(CVpreds50) ))",
"While it is useful to understand the overall accuracy of the model, it is even more useful to understand the nature of the misclassifications that occur. \nProblem 3c \nCalculate the accuracy for each class in the iris set, as determined via CV for the $k$NN = 50 model.",
"for iris_type in range(3):\n iris_acc = sum( (CVpreds50 == iris_type) & (iris.target == iris_type)) / sum(iris.target == iris_type)\n\n print(\"The accuracy for class {:s} is ~{:.4f}\".format(iris.target_names[iris_type], iris_acc))\n",
"We just found that the classifier does a much better job classifying setosa and versicolor than it does for virginica. The main reason for this is some viginica flowers lie far outside the main virginica locus, and within predominantly versicolor \"neighborhoods\". In addition to knowing the accuracy for the individual classes, it is also useful to know class predictions for the misclassified sources, or in other words where there is \"confusion\" for the classifier. The best way to summarize this information is with a confusion matrix. In a confusion matrix, one axis shows the true class and the other shows the predicted class. For a perfect classifier all of the power will be along the diagonal, while confusion is represented by off-diagonal signal. \nLike almost everything else we have encountered during this exercise, scikit-learn makes it easy to compute a confusion matrix. This can be accomplished with the following: \nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, y_prep)\n\nProblem 3d \nCalculate the confusion matrix for the iris training set and the $k$NN = 50 model.",
"from sklearn.metrics import confusion_matrix\ncm = confusion_matrix(iris.target, CVpreds50)\nprint(cm)",
"From this representation, we see right away that most of the virginica that are being misclassifed are being scattered into the versicolor class. However, this representation could still be improved: it'd be helpful to normalize each value relative to the total number of sources in each class, and better still, it'd be good to have a visual representation of the confusion matrix. This visual representation will be readily digestible. Now let's normalize the confusion matrix.\nProblem 3e \nCalculate the normalized confusion matrix. Be careful, you have to sum along one axis, and then divide along the other. \nAnti-hint: This operation is actually straightforward using some array manipulation that we have not covered up to this point. Thus, we have performed the necessary operations for you below. If you have extra time, you should try to develop an alternate way to arrive at the same normalization.",
"normalized_cm = cm.astype('float')/cm.sum(axis = 1)[:,np.newaxis]\n\nnormalized_cm",
"The normalization makes it easier to compare the classes, since each class has a different number of sources. Now we can procede with a visual representation of the confusion matrix. This is best done using imshow() within pyplot. You will also need to plot a colorbar, and labeling the axes will also be helpful. \nProblem 3f \nPlot the confusion matrix. Be sure to label each of the axeses.\nHint - you might find the sklearn confusion matrix tutorial helpful for making a nice plot.",
"plt.imshow(normalized_cm, interpolation = 'nearest', cmap = 'bone_r')# complete\n\ntick_marks = np.arange(len(iris.target_names))\nplt.xticks(tick_marks, iris.target_names, rotation=45)\nplt.yticks(tick_marks, iris.target_names)\n\n\nplt.ylabel( 'True')# complete\nplt.xlabel( 'Predicted' )# complete\nplt.colorbar()\nplt.tight_layout()",
"Now it is straight-forward to see that virginica and versicolor flowers are the most likely to be confused, which we could intuit from the very first plot in this notebook, but this exercise becomes far more important for large data sets with many, many classes. \nThus concludes our introduction to scikit-learn and supervised and unsupervised learning."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
imatge-upc/activitynet-2016-cvprw | notebooks/16 Visualization of Results.ipynb | mit | [
"Generate some validation videos random, download them from the server and then use them to visualize the results.",
"import random\nimport os\nimport numpy as np\nfrom work.dataset.activitynet import ActivityNetDataset\n\ndataset = ActivityNetDataset(\n videos_path='../dataset/videos.json',\n labels_path='../dataset/labels.txt'\n)\nvideos = dataset.get_subset_videos('validation')\nvideos = random.sample(videos, 8)\n\nexamples = []\nfor v in videos:\n file_dir = os.path.join('../downloads/features/', v.features_file_name)\n if not os.path.isfile(file_dir):\n os.system('scp imatge:~/work/datasets/ActivityNet/v1.3/features/{} ../downloads/features/'.format(v.features_file_name))\n features = np.load(file_dir)\n examples.append((v, features))",
"Load the trained model with its weigths",
"from keras.layers import Input, BatchNormalization, LSTM, TimeDistributed, Dense\nfrom keras.models import Model\n\ninput_features = Input(batch_shape=(1, 1, 4096,), name='features')\ninput_normalized = BatchNormalization(mode=1)(input_features)\nlstm1 = LSTM(512, return_sequences=True, stateful=True, name='lstm1')(input_normalized)\nlstm2 = LSTM(512, return_sequences=True, stateful=True, name='lstm2')(lstm1)\noutput = TimeDistributed(Dense(201, activation='softmax'), name='fc')(lstm2)\nmodel = Model(input=input_features, output=output)\nmodel.load_weights('../work/scripts/training/lstm_activity_classification/model_snapshot/lstm_activity_classification_02_e100.hdf5')\nmodel.summary()\nmodel.compile(loss='categorical_crossentropy', optimizer='rmsprop')",
"Extract the predictions for each video and print the scoring",
"predictions = []\nfor v, features in examples:\n nb_instances = features.shape[0]\n X = features.reshape((nb_instances, 1, 4096))\n model.reset_states()\n prediction = model.predict(X, batch_size=1)\n prediction = prediction.reshape(nb_instances, 201)\n class_prediction = np.argmax(prediction, axis=1)\n predictions.append((v, prediction, class_prediction))",
"Print the global classification results",
"from IPython.display import YouTubeVideo, display\n\nfor v, prediction, class_prediction in predictions:\n print('Video ID: {}\\t\\tGround truth: {}'.format(v.video_id, v.get_activity()))\n class_means = np.mean(prediction, axis=0)\n top_3 = np.argsort(class_means[1:])[::-1][:3] + 1\n scores = class_means[top_3]/np.sum(class_means[1:])\n for index, score in zip(top_3, scores):\n if score == 0.:\n continue\n label = dataset.labels[index][1]\n print('{:.4f}\\t{}'.format(score, label))\n vid = YouTubeVideo(v.video_id)\n display(vid)\n print('\\n')\n \n ",
"Now show the temporal prediction for the activity happening at the video.",
"import matplotlib.pyplot as plt\n%matplotlib inline\nimport matplotlib\nnormalize = matplotlib.colors.Normalize(vmin=0, vmax=201)\n\nfor v, prediction, class_prediction in predictions:\n v.get_video_instances(16, 0)\n ground_truth = np.array([instance.output for instance in v.instances])\n nb_instances = len(v.instances)\n \n print('Video ID: {}\\nMain Activity: {}'.format(v.video_id, v.get_activity()))\n plt.figure(num=None, figsize=(18, 1), dpi=100)\n plt.contourf(np.broadcast_to(ground_truth, (2, nb_instances)), norm=normalize, interpolation='nearest')\n plt.title('Ground Truth')\n plt.show()\n \n plt.figure(num=None, figsize=(18, 1), dpi=100)\n plt.contourf(np.broadcast_to(class_prediction, (2, nb_instances)), norm=normalize, interpolation='nearest')\n plt.title('Prediction')\n plt.show()\n\n print('\\n')\n\nnormalize = matplotlib.colors.Normalize(vmin=0, vmax=1)\n\nfor v, prediction, class_prediction in predictions:\n v.get_video_instances(16, 0)\n ground_truth = np.array([instance.output for instance in v.instances])\n nb_instances = len(v.instances)\n output_index = dataset.get_output_index(v.label)\n \n print('Video ID: {}\\nMain Activity: {}'.format(v.video_id, v.get_activity()))\n\n class_means = np.mean(prediction, axis=0)\n top_3 = np.argsort(class_means[1:])[::-1][:3] + 1\n scores = class_means[top_3]/np.sum(class_means[1:])\n for index, score in zip(top_3, scores):\n if score == 0.:\n continue\n label = dataset.labels[index][1]\n print('{:.4f}\\t{}'.format(score, label))\n \n plt.figure(num=None, figsize=(18, 1), dpi=100)\n plt.contourf(np.broadcast_to(ground_truth/output_index, (2, nb_instances)), norm=normalize, interpolation='nearest')\n plt.title('Ground Truth')\n plt.show()\n \n # print only the positions that predicted the global ground truth category\n temp = np.zeros((nb_instances))\n temp[class_prediction==output_index] = 1\n plt.figure(num=None, figsize=(18, 1), dpi=100)\n plt.contourf(np.broadcast_to(temp, (2, nb_instances)), norm=normalize, interpolation='nearest')\n plt.title('Prediction of the ground truth class')\n plt.show()\n \n plt.figure(num=None, figsize=(18, 1), dpi=100)\n plt.contourf(np.broadcast_to(prediction[:,output_index], (2, nb_instances)), norm=normalize, interpolation='nearest')\n plt.title('Probability for ground truth')\n plt.show()\n\n print('\\n')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mercybenzaquen/foundations-homework | databases_hw/db04/Homework_4.ipynb | mit | [
"Homework #4\nThese problem sets focus on list comprehensions, string operations and regular expressions.\nProblem set #1: List slices and list comprehensions\nLet's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str:",
"numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'",
"In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').",
"new_list = numbers_str.split(\",\") \n\nnumbers = [int(item) for item in new_list]\n\nmax(numbers)",
"Great! We'll be using the numbers list you created above in the next few problems.\nIn the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:\n[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]\n\n(Hint: use a slice.)",
"#len(numbers)\nsorted(numbers)[10:]",
"In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:\n[120, 171, 258, 279, 528, 699, 804, 855]",
"sorted([item for item in numbers if item % 3 == 0])",
"Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output:\n[2.6457513110645907, 8.06225774829855, 8.246211251235321]\n\n(These outputs might vary slightly depending on your platform.)",
"from math import sqrt\n# your code here\nsquared = []\n\nfor item in numbers:\n if item < 100:\n squared_numbers = sqrt(item)\n squared.append(squared_numbers)\nsquared\n",
"Problem set #2: Still more list comprehensions\nStill looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.",
"planets = [\n {'diameter': 0.382,\n 'mass': 0.06,\n 'moons': 0,\n 'name': 'Mercury',\n 'orbital_period': 0.24,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 0.949,\n 'mass': 0.82,\n 'moons': 0,\n 'name': 'Venus',\n 'orbital_period': 0.62,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 1.00,\n 'mass': 1.00,\n 'moons': 1,\n 'name': 'Earth',\n 'orbital_period': 1.00,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 0.532,\n 'mass': 0.11,\n 'moons': 2,\n 'name': 'Mars',\n 'orbital_period': 1.88,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 11.209,\n 'mass': 317.8,\n 'moons': 67,\n 'name': 'Jupiter',\n 'orbital_period': 11.86,\n 'rings': 'yes',\n 'type': 'gas giant'},\n {'diameter': 9.449,\n 'mass': 95.2,\n 'moons': 62,\n 'name': 'Saturn',\n 'orbital_period': 29.46,\n 'rings': 'yes',\n 'type': 'gas giant'},\n {'diameter': 4.007,\n 'mass': 14.6,\n 'moons': 27,\n 'name': 'Uranus',\n 'orbital_period': 84.01,\n 'rings': 'yes',\n 'type': 'ice giant'},\n {'diameter': 3.883,\n 'mass': 17.2,\n 'moons': 14,\n 'name': 'Neptune',\n 'orbital_period': 164.8,\n 'rings': 'yes',\n 'type': 'ice giant'}]",
"Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output:\n['Jupiter', 'Saturn', 'Uranus']",
"[item['name'] for item in planets if item['diameter'] > 2]\n#I got one more planet!\n",
"In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79",
"#sum([int(item['mass']) for item in planets])\nsum([item['mass'] for item in planets])",
"Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:\n['Jupiter', 'Saturn', 'Uranus', 'Neptune']",
"import re\n\nplanet_with_giant= [item['name'] for item in planets if re.search(r'\\bgiant\\b', item['type'])]\n\nplanet_with_giant",
"EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:\n['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']\n\nProblem set #3: Regular expressions\nIn the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed.",
"import re\npoem_lines = ['Two roads diverged in a yellow wood,',\n 'And sorry I could not travel both',\n 'And be one traveler, long I stood',\n 'And looked down one as far as I could',\n 'To where it bent in the undergrowth;',\n '',\n 'Then took the other, as just as fair,',\n 'And having perhaps the better claim,',\n 'Because it was grassy and wanted wear;',\n 'Though as for that the passing there',\n 'Had worn them really about the same,',\n '',\n 'And both that morning equally lay',\n 'In leaves no step had trodden black.',\n 'Oh, I kept the first for another day!',\n 'Yet knowing how way leads on to way,',\n 'I doubted if I should ever come back.',\n '',\n 'I shall be telling this with a sigh',\n 'Somewhere ages and ages hence:',\n 'Two roads diverged in a wood, and I---',\n 'I took the one less travelled by,',\n 'And that has made all the difference.']",
"In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.\nIn the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \\b anchor. Don't overthink the \"two words in a row\" requirement.)\nExpected result:\n['Then took the other, as just as fair,',\n 'Had worn them really about the same,',\n 'And both that morning equally lay',\n 'I doubted if I should ever come back.',\n 'I shall be telling this with a sigh']",
"[item for item in poem_lines if re.search(r'\\b[a-zA-Z]{4}\\b \\b[a-zA-Z]{4}\\b', item)]",
"Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:\n['And be one traveler, long I stood',\n 'And looked down one as far as I could',\n 'And having perhaps the better claim,',\n 'Though as for that the passing there',\n 'In leaves no step had trodden black.',\n 'Somewhere ages and ages hence:']",
"[item for item in poem_lines if re.search(r'\\b[a-zA-Z]{5}\\b.?$',item)]",
"Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.",
"all_lines = \" \".join(poem_lines)",
"Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:\n['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']",
"re.findall(r'[I] (\\b\\w+\\b)', all_lines)",
"Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.",
"entrees = [\n \"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95\",\n \"Lavender and Pepperoni Sandwich $8.49\",\n \"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v\",\n \"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v\",\n \"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95\",\n \"Rutabaga And Cucumber Wrap $8.49 - v\"\n]",
"You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.\nExpected output:\n[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',\n 'price': 10.95,\n 'vegetarian': False},\n {'name': 'Lavender and Pepperoni Sandwich ',\n 'price': 8.49,\n 'vegetarian': False},\n {'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',\n 'price': 12.95,\n 'vegetarian': True},\n {'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',\n 'price': 9.95,\n 'vegetarian': True},\n {'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',\n 'price': 19.95,\n 'vegetarian': False},\n {'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]\nGreat work! You are done. Go cavort in the sun, or whatever it is you students do when you're done with your homework",
"menu = []\n\nfor item in entrees:\n entrees_dictionary= {}\n match = re.search(r'(.*) .(\\d*\\d\\.\\d{2})\\ ?( - v+)?$', item)\n \n if match:\n name = match.group(1)\n price= match.group(2)\n #vegetarian= match.group(3)\n if match.group(3):\n entrees_dictionary['vegetarian']= True\n else:\n entrees_dictionary['vegetarian']= False\n \n entrees_dictionary['name']= name\n entrees_dictionary['price']= price\n \n menu.append(entrees_dictionary)\n\nmenu\n\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
hbjornoy/DataAnalysis | Backup (not final delivery)/Homework 1.ipynb | apache-2.0 | [
"Table of Contents\n<p><div class=\"lev1\"><a href=\"#Task-1.-Compiling-Ebola-Data\"><span class=\"toc-item-num\">Task 1. </span>Compiling Ebola Data</a></div>\n <div class=\"lev1\"><a href=\"#Task-2.-RNA-Sequences\"><span class=\"toc-item-num\">Task 2. </span>RNA Sequences</a></div>\n <div class=\"lev1\"><a href=\"#Task-3.-Class-War-in-Titanic\"><span class=\"toc-item-num\">Task 3. </span>Class War in Titanic</a></div></p>",
"# Imports\n%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport glob\nimport csv\nimport calendar\nimport webbrowser\nfrom datetime import datetime\n\n# Constants\nDATA_FOLDER = 'Data/'\n",
"Task 1. Compiling Ebola Data\nThe DATA_FOLDER/ebola folder contains summarized reports of Ebola cases from three countries (Guinea, Liberia and Sierra Leone) during the recent outbreak of the disease in West Africa. For each country, there are daily reports that contain various information about the outbreak in several cities in each country.\nUse pandas to import these data files into a single Dataframe.\nUsing this DataFrame, calculate for each country, the daily average per month of new cases and deaths.\nMake sure you handle all the different expressions for new cases and deaths that are used in the reports.",
"'''\nFunctions needed to solve task 1\n'''\n\n#function to import excel file into a dataframe\ndef importdata(path,date):\n allpathFiles = glob.glob(DATA_FOLDER+path+'/*.csv')\n list_data = []\n for file in allpathFiles:\n excel = pd.read_csv(file,parse_dates=[date])\n list_data.append(excel)\n return pd.concat(list_data)\n\n#function to add the month on a new column of a DataFrame\ndef add_month(df):\n copy_df = df.copy()\n months = [calendar.month_name[x.month] for x in copy_df.Date]\n copy_df['Month'] = months\n return copy_df\n\n#founction which loc only the column within a country and a specified month\n#return a dataframe\ndef chooseCountry_month(dataframe,country,descr,month):\n df = dataframe.loc[(dataframe['Country']==country) & (dataframe['Description']==descr)]\n #df = add_month(df)\n df_month = df.loc[(df['Month']==month)]\n return df_month\n\n# Create a dataframe with the number of death, the new cases and the daily infos for a country and a specified month \ndef getmonthresults(dataframe,country,month):\n if country =='Liberia':\n descr_kill ='Total death/s in confirmed cases'\n descr_cases ='Total confirmed cases'\n if country =='Guinea':\n descr_kill ='Total deaths of confirmed'\n descr_cases ='Total cases of confirmed'\n if country == 'Sierra Leone': \n descr_kill ='death_confirmed'\n descr_cases ='cum_confirmed'\n \n df_kill = chooseCountry_month(dataframe,country,descr_kill,month)\n df_cases = chooseCountry_month(dataframe,country,descr_cases,month)\n \n #calculate the number of new cases and of new deaths for the all month\n res_kill = int(df_kill.iloc[len(df_kill)-1].Totals)-int(df_kill.iloc[0].Totals)\n res_cases = int(df_cases.iloc[len(df_cases)-1].Totals)-int(df_cases.iloc[0].Totals)\n #calculate the number of days counted which is last day of register - first day of register\n nb_day = df_kill.iloc[len(df_kill)-1].Date.day-df_kill.iloc[0].Date.day \n \n\n # Sometimes the values in the dataframe are wrong due to the excelfiles which are not all the same!\n # We then get negative results. Therefor we replace them all by NaN ! 
\n if(res_cases < 0)&(res_kill <0):\n monthreport = pd.DataFrame({'New cases':[np.nan],'Deaths':[np.nan],'daily average of New cases':[np.nan],'daily average of Deaths':[np.nan],'month':[month],'Country':[country]})\n elif(res_cases >= 0) &( res_kill <0):\n monthreport = pd.DataFrame({'New cases':[res_cases],'Deaths':[np.nan],'daily average of New cases':[res_cases/nb_day],'daily average of Deaths':[np.nan],'month':[month],'Country':[country]})\n elif(res_cases < 0) & (res_kill >= 0):\n monthreport = pd.DataFrame({'New cases':[np.nan],'Deaths':[res_kill],'daily average of New cases':[np.nan],'daily average of Deaths':[res_kill/nb_day],'month':[month],'Country':[country]})\n elif(nb_day == 0):\n monthreport = pd.DataFrame({'New cases':'notEnoughdatas','Deaths':'notEnoughdatas','daily average of New cases':'notEnoughdatas','daily average of Deaths':'notEnoughdatas','month':[month],'Country':[country]})\n else: \n monthreport = pd.DataFrame({'New cases':[res_cases],'Deaths':[res_kill],'daily average of New cases':[res_cases/nb_day],'daily average of Deaths':[res_kill/nb_day],'month':[month],'Country':[country]})\n return monthreport\n\n#check if the month and the country is in the dataframe df\ndef checkData(df,month,country):\n check = df.loc[(df['Country']==country)& (df['Month']== month)]\n return check\n\n#return a dataframe with all the infos(daily new cases, daily death) for each month and each country\ndef getResults(data):\n Countries = ['Guinea','Liberia','Sierra Leone']\n Months = ['January','February','March','April','May','June','July','August','September','October','November','December']\n results=[]\n compteur =0\n for country in Countries:\n for month in Months:\n if not(checkData(data,month,country).empty) : #check if the datas for the month and country exist \n res = getmonthresults(data,country,month)\n results.append(res) \n return pd.concat(results)\n \n\n\n# import data from guinea\npath_guinea = 'Ebola/guinea_data/'\ndata_guinea = importdata(path_guinea,'Date')\n\n# set the new order / change the columns / keep only the relevant datas / add the name of the country\ndata_guinea = data_guinea[['Date', 'Description','Totals']]\ndata_guinea['Country'] = ['Guinea']*len(data_guinea)\n\n#search for New cases and death!! \n#descr(newcases): \"Total cases of confirmed\" // descr(deaths): \"Total deaths of confirmed\"\ndata_guinea = data_guinea.loc[(data_guinea.Description=='Total cases of confirmed')|(data_guinea.Description=='Total deaths of confirmed')]\n\n \n#import data from liberia\npath_liberia = 'Ebola/liberia_data/'\ndata_liberia = importdata(path_liberia,'Date')\n# set the new order / change the columns / keep only the relevant datas / add the name of the country\ndata_liberia = data_liberia[['Date', 'Variable','National']]\ndata_liberia['Country'] = ['Liberia']*len(data_liberia)\n\n#search for New cases and death!! 
\n#descr(newcases): \"Total confirmed cases\" // descr(deaths): \"Total death/s in confirmed cases\" \ndata_liberia = data_liberia.loc[(data_liberia.Variable=='Total confirmed cases')|(data_liberia.Variable=='Total death/s in confirmed cases')]\n\n#change the name of the columns to be able merge the 3 data sets\ndata_liberia = data_liberia.rename(columns={'Date': 'Date', 'Variable': 'Description','National':'Totals'})\n\n \n#import data from sierra leonne\npath_sl = 'Ebola/sl_data/'\ndata_sl = importdata(path_sl,'date')\n# set the new order / change the columns / keep only the relevant datas / add the name of the country\ndata_sl = data_sl[['date', 'variable','National']]\ndata_sl['Country'] = ['Sierra Leone']*len(data_sl)\n\n#search for new cases and death \n#descr(newcases): \"cum_confirmed\" // descr(deaths): \"death_confirmed\"\ndata_sl = data_sl.loc[(data_sl.variable=='cum_confirmed')|(data_sl.variable=='death_confirmed')]\n#change the name of the columns to be able merge the 3 data sets\ndata_sl = data_sl.rename(columns={'date': 'Date', 'variable': 'Description','National':'Totals'})\n\n\n#merge the 3 dataframe into ONE which we'll apply our analysis\ndataFrame = [data_guinea,data_liberia,data_sl]\ndata = pd.concat(dataFrame)\n\n# Replace the NaN by 0;\ndata = data.fillna(0)\n#add a column with the month\ndata = add_month(data)\n\n#get the results from the data set -> see the function\nresults = getResults(data)\n\n#print the resuults\nresults",
"Task 2. RNA Sequences\nIn the DATA_FOLDER/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10<sup>th</sup> file that describes the content of each. \nUse pandas to import the first 9 spreadsheets into a single DataFrame.\nThen, add the metadata information from the 10<sup>th</sup> spreadsheet as columns in the combined DataFrame.\nMake sure that the final DataFrame has a unique index and all the NaN values have been replaced by the tag unknown.",
"Sheet10_Meta = pd.read_excel(DATA_FOLDER +'microbiome/metadata.xls') \nallFiles = glob.glob(DATA_FOLDER + 'microbiome' + \"/MID*.xls\")\nallFiles",
"Creating and filling the DataFrame\nIn order to iterate only once over the data folder, we will attach the metadata to each excel spreadsheet right after creating a DataFrame with it. This will allow the code to be shorter and clearer, but also to iterate only once on every line and therefore be more efficient.",
"#Creating an empty DataFrame to store our data and initializing a counter.\nCombined_data = pd.DataFrame()\nK = 0\nwhile (K < int(len(allFiles))):\n \n #Creating a DataFrame and filling it with the excel's data\n df = pd.read_excel(allFiles[K], header=None)\n \n #Getting the metadata of the corresponding spreadsheet\n df['BARCODE'] = Sheet10_Meta.at[int(K), 'BARCODE']\n df['GROUP'] = Sheet10_Meta.at[int(K), 'GROUP']\n df['SAMPLE'] = Sheet10_Meta.at[int(K),'SAMPLE']\n \n #Append the recently created DataFrame to our combined one\n Combined_data = Combined_data.append(df)\n \n K = K + 1\n \n#Renaming the columns with meaningfull names\nCombined_data.columns = ['Name', 'Value','BARCODE','GROUP','SAMPLE']\nCombined_data.head()",
"3. Cleaning and reindexing\n\nAt first we get rid of the NaN value, we must replace them by \"unknown\". In order to have a more meaningful and single index, we will reset it to be the name of the RNA sequence.",
"#Replacing the NaN values with unknwown\nCombined_data = Combined_data.fillna('unknown')\n\n#Reseting the index\nCombined_data = Combined_data.set_index('Name')\n\n#Showing the result\nCombined_data",
"Task 3. Class War in Titanic\nUse pandas to import the data file Data/titanic.xls. It contains data on all the passengers that travelled on the Titanic.\nFor each of the following questions state clearly your assumptions and discuss your findings:\n\nDescribe the type and the value range of each attribute. Indicate and transform the attributes that can be Categorical. \nPlot histograms for the travel class, embarkation port, sex and age attributes. For the latter one, use discrete decade intervals. \nCalculate the proportion of passengers by cabin floor. Present your results in a pie chart.\nFor each travel class, calculate the proportion of the passengers that survived. Present your results in pie charts.\nCalculate the proportion of the passengers that survived by travel class and sex. Present your results in a single histogram.\nCreate 2 equally populated age categories and calculate survival proportions by age category, travel class and sex. Present your results in a DataFrame with unique index.\n\nQuestion 3.1\nDescribe the type and the value range of each attribute. Indicate and transform the attributes that can be Categorical.\nAssumptions: \n - \"For each exercise, please provide both a written explanation of the steps you will apply to manipulate the data, and the corresponding code.\" We assume that \"written explanation can come in the form of commented code as well as text\"\n - We assume that we must not describe the value range of attributes that contain string as we dont feel the length of strings or ASCI-values don't give any insight",
"''' \nHere is a sample of the information in the titanic dataframe\n''' \n\n# Importing titanic.xls info with Pandas\ntitanic = pd.read_excel('Data/titanic.xls')\n\n# printing only the 30 first and last rows of information\nprint(titanic.head)\n\n'''\nTo describe the INTENDED values and types of the data we will show you the titanic.html file that was provided to us\nNotice:\n - 'age' is of type double, so someone can be 17.5 years old, mostly used with babies that are 0.x years old\n - 'cabin' is stored as integer, but it har characters and letters\n - By this model, embarked is stored as an integer, witch has to be interpreted as the 3 different embarkation ports\n - It says that 'boat' is stored as a integer even though it has spaces and letters, it should be stored as string\n \nPS: it might be that the information stored as integer is supposed to be categorical data,\n ...because they have a \"small\" amount of valid options\n''' \n\n# Display html info in Jupyter Notebook\nfrom IPython.core.display import display, HTML\nhtmlFile = 'Data/titanic.html'\ndisplay(HTML(htmlFile))\n\n\n''' \nThe default types of the data after import:\nNotice:\n - the strings and characters are imported as objects\n - 'survived' is imported as int instead of double (which is in our opinion better since it's only 0 and 1\n - 'sex' is imported as object not integer because it is a string\n'''\n\ntitanic.dtypes\n\n''' \nBelow you can see the value range of the different numerical values.\n\nname, sex, ticket, cabin, embarked, boat and home.dest is not included because they can't be quantified numerically.\n''' \n\ntitanic.describe()\n\n\n'''\nAdditional information that is important to remember when manipulation the data\nis if/where there are NaN values in the dataset\n'''\n\n# This displays the number of NaN there is in different attributes\nprint(pd.isnull(titanic).sum())\n\n'''\nSome of this data is missing while some is meant to describe 'No' or something of meaning.\nExample:\n Cabin has 1014 NaN in its column, it might be that every passenger had a cabin and the data is missing.\n Or it could mean that most passengers did not have a cabin or a mix. The displayed titanic.html file \n give us some insight if it is correct. It says that there are 0 NaN in the column. This indicates that\n there are 1014 people without a cabin. Boat has also 823 NaN's, while the titanic lists 0 NaN's. \n It is probably because most of those who died probably weren't in a boat.\n'''\n\n'''\nWhat attributes should be stored as categorical information?\n\nCategorical data is essentially 8-bit integers which means it can store up to 2^8 = 256 categories\nBenefit is that it makes memory usage lower and it has a performance increase in calculations.\n'''\n\nprint('Number of unique values in... 
:')\nfor attr in titanic:\n print(\" {attr}: {u}\".format(attr=attr, u=len(titanic[attr].unique())))\n\n'''\nWe think it will be smart to categorize: 'pclass', 'survived', 'sex', 'cabin', 'embarked' and 'boat'\nbecause they have under 256 categories and don't have a strong numerical value like 'age'\n'survived' is a bordercase because it might be more practical to work with integers in some settings\n'''\n\n# changing the attributes to categorical data\ntitanic.pclass = titanic.pclass.astype('category')\ntitanic.survived = titanic.survived.astype('category')\ntitanic.sex = titanic.sex.astype('category')\ntitanic.cabin = titanic.cabin.astype('category')\ntitanic.embarked = titanic.embarked.astype('category')\ntitanic.boat = titanic.boat.astype('category')\n\n#Illustrate the change by printing out the new types\ntitanic.dtypes",
"Question 3.2\n\"Plot histograms for the travel class, embarkation port, sex and age attributes. For the latter one, use discrete decade intervals. \"\nassumptions:",
"\n#Plotting the ratio different classes(1st, 2nd and 3rd class) the passengers have\npc = titanic.pclass.value_counts().sort_index().plot(kind='bar')\npc.set_title('Travel classes')\npc.set_ylabel('Number of passengers')\npc.set_xlabel('Travel class')\npc.set_xticklabels(('1st class', '2nd class', '3rd class'))\nplt.show(pc)\n\n#Plotting the amount of people that embarked from different cities(C=Cherbourg, Q=Queenstown, S=Southampton)\nem = titanic.embarked.value_counts().sort_index().plot(kind='bar')\nem.set_title('Ports of embarkation')\nem.set_ylabel('Number of passengers')\nem.set_xlabel('Port of embarkation')\nem.set_xticklabels(('Cherbourg', 'Queenstown', 'Southampton'))\nplt.show(em)\n\n#Plotting what sex the passengers are\nsex = titanic.sex.value_counts().plot(kind='bar')\nsex.set_title('Gender of the passengers')\nsex.set_ylabel('Number of Passengers')\nsex.set_xlabel('Gender')\nsex.set_xticklabels(('Female', 'Male'))\nplt.show(sex)\n\n#Plotting agegroup of passengers\nbins = [0,10,20,30,40,50,60,70,80]\nage_grouped = pd.DataFrame(pd.cut(titanic.age, bins))\nag = age_grouped.age.value_counts().sort_index().plot.bar()\nag.set_title('Age of Passengers ')\nag.set_ylabel('Number of passengers')\nag.set_xlabel('Age groups')\nplt.show(ag)\n",
"Question 3.3\nCalculate the proportion of passengers by cabin floor. Present your results in a pie chart.\nassumptions: \n- Because we are tasked with categorizing persons by the floor of their cabin it was problematic that you had cabin input: \"F E57\" and \"F G63\". There were only 7 of these instances with conflicting cabinfloors. We also presumed that the was a floor \"T\". Even though there was only one instance, so it might have been a typo.\n- We assume that you don't want to include people without cabinfloor",
"'''\nParsing the cabinfloor, into floors A, B, C, D, E, F, G, T and display in a pie chart\n\n'''\n#Dropping NaN (People without cabin)\ncabin_floors = titanic.cabin.dropna()\n\n# removes digits and spaces\ncabin_floors = cabin_floors.str.replace(r'[\\d ]+', '')\n# removes duplicate letters and leave unique (CC -> C) (FG -> G)\ncabin_floors = cabin_floors.str.replace(r'(.)(?=.*\\1)', '')\n# removes ambigous data from the dataset (FE -> NaN)(FG -> NaN)\ncabin_floors = cabin_floors.str.replace(r'([A-Z]{1})\\w+', 'NaN' )\n\n# Recategorizing (Since we altered the entries, we messed with the categories)\ncabin_floors = cabin_floors.astype('category')\n# Removing NaN (uin this case ambigous data)\ncabin_floors = cabin_floors.cat.remove_categories('NaN')\ncabin_floors = cabin_floors.dropna()\n\n# Preparing data for plt.pie\nnumberOfCabinPlaces = cabin_floors.count()\ngrouped = cabin_floors.groupby(cabin_floors).count()\nsizes = np.array(grouped)\nlabels = np.array(grouped.index)\n\n# Plotting the pie chart\nplt.pie(sizes, labels=labels, autopct='%1.1f%%', pctdistance=0.75, labeldistance=1.1)\nprint(\"There are {cabin} passengers that have cabins and {nocabin} passengers without a cabin\"\n .format(cabin=numberOfCabinPlaces, nocabin=(len(titanic) - numberOfCabinPlaces)))",
"Question 3.4\nFor each travel class, calculate the proportion of the passengers that survived. Present your results in pie charts.\nassumptions:",
"# function that returns the number of people that survived and died given a specific travelclass\ndef survivedPerClass(pclass):\n survived = len(titanic.survived[titanic.survived == 1][titanic.pclass == pclass])\n died = len(titanic.survived[titanic.survived == 0][titanic.pclass == pclass])\n return [survived, died]\n\n# Fixing the layout horizontal\nthe_grid = plt.GridSpec(1, 3)\nlabels = [\"Survived\", \"Died\"]\n\n# Each iteration plots a pie chart\nfor p in titanic.pclass.unique():\n sizes = survivedPerClass(p)\n plt.subplot(the_grid[0, p-1], aspect=1 )\n plt.pie(sizes, labels=labels, autopct='%1.1f%%')\n \nplt.show()",
"Question 3.5\n\"Calculate the proportion of the passengers that survived by travel class and sex. Present your results in a single histogram.\"\nassumptions: \n 1. By \"proportions\" We assume it is a likelyhood-percentage of surviving",
"# group by selected data and get a count for each category\nsurvivalrate = titanic.groupby(['pclass', 'sex', 'survived']).size()\n\n# calculate percentage\nsurvivalpercentage = survivalrate.groupby(level=['pclass', 'sex']).apply(lambda x: x / x.sum() * 100)\n\n# plotting in a histogram\nhistogram = survivalpercentage.filter(like='1', axis=0).plot(kind='bar')\nhistogram.set_title('Proportion of the passengers that survived by travel class and sex')\nhistogram.set_ylabel('Percent likelyhood of surviving titanic')\nhistogram.set_xlabel('class/gender group')\nplt.show(histogram)",
"Question 3.6\n\"Create 2 equally populated age categories and calculate survival proportions by age category, travel class and sex. Present your results in a DataFrame with unique index.\"\nassumptions: \n1. By \"proportions\" we assume it is a likelyhood-percentage of surviving\n2. To create 2 equally populated age categories; we will find the median and round up from the median to nearest whole year difference before splitting.",
"#drop NaN rows\nage_without_nan = titanic.age.dropna()\n\n#categorizing\nage_categories = pd.qcut(age_without_nan, 2, labels=[\"Younger\", \"Older\"])\n\n#Numbers to explain difference\nmedian = int(np.float64(age_without_nan.median()))\namount = int(age_without_nan[median])\nprint(\"The Median age is {median} years old\".format(median = median))\nprint(\"and there are {amount} passengers that are {median} year old \\n\".format(amount=amount, median=median))\n\nprint(age_categories.groupby(age_categories).count())\nprint(\"\\nAs you can see the pd.qcut does not cut into entirely equal sized bins, because the age is of a discreet nature\")\n\n\n# imported for the sake of surpressing some warnings\nimport warnings\nwarnings.filterwarnings('ignore')\n\n# extract relevant attributes\ncsas = titanic[['pclass', 'sex', 'age', 'survived']]\ncsas.dropna(subset=['age'], inplace=True)\n\n# Defining the categories\ncsas['age_group'] = csas.age > csas.age.median()\ncsas['age_group'] = csas['age_group'].map(lambda age_category: 'older' if age_category else \"younger\")\n\n# Converting to int to make it able to aggregate and give percentage\ncsas.survived = csas.survived.astype(int)\n\ng_categories = csas.groupby(['pclass', 'age_group', 'sex'])\nresult = pd.DataFrame(g_categories.survived.mean()).rename(columns={'survived': 'survived proportion'})\n\n# reset current index and spesify the unique index\nresult.reset_index(inplace=True)\nunique_index = result.pclass.astype(str) + ': ' + result.age_group.astype(str) + ' ' + result.sex.astype(str)\n\n# Finalize the unique index dataframe\nresult_w_unique = result[['survived proportion']]\nresult_w_unique.set_index(unique_index, inplace=True)\nprint(result_w_unique)\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
lmoresi/UoM-VIEPS-Intro-to-Python | Notebooks/Introduction/2 - Introduction to ipython.ipynb | mit | [
"ipython\nipython is an interactive version of the python interpreter. It provides a number of extras which are helpful when writing code. ipython code is almost always python code, and the differences are generally only important when editing a code in a live (interactive) environment. \nThe jupyter notebook is a fine example of an interactive environment - you are changing the code as it runs and checking answers as you go. Because you may have a lot of half-completed results in an interactive script, you probably want to make as few mistakes as you can. This is the purpose of ipython.\nipython provides access to the help / documentation system, provides tab completion of variable and function names, allows you see see what methods live inside a module ...",
"## Try the autocomplete ... it works on functions that are in scope\n\n# pr\n\n# it also works on variables\n\n# long_but_helpful_variable_name = 1\n\n# long_b",
"It works on modules to list the available methods and variables. Take the math module, for example:",
"import math\n\n# math.is # Try completion on this\n\nhelp(math.isinf)\n\n# try math.isinf() and hit shift-tab while the cursor is between the parentheses \n# you should see the same help pop up.\n\n# math.isinf()",
"It works on functions that take special arguments and tells you what you need to supply.\nTry this and try tabbing in the parenthesis when you use this function yourself:",
"import string\nstring.capwords(\"the quality of mercy is not strained\")\n\n# string.capwords()",
"It also provides special operations that allow you to drill down into the underlying shell / filesystem (but these are not standard python code any more).",
"# execute simple unix shell commands \n\n!ls\n\n!echo \"\"\n\n!pwd",
"Another way to do this is to use the cell magic functionality to direct the notebook to change the cell to something different (here everything in the cell is interpreted as a unix shell )",
"%%sh \n\nls -l\n\necho \"\"\n\npwd",
"I don't advise using this too often as the code becomes more difficult to convert to python. \n\nA % is a one-line magic function that can go anywhere in the cell. \nA %% is a cell-wide function",
"%magic # to see EVERYTHING in the magic system !",
"Useful magic functions:\n\n%matplotlib inline makes plots appear in the notebook and not an external window\n%run FILE runs the contents of the file in place of the given cell \n%%timeit times how long the cell takes to run\n\n\nYou can also run ipython in the terminal / shell on this machine. You will see that some of the interactivity still works in a text environment but not all of the pop up help is as helpful as in the notebooks.\n<a href=\"/terminals/1\"> Terminal </a>"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
joshnsolomon/phys202-2015-work | assignments/assignment04/MatplotlibExercises.ipynb | mit | [
"Visualization 1: Matplotlib Basics Exercises",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np",
"Scatter plots\nLearn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.\n\nGenerate random data using np.random.randn.\nStyle the markers (color, size, shape, alpha) appropriately.\nInclude an x and y label and title.",
"plt.figure(figsize=(10,8))\nplt.scatter(np.random.randn(100),np.random.randn(100),s=50,c='b',marker='d',alpha=.7)\nplt.xlabel('x-coordinate')\nplt.ylabel('y-coordinate')\nplt.title('100 Random Points')",
"Histogram\nLearn how to use Matplotlib's plt.hist function to make a 1d histogram.\n\nGenerate randpom data using np.random.randn.\nFigure out how to set the number of histogram bins and other style options.\nInclude an x and y label and title.",
"plt.figure(figsize=(10,8))\np=plt.hist(np.random.randn(100000),bins=50,color='g')\nplt.xlabel('value')\nplt.ylabel('frequency')\nplt.title('Distrobution 100000 Random Points with mean of 0 and variance of 1')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ghvn7777/ghvn7777.github.io | content/fluent_python/15_with.ipynb | apache-2.0 | [
"本章讨论其他语言不常见的流程控制,用户可能会忽略这些特性:\n\nwith 语句和上下文管理器\nfor while 和 try 语句的 else 子句\n\nwith 语句会设置一个临时的上下文,交给上下文管理器对象控制,并负责清理上下文。这么做能避免错误并减少样板代码,因此 API 更安全,更易于使用。除了自动关闭文件之外,with 块还有很多用途\nelse 子句和 with 没关系,不过这两个都内容比较短,所以放到了一个逻辑\n先做这个,再做那个: if 之外的 else 块\nelse 子句不仅能在 if 语句中使用,还能在 for,while,try 语句中使用\nelse 子句行为如下:\nfor: 仅当 for 循环运行完毕时(即 for 循环没有被 break 语句终止)才运行 else\ntry: 仅当 try 块中没有异常时候才运行 else 块,else 子句抛出的异常不会由前面的 except 子句处理\n在所有情况下,如果异常或者 return, break 或 continue 语句导致控制权跳到了复合语句之外,else 也会被跳过\nfor 循环用 else 如下:",
"# for item in my_list:\n# if item.flavor == 'banana':\n# break\n# else:\n# raise ValueError('No banana flavor found!')",
"一开始你可能觉得没必要在 try/except 中使用 else 子句,毕竟下面代码中只有 dangerous_cal() 不抛出异常 after_call() 才会执行",
"# try:\n# dangerous_call()\n# after_call()\n# except OSError:\n# log('OSError...')",
"然而,after_call() 不应该放在 try 块中。为了清晰准确,try 块应该只抛出预期异常的语句,因此像下面这样写更好:",
"# try:\n# dangerous_call()\n# except OSError:\n# log('OSError...')\n# else:\n# after_call()",
"现在很明确,try 为了捕获的是 dangerous_call() 的异常。\nPython 中,try/except 不仅用于处理错误,还用于控制流程,为此,官方定义了几个缩略词:\nEAFP:\n 取得原谅比获得许可容易(easier to ask for forgiveness than\npermission)。这是一种常见的 Python 编程风格,先假定存在有效\n的键或属性,如果假定不成立,那么捕获异常。这种风格简单明\n快,特点是代码中有很多 try 和 except 语句。与其他很多语言一\n样(如 C 语言),这种风格的对立面是 LBYL 风格。\nLBYL\n 三思而后行(look before you leap)。这种编程风格在调用函数\n或查找属性或键之前显式测试前提条件。与 EAFP 风格相反,这种\n风格的特点是代码中有很多 if 语句。在多线程环境中,LBYL 风\n格可能会在“检查”和“行事”的空当引入条件竞争。例如,对 if\nkey in mapping: return mapping[key] 这段代码来说,如果\n在测试之后,但在查找之前,另一个线程从映射中删除了那个键,\n那么这段代码就会失败。这个问题可以使用锁或者 EAFP 风格解\n决。\n如果选择使用 EAFP 风格,那就要更深入地了解 else 子句,并在 try/except 中合理使用\n上下文管理器和 with 块\n上下文管理器对象存在的目的是管理 with 语句,就像迭代器存在是为了管理 for 语句。\nwith 语句目的是为了简化 try/finally 模式。上下文管理器协议包含 __enter__ 和 __exit__ 方法,with 开始时,会调用 __enter__ 方法,结束时候会调用 __exit__ 方法\n最常见的是打开文件:",
"with open('with.ipynb') as fp:\n src = fp.read(60)\nlen(src)\n\nfp\n\nfp.closed, fp.encoding\n\n# fp 虽然可用,但不能执行 I/O 操作,\n# 因为在 with 末尾,调用 TextIOWrapper.__exit__ 关闭了文件\nfp.read(60) ",
"with 的 as 子句是可选的,对 open 来说,必须加 as 子句,以便获取文件的引用。不过,有些上下文管理器会返回 None,因为没有什么有用的对象能提供给用户\n下面是一个精心制作的上下文管理器执行操作,以此强调上下文管理器与 __enter__ 方法返回的对象之间的区别",
"class LookingGlass:\n def __enter__(self): # enter 只有一个 self 参数\n import sys\n self.original_write = sys.stdout.write # 保存供日后使用\n sys.stdout.write = self.reverse_write # 打猴子补丁,换成自己方法\n return 'JABBERWOCKY' # 返回的字符串讲存入 with 语句的 as 后的变量\n \n def reverse_write(self, text): #取代 sys.stdout.write,反转 text\n self.original_write(text[::-1])\n \n # 正常传的参数是 None, None, None,有异常传如下异常信息\n def __exit__(self, exc_type, exc_value, traceback):\n import sys # 重复导入不会消耗很多资源,Python 会缓存导入模块\n sys.stdout.write = self.original_write # 还原 sys.stdout.write 方法\n if exc_type is ZeroDivisionError: # 如果有除 0 异样,打印消息\n print('Please DO NOT divide by zero')\n return True # 返回 True 告诉解释器已经处理了异常\n# 如果 __exit__ 方法返回 None,或者 True 之外的值,with 块中的任何异常都会向上冒泡\n\nwith LookingGlass() as what:\n print('Alice, Kitty and Snowdrop') #打印出的内容是反向的\n print(what)\n\n# with 执行完毕,可以看出 __enter__ 方法返回的值 -- 即存储在 what 变量中的值是 'JABBERWOCKY' \nwhat \n\nprint('Back to normal') # 输出不再是反向的了",
"在实际应用中,如果程序接管了标准输出,可能会把 sys.stdout 换成类似文件的其他对象,然后再切换成原来的版本。contextlib.redirect_stdout 上下文管理器就是这么做的\n\n解释器调用 enter 方法时,除了隐式的 self 之外,不会传入任何参数,传给 __exit__ 的三个参数如下:\nexc_type: 异常类(例如 ZeroDivisionError)\nexc_value: 异常实例。有时好有参数传给异常构造方法,例如错误消息,参数可以通过 exc_value.args 获取\ntraceback: traceback 对象\n上下文管理器具体工作方式如下:",
"# In [2]: manager = LookingGlass()\n# ...: manager\n# ...: \n# Out[2]: <__main__.LookingGlass at 0x7f586d4aa1d0>\n\n# In [3]: monster = manager.__enter__()\n\n# In [4]: monster == 'JABBERWOCKY'\n# Out[4]: eurT\n\n# In [5]: monster\n# Out[5]: 'YKCOWREBBAJ'\n\n# In [6]: manager.__exit__(None, None, None)\n\n# In [7]: monster\n# Out[7]: 'JABBERWOCKY'",
"上面在命令行执行的,因为在 jupyter notebook 的输出有时候有莫名其妙的 bug\ncontextlib 模块中的实用工具\n自定义上下文管理器类之前,先看一下 Python 标准库文档中的 contextlib。除了前面提到的 redirect_stdout 函数,contextlib 模块中还有一些类和其它函数,实用范围更广\nclosing: 如过对象提供了 close() 方法,但没有实现 __enter__/__exit__ 协议,可以实用这个函数构建上下文管理器\nsuppress: 构建临时忽略指定异常的上下文管理器\n@contextmanager: 这个装饰器把简单的生成器函数变成上下文管理器,这样就不用创建类去实现管理协议了\nContextDecorator: 这是个基类,用于定义基于类的上下文管理器。这种上下文管理器也能用于装饰函数,在受管理的上下文中运行整个函数\nExitStack: 这个上下文管理器能进入多个上下文管理器,with 块结束时,ExitStack 按照后进先出的顺序调用栈中各个上下文管理器的 __exit__ 方法。如果事先不知道 with 块要进入多少个上下文管理器,可以使用这个类。例如同时打开任意一个文件列表中的所有文件\n这些工具中使用最广泛的是 @contextmanager 装饰器,因此要格外小心,这个装饰器也有迷惑人的一面,因为它与迭代无关,却使用 yield 语句,由此可以引出协程\n使用 @contextmanager\n@contextmanager 装饰器能减少创建上下文管理器的样板代码量,因为不用编写一个完整的类,定义 __enter__ 和 __exit__ 方法,而只需实现一个有 yield 语句的生成器,生成想让 __enter__ 方法返回的值\n在使用 @contextmanager 装饰器能减少创建上下文管理器的样板代码量,因为不用编写一个完整的类,定义 __enter__ 和 __exit__ 方法,而只需实现有一个 yield 语句的生成器,生成想让 __enter__ 方法返回的值\n在使用 @contextmanager 装饰器的生成器中,yield 语句的作用是把函数的定义体分成两个部分:yield 语句前面所有代码在 with 块开始时(即解释器调用 __enter__ 方法时)执行,yield 语句后面的代码在 with 块结束时(即调用 __exit__ 方法时)执行",
"import contextlib\n\[email protected]\ndef looking_glass():\n import sys\n original_write = sys.stdout.write\n \n def reverse_write(text):\n original_write(text[::-1])\n \n sys.stdout.write = reverse_write\n # 产生一个值,这个值会绑定到 with 语句的 as 子句后的目标变量上\n # 执行 with 块中的代码时,这个函数会在这一点暂停\n yield 'JABBERWOCKY' \n # 控制权一旦跳出 with 块,继续执行 yield 语句后的代码\n sys.stdout.write = original_write\n\nwith looking_glass() as what:\n print('Alice, Kitty and Snowdrop')\n print(what)",
"其实,contextlib.contextmanager 装饰器会把函数包装成实现 __enter__ 和 __exit__ 方法的类\n这个类的 __enter__ 作用如下:\n\n调用生成器函数,保存生成器对象(这里称为 gen)\n调用 next(gen),执行到 yield 关键字位置\n返回 next(gen) 产生的值,以便把产生的值绑定到 with/as 语句中目标变量上\n\nwith 块终止时,__exit__ 方法会做以下几件事\n\n\n检查有没有把异常传给 exc_type, 如果有,调用 gen.throw(exception), 在生成器函数定义体中包含 yield 关键字的那一行跑出异常\n\n\n否则,调用 next(gen),继续执行生成器函数体中 yield 语句之后的代码\n\n\n上面的例子其实有一个严重的错误,如果在 with 块中抛出了异常,Python 解释器会将其捕获,然后在 looking_glass 函数的 yield 表达式再次跑出,但是,那里没有处理错误的代码,因此 looking_glass 函数会终止,永远无法恢复成原来的 sys.stdout.write 方法,导致系统处于无效状态,下面添加了一些代码,用于处理 ZeroDivisionError 异常,这样就比较健壮了",
"import contextlib\n\[email protected]\ndef looking_glass():\n import sys\n original_write = sys.stdout.write\n \n def reverse_write(text):\n original_write(text[::-1])\n \n sys.stdout.write = reverse_write\n msg = ''\n try:\n yield 'JABBERWOCKY' \n except ZeroDivisionError:\n msg = 'Please DO NOT divide by zero'\n finally:\n sys.stdout.write = original_write\n if msg:\n print(msg)",
"前面说过,为了告诉解释器异常已经处理了,__exit__ 方法返回 True,此时解释器会压制异常。如果 __exit__ 方法没有显式返回一个值,那么解释器得到的是 None,然后向上冒泡异常。使用 @contextmanager 装饰器时,默认行为是相反的,装饰器提供的 __exit__ 方法假定发给生成器的所有异常都得到处理了,因此应该压制异常。如果不想让 @contextmanager 压制异常,必须在装饰器的函数中显式重新跑出异常\n\n把异常发给生成器的方式是使用 throw 方法,下章讲\n这样的约定的原因是,创建上下文时,生成器无法返回值,只能产出值。不过现在可以返回值了,见下章\n使用 @contextmanager 装饰器时,要把 yield 语句放到 try/finally 语句中(或者放在 with 语句中),这是无法避免的,因为我们永远不知道上下文管理器用户会在 with 块中做什么\n\n除了标准库中举得例子外,Martijin Pieters 实现原地文件重写上下文管理器是 @contextmanager 不错的使用实例,如下:",
"# import csv\n# with inplace(csvfilename, 'r', newline='') as (infh, outfh):\n# reader = csv.reader(infh)\n# writer = csv.writer(outfh)\n# for row in reader:\n# row += ['new', 'columns']\n# writer.writerow(row)",
"inplace 函数是个上下文管理器,为同一个文件提供了两个句柄(这个示例中的 infh 和 outfh),以便同时读写同一个文件。这比标准库中的 fileinput.input 函数更易用\n注意,在 @contextmanager 装饰器装饰的生成器中,yield 与迭代没有任何关系。在本节所举的示例中,生成器函数的作用更像是协程:执行到某一点时暂停,让客户代码运行,直到客户让协程继续做事。下章会全面讨论协程。"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pucdata/pythonclub | sessions/09-numba_cython/Faster_computations.ipynb | gpl-3.0 | [
"Prerequisites\nIn order to run these examples, it is recommended to use gcc as the default compiler, with OpenMP installed.\nNumba and Cython can be installed easily with conda:\nconda install numba\nconda install cython\nThe include path of the NumPy C header files might have to be added to the .bashrc (Linux) or .bash_profile (Mac) files to make Numba run:",
"import numpy as np\nprint \"export CFLAGS=\\\"-I\",np.__path__[0]+'/core/include/ $CFLAGS\\\"'",
"Append the output of the print command above\nto the .bashrc (Linux) or .bash_profile (Mac) file in the default user library, if Numba does not work out of the box.\nFaster computations in Python\nPython's dynamically typed nature makes it easy to quickly write code\nthat works, however, this comes at the cost of execution speed, as\neach time an operation is executed on a variable, its type has to be\nchecked by the interpreter, in order to execute the appropriate subroutine\nfor the given combination of variable type and operation.\nThe speed of computations can be greatly increased by utilizing NumPy,\nwhere the data are stored in homogeneous C arrays inside array objects.\nNumPy also provides specialized commands to do calculations quickly on\nthese arrays.\nIn this example, we will compare different implementations of a truncated\nFourier sum, which is calculated for a number of different positions.\nIn astronomical situations, these positions can be, for example, times of\nmeasurement of the magnitude of a star. The sum has the form:\n$m_i (t_i) = \\sum_{j=1}^{n} A_j \\cdot \\sin( 2 \\cdot \\pi \\cdot f_j \\cdot t_i +\\phi_j )$,\nwhere $m_i$ is the $i$th magnitude of the star at time $t_i$, $n$ is the\nnumber of Fourier terms, $A_j$ is the amplitude, $f_j$ is the frequency, and $\\phi_j$ is the phase\nof the $j$th Fourier term.\nPreparation\nFirst, we import the packages that we will be using, and prepare the data.\nWe store the $t_i$, $A_j$, $f_j$ and $\\phi_j$ parameters in NumPy arrays.\nWe also define two functions, one which does the above sum using two for cycles,\nand another one exploiting array operations of NumPy.\nFurthermore, we prepare for the usage of Cython within the Notebook by loading\nthe Cython magic commands into it.",
"import numpy as np\nfrom numba import jit, autojit\n%load_ext Cython\n\n\ntimes=np.arange(0,70,0.01)\nprint \"The size of the time array:\", times.size\n\nfreq = np.arange(0.1,6.0,0.1)*1.7763123\nfreq[-20:] = freq[:20]*0.744\namp = 0.05/(freq**2)+0.01\nphi = freq\n\ndef fourier_sum_naive(times, freq, amp, phi):\n mags = np.zeros_like(times)\n for i in xrange(times.size):\n for j in xrange(freq.size):\n mags[i] += amp[j] * np.sin( 2 * np.pi * freq[j] * times[i] + phi[j] )\n \n return mags\n \ndef fourier_sum_numpy(times, freq, amp, phi):\n return np.sum(amp.T.reshape(-1,1) * np.sin( 2 * np.pi * freq.T.reshape(-1,1) * times.reshape(1,-1) + phi.T.reshape(-1,1)), axis=0)\n",
"Numba\nWe use the autojit function from Numba to prepare the translation of the Python function to machine code.\nBy default usage, the functions get translated during runtime (in a Just-In-Time JIT manner), when the\nfirst call is made to the function. Numba produces optimized machine code, taking into account the type of\ninput the function receives when called for the first time.\nAlternatively, the function can be defined like normal, but preceded by the @jit decorator, in order to\nnotify Numba about the functions to optimize, as show in the commented area below.\nNote that Numba can be called eagerly, telling it the type of the expected variable, as well as the return\ntype of the function. This can be used to fine-tune the behavior of the function. See more in the Numba\ndocumentation:\nhttp://numba.pydata.org/numba-doc/dev/user/jit.html\nNote that functions can also be compiled ahead of time. For more information, see:\nhttp://numba.pydata.org/numba-doc/0.32.0/reference/aot-compilation.html",
"fourier_sum_naive_numba = autojit(fourier_sum_naive)\nfourier_sum_numpy_numba = autojit(fourier_sum_numpy)\n\n#@jit\n#def fourier_sum_naive_numba(times, freq, amp, phi):\n# mags = np.zeros_like(times)\n# for i in xrange(times.size):\n# for j in xrange(freq.size):\n# mags[i] += amp[j] * np.sin( 2 * np.pi * freq[j] * times[i] + phi[j] )\n# \n# return mags\n\n#@jit()\n#def fourier_sum_numpy_numba(times, freq, amp, phi):\n# return np.sum(amp.T.reshape(-1,1) * np.sin( 2 * np.pi * freq.T.reshape(-1,1) * times.reshape(1,-1) + phi.T.reshape(-1,1)), axis=0)\n",
"Cython\nCython works different than Numba: It produces C code that gets compiled before calling the function.\nNumPy arrays store the data internally in simple C arrays, which can be accessed with the\nTyped Memoryview feature of Cython. This allows operations on these arrays that completely bypass Python.\nWe can also import C functions, as we could writing pure C by importing the corresponding header files.\nIn the example implementation of the Fourier sum below, we use define two funtions.\nThe first one handles interactions with Python, while the second one handles the actual calculations.\nNote that we also pass a temp array, in order to provide a reserved space for the function to work\non, eliminating the need to create a NumPy array within the function.\nImportant note\nNormal usage of Cython involves creating a separate .pyx file, with the corresponding Cython code inside,\nwhich then gets translated into a source object (.so on Unix-like systems, .dll on Windows), which can\nbe imported into Python like a normal .py file. See the Cython documentation for more information:\nhttp://docs.cython.org/en/latest/src/quickstart/build.html",
"%%cython -a\n\ncimport cython\nimport numpy as np\nfrom libc.math cimport sin, M_PI\n\ndef fourier_sum_cython(times, freq, amp, phi, temp):\n return np.asarray(fourier_sum_purec(times, freq, amp, phi, temp))\n\[email protected](False)\[email protected](False)\ncdef fourier_sum_purec(double[:] times, double[:] freq, double[:] amp, double[:] phi, double[:] temp):\n cdef int i, j, irange, jrange\n irange=len(times)\n jrange=len(freq)\n for i in xrange(irange):\n temp[i]=0\n for j in xrange(jrange):\n temp[i] += amp[j] * sin( 2 * M_PI * freq[j] * times[i] + phi[j] )\n return temp",
"We called the cython command with the -a argument, that makes it produce an html summary of the translated code.\nWhite parts show code that don't interact with Python at all. Optimizing Cython involves minimizing the code\nthat is \"yellow\", making the code \"whiter\", that is, executing more code in C.\nCython + OpenMP\nWe can parallelize the execution of the code using OpenMP in the parts where the code is executed\ncompletely outside of Python, but we need to release the Global Interpreter Lock (GIL) first.\nThe prange command replaces the range or xrange command in the for cycle we would like to execute in\nparallel. We can also call OpenMP functions, for example, to get the number of processor cores of the\nsystem.\nNote that the number of threads, the scheduler and the chunksize can have a large effect on the\nperformance of the code. While optimizing, you should try different chunksizes (the default is 1).",
"%%cython --compile-args=-fopenmp --link-args=-fopenmp --force -a\n\ncimport cython\ncimport openmp\nimport numpy as np\nfrom libc.math cimport sin, M_PI\nfrom cython.parallel import parallel, prange\n\ndef fourier_sum_cython_omp(times, freq, amp, phi, temp):\n return np.asarray(fourier_sum_purec_omp(times, freq, amp, phi, temp))\n\[email protected](False)\[email protected](False)\ncdef fourier_sum_purec_omp(double[:] times, double[:] freq, double[:] amp, double[:] phi, double[:] temp):\n cdef int i, j, irange, jrange\n irange=len(times)\n jrange=len(freq)\n #print openmp.omp_get_num_procs()\n with nogil, parallel(num_threads=4):\n for i in prange(irange, schedule='dynamic', chunksize=10):\n temp[i]=0\n for j in xrange(jrange):\n temp[i] += amp[j] * sin( 2 * M_PI * freq[j] * times[i] + phi[j] )\n return temp ",
"Comparison\nFinally, we compare the execution times of the implementations of the funtions.",
"temp=np.zeros_like(times)\n\namps_naive = fourier_sum_naive(times, freq, amp, phi)\namps_numpy = fourier_sum_numpy(times, freq, amp, phi)\namps_numba1 = fourier_sum_naive_numba(times, freq, amp, phi)\namps_numba2 = fourier_sum_numpy_numba(times, freq, amp, phi)\namps_cython = fourier_sum_cython(times, freq, amp, phi, temp)\namps_cython_omp = fourier_sum_cython_omp(times, freq, amp, phi, temp)\n\n%timeit -n 5 -r 5 fourier_sum_naive(times, freq, amp, phi)\n%timeit -n 10 -r 10 fourier_sum_numpy(times, freq, amp, phi)\n%timeit -n 10 -r 10 fourier_sum_naive_numba(times, freq, amp, phi)\n%timeit -n 10 -r 10 fourier_sum_numpy_numba(times, freq, amp, phi)\n%timeit -n 10 -r 10 fourier_sum_cython(times, freq, amp, phi, temp)\n%timeit -n 10 -r 10 fourier_sum_cython_omp(times, freq, amp, phi, temp)\n\n\n\nimport matplotlib.pylab as plt\n\nprint amps_numpy-amps_cython\n\nfig=plt.figure()\nfig.set_size_inches(16,12)\n\nplt.plot(times,amps_naive ,'-', lw=2.0)\n#plt.plot(times,amps_numpy - 2,'-', lw=2.0)\nplt.plot(times,amps_numba1 - 4,'-', lw=2.0)\n#plt.plot(times,amps_numba2 - 6,'-', lw=2.0)\nplt.plot(times,amps_cython - 8,'-', lw=2.0)\n#plt.plot(times,amps_cython_omp -10,'-', lw=2.0)\n\nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
calroc/joypy | docs/4. Replacing Functions in the Dictionary.ipynb | gpl-3.0 | [
"Preamble",
"from notebook_preamble import D, J, V",
"A long trace",
"V('[23 18] average')",
"Replacing sum and size with \"compiled\" versions.\nBoth sum and size are catamorphisms, they each convert a sequence to a single value.",
"J('[sum] help')\n\nJ('[size] help')",
"We can use \"compiled\" versions (they're not really compiled in this case, they're hand-written in Python) to speed up evaluation and make the trace more readable. The sum function is already in the library. It gets shadowed by the definition version above during initialize().",
"from joy.library import SimpleFunctionWrapper, primitives\nfrom joy.utils.stack import iter_stack\n\n\n@SimpleFunctionWrapper\ndef size(stack):\n '''Return the size of the sequence on the stack.'''\n sequence, stack = stack\n n = 0\n for _ in iter_stack(sequence):\n n += 1\n return n, stack\n\n\nsum_ = next(p for p in primitives if p.name == 'sum')",
"Now we replace them old versions in the dictionary with the new versions and re-evaluate the expression.",
"old_sum, D['sum'] = D['sum'], sum_\nold_size, D['size'] = D['size'], size",
"You can see that size and sum now execute in a single step.",
"V('[23 18] average')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io | 0.13/_downloads/plot_linear_model_patterns.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Linear classifier on sensor data with plot patterns and filters\nDecoding, a.k.a MVPA or supervised machine learning applied to MEG and EEG\ndata in sensor space. Fit a linear classifier with the LinearModel object\nproviding topographical patterns which are more neurophysiologically\ninterpretable [1] than the classifier filters (weight vectors).\nThe patterns explain how the MEG and EEG data were generated from the\ndiscriminant neural sources which are extracted by the filters.\nNote patterns/filters in MEG data are more similar than EEG data\nbecause the noise is less spatially correlated in MEG than EEG.\n[1] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D.,\nBlankertz, B., & Bießmann, F. (2014). On the interpretation of\nweight vectors of linear models in multivariate neuroimaging.\nNeuroImage, 87, 96–110. doi:10.1016/j.neuroimage.2013.10.067",
"# Authors: Alexandre Gramfort <[email protected]>\n# Romain Trachel <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import LogisticRegression\n\n# import a linear classifier from mne.decoding\nfrom mne.decoding import LinearModel\n\nprint(__doc__)\n\ndata_path = sample.data_path()",
"Set parameters",
"raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\ntmin, tmax = -0.2, 0.5\nevent_id = dict(aud_l=1, vis_l=3)\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname, preload=True)\nraw.filter(2, None, method='iir') # replace baselining with high-pass\nevents = mne.read_events(event_fname)\n\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n decim=4, baseline=None, preload=True)\n\nlabels = epochs.events[:, -1]\n\n# get MEG and EEG data\nmeg_epochs = epochs.copy().pick_types(meg=True, eeg=False)\nmeg_data = meg_epochs.get_data().reshape(len(labels), -1)\neeg_epochs = epochs.copy().pick_types(meg=False, eeg=True)\neeg_data = eeg_epochs.get_data().reshape(len(labels), -1)",
"Decoding in sensor space using a LogisticRegression classifier",
"clf = LogisticRegression()\nsc = StandardScaler()\n\n# create a linear model with LogisticRegression\nmodel = LinearModel(clf)\n\n# fit the classifier on MEG data\nX = sc.fit_transform(meg_data)\nmodel.fit(X, labels)\n# plot patterns and filters\nmodel.plot_patterns(meg_epochs.info, title='MEG Patterns')\nmodel.plot_filters(meg_epochs.info, title='MEG Filters')\n\n# fit the classifier on EEG data\nX = sc.fit_transform(eeg_data)\nmodel.fit(X, labels)\n# plot patterns and filters\nmodel.plot_patterns(eeg_epochs.info, title='EEG Patterns')\nmodel.plot_filters(eeg_epochs.info, title='EEG Filters')"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
picklecai/OMOOC2py | _src/exercise/day1.ipynb | mit | [
"1. _ builtin _ 模块\n1.1 apply:使用元组或字典中的参数调用函数\n\nPython 允许你实时地创建函数参数列表. 只要把所有的参数放入一个元组中,然后通过内建的 apply 函数调用函数.",
"def function(a,b):\n print a, b\n\napply(function, (\"wheather\", \"Canada?\"))\n\napply(function, (1, 3+5))",
"效果等同于:",
"function(\"wheather\", \"Canada?\")\n\nfunction(1, 3+5)",
"那为什么要使用apply呢?",
"apply(function, (), {\"a\":\"35cm\", \"b\":\"12cm\"})\n\napply(function, (\"v\",), {\"b\":\"love\"})\n\napply(function, ( ,\"v\"), {\"a\":\"hello\"})",
"何谓“关键字参数”? \napply使用字典传递关键字参数,实际上就是字典的键是函数的参数名,字典的值是函数的实际参数值。(相对于形参和实参)\n根据上面的例子看,如果部分传递,只能传递后面的关键字参数,不能传递前面的。????\n\napply 函数的一个常见用法是把构造函数参数从子类传递到基类, 尤其是构造函数需要接受很多参数的时候.\n\n子类和基类是什么概念?",
"class Rectangle:\n def __init__(self, color=\"white\", width=10, height=10):\n print \"Create a \", color, self, \"sized\", width, \"X\", height\n\nclass RoundRectangle:\n def __init__(self, **kw):\n apply(Rectangle.__init__, (self,), kw)\n\nrect = Rectangle(color=\"green\", width=200, height=156)\n\nrect = RoundRectangle(color=\"brown\", width=20, height=15)",
"第二个函数不知道如何使用 ????\n修改,子类要以父类为参数!!!",
"class RoundRectangle(Rectangle):\n def __init__(self, **kw):\n apply(Rectangle.__init__, (self,), kw)\n\nrect2 = RoundRectangle(color= \"blue\", width=23, height=10)",
"使用 * 来标记元组, ** 来标记字典.\n\napply的第一个参数是函数名,第二个参数是元组,第三个参数是字典。所以用上面这个表达最好不过。",
"args = (\"er\",)\nkwargs = {\"b\":\"haha\"}\nfunction(*args, **kwargs)\n\napply(function, args, kwargs)",
"以上等价。\n用这个意思引申:",
"kw = {\"color\":\"brown\", \"width\":123, \"height\": 34}\nrect3 = RoundRectangle(**kw)\n\nrect4 = Rectangle(**kw)\n\narg=(\"yellow\", 45, 23)\nrect5 = Rectangle(*arg)",
"1.2 import",
"import glob, os\n\nmodules =[]\n\nfor module_file in glob.glob(\"*-plugin.py\"):\n try:\n module_name, ext = os.path.splitext(os.path.basename(module_file))\n module = __import__(module_name)\n modules.append(module)\n except ImportError:\n pass #ignore broken modules\n\nfor module in modules:\n module.hello()\nexample-plugin says hello\n\ndef hello():\n print \"example-plugin says hello\"\n\ndef getfunctionname(module_name, function_name):\n module = __import__(module_name)\n return getattr(module, function_name)\n\nprint repr(getfunctionname(\"dumbdbm\",\"open\"))",
"3. os模块",
"import os\nimport string\n\ndef replace(file, search_for, replace_with):\n back = os.path.splitext(file)[0] + \".bak\"\n temp = os.path.splitext(file)[0] + \".tmp\"\n \n try:\n os.remove(temp)\n except os.error:\n pass\n \n fi = open(file)\n fo = open(temp, \"w\")\n \n for s in fi.readlines():\n fo.write(string.replace(s, search_for, replace_with))\n \n fi.close()\n fo.close()\n \n try:\n os.remove(back)\n except os.error:\n pass\n \n os.rename(file, back)\n os.rename(temp, file)\n\nfile = \"samples/sample.txt\"\n\nreplace(file, \"hello\", \"tjena\")\n\nreplace(file, \"tjena\", \"hello\")",
"os.path.splitext:切扩展名\nos.remove:remove a file。上面的程序里为什么要remove呢?",
"def replace1(file, search_for, replace_with):\n back = os.path.splitext(file)[0] + \".bak\"\n temp = os.path.splitext(file)[0] + \".tmp\"\n \n try:\n os.remove(temp)\n except os.error:\n pass\n \n fi = open(file)\n fo = open(temp, \"w\")\n \n for s in fi.readlines():\n fo.write(string.replace(s, search_for, replace_with))\n \n fi.close()\n fo.close()\n \n try:\n os.remove(back)\n except os.error:\n pass\n\n \n os.rename(file, back)\n os.rename(temp, file)\n\nreplace1(file, \"hello\", \"tjena\")\n\nreplace1(file, \"tjena\", \"hello\")\n\ndoc = os.path.splitext(file)[0] + \".doc\"\n\nfor file in os.listdir(\"samples\"):\n print file\n\ncwd = os.getcwd()\n\nprint 1, cwd\n\nos.chdir(\"samples\")\n\nprint 2, os.getcwd()\n\nos.chdir(os.pardir)\nprint 3, os.getcwd()",
"5. stat模块",
"import stat\n\nimport os, time\n\nst = os.stat(\"samples/sample.txt\")",
"os.stat是将文件的相关属性读出来,然后用stat模块来处理,处理方式有多重,就要看看stat提供了什么了。\n6. string模块",
"import string\n\ntext = \"Monty Python's Flying Circus\"\n\nprint \"upper\", \"=>\", string.upper(text)\n\nprint \"lower\", \"=>\", string.lower(text)\n\nprint \"split\", \"=>\", string.split(text)",
"分列变成了list",
"print \"join\", \"=>\", string.join(string.split(text))",
"join和split相反。",
"print \"replace\", \"=>\", string.replace(text, \"Python\", \"Cplus\")",
"replace的参数结构是:\n1. 整字符串 2. 将被替换的字符串 3. 替换后的字符串",
"print \"find\", \"=>\", string.find(text, \"Python\")\n\nprint \"find\", \"=>\", string.find(text, \"Python\"), string.find(text, \"Cplus\")\n\nprint text",
"上面replace的结果,没有影响原始的text。\nfind时,能找到就显示位置,不能找到就显示-1",
"print \"count\", \"=>\", string.count(text,\"n\")",
"和数学运算一样,这些方法也分一元的和多元的: \nupper, lower, split, join 都是一元的。其中join的对象是一个list。\nreplace, find, count则需要除了text之外的参数。replace需要额外两个,用以指明替换关系。find只需要一个被查找对象。count则需要一个需要计数的字符。\n特别注意: replace不影响原始字符串对象。(好奇怪)",
"print string.atoi(\"23\")\n\ntype(string.atoi(\"23\"))\n\nint(\"234\")\n\ntype(int(\"234\"))\n\ntype(float(\"334\"))\n\nfloat(\"334\")\n\nstring.atof(\"456\")",
"7. re模块",
"import re\n\ntext = \"The Attila the Hun Show\"\n\nm = re.match(\".\", text)\nif m:\n print repr(\".\"), \"=>\", repr(m.group(0))",
"8. math模块和cmath模块",
"import math\n\nmath.pi\n\nmath.e\n\nprint math.hypot(3,4)\n\nmath.sqrt(25)\n\nimport cmath\n\nprint cmath.sqrt(-1)",
"10. operator模块",
"import operator\n\noperator.add(3,5)\n\nseq = 1,5,7,9\n\nreduce(operator.add,seq)\n\nreduce(operator.sub, seq)\n\nreduce(operator.mul, seq)\n\nfloat(reduce(operator.div, seq))",
"operator下的这四则运算,都是针对两个数进行的,即参数只能是两个。为了能对多个数进行连续运算,就需要用reduce,它的意思是两个运算后,作为一个数,与下一个继续进行两个数运算,直到数列终。感觉和apply作用有点差不多,第一个参数是函数,第二个参数是数列(具体参数)。",
"operator.concat(\"ert\", \"erui\")\n\noperator.getitem(seq,1)\n\noperator.indexOf(seq, 5)",
"getitem和indexOf为一对逆运算,前者求指定位置上的值,后者求给定值的位置。注意,后者是大写的字母o。",
"operator.sequenceIncludes(seq, 5)",
"判断序列中是否包含某个值。结果是布尔值。",
"import UserList\n\ndef dump(data):\n print data,\":\"\n print type(data),\"=>\",\n if operator.isCallable(data):\n print \"is a CALLABLE data.\"\n if operator.isMappingType(data):\n print \"is a MAP data.\"\n if operator.isNumberType(data):\n print \"is a NUMBER data.\"\n if operator.isSequenceType(data):\n print \"is a SEQUENCE data.\"\n\ndump(0)\n\ndump([3,4,5,6])\n\ndump(\"weioiuernj\")\n\ndump({\"a\":\"155cm\", \"b\":\"187cm\"})\n\ndump(len)\n\ndump(UserList)\n\ndump(UserList.UserList)\n\ndump(UserList.UserList())",
"15. types模块",
"import types\n\ndef check(object):\n if type(object) is types.IntType:\n print \"INTEGER\",\n if type(object) is types.FloatType:\n print \"FLOAT\",\n if type(object) is types.StringType:\n print \"STRING\",\n if type(object) is types.ClassType:\n print \"CLASS\",\n if type(object) is types.InstanceType:\n print \"INSTANCE\",\n print\n\ncheck(0)\n\ncheck(0.0)\n\ncheck(\"picklecai\")\n\nclass A:\n pass\n\ncheck(A)\n\na = A()\n\ncheck(a)",
"types 模块在第一次引入的时候会破坏当前的异常状态. 也就是说, 不要在异常处理语句块中导入该模块 ( 或其他会导入它的模块 ) .\n16. gc模块\ngc 模块提供了到内建循环垃圾收集器的接口。",
"import gc\n\nclass Node:\n def __init__(self, name):\n self.name = name\n self.patrent = None\n self.children = []\n \n def addchild(self, node):\n node.parent = self\n self.children.append(node)\n \n def __repr__(self):\n return \"<Node %s at %x\" % (repr(self.name), id(self))\n \nroot = Node(\"monty\")\n\nroot.addchild(Node(\"eric\"))\nroot.addchild(Node(\"john\"))\nroot.addchild(Node(\"michael\"))\n\nroot.__init__(\"eric\")\n\nroot.__repr__()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cathalmccabe/PYNQ | boards/Pynq-Z2/logictools/notebooks/boolean_generator.ipynb | bsd-3-clause | [
"Boolean Generator\nThis notebook will show how to use the boolean generator to generate a boolean combinational function. The function that is implemented is a 2-input XOR.\nStep 1: Download the logictools overlay",
"from pynq.overlays.logictools import LogicToolsOverlay\n\nlogictools_olay = LogicToolsOverlay('logictools.bit')",
"Step 2: Specify the boolean function of a 2-input XOR\nThe logic is applied to the on-board pushbuttons and LED, pushbuttons PB0 and PB3 are set as inputs and LED LD2 is set as an output",
"function = ['LD2 = PB3 ^ PB0']",
"Step 3: Instantiate and setup of the boolean generator object.\nThe logic function defined in the previous step is setup using the setup() method",
"boolean_generator = logictools_olay.boolean_generator\nboolean_generator.setup(function)",
"Find the On-board pushbuttons and LEDs\n\nStep 4: Run the boolean generator verify operation",
"boolean_generator.run()",
"Verify the operation of the XOR function\n| PB0 | PB3 | LD2 |\n|:---:|:---:|:---:|\n| 0 | 0 | 0 |\n| 0 | 1 | 1 |\n| 1 | 0 | 1 |\n| 1 | 1 | 0 |\nStep 5: Stop the boolean generator",
"boolean_generator.stop()",
"Step 6: Re-run the entire boolean function generation in a single cell\nNote: The boolean expression format can be list or dict. We had used a list in the example above. We will now use a dict.\n<font color=\"DodgerBlue\">Alternative format:</font> \npython\nfunction = {'XOR_gate': 'LD2 = PB3 ^ PB0'}",
"from pynq.overlays.logictools import LogicToolsOverlay\n\nlogictools_olay = LogicToolsOverlay('logictools.bit')\nboolean_generator = logictools_olay.boolean_generator\n\nfunction = {'XOR_gate': 'LD2 = PB3 ^ PB0'}\n\nboolean_generator.setup(function)\nboolean_generator.run()",
"Stop the boolean generator",
"boolean_generator.stop()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
amccaugh/phidl | docs/tutorials/quickstart.ipynb | mit | [
"Quick start\nPHIDL allows you to create complex designs from simple shapes, and can output the result as GDSII files. The basic element of PHIDL is the Device, which is just a GDS cell with some additional functionality (for those unfamiliar with GDS designs, it can be thought of as a blank area to which you can add polygon shapes). The polygon shapes can also have Ports on them--these allow you to snap shapes together like Lego blocks. You can either hand-design your own polygon shapes, or there is a large library of pre-existing shapes you can use as well.\nBrief introduction\nThis first section is an extremely short tutorial meant to give you an idea of what PHIDL can do. For a more detailed tutorial, please read the following \"The basics of PHIDL\" section and the other tutorials.\nWe'll start with some boilerplate imports:",
"from phidl import Device\nfrom phidl import quickplot as qp # Rename \"quickplot()\" to the easier \"qp()\"\nimport phidl.geometry as pg",
"Then let's create a blank Device (essentially an empty GDS cell with some special features)",
"D = Device('mydevice')",
"Next let's add a custom polygon using lists of x points and y points. You can also add polygons pair-wise like [(x1,y1), (x2,y2), (x3,y3), ... ]. We'll also image the shape using the handy quickplot() function (imported here as qp())",
"xpts = (0,10,10, 0)\nypts = (0, 0, 5, 3)\npoly1 = D.add_polygon( [xpts, ypts], layer = 0)\n\nqp(D) # quickplot it!",
"You can also create new geometry using the built-in geometry library:",
"T = pg.text('Hello!', layer = 1)\nA = pg.arc(radius = 25, width = 5, theta = 90, layer = 3)\n\nqp(T) # quickplot it!\nqp(A) # quickplot it!",
"We can easily add these new geometries to D, which currently contains our custom polygon. (For more details about references see below, or the tutorial called \"Understanding References\".)",
"text1 = D.add_ref(T) # Add the text we created as a reference\narc1 = D.add_ref(A) # Add the arc we created\n\nqp(D) # quickplot it!",
"Now that the geometry has been added to D, we can move and rotate everything however we want:",
"text1.movey(5)\ntext1.movex(-20)\narc1.rotate(-90)\narc1.move([10,22.5])\npoly1.ymax = 0\n\nqp(D) # quickplot it!",
"We can also connect shapes together using their Ports, allowing us to snap shapes together like Legos. Let's add another arc and snap it to the end of the first arc:",
"arc2 = D.add_ref(A) # Add a second reference the arc we created earlier\narc2.connect(port = 1, destination = arc1.ports[2])\n\nqp(D) # quickplot it!",
"That's it for the very basics! Keep reading for a more detailed explanation of each of these, or see the other tutorials for topics such as using Groups, creating smooth Paths, and more.\nThe basics of PHIDL\nThis is a longer tutorial meant to explain the basics of PHIDL in a little more depth. Further explanation can be found in the other tutorials as well.\nPHIDL allows you to create complex designs from simple shapes, and can output the result as GDSII files. The basic element of PHIDL is the Device, which can be thought of as a blank area to which you can add polygon shapes. The polygon shapes can also have Ports on them--these allow you to snap shapes together like Lego blocks. You can either hand-design your own polygon shapes, or there is a large library of pre-existing shapes you can use as well.\nCreating a custom shape\nLet's start by trying to make a rectangle shape with ports on either end.",
"import numpy as np\nfrom phidl import quickplot as qp\nfrom phidl import Device\nimport phidl.geometry as pg\n\n\n# First we create a blank device `R` (R can be thought of as a blank \n# GDS cell with some special features). Note that when we\n# make a Device, we usually assign it a variable name with a capital letter\nR = Device('rect')\n\n# Next, let's make a list of points representing the points of the rectangle\n# for a given width and height\nwidth = 10\nheight = 3\npoints = [(0, 0), (width, 0), (width, height), (0, height)]\n\n# Now we turn these points into a polygon shape using add_polygon()\nR.add_polygon(points)\n\n# Let's use the built-in \"quickplot\" function to display the polygon we put in D\nqp(R)",
"Next, let's add Ports to the rectangle which will allow us to connect it to other shapes easily",
"# Ports are defined by their width, midpoint, and the direction (orientation) they're facing\n# They also must have a name -- this is usually a string or an integer\nR.add_port(name = 'myport1', midpoint = [0,height/2], width = height, orientation = 180)\nR.add_port(name = 'myport2', midpoint = [width,height/2], width = height, orientation = 0)\n\n# The ports will show up when we quickplot() our shape\nqp(R) # quickplot it!",
"We can check to see that our Device has ports in it using the print command:",
"print(R)",
"Looks good!\nLibrary & combining shapes\nSince this Device is finished, let's create a new (blank) Device and add several shapes to it. Specifically, we will add an arc from the built-in geometry library and two copies of our rectangle Device. We'll then then connect the rectangles to both ends of the arc. The arc() function is contained in the phidl.geometry library which as you can see at the top of this example is imported with the name pg.\nThis process involves adding \"references\". These references allow you to create a Device shape once, then reuse it many times in other Devices.",
"# Create a new blank Device\nE = Device('arc_with_rectangles')\n\n# Also create an arc from the built-in \"pg\" library\nA = pg.arc(width = 3)\n\n# Add a \"reference\" of the arc to our blank Device\narc_ref = E.add_ref(A)\n\n# Also add two references to our rectangle Device\nrect_ref1 = E.add_ref(R)\nrect_ref2 = E.add_ref(R)\n\n# Move the shapes around a little\nrect_ref1.move([-10,0])\nrect_ref2.move([-5,10])\n\nqp(E) # quickplot it!",
"Now we can see we have added 3 shapes to our Device \"E\": two references to our rectangle Device, and one reference to the arc Device. We can also see that all the references have Ports on them, shown as the labels \"myport1\", \"myport2\", \"1\" and \"2\".\nNext, let's snap everything together like Lego blocks using the connect() command.",
"# First, we recall that when we created the references above we saved\n# each one its own variable: arc_ref, rect_ref1, and rect_ref2\n# We'll use these variables to control/move the reference shapes.\n\n# First, let's move the arc so that it connects to our first rectangle.\n# In this command, we tell the arc reference 2 things: (1) what port\n# on the arc we want to connect, and (2) where it should go\narc_ref.connect(port = 1, destination = rect_ref1.ports['myport2'])\n\nqp(E) # quickplot it!\n\n# Then we want to move the second rectangle reference so that\n# it connects to port 2 of the arc\nrect_ref2.connect('myport1', arc_ref.ports[2])\n\nqp(E) # quickplot it!",
"Looks great!\nGoing a level higher\nNow we've made a (somewhat) complicated bend-shape from a few simple shapes. But say we're not done yet -- we actually want to combine together 3 of these bend-shapes to make an even-more complicated shape. We could recreate the geometry 3 times and manually connect all the pieces, but since we already put it together once it will be smarter to just reuse it multiple times.\nWe will start by abstracting this bend-shape. As shown in the quickplot, there are ports associated with each reference in our bend-shape Device E: \"myport1\", \"myport2\", \"1\", and \"2\". But when working with this bend-shape, all we really care about is the 2 ports at either end -- \"myport1\" from rect_ref1 and \"myport2\" from rect_ref2. It would be simpler if we didn't have to keep track of all of the other ports.\nFirst, let's look at something: let's see if our bend-shape Device E has any ports in it:",
"print(E)",
"It has no ports apparently! Why is that, when we clearly see ports in the quickplots above?\nThe answer is that Device E itself doesn't have ports -- the references inside E do have ports, but we never actually added ports to E. Let's fix that now, adding a port at either end, setting the names to the integers 1 and 2.",
"# Rather than specifying the midpoint/width/orientation, we can instead\n# copy ports directly from the references since they're already in the right place\nE.add_port(name = 1, port = rect_ref1.ports['myport1'])\nE.add_port(name = 2, port = rect_ref2.ports['myport2'])\n\nqp(E) # quickplot it!",
"If we look at the quickplot above, we can see that there are now red-colored ports on both ends. Ports that are colored red are owned by the Device, ports that are colored blue-green are owned by objects inside the Device. This is good! Now if we want to use this bend-shape, we can interact with its ports named 1 and 2. \nLet's go ahead and try to string 3 of these bend-shapes together:",
"# Create a blank Device\nD = Device('triple-bend')\n\n# Add 3 references to our bend-shape Device `E`:\nbend_ref1 = D.add_ref(E) # Using the function add_ref()\nbend_ref2 = D << E # Using the << operator which is identical to add_ref()\nbend_ref3 = D << E\n\n# Let's mirror one of them so it turns right instead of left\nbend_ref2.mirror()\n\n# Connect each one in a series\nbend_ref2.connect(1, bend_ref1.ports[2])\nbend_ref3.connect(1, bend_ref2.ports[2])\n\n# Add ports so we can use this shape at an even higher-level\nD.add_port(name = 1, port = bend_ref1.ports[1])\nD.add_port(name = 2, port = bend_ref3.ports[2])\n\nqp(D) # quickplot it!",
"Saving as a GDSII file\nSaving the design as a GDS file is simple -- just specify the Device you'd like to save and run the write_gds() function:",
"D.write_gds('triple-bend.gds')",
"Some useful notes about writing GDS files:\n\nThe default unit is 1e-6 (micrometers aka microns), with a precision of 1e-9 (nanometer resolution)\nPHIDL will automatically handle naming of all the GDS cells to avoid name-collisions.\nUnless otherwise specified, the top-level GDS cell will be named \"toplevel\"\n\nAll of these parameters can be modified using the appropriate arguments of write_gds():",
"D.write_gds(filename = 'triple-bend.gds', # Output GDS file name\n unit = 1e-6, # Base unit (1e-6 = microns)\n precision = 1e-9, # Precision / resolution (1e-9 = nanometers)\n auto_rename = True, # Automatically rename cells to avoid collisions\n max_cellname_length = 28, # Max length of cell names\n cellname = 'toplevel' # Name of output top-level cell\n )"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
catalystcomputing/DSIoT-Python-sessions | Session2/code/01 Working with text.ipynb | apache-2.0 | [
"Working with text\nSample from http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html\nThe data set is '20 newsgroups dataset' a dataset used for testing machine learning accuracy described at:\n20 newsgroups dataset website.\nWe will be using this data to show scikit learn.\nTo make the samples run more quickly we will be limiting the example data set to just 4 categories.",
"categories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']",
"Load in the training set of data",
"from sklearn.datasets import fetch_20newsgroups\ntwenty_train = fetch_20newsgroups(subset='train',categories=categories, shuffle=True, random_state=42)\n\ntwenty_train.target_names",
"Note target names not in same order as in the categories array\nCount of documents",
"len(twenty_train.data)",
"Show the first 8 lines of text from one of the documents formated with line breaks",
"print(\"\\n\".join(twenty_train.data[0].split(\"\\n\")[:8]))",
"Path to file on your machine",
"twenty_train.filenames[0]",
"Show the the targets categories of first 10 documents. As a list and show there names.",
"print(twenty_train.target[:10])\nfor t in twenty_train.target[:10]:\n print(twenty_train.target_names[t])",
"Lets look at a document in the training data.",
"print(\"\\n\".join(twenty_train.data[0].split(\"\\n\")))",
"Extracting features from text files\nSo for machine learning to be used text must be turned into numerical feature vectors.\nWhat is a feature vector?\n\nEach word is assigned an integer identifier\nEach document is assigned an integer identifier\n\nThe results are stored in scipy.sparse matrices.\nTokenizing text with scikit-learn\nUsing CountVectorizer we load the training data into a spare matrix.\nWhat is a spare matrix?",
"from sklearn.feature_extraction.text import CountVectorizer\ncount_vect = CountVectorizer()\nX_train_counts = count_vect.fit_transform(twenty_train.data)\nX_train_counts.shape\n\nX_train_counts.__class__",
"Using a CountVectorizer method we can get the integer identifier of a word.",
"count_vect.vocabulary_.get(u'application')",
"With this identifier we can get the count of the word in a given document.",
"print(\"Word count for application in first document: {0} and last document: {1} \").format(\n X_train_counts[0, 5285], X_train_counts[2256, 5285])\n\ncount_vect.vocabulary_.get(u'subject')\n\nprint(\"Word count for email in first document: {0} and last document: {1} \").format(\n X_train_counts[0, 31077], X_train_counts[2256, 31077])\n\ncount_vect.vocabulary_.get(u'to')\n\nprint(\"Word count for email in first document: {0} and last document: {1} \").format(\n X_train_counts[0, 32493], X_train_counts[2256, 32493])",
"What are two problems with using a word count in a document?\nFrom occurrences to frequencies\n$\\text{Term Frequencies tf} = \\text{occurrences of each word} / \\text{total number of words}$\ntf-idf is \"Term Frequencies times Inverse Document Frequency\"\nCalculating tfidf",
"from sklearn.feature_extraction.text import TfidfTransformer\ntf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)\nX_train_tfidf_2stage = tf_transformer.transform(X_train_counts)\nX_train_tfidf_2stage.shape",
".fit(..) to fit estimator to the data\n.transform(..) to transform the count matrix to tf-idf\nIt is possible to merge the fit and transform stages using .fit_transform(..)\nCalculate tfidf",
"tfidf_transformer = TfidfTransformer()\nX_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)\nX_train_tfidf.shape\n\nprint(\"In first document tf-idf for application: {0} subject: {1} to: {2}\").format(\n X_train_tfidf[0, 5285], X_train_tfidf[0, 31077], X_train_tfidf[0, 32493])",
"Training a classifier\nSo we now have features. We can train a classifier to try to predict the category of a post. First we will try the\nnaïve Bayes classifier.",
"from sklearn.naive_bayes import MultinomialNB\nclf = MultinomialNB().fit(X_train_tfidf, twenty_train.target)",
"Here tfidf_transformer is used to classify",
"docs_new = ['God is love', 'Heart attacks are common', 'Disbelief in a proposition', 'Disbelief in a proposition means that one does not believe it to be true', 'OpenGL on the GPU is fast']\nX_new_counts = count_vect.transform(docs_new)\nX_new_tfidf = tfidf_transformer.transform(X_new_counts)\n\npredicted = clf.predict(X_new_tfidf)\n\nfor doc, category in zip(docs_new, predicted):\n print('%r => %s' % (doc, twenty_train.target_names[category]))",
"We can see it get some right but not all.\nBuilding a pipeline\nHere we can put all the stages together in a pipeline. The names 'vect', 'tfidf' and 'clf' are arbitrary.",
"from sklearn.pipeline import Pipeline\ntext_clf_bayes = Pipeline([('vect', CountVectorizer()),\n ('tfidf', TfidfTransformer()),\n ('clf', MultinomialNB()),\n])\n\ntext_clf_bayes_fit = text_clf_bayes.fit(twenty_train.data, twenty_train.target)",
"Evaluation",
"import numpy as np\ntwenty_test = fetch_20newsgroups(subset='test',\n categories=categories, shuffle=True, random_state=42)\ndocs_test = twenty_test.data\npredicted_bayes = text_clf_bayes_fit.predict(docs_test)\nnp.mean(predicted_bayes == twenty_test.target) ",
"Try a support vector machine instead",
"from sklearn.linear_model import SGDClassifier\ntext_clf_svm = Pipeline([('vect', CountVectorizer()),\n ('tfidf', TfidfTransformer()),\n ('clf', SGDClassifier(loss='hinge', penalty='l2',\n alpha=1e-3, n_iter=5, random_state=42)),])\ntext_clf_svm_fit = text_clf_svm.fit(twenty_train.data, twenty_train.target)\npredicted_svm = text_clf_svm_fit.predict(docs_test)\nnp.mean(predicted_svm == twenty_test.target) ",
"We can see the support vector machine got a higher number than naïve Bayes. What does it mean? We move on to metrics.\nUsing metrics\nClassification report & Confusion matix\nHere we will use a simple example to show classification reports and confusion matrices.\n\ny_true is the test data\ny_pred is the prediction",
"from sklearn import metrics\n\ny_true = [\"cat\", \"ant\", \"cat\", \"cat\", \"ant\", \"bird\", \"bird\"]\ny_pred = [\"ant\", \"ant\", \"cat\", \"cat\", \"ant\", \"cat\", \"bird\"]\nprint(metrics.classification_report(y_true, y_pred,\n target_names=[\"ant\", \"bird\", \"cat\"]))",
"Here we can see that the predictions:\n- found ant 3 times and should have found it twice hence precision of 0.67.\n- never predicted ant when shouldn't have hence recall of 1.\n- f1 source is the mean of precision and recall\n- support of 2 meaning there were 2 in the true data set.\nhttp://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html\nConfusion matix",
"metrics.confusion_matrix(y_true, y_pred, labels=[\"ant\", \"bird\", \"cat\"])",
"In the confusion_matrix the labels give the order of the rows.\n\nant was correctly categorised twice and was never miss categorised\nbird was correctly categorised once and was categorised as cat once\ncat was correctly categorised twice and was categorised as an ant once",
"metrics.accuracy_score(y_true, y_pred, normalize=True, sample_weight=None)",
"Back to '20 newsgroups dataset'",
"print(metrics.classification_report(twenty_test.target, predicted_svm,\n target_names=twenty_test.target_names))",
"We can see where the 91% score came from.",
"# We got the evaluation score this way before:\nprint(np.mean(predicted_svm == twenty_test.target))\n# We get the same results using metrics.accuracy_score\nprint(metrics.accuracy_score(twenty_test.target, predicted_svm, normalize=True, sample_weight=None))",
"Now lets see the confusion matrix.",
"print(twenty_train.target_names)\n\nmetrics.confusion_matrix(twenty_test.target, predicted_bayes)",
"So we can see the naïve Bayes classifier got a lot more correct in some cases but also included a higher proportion in the last category.",
"metrics.confusion_matrix(twenty_test.target, predicted_svm)",
"We can see that atheism is miss categorised as Christian and science and medicine as computer graphics a high proportion of the time using the support vector machine.\nParameter tuning\nTransformation and classifiers can have various parameters. Rather than manually tweaking each parameter in the pipeline it is possible to use grid search instead.\nHere we try a couple of options for each stage. The more options the longer the grid search will take.",
"from sklearn.grid_search import GridSearchCV\nparameters = {'vect__ngram_range': [(1, 1), (1, 2)],\n 'tfidf__use_idf': (True, False),\n 'clf__alpha': (1e-3, 1e-4),\n}\n\ngs_clf = GridSearchCV(text_clf_svm_fit, parameters, n_jobs=-1)",
"Running the search on all the data will take a little while 10-30 seconds on a new ish desktop with 8 cores. If you don't want to wait that long uncomment the line with :400 and comment out the other.",
"#gs_clf_fit = gs_clf.fit(twenty_train.data[:400], twenty_train.target[:400])\ngs_clf_fit = gs_clf.fit(twenty_train.data, twenty_train.target)\n\nbest_parameters, score, _ = max(gs_clf_fit.grid_scores_, key=lambda x: x[1])\nfor param_name in sorted(parameters.keys()):\n print(\"%s: %r\" % (param_name, best_parameters[param_name]))\nscore",
"Well that is a significant improvement. Lets use these new parameters.",
"text_clf_svm_tuned = Pipeline([('vect', CountVectorizer(ngram_range=(1, 2))),\n ('tfidf', TfidfTransformer(use_idf=True)),\n ('clf', SGDClassifier(loss='hinge', penalty='l2',\n alpha=0.0001, n_iter=5, random_state=42)),\n])\ntext_clf_svm_tuned_fit = text_clf_svm_tuned.fit(twenty_train.data, twenty_train.target)\npredicted_tuned = text_clf_svm_tuned_fit.predict(docs_test)\nmetrics.accuracy_score(twenty_test.target, predicted_tuned, normalize=True, sample_weight=None)",
"Why has this only give a .93 instead of .97?\nhttp://scikit-learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html",
"for x in gs_clf_fit.grid_scores_:\n print x[0], x[1], x[2]",
"Moving on from that lets see where the improvements where made.",
"print(metrics.classification_report(twenty_test.target, predicted_svm,\n target_names=twenty_test.target_names))\n\nmetrics.confusion_matrix(twenty_test.target, predicted_svm)\n\nprint(metrics.classification_report(twenty_test.target, predicted_tuned,\n target_names=twenty_test.target_names))\n\nmetrics.confusion_matrix(twenty_test.target, predicted_tuned)",
"We see comp.graphics is the only category to see a drop in prediction the other have improved.\nConclusion\nWe can see that scikit learn can do a good job in classification with the amount of training and test data in this simple example.\n\nCan you see a use in your project?\nWhat issues can you see with the training and test data?"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
metpy/MetPy | v1.0/_downloads/f1c8c5b9729cd7164037ec8618030966/upperair_declarative.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Upper Air Analysis using Declarative Syntax\nThe MetPy declarative syntax allows for a simplified interface to creating common\nmeteorological analyses including upper air observation plots.",
"from datetime import datetime\n\nimport pandas as pd\n\nfrom metpy.cbook import get_test_data\nimport metpy.plots as mpplots\nfrom metpy.units import units",
"Getting the data\nIn this example, data is originally from the Iowa State Upper-air archive\n(https://mesonet.agron.iastate.edu/archive/raob/) available through a Siphon method.\nThe data are pre-processed to attach latitude/longitude locations for each RAOB site.",
"data = pd.read_csv(get_test_data('UPA_obs.csv', as_file_obj=False))",
"Plotting the data\nUse the declarative plotting interface to create a CONUS upper-air map for 500 hPa",
"# Plotting the Observations\nobs = mpplots.PlotObs()\nobs.data = data\nobs.time = datetime(1993, 3, 14, 0)\nobs.level = 500 * units.hPa\nobs.fields = ['temperature', 'dewpoint', 'height']\nobs.locations = ['NW', 'SW', 'NE']\nobs.formats = [None, None, lambda v: format(v, '.0f')[:3]]\nobs.vector_field = ('u_wind', 'v_wind')\nobs.reduce_points = 0\n\n# Add map features for the particular panel\npanel = mpplots.MapPanel()\npanel.layout = (1, 1, 1)\npanel.area = (-124, -72, 20, 53)\npanel.projection = 'lcc'\npanel.layers = ['coastline', 'borders', 'states', 'land', 'ocean']\npanel.plots = [obs]\n\n# Collecting panels for complete figure\npc = mpplots.PanelContainer()\npc.size = (15, 10)\npc.panels = [panel]\n\n# Showing the results\npc.show()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
drwalshaw/sc-python | 01-analysing-data.ipynb | mit | [
"analysing tabular data",
"import numpy\n\nnumpy.loadtxt\n\nnumpy.loadtxt(fname='data/weather-01.csv' delimiter = ',')\n\nnumpy.loadtxt(fname='data/weather-01.csv'delimiter=',')\n\nnumpy.loadtxt(fname='data/weather-01.csv',delimiter=',')",
"variables",
"weight_kg=55\n\nprint (weight_kg)\n\nprint('weight in pounds:',weight_kg*2.2)\n\nnumpy.loadtxt(fname='data/weather-01.csv',delimiter=',')\n\nnumpy.loadtxt(fname='data/weather-01.csv',delimiter=',')\n\nnumpy.loadtxt(fname='data/weather-01.csv',delimiter=',')\n\n%whos\n\ndata=numpy.loadtxt(fname='data/weather-01.csv',delimiter=',')\n\n%whos\n\n%whos\n\nprint(data.dtype)\n\nprint(data.shape)",
"this is 60 by 40",
"print (\"first value in data:\",data [0,0])\n\nprint ('A middle value:',data[30,20])",
"lets get the first 10 columns for the firsst 4 rows\nprint(data[0:4, 0:10])\nstart at index 0 and go up to but not including index 4",
"print (data[0:4, 0:10])",
"we dont need to start slicng at 0",
"print (data[5:10,7:15])",
"we dont even need to inc upper and lower limits",
"smallchunk=data[:3,36:]\nprint(smallchunk)",
"arithmetic on arrays",
"doublesmallchunk=smallchunk*2.0\n\nprint(doublesmallchunk)\n\ntriplesmallchunk=smallchunk+doublesmallchunk\n\nprint(triplesmallchunk)\n\nprint(numpy.mean(data))\n\nprint (numpy.max(data))\n\nprint (numpy.min(data))",
"get a set of data for the first station\nthis is shorthand for \"all the columns\"",
"station_0=data[0,:]\n\nprint(numpy.max(station_0))",
"we dont need to create @temporary@ array slices\nwe can refer to what we call array axes",
"print(numpy.mean(data, axis=0))\n\nprint(numpy.mean(data, axis=1))",
"axis = 0 gets mean down eaach column\naxis=1 gets the mean across each row so the mean temp\nfor each station for all periods\nsee above\ndo some simple vissualisations",
"import matplotlib.pyplot\n\n%matplotlib inline\n\nimage=matplotlib.pyplot.imshow(data)",
"lets look at the average tempp over time",
"avg_temperature=numpy.mean(data,axis=0)\n\navg_plot=matplotlib.pyplot.plot(avg_temperature)\n\nimport numpy\n\nimport matplotlib.pyplot\n\n%matplotlib inline\n\ndata=numpy.loadtxt(fname='data/weather-01.csv',delimiter=',')",
"create a wide figure to hold sub plots",
"fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))",
"create placeholders for plots",
"fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))\nsubplot1=fig.add_subplot (1,3,1)\nsubplot2=fig.add_subplot (1,3,2)\nsubplot3=fig.add_subplot (1,3,3)\n\nsubplot1.set_ylabel('average')\nsubplot1.plot(numpy.mean(data, axis=0))\n\nsubplot2.set_ylabel('minimum')\nsubplot2.plot(numpy.min(data, axis=0))\n\nsubplot3.set_ylabel('maximum')\nsubplot3.plot(numpy.max(data, axis=0))",
"this is fine for small numbers of datasets, what if wwe have hundreds or thousands? we need more automaation\nloops",
"word='notebook'\nprint (word[4])",
"see aabove note diff between squaare and normaal brackets",
"for char in word:\n # colon before word or indentation v imporetaant\n #indent is 4 spaces\n \n\nfor char in word:\n print (char)",
"reading filenames\nget a list of all the filenames from disk",
"import glob",
"global..something~",
"print(glob.glob('data/weather*.csv'))",
"putting it all together",
"filenames=sorted(glob.glob('data/weather*.csv'))\nfilenames=filenames[0:3]\n\nfor f in filenames:\n print (f)\n data=numpy.loadtxt(fname=f, delimiter=',')\n \n#next bits need indenting\n\n\n fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))\n subplot1=fig.add_subplot (1,3,1)\n subplot2=fig.add_subplot (1,3,2)\n subplot3=fig.add_subplot (1,3,3)\n\n subplot1.set_ylabel('average')\n subplot1.plot(numpy.mean(data, axis=0))\n\n subplot2.set_ylabel('minimum')\n subplot2.plot(numpy.min(data, axis=0))\n\n subplot3.set_ylabel('maximum')\n subplot3.plot(numpy.max(data, axis=0))\n \n fig.tight_layout()\n matplotlib.pyplot.show\n\nnum=37\nif num>100:\n print('greater')\nelse:\n print('not greater')\n print ('done')\n\nnum=107\nif num>100:\n print('greater')\nelse:\n print('not greater')\n print ('done')",
"didnt print \"done\" due to break in indentation sequence",
"num=-3\n\nif num>0:\n print (num, \"is positive\")\nelif num ==0:\n print (num, \"is zero\")\nelse:\n print (num, \"is negative\")",
"elif eqauls else if, always good to finish a chain with an else",
"filenames=sorted(glob.glob('data/weather*.csv'))\n\n\nfilenames=sorted(glob.glob('data/weather*.csv'))\nfilenames=filenames[0:3]\n\nfor f in filenames:\n print (f)\n data=numpy.loadtxt(fname=f, delimiter=',') == 0 \n if numpy.max (data, axis=0)[0] ==0 and numpy.max (data, axis=0)[20] ==20:\n print ('suspicious looking maxima')\n elif numpy.sum(numpy.min(data, axis=0)) ==0:\n print ('minimum adds to zero')\n else:\n print ('data looks ok')\n \n \n \n#next bits need indenting\n\n\n fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))\n subplot1=fig.add_subplot (1,3,1)\n subplot2=fig.add_subplot (1,3,2)\n subplot3=fig.add_subplot (1,3,3)\n\n subplot1.set_ylabel('average')\n subplot1.plot(numpy.mean(data, axis=0))\n\n subplot2.set_ylabel('minimum')\n subplot2.plot(numpy.min(data, axis=0))\n\n subplot3.set_ylabel('maximum')\n subplot3.plot(numpy.max(data, axis=0))\n \n fig.tight_layout()\n matplotlib.pyplot.show",
"something went wrong with the above",
"def fahr_to_kelvin(temp):\n return((temp-32)*(5/9)+ 273.15)\n\nprint ('freezing point of water:', fahr_to_kelvin(32))\n\nprint ('boiling point of water:', fahr_to_kelvin(212))",
"using functions",
"def analyse (filename):\n data=numpy.loadtxt(fname=filename,)......",
"unfinsinshed",
"def detect_problems (filename):\n data=numpy.loadtxt(fname=filename, delimiter=',')\n \n if numpy.max (data, axis=0)[0] ==0 and numpy.max (data, axis=0)[20] ==20:\n print ('suspicious looking maxima')\n elif numpy.sum(numpy.min(data, axis=0)) ==0:\n print ('minimum adds to zero')\n else:\n print ('data looks ok')\n \n \n\nfor f in filenames [0:5]:\n print (f)\n analyse (f)\n detect_problems (f)\n \n\ndef analyse (filename):\n data=numpy.loadtxt(fname=filename,delimiter=',')\n \n fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))\n subplot1=fig.add_subplot (1,3,1)\n subplot2=fig.add_subplot (1,3,2)\n subplot3=fig.add_subplot (1,3,3)\n\n subplot1.set_ylabel('average')\n subplot1.plot(numpy.mean(data, axis=0))\n\n subplot2.set_ylabel('minimum')\n subplot2.plot(numpy.min(data, axis=0))\n\n subplot3.set_ylabel('maximum')\n subplot3.plot(numpy.max(data, axis=0))\n \n fig.tight_layout()\n matplotlib.pyplot.show\n \n \n\nfor f in filenames [0:5]:\n print (f)\n analyse (f)\n detect_problems (f)\n \n\nhelp(numpy.loadtxt)\n\nhelp(detect_problems)\n\n\n\"\"\"some of our temperature files haave problems, check for these\n\nthis function reads a file and reports on odd looking maxima and minimia that add to zero\nthe function does not return any data\n\"\"\"\n\ndef detect_problems (filename):\n data=numpy.loadtxt(fname=filename, delimiter=',')\n \n if numpy.max (data, axis=0)[0] ==0 and numpy.max (data, axis=0)[20] ==20:\n print ('suspicious looking maxima')\n elif numpy.sum(numpy.min(data, axis=0)) ==0:\n print ('minimum adds to zero')\n else:\n print ('data looks ok')\n \n\ndef analyse (filename):\n data=numpy.loadtxt(fname=filename,delimiter=',')\n \n \"\"\" this function analyses a dataset and outputs plots for maax min and ave\n \"\"\"\n \n fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))\n subplot1=fig.add_subplot (1,3,1)\n subplot2=fig.add_subplot (1,3,2)\n subplot3=fig.add_subplot (1,3,3)\n\n subplot1.set_ylabel('average')\n subplot1.plot(numpy.mean(data, axis=0))\n\n subplot2.set_ylabel('minimum')\n subplot2.plot(numpy.min(data, axis=0))\n\n subplot3.set_ylabel('maximum')\n subplot3.plot(numpy.max(data, axis=0))\n \n fig.tight_layout()\n matplotlib.pyplot.show"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ethen8181/machine-learning | deep_learning/softmax_tensorflow.ipynb | mit | [
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Tensorflow\" data-toc-modified-id=\"Tensorflow-1\"><span class=\"toc-item-num\">1 </span>Tensorflow</a></span><ul class=\"toc-item\"><li><span><a href=\"#Hello-World\" data-toc-modified-id=\"Hello-World-1.1\"><span class=\"toc-item-num\">1.1 </span>Hello World</a></span></li><li><span><a href=\"#Linear-Regression\" data-toc-modified-id=\"Linear-Regression-1.2\"><span class=\"toc-item-num\">1.2 </span>Linear Regression</a></span></li><li><span><a href=\"#MNIST-Using-Softmax\" data-toc-modified-id=\"MNIST-Using-Softmax-1.3\"><span class=\"toc-item-num\">1.3 </span>MNIST Using Softmax</a></span></li></ul></li><li><span><a href=\"#Reference\" data-toc-modified-id=\"Reference-2\"><span class=\"toc-item-num\">2 </span>Reference</a></span></li></ul></div>",
"# code for loading the format for the notebook\nimport os\n\n# path : store the current path to convert back to it later\npath = os.getcwd()\nos.chdir(os.path.join('..', 'notebook_format'))\n\nfrom formats import load_style\nload_style(plot_style = False)\n\nos.chdir(path)\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# 1. magic for inline plot\n# 2. magic to print version\n# 3. magic so that the notebook will reload external python modules\n# 4. magic to enable retina (high resolution) plots\n# https://gist.github.com/minrk/3301035\n%matplotlib inline\n%load_ext watermark\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nfrom keras.datasets import mnist\nfrom keras.utils import np_utils\n\n%watermark -a 'Ethen' -d -t -v -p numpy,matplotlib,keras,tensorflow",
"Tensorflow\nTensorFlow provides multiple APIs. The lowest level API--TensorFlow Core-- provides you with complete programming control. We recommend TensorFlow Core for machine learning researchers and others who require fine levels of control over their models\nHello World\nWe can think of TensorFlow Core programs as consisting of two discrete sections:\n\nBuilding the computational graph.\nRunning the computational graph.",
"# note that this is simply telling tensorflow to \n# create a constant operation, nothing gets\n# executed until we start a session and run it\nhello = tf.constant('Hello, TensorFlow!')\nhello\n\n# start the session and run the graph\nwith tf.Session() as sess:\n print(sess.run(hello))",
"We can think of tensorflow as a system to define our computation, and using the operation that we've defined it will construct a computation graph (where each operation becomes a node in the graph). The computation graph that we've defined will not be run unless we give it some context and explicitly tell it to do so. In this case, we create the Session that encapsulates the environment in which the objects are evaluated (execute the operations that are defined in the graph).\nConsider another example that simply add and multiply two constant numbers.",
"a = tf.constant(2.0, tf.float32)\nb = tf.constant(3.0) # also tf.float32 implicitly\nc = a + b\n\nwith tf.Session() as sess:\n print('mutiply: ', sess.run(a * b))\n print('add: ', sess.run(c)) # note that we can define the add operation outside \n print('add: ', sess.run(a + b)) # or inside the .run()",
"The example above is not especially interesting because it always produces a constant result. A graph can be parameterized to accept external inputs, known as placeholders. Think of it as the input data we would give to machine learning algorithm at some point.\nWe can do the same operation as above by first defining a placeholder (note that we must specify the data type). Then feed in values using feed_dict when we run it.",
"a = tf.placeholder(tf.float32)\nb = tf.placeholder(tf.float32)\n\n# define some operations\nadd = a + b\nmul = a * b\n\nwith tf.Session() as sess:\n print('mutiply: ', sess.run(mul, feed_dict = {a: 2, b: 3}))\n print('add: ', sess.run(add, feed_dict = {a: 2, b: 3}))",
"Some matrix operations are the same compared to numpy. e.g.",
"c = np.array([[3.,4], [5.,6], [6.,7]])\nprint(c)\nprint(np.mean(c, axis = 1))\nprint(np.argmax(c, axis = 1))\n\nwith tf.Session() as sess:\n result = sess.run(tf.reduce_mean(c, axis = 1))\n print(result)\n print(sess.run(tf.argmax(c, axis = 1)))",
"The functionality of numpy.mean and tensorflow.reduce_mean are the same. When axis argument parameter is 1, it computes mean across (3,4) and (5,6) and (6,7), so 1 defines across which axis the mean is computed (axis = 1, means the operation is along the column, so it will compute the mean for each row). When it is 0, the mean is computed across(3,5,6) and (4,6,7), and so on. The same can be applied to argmax which returns the index that contains the maximum value along an axis.\nLinear Regression\nWe'll start off by writing a simple linear regression model. To do so, we first need to understand the difference between tf.Variable and tf.placeholder. \n\nStackoverflow. The difference is that with tf.Variable you have to provide an initial value when you declare it. With tf.placeholder you don't have to provide an initial value and you can specify it at run time with the feed_dict argument inside Session.run.\nIn short, we will use tf.Variable for trainable variables such as weights (W) and biases (B) for our model. On the other hand, tf.placeholder is used to feed actual training examples.\n\nAlso note that, constants are automatically initialized when we call tf.constant, and their value can never change. By contrast, variables are not initialized when we call tf.Variable. To initialize all the variables in a TensorFlow program, we must explicitly call a special operation called tf.global_variables_initializer(). Things will become clearer with the example below.",
"# Parameters\nlearning_rate = 0.01 # learning rate for the optimizer (gradient descent)\nn_epochs = 1000 # number of iterations to train the model\ndisplay_epoch = 100 # display the cost for every display_step iteration\n\n# make up some trainig data\nX_train = np.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, \n 2.167, 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1], dtype = np.float32)\ny_train = np.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, \n 1.221, 2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3], dtype = np.float32)\n\n# placeholder for the input data\nX = tf.placeholder(tf.float32)\nY = tf.placeholder(tf.float32)\n\n# give the model's parameter a randomized initial value\nW = tf.Variable(np.random.randn(), tf.float32, name = 'weight')\nb = tf.Variable(np.random.randn(), tf.float32, name = 'bias')\n\n# Construct the formula for the linear model\n# we can also do\n# pred = tf.add(tf.multiply(X, W), b)\npred = W * X + b\n\n# we then define the loss function that the model is going to optimize on,\n# here we use the standard mean squared error, which is sums the squares of the\n# prediction and the true y divided by the number of observations, note\n# that we're computing the difference between the prediction and the y label\n# from the placeholder\ncost = tf.reduce_mean(tf.pow(pred - Y, 2))\n\n# after defining the model structure and the function to optimize on,\n# tensorflow provides several optimizers that can do optimization task\n# for us, the simplest one being gradient descent\noptimizer = tf.train.GradientDescentOptimizer(learning_rate)\ntrain = optimizer.minimize(cost)\n\n# initializing the variables\ninit = tf.global_variables_initializer()\n\n# change default figure and font size\nplt.rcParams['figure.figsize'] = 8, 6 \nplt.rcParams['font.size'] = 12\n\n# Launch the graph\nwith tf.Session() as sess:\n sess.run(init)\n\n # Fit on all the training data\n feed_dict = {X: X_train, Y: y_train}\n for epoch in range(n_epochs):\n sess.run(train, feed_dict = feed_dict)\n\n # Display logs per epoch step\n if (epoch + 1) % display_epoch == 0:\n # run the cost to obtain the value for the cost function at each step\n c = sess.run(cost, feed_dict = feed_dict)\n print(\"Epoch: {}, cost: {}\".format(epoch + 1, c))\n\n print(\"Optimization Finished!\")\n c = sess.run(cost, feed_dict = feed_dict)\n weight = sess.run(W)\n bias = sess.run(b)\n print(\"Training cost: {}, W: {}, b: {}\".format(c, weight, bias))\n\n # graphic display\n plt.plot(X_train, y_train, 'ro', label = 'Original data')\n plt.plot(X_train, weight * X_train + bias, label = 'Fitted line')\n plt.legend()\n plt.show()",
"MNIST Using Softmax\nMNIST is a simple computer vision dataset. It consists of images of handwritten digits like these:\n<img src='images/mnist.png'>\nEach image is 28 pixels by 28 pixels, which is essentially a $28 \\times 28$ array of numbers. To use it in a context of a machine learning problem, we can flatten this array into a vector of $28 \\times 28 = 784$, this will be the number of features for each image. It doesn't matter how we flatten the array, as long as we're consistent between images. Note that, flattening the data throws away information about the 2D structure of the image. Isn't that bad? Well, the best computer vision methods do exploit this structure. But the simple method we will be using here, a softmax regression (defined below), won't.\nThe dataset also includes labels for each image, telling us the each image's label. For example, the labels for the above images are 5, 0, 4, and 1. Here we're going to train a softmax model to look at images and predict what digits they are. The possible label values in the MNIST dataset are numbers between 0 and 9, hence this will be a 10-class classification problem.",
"n_class = 10\nn_features = 784 # mnist is a 28 * 28 image\n\n# load the dataset and some preprocessing step that can be skipped\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\nX_train = X_train.reshape(60000, n_features)\nX_test = X_test.reshape(10000, n_features)\nX_train = X_train.astype('float32')\nX_test = X_test.astype('float32')\n\n# images takes values between 0 - 255, we can normalize it\n# by dividing every number by 255\nX_train /= 255\nX_test /= 255\n\nprint(X_train.shape[0], 'train samples')\nprint(X_test.shape[0], 'test samples')\n\n# convert class vectors to binary class matrices (one-hot encoding)\n# note: you HAVE to to this step\nY_train = np_utils.to_categorical(y_train, n_class)\nY_test = np_utils.to_categorical(y_test , n_class)",
"In the following code chunk, we define the overall computational graph/structure for the softmax classifier using the cross entropy cost function as the objective. Recall that the formula for this function can be denoted as:\n$$L = -\\sum_i y'_i \\log(y_i)$$\nWhere y is our predicted probability distribution, and y′ is the true distribution.",
"# define some global variables\nlearning_rate = 0.1 \nn_iterations = 400\n\n# define the input and output \n# here None means that a dimension can be of any length,\n# which is what we want, since the number of observations\n# we have can vary;\n# note that the shape argument to placeholder is optional, \n# but it allows TensorFlow to automatically catch bugs stemming \n# from inconsistent tensor shapes\nX = tf.placeholder(tf.float32, [None, n_features])\ny = tf.placeholder(tf.float32, [None, n_class])\n\n# initialize both W and b as tensors full of zeros. \n# these are parameters that the model is later going to learn,\n# Notice that W has a shape of [784, 10] because we want to multiply \n# the 784-dimensional image vectors by it to produce 10-dimensional \n# vectors of evidence for the difference classes. b has a shape of [10] \n# so we can add it to the output.\nW = tf.Variable(tf.zeros([n_features, n_class]))\nb = tf.Variable(tf.zeros([n_class]))",
"```python\nto define the softmax classifier and cross entropy cost\nwe can do the following\nmatrix multiplication using the .matmul command\nand add the softmax output\noutput = tf.nn.softmax(tf.matmul(X, W) + b)\ncost function: cross entropy, the reduce mean is simply the average of the\ncost function across all observations\ncross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(output), axis = 1))\n```",
"# but for numerical stability reason, the tensorflow documentation\n# suggests using the following function\noutput = tf.matmul(X, W) + b\ncross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = y, logits = output))",
"Now that we defined the structure of our model, we'll:\n\nDefine a optimization algorithm the train it. In this case, we ask TensorFlow to minimize our defined cross_entropy cost using the gradient descent algorithm with a learning rate of 0.5. There are also other off the shelf optimizers that we can use that are faster for more complex models.\nWe'll also add an operation to initialize the variables we created\nDefine helper \"function\" to evaluate the prediction accuracy",
"train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\ninit = tf.global_variables_initializer()\n\n# here we're return the predicted class of each observation using argmax\n# and see if the ouput (prediction) is equal to the target variable (y)\n# since equal is a boolean type tensor, we cast it to a float type to compute\n# the actual accuracy\ncorrect_prediction = tf.equal(tf.argmax(y, axis = 1), tf.argmax(output, axis = 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))",
"Now it's time to run it. During each step of the loop, we get a \"batch\" of one hundred random data points (defined by batch_size) from our training set. We run train_step feeding in the batches data to replace the placeholders.\nUsing small batches of random data is called stochastic training -- in this case, stochastic gradient descent. Ideally, we'd like to use all our data for every step of training because that would give us a better sense of what we should be doing, but that's expensive. So, instead, we use a different subset every time. Doing this is cheap and has much of the same benefit.",
"with tf.Session() as sess: \n # initialize the variable, train the \"batch\" gradient descent\n # for a specified number of iterations and evaluate on accuracy score\n # remember the key to the feed_dict dictionary must match the variable we use\n # as the placeholder for the data in the beginning\n sess.run(init)\n for i in range(n_iterations):\n # X_batch, y_batch = mnist.train.next_batch(batch_size)\n _, acc = sess.run([train_step, accuracy], feed_dict = {X: X_train, y: Y_train})\n \n # simply prints the training data's accuracy for every n iteration\n if i % 50 == 0:\n print(acc)\n \n # after training evaluate the accuracy on the testing data\n acc = sess.run(accuracy, feed_dict = {X: X_train, y: Y_train})\n print('test:', acc)",
"Notice that we did not have to worry about computing the gradient to update the model, the nice thing about Tensorflow is that, once we've defined the structure of our model it has the capability to automatically differentiate mathematical expressions. This means we no longer need to compute the gradients ourselves! In this example, our softmax classifier obtained pretty nice result around 90%. But we can certainly do better with more advanced techniques such as convolutional deep learning.\nReference\n\nBlog: What is a TensorFlow Session?\nGithub: Tensorflow Examples - Linear Regression\nTensorflow Documentation: Getting Started With TensorFlow\nTensorFlow Documentation: MNIST For ML Beginners"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kkhenriquez/python-for-data-science | Week-8-NLP-Databases/Working with Databases.ipynb | mit | [
"Access a Database with Python - Iris Dataset\nThe Iris dataset is a popular dataset especially in the Machine Learning community, it is a set of features of 50 Iris flowers and their classification into 3 species.\nIt is often used to introduce classification Machine Learning algorithms.\nFirst let's download the dataset in SQLite format from Kaggle:\nhttps://www.kaggle.com/uciml/iris/\nDownload database.sqlite and save it in the data/iris folder.\n<p><img src=\"https://upload.wikimedia.org/wikipedia/commons/4/49/Iris_germanica_%28Purple_bearded_Iris%29%2C_Wakehurst_Place%2C_UK_-_Diliff.jpg\" alt=\"Iris germanica (Purple bearded Iris), Wakehurst Place, UK - Diliff.jpg\" height=\"145\" width=\"114\"></p>\n\n<p><br> From <a href=\"https://commons.wikimedia.org/wiki/File:Iris_germanica_(Purple_bearded_Iris),_Wakehurst_Place,_UK_-_Diliff.jpg#/media/File:Iris_germanica_(Purple_bearded_Iris),_Wakehurst_Place,_UK_-_Diliff.jpg\">Wikimedia</a>, by <a href=\"//commons.wikimedia.org/wiki/User:Diliff\" title=\"User:Diliff\">Diliff</a> - <span class=\"int-own-work\" lang=\"en\">Own work</span>, <a href=\"http://creativecommons.org/licenses/by-sa/3.0\" title=\"Creative Commons Attribution-Share Alike 3.0\">CC BY-SA 3.0</a>, <a href=\"https://commons.wikimedia.org/w/index.php?curid=33037509\">Link</a></p>\n\nFirst let's check that the sqlite database is available and display an error message if the file is not available (assert checks if the expression is True, otherwise throws AssertionError with the error message string provided):",
"import os\ndata_iris_folder_content = os.listdir(\"data/iris\")\n\nerror_message = \"Error: sqlite file not available, check instructions above to download it\"\nassert \"database.sqlite\" in data_iris_folder_content, error_message",
"Access the Database with the sqlite3 Package\nWe can use the sqlite3 package from the Python standard library to connect to the sqlite database:",
"import sqlite3\n\nconn = sqlite3.connect('data/iris/database.sqlite')\n\ncursor = conn.cursor()\n\ntype(cursor)",
"A sqlite3.Cursor object is our interface to the database, mostly throught the execute method that allows to run any SQL query on our database.\nFirst of all we can get a list of all the tables saved into the database, this is done by reading the column name from the sqlite_master metadata table with:\nSELECT name FROM sqlite_master\n\nThe output of the execute method is an iterator that can be used in a for loop to print the value of each row.",
"for row in cursor.execute(\"SELECT name FROM sqlite_master\"):\n print(row)",
"a shortcut to directly execute the query and gather the results is the fetchall method:",
"cursor.execute(\"SELECT name FROM sqlite_master\").fetchall()",
"Notice: this way of finding the available tables in a database is specific to sqlite, other databases like MySQL or PostgreSQL have different syntax.\nThen we can execute standard SQL query on the database, SQL is a language designed to interact with data stored in a relational database. It has a standard specification, therefore the commands below work on any database.\nIf you need to connect to another database, you would use another package instead of sqlite3, for example:\n\nMySQL Connector for MySQL\nPsycopg for PostgreSQL\npymssql for Microsoft MS SQL\n\nthen you would connect to the database using specific host, port and authentication credentials but then you could execute the same exact SQL statements.\nLet's take a look for example at the first 3 rows in the Iris table:",
"sample_data = cursor.execute(\"SELECT * FROM Iris LIMIT 20\").fetchall()\n\nprint(type(sample_data))\nsample_data\n\n[row[0] for row in cursor.description]",
"It is evident that the interface provided by sqlite3 is low-level, for data exploration purposes we would like to directly import data into a more user friendly library like pandas.\nImport data from a database to pandas",
"import pandas as pd\n\niris_data = pd.read_sql_query(\"SELECT * FROM Iris\", conn)\n\niris_data.head()\n\niris_data.dtypes",
"pandas.read_sql_query takes a SQL query and a connection object and imports the data into a DataFrame, also keeping the same data types of the database columns. pandas provides a lot of the same functionality of SQL with a more user-friendly interface.\nHowever, sqlite3 is extremely useful for downselecting data before importing them in pandas.\nFor example you might have 1 TB of data in a table stored in a database on a server machine. You are interested in working on a subset of the data based on some criterion, unfortunately it would be impossible to first load data into pandas and then filter them, therefore we should tell the database to perform the filtering and just load into pandas the downsized dataset.",
"iris_setosa_data = pd.read_sql_query(\"SELECT * FROM Iris WHERE Species == 'Iris-setosa'\", conn)\n\niris_setosa_data\nprint(iris_setosa_data.shape)\nprint(iris_data.shape)\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
molgor/spystats | notebooks/Sandboxes/TensorFlow/GPFlow_examples.ipynb | bsd-2-clause | [
"%matplotlib inline\nimport sys\nsys.path.append('/apps')\nimport django\ndjango.setup()\nfrom drivers.graph_models import TreeNode, Order, Family, graph, pickNode\nfrom traversals.strategies import sumTrees, UniformRandomSampleForest",
"GPFlow first approximation",
"## Import modules\nimport numpy as np\nimport scipy.spatial.distance as sp\nfrom matplotlib import pyplot as plt\nplt.style.use('ggplot')",
"Simulating Data\n\nSimulate random uniform 4-d vector. Give N of this.",
"## Parameter definitions\nN = 1000\nphi = 0.05\nsigma2 = 1.0\nbeta_0 = 10.0\nbeta_1 = 1.5\nbeta_2 = -1.0\n# AL NAGAT\nnugget = 0.03\n\nX = np.random.rand(N,4)",
"Calculate distance\nX can be interpreted as covariate matrix in which the first two columns are the longitud and latitude.\nGPFlow requires that all the covariates (including spatio-temporal coordinates) are in X.",
"points = X[:,0:2]\ndist_points = sp.pdist(points)\n\n## Reshape the vector to square matrix \ndistance_matrix = sp.squareform(dist_points)\ncorrelation_matrix = np.exp(- distance_matrix / phi)\ncovariance_matrix = correlation_matrix * sigma2\nplt.imshow(covariance_matrix)",
"Simulate the Gaussian Process $S$\nRemmember that for a stationary Gaussian Process, the value at Z is independent of the betas (Covariate weights).\nMean 0's $\\Sigma$ Correlation matrix\n$$S = MVN(0,\\Sigma) + \\epsilon$$\n$ \\epsilon \\sim N(0,\\sigma^{2}) $\nS is a realization of a spatial process.",
"S = np.random.multivariate_normal(np.zeros(N), correlation_matrix) +\\\n np.random.normal(size = N) * nugget\n\nS.shape\n# We convert to Matrix [1 column]\nS = S.reshape(N,1)\n## Plot x, y using as color the Gaussian process\nplt.scatter(X[:, 0], X[:, 1], c = S)\nplt.colorbar()",
"Simulate the Response Variable $y$ \n\n$$y_1(x_1,x_2) = S(x_1,x_2) $$\n$$y_2(x_1,x_2) = \\beta_0 + x_3\\beta_1 + x_4\\beta_2 + S(x_1,x_2)$$",
"# remmember index 0 is 1\nmu = beta_0 + beta_1 * X[:, 2] + beta_2 * X[:, 3]\nmu = mu.reshape(N,1)\nY1 = S\nY2 = mu + S\nplt.scatter(X[:, 0], X[:, 1], c = Y2)\nplt.colorbar()\n\nS.shape",
"GP Model !\nThis model is without covariates",
"# Import GPFlow\nimport GPflow as gf\n\n# Defining the model Matern function with \\kappa = 0.5 \nk = gf.kernels.Matern12(2, lengthscales=1, active_dims = [0,1] )\n\ntype(k)",
"Model for $y_1$",
"m = gf.gpr.GPR(points, Y1, k)\n## First guess\ninit_nugget = 0.001\nm.likelihood.variance = init_nugget\nprint(m)",
"Like in tensorflow, m is a graph and has at least three nodes: lengthscale, kern variance and likelihood variance",
"# Estimation using symbolic gradient descent\nm.optimize()\nprint(m)",
"compare with original parameters (made from the simulation)",
"print(phi,sigma2,nugget)\n\nprint points.shape\nprint Y1.shape",
"it was close enough\nGAUSSIAN PROCESS WITH LINEAR TREND\nDefining the model",
"k = gf.kernels.Matern12(2, lengthscales=1, active_dims = [0,1] )\n\ngf.mean_functions.Linear()\nmeanf = gf.mean_functions.Linear(np.ones((4,1)), np.ones(1))\n\nm = gf.gpr.GPR(X, Y2, k, meanf)\nm.likelihood.variance = init_nugget\nprint(m)\n\n# Estimation\nm.optimize()\nprint(m)",
"Original parameters\n\nphi = 0.05 ---> lengthscale\nsigma2 = 1.0 ---> variance transform\nnugget = 0.03 ---> likelihood variance\nbeta_0 = 10.0 ---> mean_function b\nbeta_1 = 1.5 ---> mean_fucntionA [2]\nbeta_2 = -1.0 ---> mean_functionA [3]\nmean_functionA[0] and mean_functionA[1] are the betas for for x and y (coordinates respectively)\n\nWithout spatial coordinates as covariates",
"# Defining the model\nk = gf.kernels.Matern12(2, lengthscales=1, active_dims = [0,1])",
"Custom made mean function (Erick Chacón )",
"from GPflow.mean_functions import MeanFunction, Param\nimport tensorflow as tf\nclass LinearG(MeanFunction):\n \"\"\"\n y_i = A x_i + b\n \"\"\"\n def __init__(self, A=None, b=None):\n \"\"\"\n A is a matrix which maps each element of X to Y, b is an additive\n constant.\n If X has N rows and D columns, and Y is intended to have Q columns,\n then A must be D x Q, b must be a vector of length Q.\n \"\"\"\n A = np.ones((1, 1)) if A is None else A\n b = np.zeros(1) if b is None else b\n MeanFunction.__init__(self)\n self.A = Param(np.atleast_2d(A))\n self.b = Param(b)\n\n def __call__(self, X):\n Anew = tf.concat([np.zeros((2,1)),self.A],0)\n return tf.matmul(X, Anew) + self.b\n",
"Now we can use the special mean function without the coordinates (covariates).",
"meanf = LinearG(np.ones((2,1)), np.ones(1))\n\nX.shape\nY2.shape\n\nm = gf.gpr.GPR(X, Y2, k, meanf)a\nm.likelihood.variance = 0.1\nprint(m)",
"Only 2 parameters now!",
"# Estimation\nm.optimize()\nprint(m)",
"Original parameters\n\nphi = 0.05 ---> lengthscale\nsigma2 = 1.0 ---> variance transform\nnugget = 0.03 ---> likelihood variance\nbeta_0 = 10.0 ---> mean_function b\nbeta_1 = 1.5 ---> mean_fucntionA [2]\nbeta_2 = -1.0 ---> mean_functionA [3]",
"predicted_x = np.linspace(0.0,1.0,100)\n\nfrom external_plugins.spystats.models import makeDuples\npredsX = makeDuples(predicted_x)\n\npX = np.array(predsX)\n\ntt = np.ones((10000,2)) *0.5\n\n## Concatenate with horizontal stack\nSuperX = np.hstack((pX,tt))\n\nSuperX.shape\n\nmean, variance = m.predict_y(SuperX)\nminmean = min(mean)\nmaxmean = max(mean)\n\n#plt.figure(figsize=(12, 6))\nplt.scatter(pX[:,0], pX[:,1])\n\n\nXx, Yy = np.meshgrid(predicted_x,predicted_x)\nplt.pcolor(Xx,Yy,mean.reshape(100,100),cmap=plt.cm.Accent)\n\nNn = 300\npredicted_x = np.linspace(0.0,1.0,Nn)\nXx, Yy = np.meshgrid(predicted_x,predicted_x)\n## Predict\nfrom external_plugins.spystats.models import makeDuples\npredsX = makeDuples(predicted_x)\npX = np.array(predsX)\ntt = np.ones((Nn**2,2)) *0.5\nSuperX = np.hstack((pX,tt))\nmean, variance = m.predict_y(SuperX)\nminmean = min(mean)\nmaxmean = max(mean)\nwidth = 12\nheight = 8\nminz = minmean\nmaxz = maxmean\nplt.figure(figsize=(width, height))\nplt.subplot(1,2,1)\nscat = plt.scatter(X[:, 0], X[:, 1], c = Y2)\n#plt.axis('equal')\nplt.xlim((0,1))\nplt.ylim((0,1))\nplt.clim(minz,maxz)\n#plt.colorbar()\nplt.subplot(1,2,2)\n#field = plt.imshow(mean.reshape(100,100).transpose().transpose(),interpolation=None)\nplt.pcolor(Xx,Yy,mean.reshape(Nn,Nn).transpose())\nplt.colorbar()\nplt.clim(minz,maxz)\n\n\n\n\n\nfig, axes = plt.subplots(nrows=1, ncols=2)\nscat = plt.scatter(X[:, 0], X[:, 1], c = Y2)\n\nfield = plt.imshow(mean.reshape(100,100),interpolation=None)\n#fig.subplots_adjust(right=0.8)\n#cbar_ax = fig.add_axes([0.85, 0.05])\nfig.colorbar(field, ax=axes.ravel().tolist())\n\n\n\nplt.imshow(mean.reshape(100,100),interpolation=None)\nplt.colorbar()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_evoked_topomap_delayed_ssp.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Create topographic ERF maps in delayed SSP mode\nThis script shows how to apply SSP projectors delayed, that is,\nat the evoked stage. This is particularly useful to support decisions\nrelated to the trade-off between denoising and preserving signal.\nIn this example we demonstrate how to use topographic maps for delayed\nSSP application.",
"# Authors: Denis Engemann <[email protected]>\n# Christian Brodbeck <[email protected]>\n# Alexandre Gramfort <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()",
"Set parameters",
"raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\necg_fname = data_path + '/MEG/sample/sample_audvis_ecg_proj.fif'\nevent_id, tmin, tmax = 1, -0.2, 0.5\n\n# Setup for reading the raw data\nraw = io.Raw(raw_fname)\nevents = mne.read_events(event_fname)\n\n# delete EEG projections (we know it's the last one)\nraw.del_proj(-1)\n# add ECG projs for magnetometers\n[raw.add_proj(p) for p in mne.read_proj(ecg_fname) if 'axial' in p['desc']]\n\n# pick magnetometer channels\npicks = mne.pick_types(raw.info, meg='mag', stim=False, eog=True,\n include=[], exclude='bads')\n\n# We will make of the proj `delayed` option to\n# interactively select projections at the evoked stage.\n# more information can be found in the example/plot_evoked_delayed_ssp.py\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=dict(mag=4e-12), proj='delayed')\n\nevoked = epochs.average() # average epochs and get an Evoked dataset.",
"Interactively select / deselect the SSP projection vectors",
"# set time instants in seconds (from 50 to 150ms in a step of 10ms)\ntimes = np.arange(0.05, 0.15, 0.01)\n\nevoked.plot_topomap(times, proj='interactive')\n# Hint: the same works for evoked.plot and evoked.plot_topo"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ajgpitch/qutip-notebooks | examples/qip-processor-DJ-algorithm.ipynb | lgpl-3.0 | [
"Running the Deutsch–Jozsa algorithm on the noisy device simulator\nAuthor: Boxi Li ([email protected])\nIn this example, we demonstrate how to simulate simple quantum algorithms on a qauntum hardware with QuTiP. The simulators are defined in the class Processor(and its sub-classes). Processor represents a general quantum device. The interaction of the quantum systems such as qubits is defined by the control Hamiltonian. One can set the amplitude of the interaction by the attribute coeff which corresponds to the pulse intensity of the control system. For more details please refer to the introductory notebook.\nIn this example, we won't set the interaction strength by ourselves. Instead, we give it a sequence of gates, i.e. a QubitCircuit, and let the Processor find the desired pulses. The Processor class has a method load_circuit that can transfer a QubitCircuit object into a control pulse sequence. Different sub-class of Processor find their pulses in different ways. We show two examples here, one is based on a physical model and the other uses the qutip.control module. For each case, we also compare the result with or without noise by defining the t1 and t2 time of the device.\nThe Deutsch–Jozsa algorithm\nThe Deutsch–Jozsa algorithm is the simplest quantum algorithm that offers an exponential speed-up compared to the classical one. It assumes that we have a function $f:{0,1}^n \\rightarrow {0,1}$ which is either balanced or constant. Constant means that $f(x)$ is either 1 or 0 for all inputs while balanced means that $f(x)$ is 1 for half of the input domain and 0 for the other half. A more rigorous definition can be found at https://en.wikipedia.org/wiki/Deutsch-Jozsa_algorithm.\nThe implementation of the Deutsch–Jozsa algorithm inclues $n$ input qubits and 1 ancilla initialised in state $1$. At the end of the algorithm, the first $n$ qubits are measured on the computational basis. If the function is constant, the result will be $0$ for all $n$ qubits. If balanced, $\\left|00...0\\right\\rangle$ will never be measured.\nThe following example is implemented for the balanced function $f:{00,01,10,11} \\rightarrow {0,1}$, where $f(00)=f(11)=0$ and $f(01)=f(10)=1$. This function is balanced, so the probability of measuring state $\\left|00\\right\\rangle$ should be 0.",
"import numpy as np\nfrom qutip.qip.device import OptPulseProcessor, LinearSpinChain\nfrom qutip.qip.circuit import QubitCircuit\nfrom qutip.operators import sigmaz, sigmax, identity\nfrom qutip.tensor import tensor\nfrom qutip.states import basis\nfrom qutip.qobj import ptrace\nbasis00 = tensor([basis(2,0), basis(2,0)])\n\nqc = QubitCircuit(N=3)\nqc.add_gate(\"SNOT\", targets=0)\nqc.add_gate(\"SNOT\", targets=1)\nqc.add_gate(\"SNOT\", targets=2)\n\n# function f(x)\nqc.add_gate(\"CNOT\", controls=0, targets=2)\nqc.add_gate(\"CNOT\", controls=1, targets=2)\n\nqc.add_gate(\"SNOT\", targets=0)\nqc.add_gate(\"SNOT\", targets=1)",
"Using the optimal control module to find the pulse\nThis feature integrated into the sub-class OptPulseProcessor which use methods in the optimal control module to find the optimal pulse sequence for the desired gates. It can find the optimal pulse either for the whole unitary evolution or for each gate. Here we choose the second option.",
"setting_args = {\"SNOT\": {\"num_tslots\": 5, \"evo_time\": 1},\n \"CNOT\": {\"num_tslots\": 12, \"evo_time\": 5}}\nprocessor = OptPulseProcessor(N=3)\nprocessor.add_control(sigmaz(), cyclic_permutation=True)\nprocessor.add_control(sigmax(), cyclic_permutation=True)\nprocessor.add_control(tensor([sigmax(), sigmax(), identity(2)]))\nprocessor.add_control(tensor([identity(2), sigmax(), sigmax()]))\nprocessor.load_circuit(qc, setting_args=setting_args, merge_gates=False, verbose=True,\n amp_ubound=5, amp_lbound=0);",
"To quickly visualize the pulse, Processor has a method called plot_pulses. In the figure bellow, each colour represents the pulse sequence of one control Hamiltonian in the system as a function of time. In each time interval, the pulse remains constant.",
"processor.plot_pulses(title=\"Control pulse of OptPulseProcessor\", figsize=(8, 4), dpi=100);",
"To simulate the evolution, we only need to call the method run_state which calls one of the open system solvers in QuTiP and calculate the time evolution.\nWithout decoherence",
"psi0 = tensor([basis(2, 0), basis(2, 0), basis(2, 1)])\nresult = processor.run_state(init_state=psi0)\nprint(\"Probability of measuring state 00:\")\nprint(np.real((basis00.dag() * ptrace(result.states[-1], [0,1]) * basis00)[0,0]))",
"With decoherence",
"processor.t1 = 100\nprocessor.t2 = 30\npsi0 = tensor([basis(2, 0), basis(2, 0), basis(2, 1)])\nresult = processor.run_state(init_state=psi0)\nprint(\"Probability of measuring state 00:\")\nprint(np.real((basis00.dag() * ptrace(result.states[-1], [0,1]) * basis00)[0,0]))",
"We can see that under noisy evolution their is a none zero probability of measuring state 00.\nGenerating pulse based on quantum computing model\nBelow, we simulate the same quantum circuit using one sub-class LinearSpinChain. It will find the pulse based on the Hamiltonian available on a quantum computer of the linear spin chain system.\nPlease refer to the notebook of the spin chain model for more details.",
"processor2 = LinearSpinChain(3)\nprocessor2.load_circuit(qc);\n\nprocessor2.plot_pulses(title=\"Control pulse of Spin chain\");",
"The first three pulse periods (from $t=0$ to $t\\approx5$) are for the three Hadamard gates, they are followed by two long periods for the CNOT gates and then again two Hadamard. Different colours represent different kinds of interaction, as shown in the legend.\nWithout decoherence",
"psi0 = tensor([basis(2, 0), basis(2, 0), basis(2, 1)])\nresult = processor2.run_state(init_state=psi0)\nprint(\"Probability of measuring state 00:\")\nprint(np.real((basis00.dag() * ptrace(result.states[-1], [0,1]) * basis00)[0,0]))",
"With decoherence",
"processor2.t1 = 100\nprocessor2.t2 = 30\npsi0 = tensor([basis(2, 0), basis(2, 0), basis(2, 1)])\nresult = processor2.run_state(init_state=psi0)\nprint(\"Probability of measuring state 00:\")\nprint(np.real((basis00.dag() * ptrace(result.states[-1], [0,1]) * basis00)[0,0]))\n\nfrom qutip.ipynbtools import version_table\nversion_table()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Kuwamai/probrobo_note | monte_calro_localization/notebook_demo.ipynb | mit | [
"Notebook Demo\n文章と式で説明してもわかりにくいので図を添えます。\nまずは色々設定します。",
"%matplotlib inline\nimport numpy as np\nimport math, random # 計算用、乱数の生成用ライブラリ\nimport matplotlib.pyplot as plt # 描画用ライブラリ\n\nclass Landmarks:\n def __init__(self, array):\n self.positions = array # array = [[1個めの星のx座標, 1個めの星のy座標], [2個めの星のx座標, 2個めの星のy座標]...]\n \n def draw(self):\n # ランドマークの位置を取り出して描画\n xs = [e[0] for e in self.positions]\n ys = [e[1] for e in self.positions]\n plt.scatter(xs,ys,s=300,marker=\"*\",label=\"landmarks\",color=\"orange\")\n\ndef Movement(pos, fw, rot):\n # 移動モデル\n # posからfw前進、rot回転した位置をリストで返す\n \n # 雑音の入った前進、回転の動き\n actual_fw = random.gauss(fw, fw * 0.2) # 20%の標準偏差でばらつく\n actual_rot = random.gauss(rot, rot * 0.2) # 20%の標準偏差でばらつく\n dir_error = random.gauss(0.0, math.pi / 180.0 * 3.0) # 3[deg]の標準偏差\n \n # 異動前の位置を保存\n px, py, pt = pos\n\n # 移動後の位置を計算\n x = px + actual_fw * math.cos(pt + dir_error)\n y = py + actual_fw * math.sin(pt + dir_error)\n t = pt + dir_error + actual_rot # dir_errorを足す\n\n # 結果を返す\n return [x,y,t]\n\ndef Observation(pos, landmark):\n # 観測モデル\n # posから見えるランドマークの距離と方向をリストで返す\n \n obss = []\n \n # センサの計測範囲\n # 距離0.1 ~ 1\n # 角度90 ~ -90[deg]\n sensor_max_range = 1.0\n sensor_min_range = 0.1\n sensor_max_angle = math.pi / 2\n sensor_min_angle = -math.pi / 2\n \n # ロボットやパーティクルの位置姿勢を保存\n rx, ry, rt = pos\n \n # ランドマークごとに観測\n for lpos in landmark.positions:\n true_lx, true_ly = lpos\n # 観測が成功したらresultをTrue\n result = True\n\n # ロボットとランドマークの距離を計算\n # センサの範囲外であればresultがFalseに\n distance = math.sqrt((rx - true_lx) ** 2 + (ry - true_ly) ** 2)\n if distance > sensor_max_range or distance < sensor_min_range:\n result = False\n\n # ロボットから見えるランドマークの方向を計算\n # こちらもセンサの範囲外であればresultがFalseに\n direction = math.atan2(true_ly - ry, true_lx - rx) - rt\n if direction > math.pi: direction -= 2 * math.pi\n if direction < - math.pi: direction += 2 * math.pi\n if direction > sensor_max_angle or direction < sensor_min_angle:\n result = False\n\n # 雑音の大きさを設定\n # これは尤度計算に使う正規分布関数の分散になる\n sigma_d = distance * 0.2 # 20%の標準偏差\n sigma_f = math.pi * 3 / 180 # 3degの標準偏差\n\n # 雑音を混ぜる\n d = random.gauss(distance, sigma_d)\n f = random.gauss(direction, sigma_f)\n \n # 観測データを保存\n z = []\n z.append([d, f, sigma_d, sigma_f, result])\n \n return z\n\nclass Robot:\n def __init__(self, x, y, rad):\n random.seed()\n \n # ステップごとにロボットの姿勢の真値が入った配列\n self.actual_poses = [[x,y,rad]]\n\n def move(self,fw,rot):\n # ロボットの位置を記録する(軌跡を残すために配列に入れてる)\n self.actual_poses.append(Movement(self.actual_poses[-1], fw, rot))\n \n def observation(self, landmarks):\n # 現在地から見た観測データの保存\n self.z = Observation(self.actual_poses[-1], landmarks)\n \n\n # 矢印の描画に必要な位置と方向を計算して描画\n def draw(self, sp):\n xs = [e[0] for e in self.actual_poses]\n ys = [e[1] for e in self.actual_poses]\n vxs = [math.cos(e[2]) for e in self.actual_poses]\n vys = [math.sin(e[2]) for e in self.actual_poses]\n plt.quiver(xs,ys,vxs,vys,color=\"red\",scale=15,angles='xy',scale_units='xy',alpha = 0.3)\n\ndef draw(i):\n # グラフの設定\n fig = plt.figure(i, figsize=(8,8))\n sp = fig.add_subplot(111,aspect='equal')\n sp.set_xlim(-0.5,2.0)\n sp.set_ylim(-0.5,0.5)\n \n # ロボット、ランドマークの描画\n for robot in robots:\n robot.draw(sp)\n \n if i:\n for robot in robots:\n for obs in robot.z:\n d = obs[0]\n f = obs[1]\n x = d * math.cos(f)\n y = d * math.sin(f)\n plt.plot(x, y, \"o\")\n \n actual_landmarks.draw()\n \n plt.legend()",
"移動モデル\nロボットを100台出してみます。\nどのロボットも1前進という命令を出しましたが、バラバラしていることがわかります。",
"actual_landmarks = Landmarks([[1.0,0.0]])\nrobots = []\n\nfor i in range(100):\n robots.append(Robot(0,0,0))\n\nfor robot in robots:\n robot.move(1.0, 0)\n \ndraw(0)",
"観測モデル\n同様にロボットを100台出して、1先に見える星を観測させます。\nカラフルなドットがロボットから見える星の位置です。",
"actual_landmarks = Landmarks([[1.0,0.0]])\nrobots = []\n\nfor i in range(100):\n robots.append(Robot(0,0,0))\n \nfor robot in robots:\n robot.observation(actual_landmarks)\n \ndraw(1)",
"尤度計算\n今回は尤度計算に正規分布を使っています。\nロボットの観測データ$d, \\varphi$と、パーティクルの観測データ$d', \\varphi'$の差が大きいほど小さな値になります。 \n$$\np(z|x_t) \\propto \\frac {\\exp (- \\frac {1}{2\\sigma_d}(d-d')^2)}{\\sigma_d \\sqrt{2 \\pi}} \\frac {\\exp (- \\frac {1}{2\\sigma_\\varphi}(\\varphi-\\varphi')^2)}{\\sigma_\\varphi \\sqrt{2 \\pi}}\n$$\n下のグラフは$d$と$\\varphi$についてそれぞれ別々に描画した正規分布のグラフです。\n$d$がオレンジ、$\\varphi$が青のグラフです。\n上の観測モデルと同じように、ランドマークはロボットから見て、0°の向きに1離れた位置にあります。\nなので$d$、distanceのグラフは、1に近いほど値が大きく、$\\varphi$、directionのグラフは0に近いほど値が大きくなっています。",
"rd = 1\nrf = 0\nsigma_rd = 1.0 * 0.2\nsigma_rf = math.pi * 3 / 180\n\npd = np.arange(-3, 3, 0.01)\npf = np.arange(-3, 3, 0.01)\n\nd = np.exp(-(rd - pd) ** 2 / (2 * (sigma_rd ** 2))) / (sigma_rd * np.sqrt(2 * np.pi))\nf = np.exp(-(rf - pf) ** 2 / (2 * (sigma_rf ** 2))) / (sigma_rf * np.sqrt(2 * np.pi))\n\nfig = plt.figure(figsize=(10,4))\nsp = fig.add_subplot(111)\nsp.set_xlim(-0.5,2.0)\nsp.set_ylim(-0.5,8)\n\nplt.plot(pd, d, color = \"orange\",label=\"distance\")\nplt.plot(pf, f, color = \"blue\",label=\"direction\")\n\nplt.legend()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
iRipVanWinkle/ml | Data Science UA - September 2017/Lecture 05 - Modeling Techniques and Regression/Linear_Regression.ipynb | mit | [
"Introduction to Linear Regression\nAdapted from Chapter 3 of An Introduction to Statistical Learning\nPredictive modeling, using a data samples to make predictions about unobserved or future events, is a common data analytics task. Predictive modeling is considered to be a form of machine learning.\nLinear regression is a technique for predicting a response/dependent variable based on one or more explanatory/independent variables, or features. The term \"linear\" refers to the fact that the method models data as a linear combination of explanatory variables. \nLinear regression, in its simplest form, fits a straight line to the response variable data so that the line minimizes the squared differences (also called errors or residuals) between the actual obbserved response and the predicted point on the line. Since linear regression fits the observed data with a line, it is most effective when the response and the explanatory variable do have a linear relationship.\nMotivating Example: Advertising Data\nLet us look at data depicting the money(in thousands of dollars) spent on TV, Radio and newspaper ads for a product in a given market, as well as the corresponding sales figures.",
"# imports\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\n# read data into a DataFrame\ndata = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)\ndata.head()",
"The features?\n- TV: advertising dollars spent on TV (for a single product, in a given market) \n- Radio: advertising dollars spent on Radio (for a single product, in a given market) \n- Newspaper: advertising dollars spent on Newspaper (for a single product, in a given market) \nWhat is the response?\n- Sales: sales of a single product in a given market (in thousands of widgets)",
"# print the size of the DataFrame object, i.e., the size of the dataset\ndata.shape",
"There are 200 observations, corresponding to 200 markets.\nWe can try to discover if there is any relationship between the money spend on a specific type of ad, in a given market, and the sales in that market by plotting the sales figures against each category of advertising expenditure.",
"fig, axs = plt.subplots(1, 3, sharey=True)\ndata.plot(kind='scatter', x='TV', y='Sales', ax=axs[0], figsize=(16, 8))\ndata.plot(kind='scatter', x='Radio', y='Sales', ax=axs[1])\ndata.plot(kind='scatter', x='Newspaper', y='Sales', ax=axs[2])",
"Questions\nHow can the company selling the product decide on how to spend its advertising money in the future? We first need to answer the following question: \"Based on this data, does there apear to be a relationship between ads and sales?\"\nIf yes, \n1. Which ad types contribute to sales?\n2. How strong is the relationship between each ad type and sales?\n4. What is the effect of each ad type of sales?\n5. Given ad spending in a particular market, can sales be predicted?\nWe will use Linear Regression to try and asnwer these questions.\nSimple Linear Regression\nSimple linear regression is an approach for modeling the relatrionship between a dependent variable (a \"response\") and an explanatory variable, also known as a \"predictor\" or \"feature\". The relationship is modeled as a linear function $y = \\beta_0 + \\beta_1x$ whose parameters are estimated from the available data.\nIn the equation above:\n- $y$ is called the response, regressand, endogenous variable, dependent variable, etc.\n- $x$ is the feature, regressor, exogenous variable, explanatory variables, predictor, etc.\n- $\\beta_0$ is known as the intercept\n- $\\beta_1$ is the regression coefficient, effect, etc. \nTogether, $\\beta_0$ and $\\beta_1$ are called paramaters, model/regression coefficients, or effects. To create a model, we must discover/learn/estimate the values of these coefficients. \nEstimating/Learning Model/Regression Coefficients\nRegression coefficients are estimated using a variety of methods. The least squares method, which finds the line which minimizes the sum of squared residuals (or \"sum of squared errors\") is among the most oftenly used.\nIn the pictures below:\n- The blue dots are the observed values of x and y.\n- The red line is the least squares line.\n- The residuals are the distances between the observed values and the least squares line.\n\n$\\beta_0$ is the intercept of the least squares line (the value of $y$ when $x$=0)\n$\\beta_1$ is the slope of the least squares line, i.e. the ratio of the vertical change (in $y$) and the horizontal change (in $x$).\n\nWe can use the statsmodels package to estimate the model coefficients for the advertising data:",
"import statsmodels.formula.api as sf\n\n#create a model with Sales as dependent variable and TV as explanatory variable\nmodel = sf.ols('Sales ~ TV', data)\n\n#fit the model to the data \nfitted_model = model.fit()\n\n# print the coefficients\nprint(fitted_model.params)",
"Interpreting Model Coefficients\nQ: How do we interpret the coefficient ($\\beta_1$) of the explanatory variable \"TV\"?\nA: A unit (a thousand dollars) increase in TV ad spending is associated with a 0.047537 unit (a thousand widgets) increase in Sales, i.e., an additional $1000 spent on TV ads is associated with an increase in sales of ~47.5 widgets.\nNote that it is, in general, possible to have a negative effect, e.g., an increase in TV ad spending to be associated with a decrease in sales. $\\beta_1$ would be negative in this case.\nUsing the Model for Prediction\nCan we use the model we develop to guide advertising spending decisions? For example, if the company spends $50,000 on TV advertising in a new market, what would the model predict for the sales in that market?\n$$y = \\beta_0 + \\beta_1x$$\n$$y = 7.032594 + 0.047537 \\times 50$$",
"7.032594 + 0.047537*50",
"The predicted Sales in that market are of 9.409444 * 1000 =~ 9409 widgets\nUsing Statsmodels:",
"# create a DataFrame to use with the Statsmodels formula interface\nNew_TV_spending = pd.DataFrame({'TV': [50]})\n\n#check the newly created DataFrame\nNew_TV_spending.head()\n\n# use the model created above to predict the sales to be generated by the new TV ad money\nsales = fitted_model.predict(New_TV_spending)\nprint(sales)",
"Plotting the Least Squares Line\nLet's make predictions for the smallest and largest observed values of money spent on TV ads, and then use the predicted values to plot the least squares line:",
"# create a DataFrame with the minimum and maximum values of TV ad money\nNew_TV_money = pd.DataFrame({'TV': [data.TV.min(), data.TV.max()]})\nprint(New_TV_money.head())\n\n# make predictions for those x values and store them\nsales_predictions = fitted_model.predict(New_TV_money)\nprint(sales_predictions)\n\n# plot the observed data\ndata.plot(kind='scatter', x='TV', y='Sales')\n\n# plot the least squares line\nplt.plot(New_TV_money, sales_predictions, c='red', linewidth=2)",
"Confidence in Linear Regression Models\nQ: Is linear regression a high bias/low variance model, or a low variance/high bias model?\nA: High bias/low variance. Under repeated sampling, the line will stay roughly in the same place (low variance), but the average of those models won't do a great job capturing the true relationship (high bias). (A low variance is a useful characteristic when limited training data is available.)\nWe can use Statsmodels to calculate 95% confidence intervals for the model coefficients, which are interpreted as follows: If the population from which this sample was drawn was sampled 100 times, approximately 95 of those confidence intervals would contain the \"true\" coefficient.",
"# print the confidence intervals for the model coefficients\nprint(fitted_model.conf_int())",
"Since we only have a single sample of data, and not the entire population the \"true\" value of the regression coefficient is either within this interval or it isn't, but there is no way to actually know. \nWe estimate the regression coefficient using the data we have, and then we characterize the uncertainty about that estimate by giving a confidence interval, an interval that will \"probably\" contain the value coefficient. Note that there is no probability associated with the true value of the regression coefficient being in the given confidence interval!\nAlso note that using 95% confidence intervals is simply a convention. One can create 90% confidence intervals (narrower intervals), 99% confidence intervals (wider intervals), etc.\nHypothesis Testing and p-values\nClosely related to confidence intervals is hypothesis testing. Generally speaking, you start with a null hypothesis and an alternative hypothesis (that is opposite the null). Then, you check whether the data supports rejecting the null hypothesis or failing to reject the null hypothesis.\n(Note that \"failing to reject\" the null is not the same as \"accepting\" the null hypothesis. The alternative hypothesis may indeed be true, except that you just don't have enough data to show that.)\nAs it relates to model coefficients, here is the conventional hypothesis test:\n- null hypothesis: There is no relationship between TV ads and Sales (and thus $\\beta_1$ equals zero)\n- alternative hypothesis: There is a relationship between TV ads and Sales (and thus $\\beta_1$ is not equal to zero)\nHow do we test this hypothesis? Intuitively, we reject the null (and thus believe the alternative) if the 95% confidence interval does not include zero. Conversely, the p-value represents the probability that the coefficient is actually zero:",
"# print the p-values for the model coefficients\nfitted_model.pvalues",
"If the 95% confidence interval includes zero, the p-value for that coefficient will be greater than 0.05. If the 95% confidence interval does not include zero, the p-value will be less than 0.05. Thus, a p-value less than 0.05 is one way to decide whether there is likely a relationship between the feature and the response. (Again, using 0.05 as the cutoff is just a convention.)\nIn this case, the p-value for TV is far less than 0.05, and so we believe that there is a relationship between TV ads and Sales.\nNote that we generally ignore the p-value for the intercept.\nHow Well Does the Model Fit the data?\nThe most common way to evaluate the overall fit of a linear model to the available data is by calculating the R-squared (a.k.a, \"coefficient of determination\") value. \nR-squared has several interpretations:\n(1) R-squared ×100 percent of the variation in the dependent variable ($y$) is reduced by taking into account predictor $x$\n(2) R-squared is the proportion of variance in the observed data that is \"explained\" by the model.\nR-squared is between 0 and 1, and, generally speaking, higher is considered to be better because more variance is accounted for (\"explained\") by the model.\nNote, however, that R-squared does not indicate whether a regression model is actually good. You can have a low R-squared value for a good model, or a high R-squared value for a model that does not fit the data!\nOne should evaluate the adequacy of a model by looking at R-squared values as well as residual (i.e., observed value - fitted value) plots, other model statistics, and subject area knowledge.\nThe R-squared value for our simple linear regression model is:",
"# print the R-squared value for the model\nfitted_model.rsquared",
"Is that a \"good\" R-squared value? One cannot generally assess that. What a \"good\" R-squared value is depends on the domain and therefore R-squared is most useful as a tool for comparing different models.\nMultiple Linear Regression\nSimple linear regression can be extended to include multiple explanatory variables:\n$y = \\beta_0 + \\beta_1x_1 + ... + \\beta_nx_n$\nEach $x$ represents a different predictor/feature, and each predictor has its own coefficient. In our case:\n$y = \\beta_0 + \\beta_1 \\times TV + \\beta_2 \\times Radio + \\beta_3 \\times Newspaper$\nLet's use Statsmodels to estimate these coefficients:",
"# create a model with all three features\nmulti_model = sf.ols(formula='Sales ~ TV + Radio + Newspaper', data=data)\nfitted_multi_model = multi_model.fit()\n\n# print the coefficients\nprint(fitted_multi_model.params)",
"How do we interpret the coefficients? For a given amount of Radio and Newspaper ad spending, an increase of a unit ($1000 dollars) in TV ad spending is associated with an increase in Sales of 45.765 widgets.\nOther information is available in the model summary output:",
"# print a summary of the fitted model\nfitted_multi_model.summary()",
"TV and Radio have significant p-values, whereas Newspaper does not. Thus we reject the null hypothesis for TV and Radio (that there is no association between those features and Sales), and fail to reject the null hypothesis for Newspaper.\nTV and Radio ad spending are both positively associated with Sales, whereas Newspaper ad spending is slightly negatively associated with Sales. \nThis model has a higher R-squared (0.897) than the previous model, which means that this model provides a better fit to the data than a model that only includes TV.\n\nFeature Selection\nHow do I decide which features to include in a linear model? \n- Try different models and check whether the R-squared value goes up when you add new predictors.\nWhat are the drawbacks to this approach?\n- Linear models rely upon a lot of assumptions (such as the predictors/features being independent), and if those assumptions are violated (which they usually are), R-squared are less reliable.\n- R-squared is susceptible to overfitting, and thus there is no guarantee that a model with a high R-squared value will generalize well to new data. For example:",
"# only include TV and Radio in the model\nmodel1 = sf.ols(formula='Sales ~ TV + Radio', data=data).fit()\nprint(model1.rsquared)\n\n# add Newspaper to the model (which we believe has no association with Sales)\nmodel2 = sf.ols(formula='Sales ~ TV + Radio + Newspaper', data=data).fit()\nprint(model2.rsquared)",
"R-squared will always increase as you add more features to the model, even if they are unrelated to the response. Thus, selecting the model with the highest R-squared is not a reliable approach for choosing the best linear model.\nThere is alternative to R-squared called adjusted R-squared that penalizes model complexity (to control for overfitting), but this approach has its own set of issues.\nIs there a better approach to feature selection? Cross-validation, which provides a more reliable estimate of out-of-sample error, and thus is better at choosing which model will better generalize to out-of-sample data. Cross-validation can be applied to any type of model, not just linear models.\nLinear Regression in scikit-learn\nThe work done using Statsmodels can also be using scikit-learn:",
"# create a DataFrame\nfeature_cols = ['TV', 'Radio', 'Newspaper']\nX = data[feature_cols]\ny = data.Sales\n\n\nfrom sklearn.linear_model import LinearRegression\nlm = LinearRegression()\nlm.fit(X, y)\n\n# print intercept and coefficients\nprint(lm.intercept_)\nprint(lm.coef_)\n\n# pair the feature names with the coefficients\nprint(zip(feature_cols, lm.coef_))\n\n# predict for a new observation\nlm.predict([[100, 25, 25]])\n\n# calculate the R-squared\nlm.score(X, y)",
"Handling Categorical Predictors with Two Categories\nWhat if one of the predictors was categorical?\nLet's create a new feature called Size, and randomly assign observations to be small or large:",
"import numpy as np\n\n# create a Series of booleans in which roughly half are True\n\n#generate len(data) numbers between 0 and 1\nnumbers = np.random.rand(len(data))\n\n#create and index of 0s and 1s by based on whether the corresponding random number\n#is greater than 0.5. \nindex_for_large = (numbers > 0.5)\n\n#create a new data column called Size and set its values to 'small'\ndata['Size'] = 'small'\n\n# change the values of Size to 'large' whenever the corresponding value of the index is 1 \ndata.loc[index_for_large, 'Size'] = 'large'\ndata.head()",
"When using scikit-learn, we need to represent all data numerically. For example, if the feature we want to represent has only two categories, we create a dummy variable that represents the categories as a binary value:",
"# create a new Series called IsLarge\ndata['IsLarge'] = data.Size.map({'small':0, 'large':1})\ndata.head()",
"The multiple linear regression including the IsLarge predictor:",
"# create X and y\nfeature_cols = ['TV', 'Radio', 'Newspaper', 'IsLarge']\nX = data[feature_cols]\ny = data.Sales\n\n# instantiate, fit\nlm = LinearRegression()\nlm.fit(X, y)\n\n# print coefficients\nlist(zip(feature_cols, lm.coef_))",
"How do we interpret the coefficient of IsLarge? For a given amount of TV/Radio/Newspaper ad spending, a large market is associated with an average increase in Sales of 51.55 widgets (as compared to sales in a Small market).\nIf we reverse the 0/1 encoding and created the feature 'IsSmall', the coefficient would be the same in absolute value, but negative instead of positive. All that changes is the interpretation of the coefficient.\nHandling Categorical Predictors with More than Two Categories\nLet's create a new feature called Area, and randomly assign observations to be rural, suburban, or urban:",
"# set a seed for reproducibility\nnp.random.seed(123456)\n\n# assign roughly one third of observations to each group\nnums = np.random.rand(len(data))\nmask_suburban = (nums > 0.33) & (nums < 0.66)\nmask_urban = nums > 0.66\ndata['Area'] = 'rural'\ndata.loc[mask_suburban, 'Area'] = 'suburban'\ndata.loc[mask_urban, 'Area'] = 'urban'\ndata.head()",
"We have to represent Area numerically, but an encoding such as 0=rural, 1=suburban, 2=urban would not work because that would imply that there is an ordered relationship between suburban and urban. Instead, we can create another dummy variable.",
"# create three dummy variables using get_dummies, then exclude the first dummy column\narea_dummies = pd.get_dummies(data.Area, prefix='Area').iloc[:, 1:]\n\n# concatenate the dummy variable columns onto the original DataFrame (axis=0 means rows, axis=1 means columns)\ndata = pd.concat([data, area_dummies], axis=1)\ndata.head()",
"rural is coded as Area_suburban=0 and Area_urban=0\nsuburban is coded as Area_suburban=1 and Area_urban=0\nurban is coded as Area_suburban=0 and Area_urban=1\n\nOnly two dummies are needed to captures all of the information about the Area feature.(In general, for a categorical feature with k levels, we create k-1 dummy variables.)\nLet's include the two new dummy variables in the model:",
"# read data into a DataFrame\n#data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)\n\n# create X and y\nfeature_cols = ['TV', 'Radio', 'Newspaper', 'IsLarge', 'Area_suburban', 'Area_urban']\nX = data[feature_cols]\ny = data.Sales\n\n# instantiate, fit\nlm = LinearRegression()\nlm.fit(X, y)\n\n# print coefficients\nlist(zip(feature_cols, lm.coef_))",
"How do we interpret the coefficients?\n- All other variables being fixed, being a suburban area is associated with an average decrease in Sales of 109.68 widgets (as compared to the baseline level, which is the rural area).\n- Being an urban area is associated with an average increase in Sales of 260.63 widgets (as compared to the rural area).\nNote that Linear Regression can only make good predictions if there is indeed a linear relationship between the features and the response. \nWhat Didn't We Cover?\n\nDetecting collinearity\nDiagnosing model fit\nTransforming predictors to fit non-linear relationships\nInteraction terms\nAssumptions of linear regression\nAnd so much more!\n\nPlease see lecture slides for more details. It's a good way to start your modeling process when working a regression problem. However, it is limited by the fact that it can only make good predictions if there is a linear relationship between the features and the response, which is why more complex methods (with higher variance and lower bias) will often outperform linear regression.\nTherefore, we want you to understand linear regression conceptually, understand its strengths and weaknesses, be familiar with the terminology, and know how to apply it. However, we also want to spend time on many other machine learning models, which is why we aren't going deeper here.\nResources\n\n\nChapter 3 of An Introduction to Statistical Learning\n\n\nrelated videos \n\nquick reference guide \nStatsmodels: simple linear regression and multiple linear regression.\nintroduction to linear regression \nassumptions of linear regression."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
brettavedisian/phys202-2015-work | assignments/assignment11/OptimizationEx01.ipynb | mit | [
"Optimization Exercise 1\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.optimize as opt",
"Hat potential\nThe following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the \"hat potential\":\n$$ V(x) = -a x^2 + b x^4 $$\nWrite a function hat(x,a,b) that returns the value of this function:",
"def hat(x,a,b):\n return -a*x**2+b*x**4\n\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(1.0, 10.0, 1.0)==-9.0",
"Plot this function over the range $x\\in\\left[-3,3\\right]$ with $b=1.0$ and $a=5.0$:",
"a = 5.0\nb = 1.0\n\nx=np.linspace(-3,3,100)\nplt.figure(figsize=(9,6))\nplt.xlabel('Range'), plt.ylabel('V(x)'), plt.title('Hat Potential')\nplt.plot(x, hat(x,a,b))\nplt.box(False)\nplt.grid(True)\nplt.tick_params(axis='x', top='off', direction='out')\nplt.tick_params(axis='y', right='off', direction='out');\n\nassert True # leave this to grade the plot",
"Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.\n\nUse scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.\nPrint the x values of the minima.\nPlot the function as a blue line.\nOn the same axes, show the minima as red circles.\nCustomize your visualization to make it beatiful and effective.",
"res1 = opt.minimize_scalar(hat, bounds=(-3,0), args=(a,b), method='bounded')\nres2 = opt.minimize_scalar(hat, bounds=(0,3), args=(a,b), method='bounded')\nprint('Local minima: %f, %f' % (res1.x, res2.x))\nplt.figure(figsize=(9,6))\nplt.xlabel('Range'), plt.ylabel('V(x)')\nplt.plot(x, hat(x,a,b), label=\"Potential\")\nplt.scatter(res1.x, res1.fun, marker=\"o\", color=\"r\")\nplt.scatter(res2.x, res2.fun, marker=\"o\", color=\"r\")\nplt.title('Finding Local Minima of Hat Potential')\nplt.box(False), plt.grid(True), plt.xlim(-2.5,2.5), plt.ylim(-8,4)\nplt.tick_params(axis='x', top='off', direction='out')\nplt.tick_params(axis='y', right='off', direction='out')\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.);\n\nassert True # leave this for grading the plot",
"To check your numerical results, find the locations of the minima analytically. Show and describe the steps in your derivation using LaTeX equations. Evaluate the location of the minima using the above parameters.\nTo find the local minima of the hat potential analytically, I needed to take the first derivative with respect to $x$ and set that equal to zero.\n$$ V(x) = -ax^2 + bx^4 $$\n$$ \\frac{dV}{dx} = -2ax + 4bx^3 = 0 $$\nA solution we will not use is the $x=0$ because that corresponds to a maximum.\nAdd $-2ax$ to the other side and cancel out an $x$ to get:\n$$ 4bx^2 = 2a $$\nDivide by $4b$ and reduce the fraction:\n$$ x^2 = \\frac{a}{2b} $$\nTake the square root:\n$$ x = \\pm \\sqrt{\\frac{a}{2b}} $$\nPlugging $a=5.0$ and $b=1.0$, we get:\n$$ x = -\\sqrt{\\frac{5}{2}} \\: or \\: \\sqrt{\\frac{5}{2}} $$\nOr\n$$ x = -1.581140 \\: or \\: 1.581140 $$"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Mdround/fastai-deeplearning1 | deeplearning1/nbs/lesson5.ipynb | apache-2.0 | [
"from theano.sandbox import cuda\n\n%matplotlib inline\nimport utils #; reload(utils)\nfrom utils import *\nfrom __future__ import division, print_function\nimport pickle\nimport utils_MDR\n\nmodel_path = 'data/imdb/models/'",
"MDR: and needs by GPU-fan code, too...",
"import utils_MDR\nfrom utils_MDR import *",
"Setup data\nWe're going to look at the IMDB dataset, which contains movie reviews from IMDB, along with their sentiment. Keras comes with some helpers for this dataset.",
"from keras.datasets import imdb\nidx = imdb.get_word_index()",
"This is the word list:",
"idx_arr = sorted(idx, key=idx.get)\nidx_arr[:10]",
"...and this is the mapping from id to word",
"## idx2word = {v: k for k, v in idx.iteritems()} ## Py 2.7\nidx2word = {v: k for k, v in idx.items()} ## Py 3.x",
"We download the reviews using code copied from keras.datasets:",
"path = get_file('imdb_full.pkl',\n origin='https://s3.amazonaws.com/text-datasets/imdb_full.pkl',\n md5_hash='d091312047c43cf9e4e38fef92437263')\nf = open(path, 'rb')\n(x_train, labels_train), (x_test, labels_test) = pickle.load(f)\n\nlen(x_train)",
"Here's the 1st review. As you see, the words have been replaced by ids. The ids can be looked up in idx2word.",
"', '.join(map(str, x_train[0]))",
"The first word of the first review is 23022. Let's see what that is.",
"idx2word[23022]",
"Here's the whole review, mapped from ids to words.",
"' '.join([idx2word[o] for o in x_train[0]])",
"The labels are 1 for positive, 0 for negative.",
"labels_train[:10]",
"Reduce vocab size by setting rare words to max index.",
"vocab_size = 5000\n\ntrn = [np.array([i if i<vocab_size-1 else vocab_size-1 for i in s]) for s in x_train]\ntest = [np.array([i if i<vocab_size-1 else vocab_size-1 for i in s]) for s in x_test]",
"Look at distribution of lengths of sentences.",
"## create an array of 'len'...gths\n# lens = np.array(map(len, trn)) ## only works in Py2.x, not 3.x ... \n## 'map in Python 3 return an iterator, while map in Python 2 returns a list'\n## (https://stackoverflow.com/questions/35691489/error-in-python-3-5-cant-add-map-results-together)\n\n# This is a quick fix - not really a proper P3x approach.\n\nlens = np.array(list(map(len, trn))) ## wrapped a list around it\n\n(lens.max(), lens.min(), lens.mean())",
"Pad (with zero) or truncate each sentence to make consistent length.",
"seq_len = 500\n\ntrn = sequence.pad_sequences(trn, maxlen=seq_len, value=0)\ntest = sequence.pad_sequences(test, maxlen=seq_len, value=0)",
"This results in nice rectangular matrices that can be passed to ML algorithms. Reviews shorter than 500 words are pre-padded with zeros, those greater are truncated.",
"trn.shape",
"Create simple models\nSingle hidden layer NN\nThe simplest model that tends to give reasonable results is a single hidden layer net. So let's try that. Note that we can't expect to get any useful results by feeding word ids directly into a neural net - so instead we use an embedding to replace them with a vector of 32 (initially random) floats for each word in the vocab.",
"model = Sequential([\n Embedding(vocab_size, 32, input_length=seq_len),\n Flatten(),\n Dense(100, activation='relu'),\n Dropout(0.7),\n Dense(1, activation='sigmoid')])\n\nmodel.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])\nmodel.summary()\n\nset_gpu_fan_speed(90)\nmodel.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)\nset_gpu_fan_speed(0)",
"The stanford paper that this dataset is from cites a state of the art accuracy (without unlabelled data) of 0.883. So we're short of that, but on the right track.\nSingle conv layer with max pooling\nA CNN is likely to work better, since it's designed to take advantage of ordered data. We'll need to use a 1D CNN, since a sequence of words is 1D.",
"conv1 = Sequential([\n Embedding(vocab_size, 32, input_length=seq_len, dropout=0.2),\n Dropout(0.2),\n Convolution1D(64, 5, border_mode='same', activation='relu'),\n Dropout(0.2),\n MaxPooling1D(),\n Flatten(),\n Dense(100, activation='relu'),\n Dropout(0.7),\n Dense(1, activation='sigmoid')])\n\nconv1.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])\n\nset_gpu_fan_speed(90)\nconv1.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)\nset_gpu_fan_speed(0)",
"That's well past the Stanford paper's accuracy - another win for CNNs!",
"conv1.save_weights(model_path + 'conv1.h5')\n\nconv1.load_weights(model_path + 'conv1.h5')",
"Pre-trained vectors\nYou may want to look at wordvectors.ipynb before moving on.\nIn this section, we replicate the previous CNN, but using <strong>pre-trained</strong> embeddings.",
"def load_vectors(loc):\n return (load_array(loc+'.dat'),\n pickle.load(open(loc+'_words.pkl','rb')),\n pickle.load(open(loc+'_idx.pkl','rb')))\n\n#vecs, words, wordidx = load_vectors('data/glove/results/6B.50d') ## JH's original\nvecs, words, wordidx = load_vectors('data/glove/results/6B.100d') ## MDR's experiment",
"The glove word ids and imdb word ids use different indexes. So we create a simple function that creates an embedding matrix using the indexes from imdb, and the embeddings from glove (where they exist).",
"def create_emb():\n n_fact = vecs.shape[1]\n emb = np.zeros((vocab_size, n_fact))\n\n for i in range(1,len(emb)):\n word = idx2word[i]\n if word and re.match(r\"^[a-zA-Z0-9\\-]*$\", word):\n src_idx = wordidx[word]\n emb[i] = vecs[src_idx]\n else:\n # If we can't find the word in glove, randomly initialize\n emb[i] = normal(scale=0.6, size=(n_fact,))\n\n # This is our \"rare word\" id - we want to randomly initialize\n emb[-1] = normal(scale=0.6, size=(n_fact,))\n emb/=3\n return emb\n\nemb = create_emb()",
"We pass our embedding matrix to the Embedding constructor, and set it to non-trainable.",
"model = Sequential([\n #Embedding(vocab_size, 50, \n Embedding(vocab_size, 100, \n input_length=seq_len, dropout=0.2, weights=[emb], trainable=False),\n Dropout(0.25), ## JH (0.25)\n Convolution1D(64, 5, border_mode='same', activation='relu'),\n Dropout(0.25), ## JH (0.25)\n MaxPooling1D(),\n Flatten(),\n Dense(100, activation='relu'),\n Dropout(0.3), ## JH (0.7)\n Dense(1, activation='sigmoid')])\n\nmodel.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])",
"I get better results with the 100d embedding than I do with the 50d embedding, after 4 epochs. - MDR",
"# model.optimizer.lr = 1e-3 ## MDR: added to the 50d for marginally faster training than I was getting\nset_gpu_fan_speed(90)\nmodel.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)\nset_gpu_fan_speed(0)\nmodel.save_weights(model_path+'glove100_wt1.h5') ## care, with the weight count!\n\nmodel.load_weights(model_path+'glove50_wt1.h5')\n\nmodel.load_weights(model_path+'glove100_wt1.h5')",
"MDR: so my initial results were nowhere near as good, but we're not overfitting yet.\nMDR: my results are nowhere near JH's! [] Investigate this!\nWe already have beaten our previous model! But let's fine-tune the embedding weights - especially since the words we couldn't find in glove just have random embeddings.",
"model.layers[0].trainable=True\n\nmodel.optimizer.lr=1e-4\n\nmodel.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)",
"\"As expected, that's given us a nice little boost. :)\" - \nMDR: actually made it worse! For both 50d and 100d cases!",
"model.save_weights(model_path+'glove50.h5')",
"Multi-size CNN\nThis is an implementation of a multi-size CNN as shown in Ben Bowles' excellent blog post.",
"from keras.layers import Merge",
"We use the functional API to create multiple conv layers of different sizes, and then concatenate them.",
"#graph_in = Input ((vocab_size, 50))\ngraph_in = Input ((vocab_size, 100)) ## MDR - for 100d embedding\nconvs = [ ] \nfor fsz in range (3, 6): \n x = Convolution1D(64, fsz, border_mode='same', activation=\"relu\")(graph_in)\n x = MaxPooling1D()(x) \n x = Flatten()(x) \n convs.append(x)\nout = Merge(mode=\"concat\")(convs) \ngraph = Model(graph_in, out) \n\nemb = create_emb()",
"We then replace the conv/max-pool layer in our original CNN with the concatenated conv layers.",
"model = Sequential ([\n #Embedding(vocab_size, 50, \n Embedding(vocab_size, 100, \n input_length=seq_len, dropout=0.2, weights=[emb]),\n Dropout (0.2),\n graph,\n Dropout (0.5),\n Dense (100, activation=\"relu\"),\n Dropout (0.7),\n Dense (1, activation='sigmoid')\n ])\n\nmodel.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])",
"MDR: it turns out that there's no improvement, in this expt, for using the 100d embedding over the 50d.",
"set_gpu_fan_speed(90)\nmodel.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)\nset_gpu_fan_speed(0)",
"Interestingly, I found that in this case I got best results when I started the embedding layer as being trainable, and then set it to non-trainable after a couple of epochs. I have no idea why!\nMDR: (does it limit overfitting, maybe?) ... anyway, my running of the same code achieved nearly the same results, so much happier.",
"model.save_weights(model_path+'glove50_conv2_wt1.h5')\n\nmodel.load_weights(model_path+'glove50_conv2_wt1.h5')",
"MDR: I want to test this statement from JH, above, by running another couple of epochs. First let's reduce the LR.",
"model.optimizer.lr = 1e-5\n\nset_gpu_fan_speed(90)\nmodel.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)\nset_gpu_fan_speed(0)",
"Okay, so that didn't help. Reload the weights from before.",
"model.load_weights(model_path+'glove50_conv2_wt1.h5')",
"MDR: following JH's plan, from this point.",
"model.layers[0].trainable=False\n\nmodel.optimizer.lr=1e-5\n\nset_gpu_fan_speed(90)\nmodel.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)\nset_gpu_fan_speed(0)",
"This more complex architecture has given us another boost in accuracy.\nMDR: although I didn't see a huge advantage, personally.\nLSTM\nWe haven't covered this bit yet!\nMDR: so, there's no preloaded embedding, here - it's a fresh, random set?",
"model = Sequential([\n Embedding(vocab_size, 32, input_length=seq_len, mask_zero=True,\n W_regularizer=l2(1e-6), dropout=0.2),\n LSTM(100, consume_less='gpu'),\n Dense(1, activation='sigmoid')])\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\nmodel.summary()",
"MDR: hang on! These summary() outputs look quite different, to me! Not least that this is apparently the 13th lstm he's produced (in this session?) - and yet I've fot a higher numbered dense layer than him. Eh?\nBut then I reach better results in fewer epochs than he does, this time around. Compare the times, and the more stable convergence in my results. Weird. Still, that's my first LSTM!!",
"set_gpu_fan_speed(90)\nmodel.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=5, batch_size=64)\nset_gpu_fan_speed(0)\n\nmodel.save_weights(model_path+'glove50_lstm1_wt1.h5')",
"MDR: let's see if it's possible to improve on that.",
"model.optimizer.lr = 1e-5\nmodel.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=5, batch_size=64)",
"MDR: Conclusion: that may be all that's achievable with this dataset, of course. It's sentiment, after all! \nMDR's lstm + preloaded embeddings\nGod knows whether this will work. Let's see if I can create an LSTM layer on top of pretrained embeddings...",
"model2 = Sequential([\n Embedding(vocab_size, 100, input_length = seq_len,\n #mask_zero=True, W_regularizer=l2(1e-6), ## used in lstm above - not needed?\n dropout=0.2, weights=[emb], trainable = False),\n LSTM(100, consume_less = 'gpu'),\n Dense(100, activation = 'sigmoid')\n])\n\nmodel2.summary()\n\nmodel2.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])\n\nset_gpu_fan_speed(90)\nmodel.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)\nset_gpu_fan_speed(0)",
"MDR: OMFG. It needs one epoch to be 90% accurate."
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dereneaton/ipyrad | testdocs/analysis/cookbook-pca-empirical.ipynb | gpl-3.0 | [
"<h1><span style=\"color:gray\">ipyrad-analysis toolkit:</span> Dimensionality reduction</h1>\n\nThe pca tool can be used to implement a number of dimensionality reduction methods on SNP data (PCA, t-SNE, UMAP) and to filter and/or impute missing data in genotype matrices to reduce the effects of missing data. \nLoad libraries",
"# conda install ipyrad -c conda-forge -c bioconda\n# conda install ipcoal -c conda-forge\n# conda install scikit-learn -c conda-forge\n\nimport ipyrad.analysis as ipa\nimport toyplot\n\nprint(ipa.__version__)\nprint(toyplot.__version__)",
"The input data",
"# the simulated SNP database file\nSNPS = \"/tmp/oaks.snps.hdf5\"\n\n# download example hdf5 dataset (158Mb, takes ~2-3 minutes)\nURL = \"https://www.dropbox.com/s/x6a4i47xqum27fo/virentes_ref.snps.hdf5?raw=1\"\nipa.download(url=URL, path=SNPS);",
"Make an IMAP dictionary (map popnames to list of samplenames)",
"IMAP = {\n \"virg\": [\"LALC2\", \"TXWV2\", \"FLBA140\", \"FLSF33\", \"SCCU3\"],\n \"mini\": [\"FLSF47\", \"FLMO62\", \"FLSA185\", \"FLCK216\"],\n \"gemi\": [\"FLCK18\", \"FLSF54\", \"FLWO6\", \"FLAB109\"],\n \"bran\": [\"BJSL25\", \"BJSB3\", \"BJVL19\"],\n \"fusi\": [\"MXED8\", \"MXGT4\", \"TXMD3\", \"TXGR3\"],\n \"sagr\": [\"CUCA4\", \"CUSV6\", \"CUVN10\"],\n \"oleo\": [\"MXSA3017\", \"BZBB1\", \"HNDA09\", \"CRL0030\", \"CRL0001\"],\n}\nMINMAP = {\n \"virg\": 3,\n \"mini\": 3,\n \"gemi\": 3,\n \"bran\": 2,\n \"fusi\": 2,\n \"sagr\": 2,\n \"oleo\": 3,\n}",
"Initiate tool with filtering options",
"tool = ipa.pca(data=SNPS, minmaf=0.05, imap=IMAP, minmap=MINMAP, impute_method=\"sample\")",
"Run PCA\nUnlinked SNPs are automatically sampled from each locus. By setting nreplicates=N the subsampling procedure is repeated N times to show variation over the subsampled SNPs. The imap dictionary is used in the .draw() function to color points, and can be overriden to color points differently from the IMAP used in the tool above.",
"tool.run(nreplicates=10)\ntool.draw(imap=IMAP);\n\n# a convenience function for plotting across three axes\ntool.draw_panels(0, 1, 2, imap=IMAP);",
"Run TSNE\nt-SNE is a manifold learning algorithm that can sometimes better project data into a 2-dimensional plane. The distances between points in this space are harder to interpret.",
"tool.run_tsne(perplexity=5, seed=333)\ntool.draw(imap=IMAP);",
"Run UMAP\nUMAP is similar to t-SNE but the distances between clusters are more representative of the differences betwen groups. This requires another package that if it is not yet installed it will ask you to install.",
"tool.run_umap(n_neighbors=13, seed=333)\ntool.draw(imap=IMAP);",
"Missing data with imputation\nMissing data has large effects on dimensionality reduction methods, and it is best to (1) minimize the amount of missing data in your input data set by using filtering, and (2) impute missing data values. In the examples above data is imputed using the 'sample' method, which probabilistically samples alleles for based on the allele frequency in the group that a taxon is assigned to in IMAP. It is good to compare this to a case where imputation is performed without IMAP assignments, to assess the impact of the a priori assignments. Although this comparison is useful, assigning taxa to groups with IMAP dictionaries for imputation is expected to yield more accurate imputation.",
"# allow very little missing data\nimport itertools\ntool = ipa.pca(\n data=SNPS, \n imap={'samples': list(itertools.chain(*[i for i in IMAP.values()]))},\n minmaf=0.05, \n mincov=0.9, \n impute_method=\"sample\", \n quiet=True,\n)\ntool.run(nreplicates=10, seed=123)\ntool.draw(imap=IMAP);",
"Statistics",
"# variance explained by each PC axes in the first replicate run\ntool.variances[0].round(2)\n\n# PC loadings in the first replicate\ntool.pcs(0)",
"Styling plots (see toyplot documentation)\nThe .draw() function returns a canvas and axes object from toyplot which can be further modified and styled.",
"# get plot objects, several styling options to draw\ncanvas, axes = tool.draw(imap=IMAP, size=8, width=400);\n\n# various axes styling options shown for x axis\naxes.x.ticks.show = True\naxes.x.spine.style['stroke-width'] = 1.5\naxes.x.ticks.labels.style['font-size'] = '13px'\naxes.x.label.style['font-size'] = \"15px\"\naxes.x.label.offset = \"22px\""
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
patrick-kidger/diffrax | examples/symbolic_regression.ipynb | apache-2.0 | [
"Symbolic Regression\nThis example combines neural differential equations with regularised evolution to discover the equations\n$\\frac{\\mathrm{d} x}{\\mathrm{d} t}(t) = \\frac{y(t)}{1 + y(t)}$\n$\\frac{\\mathrm{d} y}{\\mathrm{d} t}(t) = \\frac{-x(t)}{1 + x(t)}$\ndirectly from data.\nReferences:\nThis example appears as an example in:\nbibtex\n@phdthesis{kidger2021on,\n title={{O}n {N}eural {D}ifferential {E}quations},\n author={Patrick Kidger},\n year={2021},\n school={University of Oxford},\n}\nWhilst drawing heavy inspiration from:\n```bibtex\n@inproceedings{cranmer2020discovering,\n title={{D}iscovering {S}ymbolic {M}odels from {D}eep {L}earning with {I}nductive\n {B}iases},\n author={Cranmer, Miles and Sanchez Gonzalez, Alvaro and Battaglia, Peter and\n Xu, Rui and Cranmer, Kyle and Spergel, David and Ho, Shirley},\n booktitle={Advances in Neural Information Processing Systems},\n publisher={Curran Associates, Inc.},\n year={2020},\n}\n@software{cranmer2020pysr,\n title={PySR: Fast \\& Parallelized Symbolic Regression in Python/Julia},\n author={Miles Cranmer},\n publisher={Zenodo},\n url={http://doi.org/10.5281/zenodo.4041459},\n year={2020},\n}\n```\nThis example is available as a Jupyter notebook here.",
"import tempfile\nfrom typing import List\n\nimport equinox as eqx # https://github.com/patrick-kidger/equinox\nimport jax\nimport jax.numpy as jnp\nimport optax # https://github.com/deepmind/optax\nimport pysr # https://github.com/MilesCranmer/PySR\nimport sympy\n\n\n# Note that PySR, which we use for symbolic regression, uses Julia as a backend.\n# You'll need to install a recent version of Julia if you don't have one.\n# (And can get funny errors if you have a too-old version of Julia already.)\n# You may also need to restart Python after running `pysr.install()` the first time.\npysr.silence_julia_warning()\npysr.install(quiet=True)",
"Now for a bunch of helpers. We'll use these in a moment; skip over them for now.",
"def quantise(expr, quantise_to):\n if isinstance(expr, sympy.Float):\n return expr.func(round(float(expr) / quantise_to) * quantise_to)\n elif isinstance(expr, sympy.Symbol):\n return expr\n else:\n return expr.func(*[quantise(arg, quantise_to) for arg in expr.args])\n\n\nclass SymbolicFn(eqx.Module):\n fn: callable\n parameters: jnp.ndarray\n\n def __call__(self, x):\n # Dummy batch/unbatching. PySR assumes its JAX'd symbolic functions act on\n # tensors with a single batch dimension.\n return jnp.squeeze(self.fn(x[None], self.parameters))\n\n\nclass Stack(eqx.Module):\n modules: List[eqx.Module]\n\n def __call__(self, x):\n return jnp.stack([module(x) for module in self.modules], axis=-1)\n\n\ndef expr_size(expr):\n return sum(expr_size(v) for v in expr.args) + 1\n\n\ndef _replace_parameters(expr, parameters, i_ref):\n if isinstance(expr, sympy.Float):\n i_ref[0] += 1\n return expr.func(parameters[i_ref[0]])\n elif isinstance(expr, sympy.Symbol):\n return expr\n else:\n return expr.func(\n *[_replace_parameters(arg, parameters, i_ref) for arg in expr.args]\n )\n\n\ndef replace_parameters(expr, parameters):\n i_ref = [-1] # Distinctly sketchy approach to making this conversion.\n return _replace_parameters(expr, parameters, i_ref)",
"Okay, let's get started.\nWe start by running the Neural ODE example.\nThen we extract the learnt neural vector field, and symbolically regress across this.\nFinally we fine-tune the resulting symbolic expression.",
"def main(\n symbolic_dataset_size=2000,\n symbolic_num_populations=100,\n symbolic_population_size=20,\n symbolic_migration_steps=4,\n symbolic_mutation_steps=30,\n symbolic_descent_steps=50,\n pareto_coefficient=2,\n fine_tuning_steps=500,\n fine_tuning_lr=3e-3,\n quantise_to=0.01,\n):\n #\n # First obtain a neural approximation to the dynamics.\n # We begin by running the previous example.\n #\n\n # Runs the Neural ODE example.\n # This defines the variables `ts`, `ys`, `model`.\n print(\"Training neural differential equation.\")\n %run neural_ode.ipynb\n\n #\n # Now symbolically regress across the learnt vector field, to obtain a Pareto\n # frontier of symbolic equations, that trades loss against complexity of the\n # equation. Select the \"best\" from this frontier.\n #\n\n print(\"Symbolically regressing across the vector field.\")\n vector_field = model.func.mlp # noqa: F821\n dataset_size, length_size, data_size = ys.shape # noqa: F821\n in_ = ys.reshape(dataset_size * length_size, data_size) # noqa: F821\n in_ = in_[:symbolic_dataset_size]\n out = jax.vmap(vector_field)(in_)\n with tempfile.TemporaryDirectory() as tempdir:\n symbolic_regressor = pysr.PySRRegressor(\n niterations=symbolic_migration_steps,\n ncyclesperiteration=symbolic_mutation_steps,\n populations=symbolic_num_populations,\n npop=symbolic_population_size,\n optimizer_iterations=symbolic_descent_steps,\n optimizer_nrestarts=1,\n procs=1,\n verbosity=0,\n tempdir=tempdir,\n temp_equation_file=True,\n output_jax_format=True,\n )\n symbolic_regressor.fit(in_, out)\n best_equations = symbolic_regressor.get_best()\n expressions = [b.sympy_format for b in best_equations]\n symbolic_fns = [\n SymbolicFn(b.jax_format[\"callable\"], b.jax_format[\"parameters\"])\n for b in best_equations\n ]\n\n #\n # Now the constants in this expression have been optimised for regressing across\n # the neural vector field. 
This was good enough to obtain the symbolic expression,\n # but won't quite be perfect -- some of the constants will be slightly off.\n #\n # To fix this we now plug our symbolic function back into the original dataset\n # and apply gradient descent.\n #\n\n print(\"Optimising symbolic expression.\")\n\n symbolic_fn = Stack(symbolic_fns)\n flat, treedef = jax.tree_flatten(\n model, is_leaf=lambda x: x is model.func.mlp # noqa: F821\n )\n flat = [symbolic_fn if f is model.func.mlp else f for f in flat] # noqa: F821\n symbolic_model = jax.tree_unflatten(treedef, flat)\n\n @eqx.filter_grad\n def grad_loss(symbolic_model):\n vmap_model = jax.vmap(symbolic_model, in_axes=(None, 0))\n pred_ys = vmap_model(ts, ys[:, 0]) # noqa: F821\n return jnp.mean((ys - pred_ys) ** 2) # noqa: F821\n\n optim = optax.adam(fine_tuning_lr)\n opt_state = optim.init(eqx.filter(symbolic_model, eqx.is_inexact_array))\n\n @eqx.filter_jit\n def make_step(symbolic_model, opt_state):\n grads = grad_loss(symbolic_model)\n updates, opt_state = optim.update(grads, opt_state)\n symbolic_model = eqx.apply_updates(symbolic_model, updates)\n return symbolic_model, opt_state\n\n for _ in range(fine_tuning_steps):\n symbolic_model, opt_state = make_step(symbolic_model, opt_state)\n\n #\n # Finally we round each constant to the nearest multiple of `quantise_to`.\n #\n\n trained_expressions = []\n for module, expression in zip(symbolic_model.func.mlp.modules, expressions):\n expression = replace_parameters(expression, module.parameters.tolist())\n expression = quantise(expression, quantise_to)\n trained_expressions.append(expression)\n\n print(f\"Expressions found: {trained_expressions}\")\n\nmain()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
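As a quick, self-contained illustration of the `quantise` helper defined in the notebook above, here is a usage sketch on a made-up sympy expression; the fitted expression and the 0.01 grid are assumptions for the example, not output of the actual run.

```python
import sympy


def quantise(expr, quantise_to):
    # Same recursion as the notebook's helper: round every floating-point
    # constant to the nearest multiple of `quantise_to`, leave symbols alone.
    if isinstance(expr, sympy.Float):
        return expr.func(round(float(expr) / quantise_to) * quantise_to)
    elif isinstance(expr, sympy.Symbol):
        return expr
    else:
        return expr.func(*[quantise(arg, quantise_to) for arg in expr.args])


y = sympy.Symbol("y")
fitted = 0.987 * y / (1.013 + y)   # hypothetical fine-tuned expression
print(quantise(fitted, 0.01))      # prints roughly 0.99*y/(y + 1.01)
```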
monicathieu/cu-psych-r-tutorial | public/tutorials/python/1_r2python-translation/3_controlFlow.ipynb | mit | [
"Control Flow\nGoals of this lesson\nStudents will learn:\n\nHow to use if/else and for loops in python\nHow to indent code correctly in python\n\nIndenting in python\n\nPython, unlike R, is strict about indentation! \nIndentations in python often have bearing on the order in which they are executed, and switching indentation can change how code runs (or break it)\nComing from R, indentation might seem annoying at first, but eventually this can help with code readability \nUltimately, python is trying to help us stay organized\n\n\nIf statements\n\nIf statements in python are the equivalent of the following English: \"If condition X is met, then do action Y\"\nIf statements in python consist of the following syntax\n\nif (condition X):\n actions...",
"myVar = 25\nif myVar > 10:\n print('Above 10!')",
"Nesting If Statements\nAny conditional statements within others are called 'nested'",
"if myVar > 5:\n print('Above 10!')\n if myVar > 20:\n print('Above 20!')",
"Else Statements\n\nIt is also very helpful to specify code that we want to run if a condition is NOT met\nElse statements in python always follow if statements, and consist of the following syntax\n\nif (condition X):\n actions...\n else:\n actions...",
"myVar2 = 'dog'\nif myVar2 == 'cat':\n print('meow')\nelse:\n print('woof')",
"Else If & Sequential If Statements\n\nWe may also want to specify a series of conditions\nPython always evaluates conditions on the same nest level in order, from top to bottom \nElif means 'else if' -- only run this statement if the previous if statement condition was not met, and the condition following is met\nSequential if statements on the same level will run if the statement condition is met, regardless of the previous",
"myVar2 = 'dog'\nif len(myVar2) == 3:\n print('3 letters long')\nelif myVar2 == 'dog':\n print('woof')\nelse:\n print('unknown animal')\n\nmyVar2 = 'dog'\nif len(myVar2) == 3:\n print('3 letters long')\nif myVar2 == 'dog':\n print('woof')\nelse:\n print('unknown animal')",
"Loops\n\nLooping is a great way to apply the same operation to many pieces of data\n\nLooping through a list",
"nums = [2,3,4,-1,7]\nfor number in nums:\n print(number)",
"Looping a certain number of times",
"for i in range(10):\n print(i)",
"Fancly looping with enumerate",
"stringList = ['banana', 'mango', 'kiwi', 'blackberry']\n# fancy looping with enumerate()\nfor index, item in enumerate(stringList):\n print(index, item)",
"Nested loops",
"for i in stringList:\n for j in range(4):\n print(i, j)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
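A small extra example tying together the patterns from the tutorial above: a `for` loop over a list with an `if`/`elif`/`else` chain inside it. The fruit list reuses the tutorial's `stringList`; the labels are just an illustration.

```python
stringList = ['banana', 'mango', 'kiwi', 'blackberry']

# Classify each item: an elif chain stops at the first condition that matches.
for index, item in enumerate(stringList):
    if len(item) > 8:
        label = 'long name'
    elif item == 'kiwi':
        label = 'short name'
    else:
        label = 'medium name'
    print(index, item, label)
```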
ckemere/CloudShuffles | score_bayes_parallel-dask.ipynb | gpl-3.0 | [
"Score multi-session events using the replay score from Davidson et al.",
"import numpy as np\nimport os\nimport pandas as pd\nimport warnings\n\nimport nelpy as nel\n\nwarnings.filterwarnings(\"ignore\")",
"Load experimental data",
"datadirs = ['/Users/ckemere/Development/Data/Buzsaki']\n\nfileroot = next( (dir for dir in datadirs if os.path.isdir(dir)), None)\n# conda install pandas=0.19.2\nif fileroot is None:\n raise FileNotFoundError('datadir not found')\n\nload_from_nel = True\n\n# load from nel file:\nif load_from_nel:\n jar = nel.load_pkl(os.path.join(fileroot,'gor01vvp01_processed_speed.nel'))\n exp_data = jar.exp_data\n aux_data = jar.aux_data\n del jar\n \n with pd.HDFStore(os.path.join(fileroot,'DibaMetadata.h5')) as store:\n df = store.get('Session_Metadata')\n df2 = store.get('Subset_Metadata')",
"Define subset of sessions to score",
"# restrict sessions to explore to a smaller subset\nmin_n_placecells = 16\nmin_n_PBEs = 27 # 27 total events ==> minimum 21 events in training set\n\ndf2_subset = df2[(df2.n_PBEs >= min_n_PBEs) & (df2.n_placecells >= min_n_placecells)]\n\nsessions = df2_subset['time'].values.tolist()\nsegments = df2_subset['segment'].values.tolist()\n\nprint('Evaluating subset of {} sessions'.format(len(sessions)))\n\ndf2_subset.sort_values(by=['n_PBEs', 'n_placecells'], ascending=[0,0])",
"Parallel scoring\nNOTE: it is relatively easy (syntax-wise) to score each session as a parallel task, but since the Bayesian scoring takes such a long time to compute, we can be more efficient (higher % utilization) by further parallelizing over events, and not just over sessions. This further level of parallelization makes the bookkeeping a little ugly, so I provide the code for both approaches here.",
"n_jobs = 20 # set this equal to number of cores\nn_shuffles = 100 # 5000\nn_samples = 35000 # 35000\nw=3 # single sided bandwidth (0 means only include bin who's center is under line, 3 means a total of 7 bins)\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Parallelize by EVENT\nimport dask\nimport distributed.joblib\nfrom joblib import Parallel, delayed \nfrom joblib import parallel_backend\n\n# A function that can be called to do work:\ndef work_events(arg): \n\n # Split the list to individual variables:\n session, segment, ii, bst, tc = arg\n scores, shuffled_scores, percentiles = nel.analysis.replay.score_Davidson_final_bst_fast(bst=bst,\n tuningcurve=tc,\n w=w,\n n_shuffles=n_shuffles,\n n_samples=n_samples)\n\n return (session, segment, ii, scores, shuffled_scores, percentiles)\n\n# List of instances to pass to work():\n# unroll all events:\nparallel_events = []\nfor session, segment in zip(sessions, segments):\n for nn in range(aux_data[session][segment]['PBEs'].n_epochs):\n parallel_events.append((session, segment, nn, aux_data[session][segment]['PBEs'][nn], \n aux_data[session][segment]['tc']))\n\n#parallel_results = list(map(work_events, parallel_events))\n\nwith parallel_backend('dask.distributed', scheduler_host='35.184.42.12:8786'):\n # Anything returned by work() can be stored:\n parallel_results = Parallel(n_jobs=n_jobs, verbose=1)(map(delayed(work_events), parallel_events))\n\n\n\n\n# standardize parallel results\nbdries_ = [aux_data[session][segment]['PBEs'].n_epochs for session, segment in zip(sessions, segments) ]\nbdries = np.cumsum(np.insert(bdries_,0,0))\nbdries\n\nsessions_ = np.array([result[0] for result in parallel_results])\nsegments_ = np.array([result[1] for result in parallel_results])\nidx = [result[2] for result in parallel_results]\n\nscores_bayes_evt = np.array([float(result[3]) for result in parallel_results])\nscores_bayes_shuffled_evt = np.array([result[4].squeeze() for result in parallel_results])\nscores_bayes_percentile_evt = np.array([float(result[5]) for result in parallel_results])\n\nresults = {}\nfor nn in range(len(bdries)-1):\n session = np.unique(sessions_[bdries[nn]:bdries[nn+1]])\n if len(session) > 1:\n raise ValueError(\"parallel results in different format / order than expected!\")\n session = session[0]\n segment = np.unique(segments_[bdries[nn]:bdries[nn+1]])\n if len(segment) > 1:\n raise ValueError(\"parallel results in different format / order than expected!\")\n segment = segment[0]\n try:\n results[session][segment]['scores_bayes'] = scores_bayes_evt[bdries[nn]:bdries[nn+1]]\n except KeyError:\n try:\n results[session][segment] = dict()\n results[session][segment]['scores_bayes'] = scores_bayes_evt[bdries[nn]:bdries[nn+1]]\n except KeyError:\n results[session] = dict()\n results[session][segment] = dict()\n results[session][segment]['scores_bayes'] = scores_bayes_evt[bdries[nn]:bdries[nn+1]]\n\n results[session][segment]['scores_bayes_shuffled'] = scores_bayes_shuffled_evt[bdries[nn]:bdries[nn+1]]\n results[session][segment]['scores_bayes_percentile'] = scores_bayes_percentile_evt[bdries[nn]:bdries[nn+1]]\n\nprint('done packing results')",
"Save results to disk",
"jar = nel.ResultsContainer(results=results, description='gor01 and vvp01 speed restricted results for best 20 candidate sessions')\njar.save_pkl('score_bayes_all_sessions.nel')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
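A stripped-down sketch of the bookkeeping used above to fold flat, per-event results back into per-session containers via cumulative boundaries; the event counts, session names, and score values here are invented stand-ins, not the experimental data.

```python
import numpy as np

# Number of PBE events per (session, segment) block, in submission order.
n_events = [3, 5, 2]                                  # hypothetical counts
bdries = np.cumsum(np.insert(n_events, 0, 0))         # -> [0, 3, 8, 10]

flat_scores = np.arange(bdries[-1], dtype=float)      # stand-in for per-event scores

results = {}
for k, name in enumerate(['sess_a', 'sess_b', 'sess_c']):
    # Slice the flat result vector back into contiguous per-session blocks.
    results[name] = flat_scores[bdries[k]:bdries[k + 1]]

print({name: vals.tolist() for name, vals in results.items()})
```

The point of the flat task list is utilization: slow events from one session no longer serialize behind that session, at the cost of this reassembly step.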
pycroscopy/pycroscopy | jupyter_notebooks/AFM_simulations/IntroductionToSimulations/IntroToAFMSimulations.ipynb | mit | [
"Introduction to dynamic AFM simulations\nContent under Creative Commons Attribution license CC-BY 4.0 version,\nEnrique A. López-Guerra.\nPurpose of the notebook: show an application of numerical methods to simulate the dynamics of a probe in atomic force microscopy.\nRequirements to take the best advantage of this notebook: knowing the fundamentals of Harmonic Oscillators in clasical mechanics and Fundamentals of Vibrations. \nIntroduction\nSince the atomic force microscope (AFM) was invented in 1986 it has become one of the main tools to study matter at the micro and nanoscale. This powerful tool is so versatile that it can be used to study a wide variety of materials, ranging from stiff inorganic surfaces to soft biological samples. \nIn its early stages the AFM was used in permanent contact with the sample (the probe is dragged over the sample during the whole operation), which brought about important drawbacks, such as rapid probe wear and often sample damage, but these obstacles have been overcome with the development of dynamic techniques.\nIn this Jupyter notebook, we will focus on the operation of the probe in dynamic mode.",
"from __future__ import division, print_function, absolute_import, unicode_literals\nimport os\nimport numpy\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom IPython.display import Image\n\npath = os.getcwd()\nfig1 = path + '/Fig1.jpg'\nImage(filename=fig1)",
"Figure 1. Schematics of the setup of a atomic force microscope (Adapted from reference 6)\nIn AFM the interacting probe is in general a rectangular cantilever (please check the image above that shows the AFM setup where you will be able to see the probe!). \nProbably the most used dynamic technique in AFM is the Tapping Mode. In this method the probe taps a surface in intermittent contact fashion. The purpose of tapping the probe over the surface instead of dragging it is to reduce frictional forces that may cause damage of soft samples and wear of the tip. Besides with the tapping mode we can get more information about the sample! HOW???\nIn Tapping Mode AFM the cantilever is shaken to oscillate up and down at a specific frequency (most of the time shaken at its natural frequency). Then the deflection of the tip is measured at that frequency to get information about the sample. Besides acquiring the topography of the sample, the phase lag between the excitation and the response of the cantilever can be related to compositional material properties! \nIn other words one can simultaneously get information about how the surface looks and also get compositional mapping of the surface! THAT SOUNDS POWERFUL!!!",
"fig2 = path + '/Fig2DHO.jpg'\nImage(filename=fig2)",
"Figure 2. Schematics of a damped harmonic oscillator without tip-sample interactions\nAnalytical Solution\nThe motion of the probe can be derived using Euler-Bernoulli's equation. However that equation has partial derivatives (it depends on time and space) because it deals with finding the position of each point of the beam in a certain time, which cant make the problem too expensive computationally for our purposes. In our case, we have the advantage that we are only concerned about the position of the tip (which is the only part of the probe that will interact with the sample). As a consequence many researchers in AFM have successfully made approximations using a simple mass point model approximation [see ref. 2] like the one in figure 2 (with of course the addition of tip sample forces! We will see more about this later).\nFirst we will study the system of figure 2 AS IS (without addition of tip-sample force term), WHY? Because we want to get an analytical solution to get a reference of how our integration schemes are working, and the addition of tip sample forces to our equation will prevent the acquisition of straightforward analytical solutions :(\nThen, the equation of motion of the damped harmonic oscillator of figure 2, which is DRIVEN COSINUSOIDALLY (remember that we are exciting our probe during the scanning process) is:\n$$\\begin{equation}\nm \\frac{d^2z}{dt^2} = - k z - \\frac{m\\omega_0}{Q}\\frac{dz}{dt} + F_0\\cos(\\omega t)\n\\end{equation}$$\nwhere k is the stiffness of the cantilever, z is the vertical position of the tip with respect to the cantilever base position, Q is the quality factor (which is related to the damping of the system), $F_0$ is the driving force amplitude, $\\omega_0$ is the resonance frequency of the oscillator, and $\\omega$ is the frequency of the oscillating force.\nThe analytical solution of the above ODE is composed by a transient term and a steady state term. We are only interested in the steady state part because during the scanning process it is assumed that the probe has achieved that state.\nThe steady state solution is given by:\n$$\\begin{equation}\nA\\cos (\\omega t - \\phi)\n\\end{equation}$$\nwhere A is the steady state amplitude of the oscillation response, which depends on the cantilever parameters and the driving parameters, as can be seen in the following relation:\n$$\\begin{equation}\nA = \\frac{F_0/m}{\\sqrt{(\\omega_0^2-\\omega^2)^2+(\\frac{\\omega\\omega_0}{Q})^2}}\n\\end{equation}$$\nand $\\phi$ is given by:\n$$\\begin{equation}\n\\phi = \\arctan \\big( \\frac{\\omega\\omega_0/Q}{\\omega_0^2 - \\omega^2} \\big)\n\\end{equation}$$\nLet's first name the variables that we are going to use. Because we are dealing with a damped harmonic oscillator model we have to include variables such as: spring stiffness, resonance frequency, quality factor (related to damping coefficient), target oscillation amplitude, etc.",
"k = 10.\nfo = 45000\nwo = 2.0*numpy.pi*fo\nQ = 25.\n\nperiod = 1./fo\nm = k/(wo**2)\nAo = 60.e-9\nFd = k*Ao/Q\n\nspp = 28. # time steps per period \ndt = period/spp #Intentionally chosen to be quite big\n#you can decrease dt by increasing the number of steps per period\n\nsimultime = 100.*period\nN = int(simultime/dt)\n\n#Analytical solution\n\ntime_an = numpy.linspace(0,simultime,N) #time array for the analytical solution\nz_an = numpy.zeros(N) #position array for the analytical solution\n\n#Driving force amplitude this gives us 60nm of amp response (A_target*k/Q)\nFo_an = 24.0e-9 \n\nA_an = Fo_an*Q/k #when driven at resonance A is simply Fo*Q/k\nphi = numpy.pi/2 #when driven at resonance the phase is pi/2\n\nz_an[:] = A_an*numpy.cos(wo*time_an[:] - phi) #this gets the analytical solution\n\n#slicing the array to include only steady state (only the last 10 periods)\nz_an_steady = z_an[int(90.*period/dt):]\ntime_an_steady = time_an[int(90.*period/dt):]\n\nplt.title('Plot 1 Analytical Steady State Solution of Eq 1', fontsize=20)\nplt.xlabel('time, ms', fontsize=18)\nplt.ylabel('z_Analytical, nm', fontsize=18)\nplt.plot(time_an_steady*1e3, z_an_steady*1e9, 'b--')\n",
"Approximating through Euler's method\nIf we perform a Taylor series expansion of $z_{n+1}$ around $z_{n}$ we get:\n$$z_{n+1} = z_{n} + \\Delta t\\frac{dz}{dt}\\big|_n + {\\mathcal O}(\\Delta t^2)$$\nThe Euler formula neglects terms in the order of two or higher, ending up as:\n$$\\begin{equation}\nz_{n+1} = z_{n} + \\Delta t\\frac{dz}{dt}\\big|_n\n\\end{equation}$$\nIt can be easily seen that the truncation error of the Euler algorithm is in the order of ${\\mathcal O}(\\Delta t^2)$.\nThis is a second order ODE, but we can convert it to a system of two coupled 1st order differential equations. To do it we will define $\\frac{dz}{dt} = v$. Then equation (1) will be decomposed as:\n$$\\begin{equation}\n\\frac{dz}{dt} = v\n\\end{equation}$$\n$$\\begin{equation}\n\\frac{dv}{dt} = -kz-\\frac{m\\omega_0}{Q}+F_o\\cos(\\omega t)\n\\end{equation}$$\nThese coupled equations will be used during Euler's aproximation and also during our integration using Runge Kutta 4 method.",
"t= numpy.linspace(0,simultime,N) #time grid for Euler method\n \n#Initializing variables for Euler\nvdot_E = numpy.zeros(N)\nv_E = numpy.zeros(N)\nz_E = numpy.zeros(N)\n\n#Initial conditions\nz_E[0]= 0.0\nv_E[0]=0.0\n\nfor i in range (N-1):\n vdot_E[i] =( ( -k*z_E[i] - (m*wo/Q)*(v_E[i]) +\\\n Fd*numpy.cos(wo*t[i]) ) / m) #Equation 7\n v_E[i+1] = v_E[i] + dt*vdot_E[i] #Based on equation 5\n z_E[i+1] = z_E[i] + v_E[i]*dt #Equation 5\n\nplt.title('Plot 2 Eulers approximation of Equation1', fontsize=20); \nplt.plot(t*1e3,z_E*1e9);\nplt.xlabel('time, s', fontsize=18);\nplt.ylabel('z_Euler, nm', fontsize=18);\n",
"This looks totally unphysical! We were expecting to have a steady state oscillation of 60 nm and we got a huge oscillation that keeps growing. Can it be due to the scheme? The timestep that we have chosen is quite big with respect to the oscillation period. We have intentionally set it to ONLY 28 time steps per period (That could be the reason why the scheme can't capture the physics of the problem). That's quite discouraging. However the timestep is quite big and it really gets better as you decrease the time step. Try it! Reduce the time step and see how the numerical solution acquires an amplitude of 60 nm as the analytical one. At this point we can't state anything about accuracy before doing an analysis of error (we will make this soon). But first, let's try to analyze if another more efficient scheme can capture the physics of our damped harmonic oscillator even with this large time step.\nLet's try to get more accurate... Verlet Algorithm\nThis is a very popular algorithm widely used in molecular dynamics simulations. Its popularity has been related to high stability when compared to the simple Euler method, it is also very simple to implement and accurate as we will see soon! Verlet integration can be seen as using the central difference approximation to the second derivative. Consider the Taylor expansion of $z_{n+1}$ and $z_{n-1}$ around $z_n$:\n$$\\begin{equation}\nz_{n+1} = z_n + \\Delta t \\frac{dz}{dt}\\big|_n + \\frac{\\Delta t^2}{2} \\frac{d^2 z}{d t^2}\\big|_n + \\frac{\\Delta t^3}{6} \\frac{d^3 z}{d t^3}\\big|_n + {\\mathcal O}(\\Delta t^4)\n\\end{equation}$$\n$$\\begin{equation}\nz_{n-1} = z_n - \\Delta t \\frac{dz}{dt}\\big|_n + \\frac{\\Delta t^2}{2} \\frac{d^2 z}{dt^2}\\big|_n - \\frac{\\Delta t^3}{6} \n\\frac{d^3 z}{d t^3}\\big|_n + {\\mathcal O}(\\Delta t^4)\n\\end{equation}$$\nAdding up these two expansions and solving for $z_{n+1}$ we get:\n$$z_{n+1}= 2z_{n} - z_{n-1} + \\frac{d^2 z}{d t^2} \\Delta t^2\\big|_n + {\\mathcal O}(\\Delta t^4) $$\nVerlet algorithm neglects terms on the order of 4 or higher, ending up with:\n$$\\begin{equation}\nz_{n+1}= 2z_{n} - z_{n-1} + \\frac{d^2 z}{d t^2} \\Delta t^2\\big|_n\n\\end{equation}$$\nThis looks nice; it seems that the straightforward calculation of the second derivative will give us good results. BUT have you seen that we also need the value of the first derivative (velocity) to put it into the equation of motion that we are integrating (see equation 1). YES, that's a main drawback of this scheme and therefore it's mainly used in applications where the equation to be integrated doesn't have first derivative. But don't panic we will see what can we do...\nWhat about subtracting equations 8 and 9 and then solving for $\\frac{dz}{dt}\\big|n$:\n$$\n\\frac{dz}{dt}\\big|_n = \\frac{z{n+1} - z_{n-1}}{2\\Delta t} + {\\mathcal O}(\\Delta t^2)\n$$\nIf we neglect terms on the order of 2 or higher we can calculate velocity:\n$$\\begin{equation}\n\\frac{dz}{dt}\\big|n = \\frac{z{n+1} - z_{n-1}}{2\\Delta t}\n\\end{equation}$$\nThis way of calculating velocity is pretty common in Verlet integration in applications where velocity is not explicit in the equation of motion. However for our purposes of solving equation 1 (where first derivative is explicitly present) it seems that we will lose accuracy because of the velocity, we will discuss more about this soon after...\nHave you noticed that we need a value $z_{n-1}$? Does it sound familiar? YES! This is not a self-starting method. 
As a result we will have to overcome the issue by setting the initial conditions of the first step using Euler approximation. This is a bit annoying, but a couple of extra lines of code won't kill you :)",
"time_V = numpy.linspace(0,simultime,N)\n\n#Initializing variables for Verlet\nzdoubledot_V = numpy.zeros(N)\nzdot_V = numpy.zeros(N)\nz_V = numpy.zeros(N)\n\n#Initial conditions Verlet. Look how we use Euler for the first step approximation!\nz_V[0] = 0.0\nzdot_V[0] = 0.0\nzdoubledot_V[0] = ( ( -k*z_V[0] - (m*wo/Q)*zdot_V[0] +\\\n Fd*numpy.cos(wo*t[0]) ) ) / m\nzdot_V[1] = zdot_V[0] + zdoubledot_V[0]*dt\nz_V[1] = z_V[0] + zdot_V[0]*dt\nzdoubledot_V[1] = ( ( -k*z_V[1] - (m*wo/Q)*zdot_V[1] +\\\n Fd*numpy.cos(wo*t[1]) ) ) / m\n\n#VERLET ALGORITHM\n\nfor i in range(2,N):\n z_V[i] = 2*z_V[i-1] - z_V[i-2] + zdoubledot_V[i-1]*dt**2 #Eq 10\n zdot_V[i] = (z_V[i]-z_V[i-2])/(2.0*dt) #Eq 11\n zdoubledot_V[i] = ( ( -k*z_V[i] - (m*wo/Q)*zdot_V[i] +\\\n Fd*numpy.cos(wo*t[i]) ) ) / m #from eq 1\n \nplt.title('Plot 3 Verlet approximation of Equation1', fontsize=20); \nplt.xlabel('time, ms', fontsize=18);\nplt.ylabel('z_Verlet, nm', fontsize=18);\nplt.plot(time_V*1e3, z_V*1e9, 'g-');\nplt.ylim(-65,65);\n \n",
"It WAS ABLE to capture the physics! Even with the big time step that we use with Euler scheme!\nAs you can see, and as we previously discussed the harmonic response is composed of a transient and a steady part. We are only concerned about the steady-state, since it is assumed that the probe achieves steady state motion during the imaging process. Therefore, we are going to slice our array in order to show only the last 10 oscillations, and we will see if it resembles the analytical solution.",
"#Slicing the full response vector to get the steady state response\nz_steady_V = z_V[int(90*period/dt):]\ntime_steady_V = time_V[int(90*period/dt):]\n\nplt.title('Plot 3 Verlet approx. of steady state sol. of Eq 1', fontsize=20); \nplt.xlabel('time, ms', fontsize=18);\nplt.ylabel('z_Verlet, nm', fontsize=18);\nplt.plot(time_steady_V*1e3, z_steady_V*1e9, 'g-');\nplt.ylim(-65,65);\nplt.show();\n",
"Let's use now one of the most popular schemes... The Runge Kutta 4!\nThe Runge Kutta 4 (RK4) method is very popular for the solution of ODEs. This method is designed to solve 1st order differential equations. We have converted our 2nd order ODE to a system of two coupled 1st order ODEs when we implemented the Euler scheme (equations 5 and 6). And we will have to use these equations for the RK4 algorithm.\nIn order to clearly see the RK4 implementation we are going to put equations 5 and 6 in the following form:\n$$\\begin{equation}\n\\frac{dz}{dt}=v \\Rightarrow f1(t,z,v)\n\\end{equation}$$\n$$\\begin{equation}\n\\frac{dv}{dt} = -kz-\\frac{m\\omega_0}{Q}+F_ocos(\\omega t) \\Rightarrow f2(t,z,v)\n\\end{equation}$$\nIt can be clearly seen that we have two coupled equations f1 and f2 and both depend in t, z, and v.\nThe RK4 equations for our special case where we have two coupled equations, are the following:\n$$\\begin{equation}\nk_1 = f1(t_i, z_i, v_i)\n\\end{equation}$$\n$$\\begin{equation}\nm_1 = f2(t_i, z_i, v_i)\n\\end{equation}$$\n$$\\begin{equation}\nk_2 = f1(t_i +1/2\\Delta t, z_i + 1/2k_1\\Delta t, v_i + 1/2m_1\\Delta t)\n\\end{equation}$$\n$$\\begin{equation}\nm_2 = f2(t_i +1/2\\Delta t, z_i + 1/2k_1\\Delta t, v_i + 1/2m_1\\Delta t)\n\\end{equation}$$\n$$\\begin{equation}\nk_3 = f1(t_i +1/2\\Delta t, z_i + k_2\\Delta t, v_i + 1/2m_2\\Delta t)\n\\end{equation}$$\n$$\\begin{equation}\nm_3 = f2(t_i +1/2\\Delta t, z_i + 1/2k_2\\Delta t, v_i + 1/2m_2\\Delta t)\n\\end{equation}$$\n$$\\begin{equation}\nk_4 = f1(t_i + \\Delta t, z_i + k_3\\Delta t, v_i + m_3\\Delta t)\n\\end{equation}$$\n$$\\begin{equation}\nk_4 = f2(t_i + \\Delta t, z_i + k_3\\Delta t, v_i + m_3\\Delta t)\n\\end{equation}$$\n$$\\begin{equation}\nf1_{n+1} = f1_n + \\Delta t/6(k_1+2k_2+2k_3+k_4)\n\\end{equation}$$\n$$\\begin{equation}\nf2_{n+1} = f2_n + \\Delta t/6(m_1+2m_2+2m_3+m_4)\n\\end{equation}$$\nPlease notice how k values and m values are used sequentially, since it is crucial in the implementation of the method!",
"#Definition of v, z, vectors\nvdot_RK4 = numpy.zeros(N)\nv_RK4 = numpy.zeros(N)\nz_RK4 = numpy.zeros(N)\nk1v_RK4 = numpy.zeros(N)\nk2v_RK4 = numpy.zeros(N)\nk3v_RK4 = numpy.zeros(N)\nk4v_RK4 = numpy.zeros(N)\n\nk1z_RK4 = numpy.zeros(N)\nk2z_RK4 = numpy.zeros(N)\nk3z_RK4 = numpy.zeros(N)\nk4z_RK4 = numpy.zeros(N)\n \n#calculation of velocities RK4\n\n#INITIAL CONDITIONS\nv_RK4[0] = 0\nz_RK4[0] = 0\n\n \nfor i in range (1,N):\n #RK4\n k1z_RK4[i] = v_RK4[i-1] #k1 Equation 14 \n k1v_RK4[i] = (( ( -k*z_RK4[i-1] - (m*wo/Q)*v_RK4[i-1] + \\\n Fd*numpy.cos(wo*t[i-1]) ) ) / m ) #m1 Equation 15\n \n k2z_RK4[i] = ((v_RK4[i-1])+k1v_RK4[i]/2.*dt) #k2 Equation 16\n k2v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k1z_RK4[i]/2.*dt) - (m*wo/Q)*\\\n (v_RK4[i-1] +k1v_RK4[i]/2.*dt) + Fd*\\\n numpy.cos(wo*(t[i-1] + dt/2.)) ) ) / m ) #m2 Eq 17\n \n k3z_RK4[i] = ((v_RK4[i-1])+k2v_RK4[i]/2.*dt) #k3, Equation 18\n k3v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k2z_RK4[i]/2.*dt) - (m*wo/Q)*\\\n (v_RK4[i-1] +k2v_RK4[i]/2.*dt) + Fd*\\\n numpy.cos(wo*(t[i-1] + dt/2.)) ) ) / m ) #m3, Eq 19\n \n k4z_RK4[i] = ((v_RK4[i-1])+k3v_RK4[i]*dt) #k4, Equation 20\n k4v_RK4[i] = (( ( -k*(z_RK4[i-1] + k3z_RK4[i]*dt) - (m*wo/Q)*\\\n (v_RK4[i-1] + k3v_RK4[i]*dt) + Fd*\\\n numpy.cos(wo*(t[i-1] + dt)) ) ) / m )#m4, Eq 21\n \n #Calculation of velocity, Equation 23\n v_RK4[i] = v_RK4[i-1] + 1./6*dt*(k1v_RK4[i] + 2.*k2v_RK4[i] +\\\n 2.*k3v_RK4[i] + k4v_RK4[i] ) \n #calculation of position, Equation 22\n z_RK4 [i] = z_RK4[i-1] + 1./6*dt*(k1z_RK4[i] + 2.*k2z_RK4[i] +\\\n 2.*k3z_RK4[i] + k4z_RK4[i] ) \n\n#slicing array to get steady state\nz_steady_RK4 = z_RK4[int(90.*period/dt):]\ntime_steady_RK4 = t[int(90.*period/dt):]\n \nplt.title('Plot 3 RK4 approx. of steady state sol. of Eq 1', fontsize=20); \nplt.xlabel('time, ms', fontsize=18);\nplt.ylabel('z_RK4, nm', fontsize=18);\nplt.plot(time_steady_RK4 *1e3, z_steady_RK4*1e9, 'r-');\nplt.ylim(-65,65);\nplt.show();",
"Error Analysis\nLet's plot together our solutions using the different schemes along with our analytical reference.",
"plt.title('Plot 4 Schemes comparison with analytical sol.', fontsize=20);\nplt.plot(time_an_steady*1e3, z_an_steady*1e9, 'b--' );\nplt.plot(time_steady_V*1e3, z_steady_V*1e9, 'g-' );\nplt.plot(time_steady_RK4*1e3, z_steady_RK4*1e9, 'r-');\nplt.xlim(2.0, 2.06);\nplt.legend(['Analytical solution', 'Verlet method', 'Runge Kutta 4']);\nplt.xlabel('time, ms', fontsize=18);\nplt.ylabel('z_position, nm', fontsize=18);\n",
"It was pointless to include Euler in the last plot because it was not following the physics at all for this given time step. REMEMBER that Euler can give fair approximations, but you MUST decrease the time step in this particular case if you want to see the sinusoidal trajectory!\nIt seems our different schemes are giving different quality in approximating the solution. However it's hard to conclude something strong based on this qualitative observations. In order to state something stronger we have to perform further error analysis. We will do this at the end of the notebook after the references and will choose L1 norm for this purpose (You can find more information about this L1 ).\nAs we can see Runge Kutta 4 converges faster than Verlet for the range of time steps studied. And the difference between both is near one order of magnitude. One additional advantage with Runge Kutta 4 is that the method is very stable, even with big time steps (eg. 10 time steps per period) the method is able to catch up the physics of the oscillation, something where Verlet is not so good at.\nLet's add a sample and oscillate our probe over it\nIt is very common in the field of probe microscopy to model the tip sample interactions through DMT contact mechanics. \nDMT stands for Derjaguin, Muller and Toporov who were the scientists that developed the model (see ref 1). This model uses Hertz contact mechanics (see ref 2) with the addition of long range tip-sample interactions. These long range tip-sample interactions are ascribed to intermolecular interactions between the atoms of the tip and the upper atoms of the surface, and include mainly the contribution of van de Waals forces and Pauli repulsion from electronic clouds when the atoms of the tip meet closely the atoms of the surface. Figure 2 displays a force vs distance curve (FD curve) where it is shown how the forces between the tip and the sample behave with respect to the separation. It can be seen that at positive distances the tip starts \"feeling\" attraction from the tip (from the contribution of van der Waals forces) where the slope of the curve is positive and at some minimum distance ($a_0$) the tip starts experiencing repulsive interactions arising from electronic cloud repulsion (area where the slope of the curve is negative and the forces are negative). At lower distances, an area known as \"contact area\" arises and it is characterized by a negative slope and an emerging positive force.",
"fig3 = path + '/Fig3FDcurve.jpg'\nImage(filename=fig3)",
"Figure 3. Force vs Distance profile depicting tip-sample interactions in AFM (Adapted from reference 6)\nIn Hertz contact mechanics, one central aspect is to consider that the contact area increases as the sphere is pressed against an elastic surface, and this increase of the contact area \"modulates\" the effective stiffness of the sample. This concept is represented in figure 4 where the sample is depicted as comprised by a series of springs that are activated as the tip goes deeper into the sample. In other words, the deeper the sample goes, the larger the contact area and therefore more springs are activated (see more about this on reference 5).",
"fig4 = path + '/Fig4Hertzspring.jpg'\nImage(filename= fig4)",
"Figure 4. Conceptual representation of Hertz contact mechanics\nThis concept is represented mathematically by a non-linear spring whose elastic coefficient is a function of the contact area which at the same time depends on the sample indentation ( k(d) ).\n$$F_{ts} = k(d)d$$\nwhere\n$$k(d) = 4/3E\\sqrt{Rd}$$\nbeing $\\sqrt{Rd}$ the contact area when a sphere of radius R indents a half-space to depth d.\n$E$ is the effective Young's modulus of the tip-sample interaction. \nThe long range attractive forces are derived using Hamaker's equation (see reference 4): $if$ $d > a_0$\n$$F_{ts} = \\frac{-HR}{6d^2}$$\nwhere H is the Hamaker constant, R the tip radius and d the tip sample distance. $a_0$ is defined as the intermolecular distance and normally is chosen to be 0.2 nm.\nIn summary the equations that we will include in our code to take care of the tip sample interactions are the following:\n$$\\begin{equation}\nFts_{DMT} = \\begin{cases} \\frac{-HR}{6d^2} \\quad \\quad d \\leq{a_0}\\ \\\n\\frac{-HR}{6d^2} + 4/3E*R^{1/2}d^{3/2} \\quad \\quad d> a_0 \\end{cases}\n\\end{equation}$$\nwhere the effective Young's modulus E is defined by:\n$$\\begin{equation}\n1/E = \\frac{1-\\nu^2}{E_t}+\\frac{1-\\nu^2}{E_s}\n\\end{equation}$$\nwhere $E_t$ and $E_s$ are the tip and sample Young's modulus respectively. $\\nu_t$ and $\\nu_s$ are tip and sample Poisson ratios, respectively.\nEnough theory, Let's make our code!\nNow we will have to solve equation (1) but with the addition of tip-sample interactions which are described by equation (5). So we have a second order non-linear ODE which is no longer analytically straightforward:\n$$\\begin{equation}\nm \\frac{d^2z}{dt^2} = - k z - \\frac{m\\omega_0}{Q}\\frac{dz}{dt} + F_0 cos(\\omega t) + Fts_{DMT}\n\\end{equation}$$\nTherefore we have to use numerical methods to solve it. RK4 has shown to be more accurate to solve equation (1) among the methods reviewed in the previous section of the notebook, and therefore it is going to be the chosen method to solve equation (6).\nNow we have to declare all the variables related to the tip-sample forces. Since we are modeling our tip-sample forces using Hertz contact mechanics with addition of long range Van der Waals forces we have to define the Young's modulus of the tip and sample, the diameter of the tip of our probe, Poisson ratio, etc.",
"#DMT parameters (Hertz contact mechanics with long range Van der Waals forces added\na=0.2e-9 #intermolecular parameter\nH=6.4e-20 #hamaker constant of sample\nR=20e-9 #tip radius of the cantilever\nEs=70e6 #elastic modulus of sample\nEt=130e9 #elastic modulus of the tip\nvt=0.3 #Poisson coefficient for tip\nvs=0.3 #Poisson coefficient for sample\nE_star= 1/((1-pow(vt,2))/Et+(1-pow(vs,2))/Es) #Effective Young Modulus",
"Now let's declare the timestep, the simulation time and let's oscillate our probe!",
"#IMPORTANT distance where you place the probe above the sample\nz_base = 40.e-9 \n\nspp = 280. # time steps per period \ndt = period/spp \n\nsimultime = 100.*period\nN = int(simultime/dt)\nt = numpy.linspace(0,simultime,N)\n\n#Initializing variables for RK4\nv_RK4 = numpy.zeros(N)\nz_RK4 = numpy.zeros(N)\nk1v_RK4 = numpy.zeros(N) \nk2v_RK4 = numpy.zeros(N)\nk3v_RK4 = numpy.zeros(N)\nk4v_RK4 = numpy.zeros(N)\n \nk1z_RK4 = numpy.zeros(N)\nk2z_RK4 = numpy.zeros(N)\nk3z_RK4 = numpy.zeros(N)\nk4z_RK4 = numpy.zeros(N)\n\nTipPos = numpy.zeros(N)\nFts = numpy.zeros(N)\nFcos = numpy.zeros(N)\n\nfor i in range(1,N):\n #RK4\n k1z_RK4[i] = v_RK4[i-1] #k1 Equation 14 \n k1v_RK4[i] = (( ( -k*z_RK4[i-1] - (m*wo/Q)*v_RK4[i-1] + \\\n Fd*numpy.cos(wo*t[i-1]) +Fts[i-1]) ) / m ) #m1 Equation 15\n \n k2z_RK4[i] = ((v_RK4[i-1])+k1v_RK4[i]/2.*dt) #k2 Equation 16\n k2v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k1z_RK4[i]/2.*dt) - (m*wo/Q)*\\\n (v_RK4[i-1] +k1v_RK4[i]/2.*dt) + Fd*\\\n numpy.cos(wo*(t[i-1] + dt/2.)) +Fts[i-1]) ) / m ) #m2 Eq 17\n \n k3z_RK4[i] = ((v_RK4[i-1])+k2v_RK4[i]/2.*dt) #k3, Equation 18\n k3v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k2z_RK4[i]/2.*dt) - (m*wo/Q)*\\\n (v_RK4[i-1] +k2v_RK4[i]/2.*dt) + Fd*\\\n numpy.cos(wo*(t[i-1] + dt/2.)) +Fts[i-1]) ) / m ) #m3, Eq19\n \n k4z_RK4[i] = ((v_RK4[i-1])+k3v_RK4[i]*dt) #k4, Equation 20\n k4v_RK4[i] = (( ( -k*(z_RK4[i-1] + k3z_RK4[i]*dt) - (m*wo/Q)*\\\n (v_RK4[i-1] + k3v_RK4[i]*dt) + Fd*\\\n numpy.cos(wo*(t[i-1] + dt)) +Fts[i-1]) ) / m )#m4, Eq 21\n \n #Calculation of velocity, Equation 23\n v_RK4[i] = v_RK4[i-1] + 1./6*dt*(k1v_RK4[i] + 2.*k2v_RK4[i] +\\\n 2.*k3v_RK4[i] + k4v_RK4[i] ) \n #calculation of position, Equation 22\n z_RK4 [i] = z_RK4[i-1] + 1./6*dt*(k1z_RK4[i] + 2.*k2z_RK4[i] +\\\n 2.*k3z_RK4[i] + k4z_RK4[i] ) \n \n TipPos[i] = z_base + z_RK4[i] #Adding base position to z position\n \n #calculation of DMT force\n\n if TipPos[i] > a: #this defines the attractive regime\n Fts[i] = -H*R/(6*(TipPos[i])**2)\n else: #this defines the repulsive regime\n Fts[i] = -H*R/(6*a**2)+4./3*E_star*numpy.sqrt(R)*(a-TipPos[i])**1.5\n \n \n Fcos[i] = Fd*numpy.cos(wo*t[i]) #Driving force (this will be helpful to plot the driving force)\n\n#Slicing arrays to get steady state\nTipPos_steady = TipPos[int(95*period/dt):] \nt_steady = t[int(95*period/dt):] \nFcos_steady = Fcos[int(95*period/dt):] \nFts_steady = Fts[int(95*period/dt):] \n\nplt.figure(1)\nfig, ax1 = plt.subplots()\nax2 = ax1.twinx()\nax1.plot(t_steady*1e3,TipPos_steady*1e9, 'g-')\nax2.plot(t_steady*1e3, Fcos_steady*1e9, 'b-')\nax1.set_xlabel('Time,s')\nax1.set_ylabel('Tip position (nm)', color='g')\nax2.set_ylabel('Drive Force (nm)', color='b')\nplt.title('Plot 7 Tip response and driving force', fontsize = 20)\n\n\nplt.figure(2)\nplt.title('Plot 8 Force-Distance curve', fontsize=20)\nplt.plot(TipPos*1e9, Fts*1e9, 'b--' )\nplt.xlabel('Tip Position, nm', fontsize=18)\nplt.ylabel('Force, nN', fontsize=18)\nplt.xlim(-20, 30)",
"Check that we have two sinusoidals. The one in green (the output) is the response signal of the tip (the tip trajectory in time) while the blue one (the input) is the cosinusoidal driving force that we are using to excite the tip. When the tip is excited in free air (without tip sample interactions) the phase lag between the output and the input is 90 degrees. You can test that with the previous code by only changing the position of the base to a high-enough position that it does not interact with the sample. However in the above plot the phase lag is less than 90 degrees. Interestingly the phase can give relative information about the material properties of the sample. There is a well-developed theory of this in tapping mode AFM and it's called phase spectroscopy. If you are interested in this topic you can read reference 1.\nAlso look at the above plot and see that the response amplitude is no longer 60 nm as we initially set (in this case is near 45 nm!). It means that we have experienced a significant amplitude reduction due to the tip sample interactions.\nBesides with the data acquired we are able to plot a Force-curve as the one shown in Figure 3. It shows the attractive and repulsive interactions of our probe with the surface.\nWe have arrived to the end of the notebook. I hope you have found it interesting and helpful!\nREFERENCES\n\nGarcı́a, Ricardo, and Ruben Perez. \"Dynamic atomic force microscopy methods.\" Surface science reports 47.6 (2002): 197-301.\nB. V. Derjaguin, V. M. Muller, and Y. P. Toporov, J. Colloid\nInterface Sci. 53, 314 (1975)\nHertz, H. R., 1882, Ueber die Beruehrung elastischer Koerper (On Contact Between Elastic Bodies), in Gesammelte Werke (Collected Works), Vol. 1, Leipzig, Germany, 1895.\nVan Oss, Carel J., Manoj K. Chaudhury, and Robert J. Good. \"Interfacial Lifshitz-van der Waals and polar interactions in macroscopic systems.\" Chemical Reviews 88.6 (1988): 927-941.\nEnrique A. López-Guerra, and Santiago D. Solares. \"Modeling viscoelasticity through spring–dashpot models in intermittent-contact atomic force microscopy.\" Beilstein journal of nanotechnology 5, no. 1 (2014): 2149-2163.\nEnrique A. López-Guerra, and Santiago D. Solares, \"El microscopio de Fuerza Atómica: Metodos y Aplicaciones.\" Revista UVG (2013) No. 28, 14-23.\n\nOPTIONAL: Further error analysis based in norm L1",
"print('This cell takes a while to compute')\n\n\"\"\"ERROR ANALYSIS EULER, VERLET AND RK4\"\"\"\n\n# time-increment array\ndt_values = numpy.array([8.0e-7, 2.0e-7, 0.5e-7, 1e-8, 0.1e-8])\n\n# array that will contain solution of each grid\nz_values_E = numpy.zeros_like(dt_values, dtype=numpy.ndarray)\nz_values_V = numpy.zeros_like(dt_values, dtype=numpy.ndarray)\nz_values_RK4 = numpy.zeros_like(dt_values, dtype=numpy.ndarray)\nz_values_an = numpy.zeros_like(dt_values, dtype=numpy.ndarray)\n\nfor n, dt in enumerate(dt_values):\n simultime = 100*period\n timestep = dt\n N = int(simultime/dt)\n t = numpy.linspace(0.0, simultime, N)\n \n #Initializing variables for Verlet\n zdoubledot_V = numpy.zeros(N)\n zdot_V = numpy.zeros(N)\n z_V = numpy.zeros(N)\n \n #Initializing variables for RK4\n vdot_RK4 = numpy.zeros(N)\n v_RK4 = numpy.zeros(N)\n z_RK4 = numpy.zeros(N)\n k1v_RK4 = numpy.zeros(N) \n k2v_RK4 = numpy.zeros(N)\n k3v_RK4 = numpy.zeros(N)\n k4v_RK4 = numpy.zeros(N)\n \n k1z_RK4 = numpy.zeros(N)\n k2z_RK4 = numpy.zeros(N)\n k3z_RK4 = numpy.zeros(N)\n k4z_RK4 = numpy.zeros(N)\n \n \n #Initial conditions Verlet (started with Euler approximation)\n z_V[0] = 0.0\n zdot_V[0] = 0.0\n zdoubledot_V[0] = ( ( -k*z_V[0] - (m*wo/Q)*zdot_V[0] + \\\n Fd*numpy.cos(wo*t[0]) ) ) / m\n zdot_V[1] = zdot_V[0] + zdoubledot_V[0]*timestep**2\n z_V[1] = z_V[0] + zdot_V[0]*dt\n zdoubledot_V[1] = ( ( -k*z_V[1] - (m*wo/Q)*zdot_V[1] + \\\n Fd*numpy.cos(wo*t[1]) ) ) / m\n \n \n #Initial conditions Runge Kutta\n v_RK4[1] = 0\n z_RK4[1] = 0 \n \n #Initialization variables for Analytical solution\n z_an = numpy.zeros(N)\n \n # time loop \n for i in range(2,N):\n \n #Verlet\n z_V[i] = 2*z_V[i-1] - z_V[i-2] + zdoubledot_V[i-1]*dt**2 #Eq 10\n zdot_V[i] = (z_V[i]-z_V[i-2])/(2.0*dt) #Eq 11\n zdoubledot_V[i] = ( ( -k*z_V[i] - (m*wo/Q)*zdot_V[i] +\\\n Fd*numpy.cos(wo*t[i]) ) ) / m #from eq 1\n \n #RK4\n k1z_RK4[i] = v_RK4[i-1] #k1 Equation 14 \n k1v_RK4[i] = (( ( -k*z_RK4[i-1] - (m*wo/Q)*v_RK4[i-1] + \\\n Fd*numpy.cos(wo*t[i-1]) ) ) / m ) #m1 Equation 15\n \n k2z_RK4[i] = ((v_RK4[i-1])+k1v_RK4[i]/2.*dt) #k2 Equation 16\n k2v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k1z_RK4[i]/2.*dt) - (m*wo/Q)*\\\n (v_RK4[i-1] +k1v_RK4[i]/2.*dt) + Fd*\\\n numpy.cos(wo*(t[i-1] + dt/2.)) ) ) / m ) #m2 Eq 17\n \n k3z_RK4[i] = ((v_RK4[i-1])+k2v_RK4[i]/2.*dt) #k3, Equation 18\n k3v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k2z_RK4[i]/2.*dt) - (m*wo/Q)*\\\n (v_RK4[i-1] +k2v_RK4[i]/2.*dt) + Fd*\\\n numpy.cos(wo*(t[i-1] + dt/2.)) ) ) / m ) #m3, Eq 19\n \n k4z_RK4[i] = ((v_RK4[i-1])+k3v_RK4[i]*dt) #k4, Equation 20\n k4v_RK4[i] = (( ( -k*(z_RK4[i-1] + k3z_RK4[i]*dt) - (m*wo/Q)*\\\n (v_RK4[i-1] + k3v_RK4[i]*dt) + Fd*\\\n numpy.cos(wo*(t[i-1] + dt)) ) ) / m )#m4, Equation 21\n \n #Calculation of velocity, Equation 23\n v_RK4[i] = v_RK4[i-1] + 1./6*dt*(k1v_RK4[i] + 2.*k2v_RK4[i] +\\\n 2.*k3v_RK4[i] + k4v_RK4[i] ) \n #calculation of position, Equation 22\n z_RK4 [i] = z_RK4[i-1] + 1./6*dt*(k1z_RK4[i] + 2.*k2z_RK4[i] +\\\n 2.*k3z_RK4[i] + k4z_RK4[i] ) \n\n \n #Analytical solution\n A_an = Fo_an*Q/k #when driven at resonance A is simply Fo*Q/k\n phi = numpy.pi/2 #when driven at resonance the phase is pi/2\n z_an[i] = A_an*numpy.cos(wo*t[i] - phi) #Analytical solution eq. 
1\n \n \n #Slicing the full response vector to get the steady state response\n z_steady_V = z_V[int(80*period/timestep):]\n z_an_steady = z_an[int(80*period/timestep):]\n z_steady_RK4 = z_RK4[int(80*period/timestep):]\n time_steady = t[int(80*period/timestep):]\n \n z_values_V[n] = z_steady_V.copy() # error for certain value of timestep\n z_values_RK4[n] = z_steady_RK4.copy() #error for certain value of timestep\n z_values_an[n] = z_an_steady.copy() #error for certain value of timestep\n\n\ndef get_error(z, z_exact, dt):\n #Returns the error with respect to the analytical solution using L1 norm\n \n return dt * numpy.sum(numpy.abs(z-z_exact))\n \n#NOW CALCULATE THE ERROR FOR EACH RESPECTIVE DELTA T\nerror_values_V = numpy.zeros_like(dt_values)\nerror_values_RK4 = numpy.zeros_like(dt_values)\n\nfor i, dt in enumerate(dt_values):\n ### call the function get_error() ###\n error_values_V[i] = get_error(z_values_V[i], z_values_an[i], dt)\n error_values_RK4[i] = get_error(z_values_RK4[i], z_values_an[i], dt)\n\n\nplt.figure(1)\nplt.title('Plot 5 Error analysis Verlet based on L1 norm', fontsize=20)\nplt.tick_params(axis='both', labelsize=14)\nplt.grid(True) #turn on grid lines\nplt.xlabel('$\\Delta t$ Verlet', fontsize=16) #x label\nplt.ylabel('Error Verlet', fontsize=16) #y label\nplt.loglog(dt_values, error_values_V, 'go-') #log-log plot\nplt.axis('equal') #make axes scale equally;\n\nplt.figure(2)\nplt.title('Plot 6 Error analysis RK4 based on L1 norm', fontsize=20) \nplt.tick_params(axis='both', labelsize=14) \nplt.grid(True) #turn on grid lines\nplt.xlabel('$\\Delta t$ RK4', fontsize=16) #x label\nplt.ylabel('Error RK4', fontsize=16) #y label\nplt.loglog(dt_values, error_values_RK4, 'co-') #log-log plot\nplt.axis('equal') #make axes scale equally;"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
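As an independent sanity check on the schemes implemented in the notebook above, here is a short sketch that integrates the same driven, damped oscillator (Eq. 1, without tip-sample forces) with SciPy's adaptive Runge-Kutta solver and compares the steady-state amplitude against the analytical value $F_0 Q/k$. The parameter values mirror the ones used above; the use of `solve_ivp` is an added cross-check, not part of the original notebook.

```python
import numpy as np
from scipy.integrate import solve_ivp

k, fo, Q = 10.0, 45000.0, 25.0          # same cantilever parameters as above
wo = 2.0 * np.pi * fo
m = k / wo**2
Fd = 24.0e-9                            # drive force amplitude

def rhs(t, y):
    z, v = y
    # Equation 1: m z'' = -k z - (m wo / Q) z' + Fd cos(wo t)
    return [v, (-k * z - (m * wo / Q) * v + Fd * np.cos(wo * t)) / m]

period = 1.0 / fo
sol = solve_ivp(rhs, (0.0, 100.0 * period), [0.0, 0.0], max_step=period / 100.0)

steady = sol.y[0][sol.t > 90.0 * period]        # keep only the last 10 periods
print("numerical amplitude ~", steady.max())    # should be close to 6e-8 m
print("analytical amplitude  ", Fd * Q / k)     # 60 nm at resonance
```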
minxuancao/shogun | doc/ipython-notebooks/multiclass/KNN.ipynb | gpl-3.0 | [
"K-Nearest Neighbors (KNN)\nby Chiyuan Zhang and Sören Sonnenburg\nThis notebook illustrates the <a href=\"http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm\">K-Nearest Neighbors</a> (KNN) algorithm on the USPS digit recognition dataset in Shogun. Further, the effect of <a href=\"http://en.wikipedia.org/wiki/Cover_tree\">Cover Trees</a> on speed is illustrated by comparing KNN with and without it. Finally, a comparison with <a href=\"http://en.wikipedia.org/wiki/Support_vector_machine#Multiclass_SVM\">Multiclass Support Vector Machines</a> is shown. \nThe basics\nThe training of a KNN model basically does nothing but memorizing all the training points and the associated labels, which is very cheap in computation but costly in storage. The prediction is implemented by finding the K nearest neighbors of the query point, and voting. Here K is a hyper-parameter for the algorithm. Smaller values for K give the model low bias but high variance; while larger values for K give low variance but high bias.\nIn SHOGUN, you can use CKNN to perform KNN learning. To construct a KNN machine, you must choose the hyper-parameter K and a distance function. Usually, we simply use the standard CEuclideanDistance, but in general, any subclass of CDistance could be used. For demonstration, in this tutorial we select a random subset of 1000 samples from the USPS digit recognition dataset, and run 2-fold cross validation of KNN with varying K.\nFirst we load and init data split:",
"import numpy as np\nimport os\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\n\nfrom scipy.io import loadmat, savemat\nfrom numpy import random\nfrom os import path\n\nmat = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))\nXall = mat['data']\nYall = np.array(mat['label'].squeeze(), dtype=np.double)\n\n# map from 1..10 to 0..9, since shogun\n# requires multiclass labels to be\n# 0, 1, ..., K-1\nYall = Yall - 1\n\nrandom.seed(0)\n\nsubset = random.permutation(len(Yall))\n\nXtrain = Xall[:, subset[:5000]]\nYtrain = Yall[subset[:5000]]\n\nXtest = Xall[:, subset[5000:6000]]\nYtest = Yall[subset[5000:6000]]\n\nNsplit = 2\nall_ks = range(1, 21)\n\nprint Xall.shape\nprint Xtrain.shape\nprint Xtest.shape",
"Let us plot the first five examples of the train data (first row) and test data (second row).",
"%matplotlib inline\nimport pylab as P\ndef plot_example(dat, lab):\n for i in xrange(5):\n ax=P.subplot(1,5,i+1)\n P.title(int(lab[i]))\n ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest')\n ax.set_xticks([])\n ax.set_yticks([])\n \n \n_=P.figure(figsize=(17,6))\nP.gray()\nplot_example(Xtrain, Ytrain)\n\n_=P.figure(figsize=(17,6))\nP.gray()\nplot_example(Xtest, Ytest)",
"Then we import shogun components and convert the data to shogun objects:",
"from modshogun import MulticlassLabels, RealFeatures\nfrom modshogun import KNN, EuclideanDistance\n\nlabels = MulticlassLabels(Ytrain)\nfeats = RealFeatures(Xtrain)\nk=3\ndist = EuclideanDistance()\nknn = KNN(k, dist, labels)\nlabels_test = MulticlassLabels(Ytest)\nfeats_test = RealFeatures(Xtest)\nknn.train(feats)\npred = knn.apply_multiclass(feats_test)\nprint \"Predictions\", pred[:5]\nprint \"Ground Truth\", Ytest[:5]\n\nfrom modshogun import MulticlassAccuracy\nevaluator = MulticlassAccuracy()\naccuracy = evaluator.evaluate(pred, labels_test)\n\nprint \"Accuracy = %2.2f%%\" % (100*accuracy)",
"Let's plot a few missclassified examples - I guess we all agree that these are notably harder to detect.",
"idx=np.where(pred != Ytest)[0]\nXbad=Xtest[:,idx]\nYbad=Ytest[idx]\n_=P.figure(figsize=(17,6))\nP.gray()\nplot_example(Xbad, Ybad)",
"Now the question is - is 97.30% accuracy the best we can do? While one would usually re-train KNN with different values for k here and likely perform Cross-validation, we just use a small trick here that saves us lots of computation time: When we have to determine the $K\\geq k$ nearest neighbors we will know the nearest neigbors for all $k=1...K$ and can thus get the predictions for multiple k's in one step:",
"knn.set_k(13)\nmultiple_k=knn.classify_for_multiple_k()\nprint multiple_k.shape",
"We have the prediction for each of the 13 k's now and can quickly compute the accuracies:",
"for k in xrange(13):\n print \"Accuracy for k=%d is %2.2f%%\" % (k+1, 100*np.mean(multiple_k[:,k]==Ytest))",
"So k=3 seems to have been the optimal choice.\nAccellerating KNN\nObviously applying KNN is very costly: for each prediction you have to compare the object against all training objects. While the implementation in SHOGUN will use all available CPU cores to parallelize this computation it might still be slow when you have big data sets. In SHOGUN, you can use Cover Trees to speed up the nearest neighbor searching process in KNN. Just call set_use_covertree on the KNN machine to enable or disable this feature. We also show the prediction time comparison with and without Cover Tree in this tutorial. So let's just have a comparison utilizing the data above:",
"from modshogun import Time, KNN_COVER_TREE, KNN_BRUTE\nstart = Time.get_curtime()\nknn.set_k(3)\nknn.set_knn_solver_type(KNN_BRUTE)\npred = knn.apply_multiclass(feats_test)\nprint \"Standard KNN took %2.1fs\" % (Time.get_curtime() - start)\n\n\nstart = Time.get_curtime()\nknn.set_k(3)\nknn.set_knn_solver_type(KNN_COVER_TREE)\npred = knn.apply_multiclass(feats_test)\nprint \"Covertree KNN took %2.1fs\" % (Time.get_curtime() - start)\n",
"So we can significantly speed it up. Let's do a more systematic comparison. For that a helper function is defined to run the evaluation for KNN:",
"def evaluate(labels, feats, use_cover_tree=False):\n from modshogun import MulticlassAccuracy, CrossValidationSplitting\n import time\n split = CrossValidationSplitting(labels, Nsplit)\n split.build_subsets()\n \n accuracy = np.zeros((Nsplit, len(all_ks)))\n acc_train = np.zeros(accuracy.shape)\n time_test = np.zeros(accuracy.shape)\n for i in range(Nsplit):\n idx_train = split.generate_subset_inverse(i)\n idx_test = split.generate_subset_indices(i)\n\n for j, k in enumerate(all_ks):\n #print \"Round %d for k=%d...\" % (i, k)\n\n feats.add_subset(idx_train)\n labels.add_subset(idx_train)\n\n dist = EuclideanDistance(feats, feats)\n knn = KNN(k, dist, labels)\n knn.set_store_model_features(True)\n if use_cover_tree:\n knn.set_knn_solver_type(KNN_COVER_TREE)\n else:\n knn.set_knn_solver_type(KNN_BRUTE)\n knn.train()\n\n evaluator = MulticlassAccuracy()\n pred = knn.apply_multiclass()\n acc_train[i, j] = evaluator.evaluate(pred, labels)\n\n feats.remove_subset()\n labels.remove_subset()\n feats.add_subset(idx_test)\n labels.add_subset(idx_test)\n\n t_start = time.clock()\n pred = knn.apply_multiclass(feats)\n time_test[i, j] = (time.clock() - t_start) / labels.get_num_labels()\n\n accuracy[i, j] = evaluator.evaluate(pred, labels)\n\n feats.remove_subset()\n labels.remove_subset()\n return {'eout': accuracy, 'ein': acc_train, 'time': time_test}",
"Evaluate KNN with and without Cover Tree. This takes a few seconds:",
"labels = MulticlassLabels(Ytest)\nfeats = RealFeatures(Xtest)\nprint(\"Evaluating KNN...\")\nwo_ct = evaluate(labels, feats, use_cover_tree=False)\nwi_ct = evaluate(labels, feats, use_cover_tree=True)\nprint(\"Done!\")",
"Generate plots with the data collected in the evaluation:",
"import matplotlib\n\nfig = P.figure(figsize=(8,5))\nP.plot(all_ks, wo_ct['eout'].mean(axis=0), 'r-*')\nP.plot(all_ks, wo_ct['ein'].mean(axis=0), 'r--*')\nP.legend([\"Test Accuracy\", \"Training Accuracy\"])\nP.xlabel('K')\nP.ylabel('Accuracy')\nP.title('KNN Accuracy')\nP.tight_layout()\n\nfig = P.figure(figsize=(8,5))\nP.plot(all_ks, wo_ct['time'].mean(axis=0), 'r-*')\nP.plot(all_ks, wi_ct['time'].mean(axis=0), 'b-d')\nP.xlabel(\"K\")\nP.ylabel(\"time\")\nP.title('KNN time')\nP.legend([\"Plain KNN\", \"CoverTree KNN\"], loc='center right')\nP.tight_layout()",
"Although simple and elegant, KNN is generally very resource costly. Because all the training samples are to be memorized literally, the memory cost of KNN learning becomes prohibitive when the dataset is huge. Even when the memory is big enough to hold all the data, the prediction will be slow, since the distances between the query point and all the training points need to be computed and ranked. The situation becomes worse if in addition the data samples are all very high-dimensional. Leaving aside computation time issues, k-NN is a very versatile and competitive algorithm. It can be applied to any kind of objects (not just numerical data) - as long as one can design a suitable distance function. In pratice k-NN used with bagging can create improved and more robust results.\nComparison to Multiclass Support Vector Machines\nIn contrast to KNN - multiclass Support Vector Machines (SVMs) attempt to model the decision function separating each class from one another. They compare examples utilizing similarity measures (so called Kernels) instead of distances like KNN does. When applied, they are in Big-O notation computationally as expensive as KNN but involve another (costly) training step. They do not scale very well to cases with a huge number of classes but usually lead to favorable results when applied to small number of classes cases. So for reference let us compare how a standard multiclass SVM performs wrt. KNN on the mnist data set from above.\nLet us first train a multiclass svm using a Gaussian kernel (kind of the SVM equivalent to the euclidean distance).",
"from modshogun import GaussianKernel, GMNPSVM\n\nwidth=80\nC=1\n\ngk=GaussianKernel()\ngk.set_width(width)\n\nsvm=GMNPSVM(C, gk, labels)\n_=svm.train(feats)",
"Let's apply the SVM to the same test data set to compare results:",
"out=svm.apply(feats_test)\nevaluator = MulticlassAccuracy()\naccuracy = evaluator.evaluate(out, labels_test)\n\nprint \"Accuracy = %2.2f%%\" % (100*accuracy)",
"Since the SVM performs way better on this task - let's apply it to all data we did not use in training.",
"Xrem=Xall[:,subset[6000:]]\nYrem=Yall[subset[6000:]]\n\nfeats_rem=RealFeatures(Xrem)\nlabels_rem=MulticlassLabels(Yrem)\nout=svm.apply(feats_rem)\n\nevaluator = MulticlassAccuracy()\naccuracy = evaluator.evaluate(out, labels_rem)\n\nprint \"Accuracy = %2.2f%%\" % (100*accuracy)\n\nidx=np.where(out.get_labels() != Yrem)[0]\nXbad=Xrem[:,idx]\nYbad=Yrem[idx]\n_=P.figure(figsize=(17,6))\nP.gray()\nplot_example(Xbad, Ybad)",
"The misclassified examples are indeed much harder to label even for human beings."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
chengsoonong/crowdastro | notebooks/56_nonlinear_astro_features.ipynb | mit | [
"Nonlinear Astro Features\nThis notebook examines whether $w_1 - w_2$ and $w_2 - w_3$ are good features. There are indications that these may be correlated with whether galaxies contain AGNs. It also looks at whether the fluxes are more useful than the magnitudes, i.e., should we exponentiate the magnitudes.",
"import h5py, numpy, sklearn.linear_model, sklearn.cross_validation, sklearn.metrics\n\nwith h5py.File('../data/training.h5') as f:\n raw_astro_features = f['features'][:, :4]\n dist_features = f['features'][:, 4]\n image_features = f['features'][:, 5:]\n \n w1_w2 = raw_astro_features[:, 0] - raw_astro_features[:, 1]\n w2_w3 = raw_astro_features[:, 1] - raw_astro_features[:, 2]\n \n features_linear = f['features'][:]\n features_nonlinear = numpy.hstack([\n raw_astro_features,\n dist_features.reshape((-1, 1)),\n w1_w2.reshape((-1, 1)),\n w2_w3.reshape((-1, 1)),\n image_features,\n ])\n features_exp = numpy.hstack([\n numpy.power(10, -0.4 * raw_astro_features),\n dist_features.reshape((-1, 1)),\n image_features,\n ])\n features_nlexp = numpy.hstack([\n numpy.power(10, -0.4 * raw_astro_features),\n numpy.power(10, -0.4 * w1_w2.reshape((-1, 1))),\n numpy.power(10, -0.4 * w2_w3.reshape((-1, 1))),\n dist_features.reshape((-1, 1)),\n image_features,\n ])\n labels = f['labels'].value\n\nx_train, x_test, t_train, t_test = sklearn.cross_validation.train_test_split(\n numpy.arange(raw_astro_features.shape[0]), labels, test_size=0.2)\n\nlr = sklearn.linear_model.LogisticRegression(C=100.0, class_weight='balanced')\nlr.fit(features_linear[x_train], t_train)\ncm = sklearn.metrics.confusion_matrix(t_test, lr.predict(features_linear[x_test]))\ntp = cm[1, 1]\nn, p = cm.sum(axis=1)\ntn = cm[0, 0]\nba = (tp / p + tn / n) / 2\nprint('Linear features, balanced accuracy: {:.02%}'.format(ba))\nprint(cm)\n\nlrnl = sklearn.linear_model.LogisticRegression(C=100.0, class_weight='balanced')\nlrnl.fit(features_nonlinear[x_train], t_train)\ncm = sklearn.metrics.confusion_matrix(t_test, lrnl.predict(features_nonlinear[x_test]))\ntp = cm[1, 1]\nn, p = cm.sum(axis=1)\ntn = cm[0, 0]\nba = (tp / p + tn / n) / 2\nprint('Nonlinear features, balanced accuracy: {:.02%}'.format(ba))\nprint(cm)",
"So maybe they're useful features (but not very). What about the fact they're magnitudes?",
"lrexp = sklearn.linear_model.LogisticRegression(C=100.0, class_weight='balanced')\nlrexp.fit(features_exp[x_train], t_train)\ncm = sklearn.metrics.confusion_matrix(t_test, lrexp.predict(features_exp[x_test]))\ntp = cm[1, 1]\nn, p = cm.sum(axis=1)\ntn = cm[0, 0]\nba = (tp / p + tn / n) / 2\nprint('Exponentiated features, balanced accuracy: {:.02%}'.format(ba))\nprint(cm)\n\nlrnlexp = sklearn.linear_model.LogisticRegression(C=100.0, class_weight='balanced')\nlrnlexp.fit(features_nlexp[x_train], t_train)\ncm = sklearn.metrics.confusion_matrix(t_test, lrnlexp.predict(features_nlexp[x_test]))\ntp = cm[1, 1]\nn, p = cm.sum(axis=1)\ntn = cm[0, 0]\nba = (tp / p + tn / n) / 2\nprint('Exponentiated features, balanced accuracy: {:.02%}'.format(ba))\nprint(cm)",
"Those are promising results, but we need to rererun this a few times with different training and testing sets to get some error bars.",
"def balanced_accuracy(lr, x_test, t_test):\n cm = sklearn.metrics.confusion_matrix(t_test, lr.predict(x_test))\n tp = cm[1, 1]\n n, p = cm.sum(axis=1)\n tn = cm[0, 0]\n ba = (tp / p + tn / n) / 2\n return ba\n\ndef test_feature_set(features, x_train, t_train, x_test, t_test):\n lr = sklearn.linear_model.LogisticRegression(C=100.0, class_weight='balanced')\n lr.fit(features[x_train], t_train)\n return balanced_accuracy(lr, features[x_test], t_test)\n\nlinear_ba = []\nnonlinear_ba = []\nexp_ba = []\nnonlinear_exp_ba = []\n\nn_trials = 10\nfor trial in range(n_trials):\n print('Trial {}/{}'.format(trial + 1, n_trials))\n x_train, x_test, t_train, t_test = sklearn.cross_validation.train_test_split(\n numpy.arange(raw_astro_features.shape[0]), labels, test_size=0.2)\n linear_ba.append(test_feature_set(features_linear, x_train, t_train, x_test, t_test))\n nonlinear_ba.append(test_feature_set(features_nonlinear, x_train, t_train, x_test, t_test))\n exp_ba.append(test_feature_set(features_exp, x_train, t_train, x_test, t_test))\n nonlinear_exp_ba.append(test_feature_set(features_nlexp, x_train, t_train, x_test, t_test))\n\nprint('Linear features: ({:.02f} +- {:.02f})%'.format(\n numpy.mean(linear_ba) * 100, numpy.std(linear_ba) * 100))\nprint('Nonlinear features: ({:.02f} +- {:.02f})%'.format(\n numpy.mean(nonlinear_ba) * 100, numpy.std(nonlinear_ba) * 100))\nprint('Exponentiated features: ({:.02f} +- {:.02f})%'.format(\n numpy.mean(exp_ba) * 100, numpy.std(exp_ba) * 100))\nprint('Exponentiated nonlinear features: ({:.02f} +- {:.02f})%'.format(\n numpy.mean(nonlinear_exp_ba) * 100, numpy.std(nonlinear_exp_ba) * 100))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
wzxiong/DAVIS-Machine-Learning | labs/lab1-soln.ipynb | mit | [
"Lab 1: Nearest Neighbor Regression and Overfitting\nThis is based on the notebook file 01 in Aurélien Geron's github page",
"# Import the necessary packages\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import LeaveOneOut\nfrom sklearn import linear_model, neighbors\n%matplotlib inline\nplt.style.use('ggplot')\n\n# Where to save the figures\nPROJECT_ROOT_DIR = \"..\"\ndatapath = PROJECT_ROOT_DIR + \"/data/lifesat/\"",
"Load and prepare data",
"# Download CSV from http://stats.oecd.org/index.aspx?DataSetCode=BLI\noecd_bli = pd.read_csv(datapath+\"oecd_bli_2015.csv\", thousands=',')\noecd_bli = oecd_bli[oecd_bli[\"INEQUALITY\"]==\"TOT\"]\noecd_bli = oecd_bli.pivot(index=\"Country\", columns=\"Indicator\", values=\"Value\")\n\noecd_bli.columns\n\noecd_bli[\"Life satisfaction\"].head()\n\n# Load and prepare GDP per capita data\n\n# Download data from http://goo.gl/j1MSKe (=> imf.org)\ngdp_per_capita = pd.read_csv(datapath+\"gdp_per_capita.csv\", thousands=',', delimiter='\\t',\n encoding='latin1', na_values=\"n/a\")\ngdp_per_capita.rename(columns={\"2015\": \"GDP per capita\"}, inplace=True)\ngdp_per_capita.set_index(\"Country\", inplace=True)\n\nfull_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita, left_index=True, right_index=True)\nfull_country_stats.sort_values(by=\"GDP per capita\", inplace=\"True\")\n\n_ = full_country_stats.plot(\"GDP per capita\",'Life satisfaction',kind='scatter')",
"Here's the full dataset, and there are other columns. I will subselect a few of them by hand.",
"xvars = ['Self-reported health','Water quality','Quality of support network','GDP per capita']\n\nX = np.array(full_country_stats[xvars])\ny = np.array(full_country_stats['Life satisfaction'])",
"I will define the following functions to expedite the LOO risk and the Empirical risk.",
"def loo_risk(X,y,regmod):\n \"\"\"\n Construct the leave-one-out square error risk for a regression model\n \n Input: design matrix, X, response vector, y, a regression model, regmod\n Output: scalar LOO risk\n \"\"\"\n loo = LeaveOneOut()\n loo_losses = []\n for train_index, test_index in loo.split(X):\n X_train, X_test = X[train_index], X[test_index]\n y_train, y_test = y[train_index], y[test_index]\n regmod.fit(X_train,y_train)\n y_hat = regmod.predict(X_test)\n loss = np.sum((y_hat - y_test)**2)\n loo_losses.append(loss)\n return np.mean(loo_losses)\n\ndef emp_risk(X,y,regmod):\n \"\"\"\n Return the empirical risk for square error loss\n \n Input: design matrix, X, response vector, y, a regression model, regmod\n Output: scalar empirical risk\n \"\"\"\n regmod.fit(X,y)\n y_hat = regmod.predict(X)\n return np.mean((y_hat - y)**2)\n\nlin1 = linear_model.LinearRegression(fit_intercept=False)\nprint('LOO Risk: '+ str(loo_risk(X,y,lin1)))\nprint('Emp Risk: ' + str(emp_risk(X,y,lin1)))",
"As you can see, the empirical risk is much less than the leave-one-out risk! This can happen in more dimensions.\nNearest neighbor regression\nUse the method described here: http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html\nI have already imported the necessary module, so you just need to use the regression object (like we used LinearRegression)",
"# knn = neighbors.KNeighborsRegressor(n_neighbors=5)",
"Exercise 1 For each k from 1 to 30 compute the nearest neighbors empirical risk and LOO risk. Plot these as a function of k and reflect on the bias-variance tradeoff here. (Hint: use the previously defined functions)",
"LOOs = []\nMSEs = []\nK=30\nKs = range(1,K+1)\nfor k in Ks:\n knn = neighbors.KNeighborsRegressor(n_neighbors=k)\n LOOs.append(loo_risk(X,y,knn))\n MSEs.append(emp_risk(X,y,knn))\n\nplt.plot(Ks,LOOs,'r',label=\"LOO risk\")\nplt.title(\"Risks for kNN Regression\")\nplt.plot(Ks,MSEs,'b',label=\"Emp risk\")\nplt.legend()\n_ = plt.xlabel('k')",
"I decided to see what the performance is for k from 1 to 30. We see that the bias does not dominate until k exceeds 17, the performance is somewhat better for k around 12. This demonstrates that you can't trust the Empirical risk, since it includes the training sample. We can compare this LOO risk to that of linear regression (0.348) and see that it outperforms linear regression.\nExercise 2 Do the same but for the reduced predictor variables below...",
"X1 = np.array(full_country_stats[['Self-reported health']])\n\nLOOs = []\nMSEs = []\nK=30\nKs = range(1,K+1)\nfor k in Ks:\n knn = neighbors.KNeighborsRegressor(n_neighbors=k)\n LOOs.append(loo_risk(X1,y,knn))\n MSEs.append(emp_risk(X1,y,knn))\n\nplt.plot(Ks,LOOs,'r',label=\"LOO risk\")\nplt.title(\"Risks for kNN Regression\")\nplt.plot(Ks,MSEs,'b',label=\"Emp risk\")\nplt.legend()\n_ = plt.xlabel('k')",
"For this one dimensional nearest neighbor method, we have a more significant cost of overfitting (k small) when the variance dominates."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jakerylandwilliams/partitioner | tests/partitioner_examples.ipynb | apache-2.0 | [
"Partitioner examples\nThis is a jupyter notebook with a few vignettes that present some of the Python partitioner package's functionality.\nNote: Cleaning of text and determination of clauses occurs in the partitionText method. Because of this, it is unwise to pass large, uncleaned pieces of text as 'clauses' directly through the .partition() method (regardless of the type of partition being taken), as this will simply tokenize the text by splitting on \" \", producing many long, punctuation-filled phrases, and likely run very slow. As such, best practices only use .partition() for testing and exploring the tool on case-interested clauses.",
"from partitioner import partitioner\nfrom partitioner.methods import *",
"Process the English Wiktionary to generate the (default) partition probabilities.\nNote: this step can take significant time for large dictionaries (~5 min).",
"## Vignette 1: Build informed partition data from a dictionary, \n## and store to local collection\ndef preprocessENwiktionary():\n pa = partitioner(informed = True, dictionary = \"./dictionaries/enwiktionary.txt\")\n pa.dumpqs(qsname=\"enwiktionary\")\n\npreprocessENwiktionary()",
"Perform a few one-off partitions.",
"## Vignette 2: An informed, one-off partition of a single clause\ndef informedOneOffPartition(clause = \"How are you doing today?\"):\n pa = oneoff()\n print pa.partition(clause)\n\ninformedOneOffPartition()\ninformedOneOffPartition(\"Fine, thanks a bunch for asking!\")",
"Solve for the informed stochastic expectation partition (given the informed partition probabilities).",
"## Vignette 3: An informed, stochastic expectation partition of a single clause\ndef informedStochasticPartition(clause = \"How are you doing today?\"):\n pa = stochastic()\n print pa.partition(clause)\n\ninformedStochasticPartition()",
"Perform a pure random (uniform) one-off partition.",
"## Vignette 4: An uniform, one-off partition of a single clause\ndef uniformOneOffPartition(informed = False, clause = \"How are you doing today?\", qunif = 0.25):\n pa = oneoff(informed = informed, qunif = qunif)\n print pa.partition(clause)\n\nuniformOneOffPartition()\nuniformOneOffPartition(qunif = 0.75)",
"Solve for the uniform stochastic expectation partition (given the uniform partition probabilities).",
"## Vignette 5: An uniform, stochastic expectation partition of a single clause\ndef uniformStochasticPartition(informed = False, clause = \"How are you doing today?\", qunif = 0.25):\n pa = stochastic(informed = informed, qunif = qunif)\n print pa.partition(clause)\n\nuniformStochasticPartition()\nuniformStochasticPartition(clause = \"Fine, thanks a bunch for asking!\")",
"Build a rank-frequency distribution for a text and determine its Zipf/Simon (bag-of-phrase) $R^2$.",
"## Vignette 6: Use the default partitioning method to partition the main partitioner.py file and compute rsq\ndef testPartitionTextAndFit():\n pa = oneoff()\n pa.partitionText(textfile = pa.home+\"/../README.md\")\n pa.testFit()\n print \"R-squared: \",round(pa.rsq,2)\n print\n phrases = sorted(pa.counts, key = lambda x: pa.counts[x], reverse = True)\n for j in range(25):\n phrase = phrases[j]\n print phrase, pa.counts[phrase]\n\ntestPartitionTextAndFit()",
"Process the some other Wiktionaries to generate the partition probabilities.\nNote: These dictionaries are not as well curated and potentially contain phrases from other languages (a consequence of wiktionary construction). As a result, they hold many many more phrases and will take longer to process. However, since the vast majority of these dictionaries are language-correct, effects on the partitioner and its (course) partition probabilities is likely negligable.",
"## Vignette X1: Build informed partition data from other dictionaries, \n## and store to local collection\ndef preprocessOtherWiktionaries():\n for lang in [\"ru\", \"pt\", \"pl\", \"nl\", \"it\", \"fr\", \"fi\", \"es\", \"el\", \"de\", \"en\"]:\n print \"working on \"+lang+\"...\"\n pa = partitioner(informed = True, dictionary = \"./dictionaries/\"+lang+\".txt\")\n pa.dumpqs(qsname=lang)\n\npreprocessOtherWiktionaries()",
"Test partitioner on some other languages.",
"from partitioner import partitioner\nfrom partitioner.methods import *\n## Vignette X2: Use the default partitioning method to partition the main partitioner.py file and compute rsq\ndef testFrPartitionTextAndFit():\n for lang in [\"ru\", \"pt\", \"pl\", \"nl\", \"it\", \"fr\", \"fi\", \"es\", \"el\", \"de\", \"en\"]:\n pa = oneoff(qsname = lang)\n pa.partitionText(textfile = \"./tests/test_\"+lang+\".txt\")\n pa.testFit()\n print\n print lang+\" R-squared: \",round(pa.rsq,2)\n print\n phrases = sorted(pa.counts, key = lambda x: pa.counts[x], reverse = True)\n for j in range(5):\n phrase = phrases[j]\n print phrase, pa.counts[phrase]\n\ntestFrPartitionTextAndFit()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ekaakurniawan/3nb | Equations/Dose-Response Relations/Dose-Response Relations.ipynb | gpl-3.0 | [
"Part of Neural Network Notebook (3nb) project.\n\nCopyright (C) 2014 Eka A. Kurniawan\neka.a.kurniawan(ta)gmail(tod)com\nThis program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.\nThis program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.\n\nTested On",
"import sys\nprint(\"Python %d.%d.%d\" % (sys.version_info.major, \\\n sys.version_info.minor, \\\n sys.version_info.micro))\n\nimport numpy as np\nprint(\"NumPy %s\" % np.__version__)\n\n# Display graph inline\n%matplotlib inline\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nprint(\"matplotlib %s\" % matplotlib.__version__)",
"Display Settings",
"# Display graph in 'retina' format for Mac with retina display. Others, use PNG or SVG format.\n%config InlineBackend.figure_format = 'retina'\n#%config InlineBackend.figure_format = 'PNG'\n#%config InlineBackend.figure_format = 'SVG'",
"Housekeeping Functions\nFollowing function plots different concentrations used to visualize dose-response relation.",
"def plot_concentration(c):\n fig = plt.figure()\n \n sp111 = fig.add_subplot(111)\n # Display grid\n sp111.grid(True, which = 'both')\n # Plot concentration\n len_c = len(c)\n sp111.plot(np.linspace(0,len_c-1,len_c), c, color = 'gray', linewidth = 2)\n # Label\n sp111.set_ylabel('Concentration (nM)')\n \n # Set X axis within different concentration\n plt.xlim([0, len_c-1])\n \n plt.show()",
"Following function plots dose-response relation in both linear (log_flag = False) and logarithmic (log_flag = True) along X axis.",
"# d : Dose\n# r1 : First response data\n# r1_label : First response label\n# r2 : Second response data\n# r2_label : Second response label\n# log_flag : Selection for linear or logarithmic along X axis\n# - False: Plot linear (default)\n# - True: Plot logarithmic\ndef plot_dose_response_relation(d, r1, r1_label, r2 = None, r2_label = \"\", log_flag = False):\n fig = plt.figure()\n \n sp111 = fig.add_subplot(111)\n # Handle logarithmic along X axis\n if log_flag:\n sp111.set_xscale('log')\n # Display grid\n sp111.yaxis.set_ticks([0.0, 0.5, 1.0])\n sp111.grid(True, which = 'both')\n # Plot dose-response\n sp111.plot(d, r1, color = 'blue', label = r1_label, linewidth = 2)\n if r2 is not None:\n sp111.plot(d, r2, color = 'red', label = r2_label, linewidth = 2)\n # Labels\n sp111.set_ylabel('Response')\n sp111.set_xlabel('Concentration (nM)')\n # Legend\n sp111.legend(loc='upper left')\n \n # Set Y axis in between 0 and 1\n plt.ylim([0, 1])\n \n plt.show()",
"Dose-Response Relations $^{[1]}$\nGenerally, dose-response relations can be written as follow. In which the dose is represented as concentration ($c$), while the formula returns the response ($r$).\n$$r = \\frac{F.c^{n_H}}{{c^{n_H} + EC_{50}^{n_H}}}$$\nOther terms like $EC_{50}$ is the effective concentration achieved at 50% of maximum response. Normally, efficacy ($F$) is normalized to one so that it is easier to make comparison among different drugs. Furthermore, if full agonist is defined to have efficacy equal to one, anything lower than one is treated to be partial agonist. Finally, Hill coefficients ($n_H$) defines the number of drug molecules needed to activate target receptor.\nDrug Concentartion\nBoth linearly and logarithmically increased concentrations are used to study dose-response relations.\n\nLinearly increased concentration (c_lin):",
"c_lin = np.linspace(0,100,101) # Drug concentration in nanomolar (nM)\nplot_concentration(c_lin)",
"Logarithmically increased concentration (c_log):",
"c_log = np.logspace(0,5,101) # Drug concentration in nanomolar (nM)\nplot_concentration(c_log)",
"Agonist Only\nTo calculate dose-response relation in the case of agonist only, we use general dose-response relation equation described previously. The function is shown below.",
"# Calculate dose-response relation (DRR) for agonist only\n# c : Drug concentration(s) in nanomolar (nM)\n# EC_50 : 50% effective concentration in nanomolar (nM)\n# F : Efficacy (unitless)\n# n_H : Hill coefficients (unitless)\ndef calc_drr(c, EC_50 = 20, F = 1, n_H = 1):\n r = (F * (c ** n_H) / ((c ** n_H) + (EC_50 ** n_H)))\n return r",
"Following result shows drug response of agonist only to the linearly increased concentrations.",
"c = c_lin # Drug concentration(s) in nanomolar (nM)\nEC_50 = 20 # 50% effective concentration in nanomolar (nM)\nF = 1 # Efficacy (unitless)\nn_H = 1 # Hill coefficients (unitless)\nr = calc_drr(c, EC_50, F, n_H)\nplot_dose_response_relation(c, r, \"Agonist\")",
"Following result shows drug response of agonist only to the logarithmically increased concentrations.",
"c = c_log # Drug concentration(s) in nanomolar (nM)\nEC_50 = 20 # 50% effective concentration in nanomolar (nM)\nF = 1 # Efficacy (unitless)\nn_H = 1 # Hill coefficients (unitless)\nr = calc_drr(c, EC_50, F, n_H)\nplot_dose_response_relation(c, r, \"Agonist\", log_flag = True)",
"Agonist Plus Competitive Antagonist\nCompatitive antagonist, as the name sugest, competes with agonist molecules to sit in the same pocket. It makes the binding harder for agonist as well as to trigger the activation. Therefore, higher agonist concentration is required to reach both full and partial (like $EC_{50}$) activation. New $EC_{50}$ value, called $EC_{50}'$ ($EC_{50}$ prime) is calculated using following formula.\n$$EC_{50}' = EC_{50} * \\left(1 + \\frac{c_i}{K_i}\\right)$$\nIt depends on inhibitor concentration ($c_i$) and dissociation constant of the inhibitor ($K_i$).\nFollowing is a new function to calculate drug response of agonist with competitive antagonist. It shows new $EC_{50}$ value (EC_50_prime) replacing agonist only $EC_{50}$ value (EC_50).",
"# Calculate dose-response relation (DRR) for agonist plus competitive antagonist\n# - Agonist\n# c : Drug concentration(s) in nanomolar (nM)\n# EC_50 : 50% effective concentration in nanomolar (nM)\n# F : Efficacy (unitless)\n# n_H : Hill coefficients (unitless)\n# - Antagonist\n# K_i : Dissociation constant of inhibitor in nanomolar (nM)\n# c_i : Inhibitor concentration in nanomolar (nM)\ndef calc_drr_agonist_cptv_antagonist(c, EC_50 = 20, F = 1, n_H = 1, K_i = 5, c_i = 25):\n EC_50_prime = EC_50 * (1 + (c_i / K_i))\n r = calc_drr(c, EC_50_prime, F, n_H)\n return r",
"Following result shows drug response of agonist with competitive antagonist to the linearly increased concentrations.",
"c = c_lin # Drug concentration(s) in nanomolar (nM)\nEC_50 = 20 # 50% effective concentration in nanomolar (nM)\nF = 1 # Efficacy (unitless)\nn_H = 1 # Hill coefficients (unitless)\nr_a = calc_drr(c, EC_50, F, n_H)\n\nK_i = 5 # Dissociation constant of inhibitor in nanomolar (nM)\nc_i = 25 # Inhibitor concentration in nanomolar (nM)\nr_aca = calc_drr_agonist_cptv_antagonist(c, EC_50, F, n_H, K_i, c_i)\n\nplot_dose_response_relation(c, r_a, \"Agonist Only\", r_aca, \"Plus Antagonist\")",
"Following result shows drug response of agonist with competitive antagonist to the logarithmically increased concentrations.",
"c = c_log # Drug concentration(s) in nanomolar (nM)\nEC_50 = 20 # 50% effective concentration in nanomolar (nM)\nF = 1 # Efficacy (unitless)\nn_H = 1 # Hill coefficients (unitless)\nr_a = calc_drr(c, EC_50, F, n_H)\n\nK_i = 5 # Dissociation constant of inhibitor in nanomolar (nM)\nc_i = 25 # Inhibitor concentration in nanomolar (nM)\nr_aca = calc_drr_agonist_cptv_antagonist(c, EC_50, F, n_H, K_i, c_i)\n\nplot_dose_response_relation(c, r_a, \"Agonist Only\", r_aca, \"Plus Antagonist\", log_flag = True)",
"Agonist Plus Noncompetitive Antagonist\nUnlike competitive antagonist, noncompetitive antagonist does not compete directly to the location where agonist binds but somewhere else in the subsequent pathway. Instead of altering effective concentration (like $EC_{50}$), noncompetitive antagonist affects efficacy. New efficacy value ($F'$) due to the existance of noncompetitive antagonist is calculated as follow.\n$$F' = \\frac{F}{\\left(1 + \\frac{c_i}{K_i}\\right)}$$\nFollowing is a new function to calculate drug response of agonist with noncompetitive antagonist. It shows new efficacy value (F_prime) replacing agonist only efficacy value (F).",
"# Calculate dose-response relation (DRR) for agonist plus noncompetitive antagonist\n# - Agonist\n# c : Drug concentration(s) in nanomolar (nM)\n# EC_50 : 50% effective concentration in nanomolar (nM)\n# F : Efficacy (unitless)\n# n_H : Hill coefficients (unitless)\n# - Antagonist\n# K_i : Dissociation constant of inhibitor in nanomolar (nM)\n# c_i : Inhibitor concentration in nanomolar (nM)\ndef calc_drr_agonist_non_cptv_antagonist(c, EC_50 = 20, F = 1, n_H = 1, K_i = 5, c_i = 25):\n F_prime = F / (1 + (c_i / K_i))\n r = calc_drr(c, EC_50, F_prime, n_H)\n return r",
"Following result shows drug response of agonist with noncompetitive antagonist to the linearly increased concentrations.",
"c = c_lin # Drug concentration(s) in nanomolar (nM)\nEC_50 = 20 # 50% effective concentration in nanomolar (nM)\nF = 1 # Efficacy (unitless)\nn_H = 1 # Hill coefficients (unitless)\nr_a = calc_drr(c, EC_50, F, n_H)\n\nK_i = 5 # Dissociation constant of inhibitor in nanomolar (nM)\nc_i = 25 # Inhibitor concentration in nanomolar (nM)\nr_ana = calc_drr_agonist_non_cptv_antagonist(c, EC_50, F, n_H, K_i, c_i)\n\nplot_dose_response_relation(c, r_a, \"Agonist Only\", r_ana, \"Plus Antagonist\")",
"Following result shows drug response of agonist with noncompetitive antagonist to the logarithmically increased concentrations.",
"c = c_log # Drug concentration(s) in nanomolar (nM)\nEC_50 = 20 # 50% effective concentration in nanomolar (nM)\nF = 1 # Efficacy (unitless)\nn_H = 1 # Hill coefficients (unitless)\nr_a = calc_drr(c, EC_50, F, n_H)\n\nK_i = 5 # Dissociation constant of inhibitor in nanomolar (nM)\nc_i = 25 # Inhibitor concentration in nanomolar (nM)\nr_ana = calc_drr_agonist_non_cptv_antagonist(c, EC_50, F, n_H, K_i, c_i)\n\nplot_dose_response_relation(c, r_a, \"Agonist Only\", r_ana, \"Plus Antagonist\", log_flag = True)",
"Reference\n\nHenry A. Lester, 2014. Drugs and the Brain. Week 2: Dose-response Relations. California Institute of Technology. Coursera. https://www.coursera.org/course/drugsandbrain"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
peterfig/keras-deep-learning-course | .ipynb_checkpoints/TextModels-checkpoint.ipynb | mit | [
"Text Models with Keras\nDense vector word embeddings\nA dense vector word embedding means we represent words with number-full numerical vectors-- most components are nonzero. This is in contrast to sparse vector, or bag-of-word embeddings, which have very high-dimensional vectors (the size of the vocabulary) yet with most components zero.\nDense vector models also capture word meaning, such that similar words (car and automobile) have similar numerical vectors. In a sparse vector representation, similar words probably have completely different numerical vectors. Dense vectors are formed as a by-product of some prediction task. The quality of the embedding depends on both the prediction task and the data set upon which the prediction task was trained. \nWhen we use word embeddings in our deep learning models, we refer to their birthplace as the embedding layer. Sometimes, we don't actually care about the trained predictor (skip-gram and cbow models); we're just interested in the embeddings by-product for use elsewhere. Other times, we need an embedding layer to represent words in a larger model such as a sentiment classifier; there, we may opt for pre-trained dense vectors. \nWhen we don't care about the trained model and just want to create meaningful, dense word vectors, there are two popular prediction models: skip-gram and CBOW (continuous bag of words). Word embeddings constructed in this manner are termed word2vec or w2v. We will also look at another more recent method, fastText. In any case, we've first got to construct training data from our corpus. The exact procedure depends on the model.\nKeras Models\nLet's have a look at the Keras models we'll use in this section. (I'm keeping the code as markup since we haven't defined any of the parameters yet. We'll run this code after we develop input data and parameters.)",
"from IPython.display import Image",
"skip-gram",
" Image('diagrams/skip-gram.png')",
"```python\nword1 = Input(shape=(1,), dtype='int64', name='word1')\nword2 = Input(shape=(1,), dtype='int64', name='word2')\nshared_embedding = Embedding(\n input_dim=VOCAB_SIZE+1, \n output_dim=DENSEVEC_DIM, \n input_length=1, \n embeddings_constraint = unit_norm(),\n name='shared_embedding')\nembedded_w1 = shared_embedding(word1)\nembedded_w2 = shared_embedding(word2)\nw1 = Flatten()(embedded_w1)\nw2 = Flatten()(embedded_w2)\ndotted = Dot(axes=1, name='dot_product')([w1, w2])\nprediction = Dense(1, activation='sigmoid', name='output_layer')(dotted)\nsg_model = Model(inputs=[word1, word2], outputs=prediction)\n```\nfastText\n```python\nft_model = Sequential()\nft_model.add(Embedding(\n input_dim = MAX_FEATURES,\n output_dim = EMBEDDING_DIMS,\n input_length= MAXLEN))\nft_model.add(GlobalAveragePooling1D())\nft_model.add(Dense(1, activation='sigmoid'))\n```\nModels and Training data construction\nThe first step for CBOW and skip-gram\nOur training corpus is a collection of sentences, Tweets, emails, comments, or even longer documents. It is something composed of words. Each word takes is turn being the \"target\" word, and we collect the n words behind it and n words which follow it. This n is referred to as window size. If our example document is the sentence \"I love deep learning\" and the window size is 1, we'd get:\n * I, love\n * I, love, deep\n * love, deep, learning\n * deep, learning\nThe target word is bold.\nSkip-gram model training data\nSkip-gram means form word pairs with a target word and all words in the window. These become the \"positive\" (1) samples for the skip-gram algorithm. In our \"I love deep learning\" example we'd get (eliminating repeated pairs):\n\n(I, love) = 1\n(love, deep) = 1\n(deep, learning) = 1\n\nTo create negative samples (0), we pair random vocabulary words with the target word. Yes, it's possible to unluckily pick a negative sample that usually appears around the target word.\nFor our prediction task, we'll take the dot product of the words in each pair (a small step away from the cosine similarity). The training will keep tweaking the word vectors to make this product as close to unity as possible for our positive samples, and zero for our negative samples.\nHappily, Keras include a function for creating skipgrams from text. It even does the negative sampling for us.",
"from keras.preprocessing.sequence import skipgrams\nfrom keras.preprocessing.text import Tokenizer, text_to_word_sequence\n\ntext1 = \"I love deep learning.\"\ntext2 = \"Read Douglas Adams as much as possible.\"\n\ntokenizer = Tokenizer()\ntokenizer.fit_on_texts([text1, text2])\n\nword2id = tokenizer.word_index\nword2id.items()",
"Note word id's are numbered from 1, not zero",
"id2word = { wordid: word for word, wordid in word2id.items()}\nid2word\n\nencoded_text = [word2id[word] for word in text_to_word_sequence(text1)]\nencoded_text\n\n[word2id[word] for word in text_to_word_sequence(text2)]\n\nsg = skipgrams(encoded_text, vocabulary_size=len(word2id.keys()), window_size=1)\nsg\n\nfor i in range(len(sg[0])):\n print \"({0},{1})={2}\".format(id2word[sg[0][i][0]], id2word[sg[0][i][1]], sg[1][i])",
"Model parameters",
"VOCAB_SIZE = len(word2id.keys())\nVOCAB_SIZE\n\nDENSEVEC_DIM = 50",
"Model build",
"import keras\n\nfrom keras.layers.embeddings import Embedding\nfrom keras.constraints import unit_norm\nfrom keras.layers.merge import Dot\nfrom keras.layers.core import Activation\nfrom keras.layers.core import Flatten\n\nfrom keras.layers import Input, Dense\nfrom keras.models import Model",
"Create a dense vector for each word in the pair. The output of Embedding has shape (batch_size, sequence_length, output_dim) which in our case is (batch_size, 1, DENSEVEC_DIM). We'll use Flatten to get rid of that pesky middle dimension (1), so going into the dot product we'll have shape (batch_size, DENSEVEC_DIM).",
"word1 = Input(shape=(1,), dtype='int64', name='word1')\nword2 = Input(shape=(1,), dtype='int64', name='word2')\n\nshared_embedding = Embedding(\n input_dim=VOCAB_SIZE+1, \n output_dim=DENSEVEC_DIM, \n input_length=1, \n embeddings_constraint = unit_norm(),\n name='shared_embedding')\n\nembedded_w1 = shared_embedding(word1)\nembedded_w2 = shared_embedding(word2)\n\nw1 = Flatten()(embedded_w1)\nw2 = Flatten()(embedded_w2)\n\ndotted = Dot(axes=1, name='dot_product')([w1, w2])\n\nprediction = Dense(1, activation='sigmoid', name='output_layer')(dotted)\n\nsg_model = Model(inputs=[word1, word2], outputs=prediction)\n\nsg_model.compile(optimizer='adam', loss='mean_squared_error')",
"At this point you can check out how the data flows through your compiled model.",
"sg_model.layers\n\ndef print_layer(model, num):\n print model.layers[num]\n print model.layers[num].input_shape\n print model.layers[num].output_shape\n\nprint_layer(sg_model,3)",
"Let's try training it with our toy data set!",
"import numpy as np\n\npairs = np.array(sg[0])\ntargets = np.array(sg[1])\n\ntargets\n\npairs\n\nw1_list = np.reshape(pairs[:, 0], (len(pairs), 1))\nw1_list\n\nw2_list = np.reshape(pairs[:, 1], (len(pairs), 1))\nw2_list\n\nw2_list.shape\n\nw2_list.dtype\n\nsg_model.fit(x=[w1_list, w2_list], y=targets, epochs=10)\n\nsg_model.layers[2].weights",
"Continuous Bag of Words (CBOW) model\nCBOW means we take all the words in the window and use them to predict the target word. Note we are trying to predict an actual word (or a probability distribution over words) with CBOW, whereas in skip-gram we are trying to predict a similarity score. \nFastText Model\nFastText is creating dense document vectors using the words in the document enhanced with n-grams. These are embedded, averaged, and fed through a hidden dense layer, with a sigmoid activation. The prediction task is some binary classification of the documents. As usual, after training we can extract the dense vectors from the model.\nFastText Model Data Prep",
"MAX_FEATURES = 20000 # number of unique words in the dataset\nMAXLEN = 400 # max word (feature) length of a review \nEMBEDDING_DIMS = 50\nNGRAM_RANGE = 2",
"Some data prep functions lifted from the example",
"def create_ngram_set(input_list, ngram_value=2):\n \"\"\"\n Extract a set of n-grams from a list of integers.\n \"\"\"\n return set(zip(*[input_list[i:] for i in range(ngram_value)]))\n\ncreate_ngram_set([1, 2, 3, 4, 5], ngram_value=2)\n\ncreate_ngram_set([1, 2, 3, 4, 5], ngram_value=3)\n\ndef add_ngram(sequences, token_indice, ngram_range=2):\n \"\"\"\n Augment the input list of list (sequences) by appending n-grams values.\n \"\"\"\n new_sequences = []\n for input_list in sequences:\n new_list = input_list[:]\n for i in range(len(new_list) - ngram_range + 1):\n for ngram_value in range(2, ngram_range + 1):\n ngram = tuple(new_list[i:i + ngram_value])\n if ngram in token_indice:\n new_list.append(token_indice[ngram])\n new_sequences.append(new_list)\n\n return new_sequences\n\nsequences = [[1,2,3,4,5, 6], [6,7,8]]\ntoken_indice = {(1,2): 20000, (4,5): 20001, (6,7,8): 20002}\n\nadd_ngram(sequences, token_indice, ngram_range=2)\n\nadd_ngram(sequences, token_indice, ngram_range=3)",
"load canned training data",
"from keras.datasets import imdb\n\n(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=MAX_FEATURES)\n\nx_train[0:2]\n\ny_train[0:2]",
"Add n-gram features",
"ngram_set = set()\nfor input_list in x_train:\n for i in range(2, NGRAM_RANGE + 1):\n set_of_ngram = create_ngram_set(input_list, ngram_value=i)\n ngram_set.update(set_of_ngram)\n\nlen(ngram_set)",
"Assign id's to the new features",
"ngram_set.pop()\n\nstart_index = MAX_FEATURES + 1\ntoken_indice = {v: k + start_index for k, v in enumerate(ngram_set)}\nindice_token = {token_indice[k]: k for k in token_indice}",
"Update MAX_FEATURES",
"import numpy as np\n\nMAX_FEATURES = np.max(list(indice_token.keys())) + 1\nMAX_FEATURES",
"Add n-grams to the input data",
"x_train = add_ngram(x_train, token_indice, NGRAM_RANGE)\nx_test = add_ngram(x_test, token_indice, NGRAM_RANGE)",
"Make all input sequences the same length by padding with zeros",
"from keras.preprocessing import sequence\n\nsequence.pad_sequences([[1,2,3,4,5], [6,7,8]], maxlen=10)\n\nx_train = sequence.pad_sequences(x_train, maxlen=MAXLEN)\nx_test = sequence.pad_sequences(x_test, maxlen=MAXLEN)\n\nx_train.shape\n\nx_test.shape",
"FastText Model",
" Image('diagrams/fasttext.png')\n\nfrom keras.models import Sequential\nfrom keras.layers.embeddings import Embedding\nfrom keras.layers.pooling import GlobalAveragePooling1D\nfrom keras.layers import Dense\n\nft_model = Sequential()\n\nft_model.add(Embedding(\n input_dim = MAX_FEATURES,\n output_dim = EMBEDDING_DIMS,\n input_length= MAXLEN))\n\nft_model.add(GlobalAveragePooling1D())\n\nft_model.add(Dense(1, activation='sigmoid'))\n\nft_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nft_model.layers\n\nprint_layer(ft_model, 0)\n\nprint_layer(ft_model, 1)\n\nprint_layer(ft_model, 2)\n\nft_model.fit(x_train, y_train, batch_size=100, epochs=3, validation_data=(x_test, y_test))",
"fastText classifier vs. convolutional neural network (CNN) vs. long short-term memory (LSTM) classifier: Fight!\nA CNN takes the dot product of various \"filters\" (some new vector) with each word window down the sentence. For each convolutional layer in your model, you can choose the size of the filter (for example, 3 word vectors long) and the number of filters in the layer (for example, ten 3-word filters, or five 3-word filters). \nAdd a bias to each dot product of the filter and word window, and run it through an activation function. This produces a number.\nRunning a single filter down a sentence produces a series of numbers. Generally the maximum value is taken to represent the alignment of the sentence with that particular filter. All of this is just another way of extracting features from a sentence. In fastText, we extracted features in a human-readable way (n-grams) and tacked them onto the input data. With a CNN we take a different approach, letting the algorithm figure out what makes good features for the dataset.\ninsert filter operating on sentence image here",
"Image('diagrams/text-cnn-classifier.png')",
"Diagram from Convolutional Neural Networks for Sentence Classification, Kim Yoon (2014)\nA CNN sentence classifier",
"embedding_dim = 50 # we'll get a vector representation of words as a by-product\nfilter_sizes = (2, 3, 4) # we'll make one convolutional layer for each filter we specify here\nnum_filters = 10 # each layer will contain this many filters\n\ndropout_prob = (0.2, 0.2)\nhidden_dims = 50\n\n# Prepossessing parameters\nsequence_length = 400\nmax_words = 5000",
"Canned input data",
"from keras.datasets import imdb\n\n(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_words) # limits vocab to num_words\n\n?imdb.load_data\n\nfrom keras.preprocessing import sequence\n\nx_train = sequence.pad_sequences(x_train, maxlen=sequence_length, padding=\"post\", truncating=\"post\")\nx_test = sequence.pad_sequences(x_test, maxlen=sequence_length, padding=\"post\", truncating=\"post\")\n\nx_train[0]\n\nvocabulary = imdb.get_word_index() # word to integer map\n\nvocabulary['good']\n\nlen(vocabulary)",
"Model build",
"from keras.models import Model\nfrom keras.layers import Input\nfrom keras.layers import Embedding\nfrom keras.layers import Dropout\nfrom keras.layers import Conv1D\nfrom keras.layers import MaxPooling1D\nfrom keras.layers import Flatten\nfrom keras.layers import Dense\nfrom keras.layers.merge import Concatenate\n\n# Input, embedding, and dropout layers\ninput_shape = (sequence_length,)\nmodel_input = Input(shape=input_shape)\nz = Embedding(\n input_dim=len(vocabulary) + 1, \n output_dim=embedding_dim, \n input_length=sequence_length, \n name=\"embedding\")(model_input)\nz = Dropout(dropout_prob[0])(z)\n\n# Convolutional block\n# parallel set of n convolutions; output of all n are\n# concatenated into one vector\nconv_blocks = []\nfor sz in filter_sizes:\n conv = Conv1D(filters=num_filters, kernel_size=sz, activation=\"relu\" )(z)\n conv = MaxPooling1D(pool_size=2)(conv)\n conv = Flatten()(conv)\n conv_blocks.append(conv)\n \nz = Concatenate()(conv_blocks) if len(conv_blocks) > 1 else conv_blocks[0]\nz = Dropout(dropout_prob[1])(z)\n\n# Hidden dense layer and output layer\nz = Dense(hidden_dims, activation=\"relu\")(z)\nmodel_output = Dense(1, activation=\"sigmoid\")(z)\n\ncnn_model = Model(model_input, model_output)\ncnn_model.compile(loss=\"binary_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])\n\ncnn_model.layers\n\nprint_layer(cnn_model, 12)\n\nprint_layer(cnn_model, 12)\n\ncnn_model.fit(x_train, y_train, batch_size=64, epochs=3, validation_data=(x_test, y_test))\n\ncnn_model.layers[1].weights\n\ncnn_model.layers[1].get_weights()\n\ncnn_model.layers[3].weights",
"An LSTM sentence classifier",
"Image('diagrams/LSTM.png')\n\nfrom keras.models import Sequential\nfrom keras.layers import Embedding\nfrom keras.layers.core import SpatialDropout1D\nfrom keras.layers.core import Dropout\nfrom keras.layers.recurrent import LSTM\nfrom keras.layers.core import Dense\n\nhidden_dims = 50\nembedding_dim = 50\n\nlstm_model = Sequential()\nlstm_model.add(Embedding(len(vocabulary) + 1, embedding_dim, input_length=sequence_length, name=\"embedding\"))\nlstm_model.add(SpatialDropout1D(Dropout(0.2)))\nlstm_model.add(LSTM(hidden_dims, dropout=0.2, recurrent_dropout=0.2)) # first arg, like Dense, is dim of output\nlstm_model.add(Dense(1, activation='sigmoid'))\n\nlstm_model.compile(loss=\"binary_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])\n\nlstm_model.fit(x_train, y_train, batch_size=64, epochs=3, validation_data=(x_test, y_test))\n\nlstm_model.layers\n\nlstm_model.layers[2].input_shape\n\nlstm_model.layers[2].output_shape",
"Appendix: Our own data download and preparation\nWe'll use the Large Movie Review Dataset v1.0 for our corpus. While Keras has its own data samples you can import for modeling (including this one), I think it's very important to get and process your own data. Otherwise, the results appear to materialize out of thin air and it's more difficult to get on with your own research.",
"%matplotlib inline\nimport pandas as pd\n\nimport glob\n\ndatapath = \"/Users/pfigliozzi/aclImdb/train/unsup\"\nfiles = glob.glob(datapath+\"/*.txt\")[:1000] #first 1000 (there are 50k)\n\ndf = pd.concat([pd.read_table(filename, header=None, names=['raw']) for filename in files], ignore_index=True) \n\ndf.raw.map(lambda x: len(x)).plot.hist()\n\n50000. * 2000. / 10**6"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
pedrosiracusa/pedrosiracusa.github.io | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | mit | [
"Construindo redes sociais com dados de coleções biológicas\nEm um artigo anterior fiz uma breve caracterização das redes sociais por trás do Herbário da UnB, mostrando uma nova perspectiva de aplicação para dados de coleções biológicas. Tal abordagem consiste em derivar interações sociais de colaboração entre coletores e caracterizar seus interesses taxonômicos a partir de registros de ocorrências de espécies, e incorpora conceitos e ferramentas vindos do campo de analítica de redes sociais.\nTive a oportunidade de desenvolver estas ideias durante minha pesquisa de mestrado, que resultou na síntese de dois modelos baseados em redes: as Redes Espécie-Coletor (SCN); e as Redes de Colaboração de Coletores (CWNs). Caso você ainda não tenha ouvido falar nestes modelos, recomendo a leitura do meu artigo de divulgação antes de continuar.\nNeste artigo demonstrarei o processo de construção destes modelos em 3 etapas, a partir de um conjunto de dados de ocorrência de espécies e usando a biblioteca Caryocar (escrita na linguagem Python). Aqui usarei novamente o conjunto de dados do Herbário da UnB (sigla UB), que podem ser baixados através da plataforma GBIF.\nVamos começar importando as classes que implementam os modelos SCN e CWN:",
"# este pedaço de código só é necessário para atualizar o PATH do Python\nimport sys,os\nsys.path.insert(0,os.path.expanduser('~/Documents/caryocar'))\n\nfrom caryocar.models import CWN, SCN",
"O pacote Caryocar também fornece algumas funções e classes auxiliares para realizar a limpeza dos dados.",
"from caryocar.cleaning import NamesAtomizer, namesFromString\nfrom caryocar.cleaning import normalize, read_NamesMap_fromJson\nfrom caryocar.cleaning import getNamesIndexes",
"Etapa 1. Leitura do conjunto de dados\nO primeiro passo é ler o conjunto de dados de ocorrência de espécies.\nPara isso vamos extender as funcionalidades da linguagem Python usando uma biblioteca muito útil para a análise de dados: a Pandas.\nCom esta biblioteca, podemos carregar, transformar e analisar nosso conjunto de dados no ambiente de programação.",
"import pandas as pd",
"Com a função read_csv do Pandas, carregaremos nossos dados que estão no arquivo CSV e os colocaremos na estrutura de um Data Frame, que é basicamente uma tabela.\nEsta função espera receber o nome do arquivo CSV que contém os dados, bem como uma lista com os nomes das colunas que estamos interessados em carregar.\nEspecificarei o caminho para o arquivo na variável dsetPath e a lista das colunas de interesse em cols.\nO dataframe ficará armazenado na variável occs_df.\nPara deixar este artigo o mais simples possível usarei apenas os campos essenciais:\n* recordedBy: Armazena os nomes dos coletores responsáveis pelo registro. Caso haja mais que 1 coletor, os nomes são separados por ponto-e-vírgula;\n* species: Armazena o nome científico, a nível de espécie, determinado para o espécime em questão.",
"dsetPath = '/home/pedro/datasets/ub_herbarium/occurrence.csv'\ncols = ['recordedBy','species']\n\noccs_df = pd.read_csv(dsetPath, sep='\\t', usecols=cols)",
"Vamos dar uma olhada no jeitão do dataframe. Para isso, vamos pedir as 10 primeira linhas apenas.",
"occs_df.head(10)",
"Etapa 2: Limpeza dos dados\nAntes de construir o modelo, precisamos fazer uma limpeza de dados para garantir que eles estejam no formato adequado para a construção dos modelos. \nO primeiro passo é filtrar os registros com elementos nulos (NaN) para cada um dos campos do dataframe. Um elemento nulo significa ausência de informação, e portanto não ajudará muito na construção dos nossos modelos.\nVejamos o número de nulos em cada campo:",
"occs_df.isnull().sum()",
"A informação de coletor está ausente em apenas 9 dos registros. Vamos simplesmente eliminá-los. Um outro ponto é que para simplificar nossa modelagem, vou apenas usar registros que tenham sido identificados ao nível de espécie. Isso significa que teremos que descartar 32711 registros, nos quais a informação sobre a identidade de espécie está ausente.",
"occs_df.dropna(how='any', inplace=True)",
"Agora não temos mais nulos em nenhuma das colunas, e podemos prosseguir:",
"occs_df.isnull().sum()",
"Atomização dos nomes de coletores\nO campo de coletores (recordedBy) é fundamental para nossa modelagem, mas infelizmente costuma ser um pouco problemático.\nO primeiro problema é que os nomes dos coletores não são atômicos. Isso significa múltiplos nomes podem ser codificados em um mesmo valor (no caso, a lista de nomes é codificada como uma única string, sendo cada nome separado por um ponto-e-vígula). \nSegundo as recomendações do Biodiversity Information Standards (TDWG), nomes de coletores devem ser incluídos, em geral, usando a seguinte regra: sobrenome com a primeira letra maiúscula, seguido por vírgula e espaço e iniciais do nome em letras maiúsculas, separadas por pontos (ex. Proença, C.E.B.).\nAlém disso, o TDWG recomenda que o separador utilizado para delimitar nomes de coletore deva ser o caractere pipe ( | ).\nNo entanto, o caractere usado no dataset do UB é o ponto-e-vírgula.\nIsso não será um grande problema no nosso caso, já que neste dataset o ponto-e-vírgula é usado de forma consistente, em quase todos os registros.\nPara proceder com a atomização dos nomes utilizaremos uma classe auxiliar, chamada NamesAtomizer. Criaremos o objeto atomizador e atribuiremos à variável na. Passaremos a função namesFromString que especifica as regras usadas para separar os nomes.",
"na = NamesAtomizer(atomizeOp=namesFromString)",
"O atomizador de nomes resolve a grande maioria dos casos. Mas existem alguns poucos registros com erros na delimitação dos nomes. Neste caso a correção deve ser feita fazendo a substituição em cada registro pela sua forma correta.\nPara o dataset do UB, estas substituições estão especificadas no arquivo armazenado na variável names_replaces_file, abaixo:",
"names_replaces_file = '/home/pedro/data/ub_collectors_replaces.json'",
"Só por curiosidade, vejamos o conteúdo deste arquivo:",
"! cat {names_replaces_file}",
"Prosseguindo com a substituição:",
"na.read_replaces(names_replaces_file)",
"Agora, com o auxílio do atomizador de nomes, vamos adicionar uma nova coluna ao dataframe, contendo os nomes dos coletores atomizados. Ela se chamará recordedBy_atomized:",
"occs_df['recordedBy_atomized'] = na.atomize(occs_df['recordedBy'])",
"Normalização e mapeamento de nomes\nUm segundo problema é que nomes de coletores podem ter sido escritos de algumas formas diferentes, seja por conta de erros ou omissão de partes do nome.\nPor exemplo, o nome 'Proença, C.E.B.' pode ter alguns variantes, incluindo 'Proenca, C.E.B,', 'Proença, C.E.', Proença, C.'.\nPrecisamos pensar em uma forma para ligar todas essas variantes a um nome principal.\nA solução para este problema até o momento é armazenar um mapa ligando cada variante a uma forma normal do nome. O processo de normalização inclui a transformação do nome para uma forma simplificada. Isso significa que só usaremos caracteres em caixa-baixo, omitiremos acentos e pontuações, e removeremos caracteres não-alfanuméricos. \nNo exemplo citado acima, todos os nomes seriam mapeados para 'proenca,ceb'.\nPara o conjunto de dados do UB, já tenho um mapa de nomes pronto, guardado no seguinte arquivo:",
"namesMap_file = '/home/pedro/data/ub_namesmap.json'",
"Este arquivo é grande, mas vamos ver as 20 primeiras linhas para termos uma ideia:",
"! head {namesMap_file} -n 20",
"Note que alguns nomes de coletores que não eram nulos porêm remetem à falta da informação (por exemplo '.', '?') são mapeados para uma string vazia. Mais tarde iremos filtrar estes nomes.\nVamos agora ler o mapa de nomes do arquivo e armazená-lo na variável nm.",
"nm = read_NamesMap_fromJson(namesMap_file, normalizationFunc=normalize)",
"Caso haja nomes de coletores que não estão no arquivo, vamos nos assegurar de que eles serão inseridos:",
"collectors_names = list(set( n for n,st,num in na.getCachedNames() ))\nnm.addNames(collectors_names)",
"Assim, este mapa nos permite buscar, para cada variante do nome, sua forma normal:",
"nm.getMap()['Proença, CEB']\n\nnm.getMap()['Proença, C']",
"A figura abaixo ilustra as etapas envolvidas no preprocessamento do campo dos coletores, conforme descrito.\n{:width=\"700px\"}\nO índice de nomes\nFinalmente, vamos construir um índice de nomes, apenas para mantermos a referência de quais linhas do dataframe cada coletor aparece. Para isso usaremos a função getNamesIndexes. Precisamos informar o nome do dataframe, o nome da coluna que armazena os nomes atomizados e o mapa de nomes. Mas enfatizo que este passo não é necessário para a construção dos modelos (apesar de ser útil para algumas análises).",
"ni = getNamesIndexes(occs_df,'recordedBy_atomized', namesMap=nm.getMap())",
"Etapa 3: Construindo os modelos\nChegamos na etapa que realmente interessa. Já temos um dataframe com os dados minimamente limpos e estruturados, e podemos então construir os modelos!\nRede Espécie-Coletor (SCN)\nRedes espécie-coletor modelam relações de interesse, envolvendo necessariamente um coletor e uma espécie. A semântica destas relações pode ser descrita como coletor -[registra]-> espécie ou, por outro lado, espécie-[é registrada por]-> coletor. A figura abaixo exemplifica esta estrutura (a).\nComo o modelo envolve duas classes de entidades (coletores e espécies), existem duas perspectivas adicionais que podem ser exploradas: Podemos investigar o quão fortemente dois coletores estão associados entre si em termos de seus interesses em comum (b); bem como quão fortemente duas espécies estão associadas entre si em termos do conjunto de coletores que as registram (c).\nNos referimos às perspectivas (b) e (c) como projeções da rede (a). Estas projeções são obtidas simplesmente ligando entidades da mesma classe tomando como base o número de entidades da classe oposta que eles compartilham, na estrutura (a).\n{:width=\"500px\"}\nVamos então ao código. Construiremos a rede espécie-coletor usando a classe SCN, disponível no pacote Caryocar. Para sua construção, devemos fornecer:\n* Uma lista de espécies, neste caso a coluna do dataframe occs_df['species'];\n* Uma lista contendo listas de coletores, neste caso a coluna do dataframe occs_df['recordedBy_atomized'];\n* Um mapa de nomes.",
"scn = SCN(species=occs_df['species'], collectors=occs_df['recordedBy_atomized'], namesMap=nm)",
"Após a construção do modelo, vamos remover nomes de coletores indevidos, como 'etal', 'ilegivel', 'incognito'.",
"cols_to_filter = ['','ignorado','ilegivel','incognito','etal']\nscn.remove_nodes_from(cols_to_filter)",
"Vejamos então um pequeno resumo sobre esta rede. Este pedaço de código pode ser um pouco feio, mas o que importa mesmo aqui são as informações imprimidas abaixo dele.",
"n_cols = len(scn.listCollectorsNodes())\ncols_degrees = scn.degree(scn.listCollectorsNodes())\nn_spp = len(scn.listSpeciesNodes())\nspp_degrees = scn.degree(scn.listSpeciesNodes())\n\nprint(\nf\"\"\"Rede Espécie-Coletor (SCN)\n==========================\nNúmero total de coletores:{n_cols}\nNúmero total de espécies: {n_spp}\nEm média, um coletor registra {round( sum( k for n,k in cols_degrees)/n_cols)} espécies distintas\nEm média, uma espécie é registrada por {round( sum( k for n,k in spp_degrees)/n_spp)} coletores distintos\nNúmero total de arestas: {len(scn.edges)}\\n\"\"\")\nprint(\"Top-10 coletores mais produtivos:\")\nfor n,k in sorted(cols_degrees,key=lambda x:x[1],reverse=True)[:10]:\n print(f\" {n} ({k} especies distintas)\")\nprint(\"\\nTop-10 espécies coletadas:\")\nfor n,k in sorted(spp_degrees,key=lambda x:x[1],reverse=True)[:10]:\n print(f\" {n} ({k} coletores distintos)\")",
"Um aspecto interessante a ser notado é a distribuição de grau (número de conexões de um vértice) nesta rede.\nEmbora em média um coletor registre 21 espécies diferentes, os coletores mais produtivos registraram mais de 1000!\nDe forma simlar, embora em média uma espécie seja registrada por 9 coletores distintos, as primeiras 10 foram registradas por mais de 200 coletores cada.\nEmbora esteja fora do escopo deste artigo, é fácil mostrar que a distribuição desta rede está longe de ser normal. Na verdade, é aproximada por uma lei de potência.\nIsso significa que enquanto uma grande maioria de coletores registra pouquíssimas espécies diferentes, alguns poucos (chamados hubs, ou coletores-chave) registram um número muito acima da média.\nDe forma análoga enquanto uma grande maioria de espécies foi coletadas por apenas um ou poucos coletores diferentes, algumas poucas foram coletadas por um grande número de coletores distintos.\nRede de Colaboração de Coletores (CWN)\nRedes de colaboração de coletores (CWNs), como o nome sugere, modelam relações de colaboração que se estabelecem entre coletores enquanto registram espécies em campo. Uma ligação entre pares de coletores é criada ou fortalecida cada vez que eles co-autoram um registro de espécie. Sendo assim, a semântica destas relações é descrita como coletor -[coleta espécime com]-> coletor. A figura abaixo ilustra a estrutura destas redes. É importante notar que, diferente das SCNs, nas CWNs a identidade taxonômica de cada registro não é representada em sua estrutura. Coletores que nunca colaboraram aparecem como vértices isolados na rede.\n{:width=\"300px\"}\nO pacote Caryocar também fornece a classe SCN, que facilita a construção de redes de colaboração de coletores. Para sua construção, devemos fornecer:\n\nUma lista contendo listas de coletores (cliques), neste caso a coluna do dataframe occs_df['recordedBy_atomized'];\nUm mapa de nomes.",
"cwn = CWN(cliques=occs_df['recordedBy_atomized'],namesMap=nm)",
"Assim como fizemos com a SCN, vamos remover nomes de coletores indevidos",
"cols_to_filter = ['','ignorado','ilegivel','incognito','etal']\ncwn.remove_nodes_from(cols_to_filter)",
"Vejamos um resumo sobre a rede:",
"n_cols = len(cwn.nodes)\ncols_degrees = cwn.degree()\n\nprint(\nf\"\"\"Rede de Colaboração de Coletores (CWN)\n======================================\nNúmero total de coletores:{n_cols}\nNúmero total de arestas: {len(cwn.edges)}\nEm média, um coletor colabora com {round( sum(k for n,k in cols_degrees)/n_cols )} pares ao longo de sua carreira\nNo total {len([ n for n,k in cols_degrees if k==0 ])} coletores nunca colaboraram\nNo total, {len([ n for n,k in cols_degrees if k>3 ])} coletores colaboraram com mais que 3 colegas\\n\"\"\")\nprint(\"Top-10 coletores mais colaborativos:\")\nfor n,k in sorted(cols_degrees,key=lambda x:x[1],reverse=True)[:10]:\n print(f\" {n} ({k} colegas)\")\n \nprint(\"\\nTop-10 coletores sem colaborações com maior número de registros:\") \nfor n,k, in sorted([ (n,d['count']) for n,d in cwn.nodes(data=True) if cwn.degree(n)==0 ],key=lambda x: x[1], reverse=True)[:10]:\n print(f\" {n} ({cwn.nodes[n]['count']} registros, 0 colaborações)\")",
"Considerações finais\nMeu objetivo neste artigo foi demonstrar o processo de construção dos modelos SCN e CWN a partir de um conjunto de dados de ocorrência de espécies, usando o pacote Caryocar.\nMostrei também como proceder com a limpeza dos dados de ocorrência de espécies, fazendo todas as transformações necessárias para que o conjunto de dados torne-se adequado para a modelagem.\nUm ponto a ser ressaltado é sobre a importância em se verificar a qualidade do campo do coletor, normalmente subutilizado na maioria das aplicações de dados de coleções biológicas.\nDentre os problemas mais comuns estão a inclusão apenas do coletor principal (sendo os coletores auxiliares omitidos ou agrupados no nome 'et. al'), o não-cumprimento dos padrões recomendados para a escrita dos nomes, e a ausência de identificadores únicos para a distinção dos coletores.\nDe forma geral, estes fatores são a maior limitação para a construção dos modelos.\nA alta qualidade deste campo no conjunto de dados do herbário da UnB foi um dos fatores decisivos para que fosse escolhido como prova de conceito.\nOs modelos SCN e CWN permitem investigar o aspecto humano envolvido na formação de coleções biológicas, e portanto abrem portas para novos tipos de análises e aplicações para dados de ocorrência de espécies.\nPretendo explorar e demonstrar algumas dessas possibilidades em artigos futuros."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
cloudera/ibis | docs/source/tutorial/03-Expressions-Lazy-Mode-Logging.ipynb | apache-2.0 | [
"Expressions, lazy mode and logging queries\nSo far, we have seen Ibis in interactive mode. Interactive mode (also known as eager mode) makes Ibis return the\nresults of an operation immediately.\nIn most cases, instead of using interactive mode, it makes more sense to use the default lazy mode.\nIn lazy mode, Ibis won't be executing the operations automatically, but instead, will generate an\nexpression to be executed at a later time.\nLet's see this in practice, starting with the same example as in previous tutorials - the geography database.",
"import os\nimport ibis\n\n\nconnection = ibis.sqlite.connect(os.path.join('data', 'geography.db'))\ncountries = connection.table('countries')",
"In previous tutorials, we set interactive mode to True, and we obtained the result\nof every operation.",
"ibis.options.interactive = True\n\ncountries['name', 'continent', 'population'].limit(3)",
"But now let's see what happens if we leave the interactive option to False (the default),\nand we operate in lazy mode.",
"ibis.options.interactive = False\n\ncountries['name', 'continent', 'population'].limit(3)",
"What we find is the graph of the expressions that would return the desired result instead of the result itself.\nLet's analyze the expressions in the graph:\n\nWe query the countries table (all rows and all columns)\nWe select the name, continent and population columns\nWe limit the results to only the first 3 rows\n\nNow consider that the data is in a database, possibly in a different host than the one executing Ibis.\nAlso consider that the results returned to the user need to be moved to the memory of the host executing Ibis.\nWhen using interactive (or eager) mode, if we perform one operation at a time, we would do:\n\nWe would move all the rows and columns from the backend (database, big data system, etc.) into memory\nOnce in memory, we would discard all the columns but name, continent and population\nAfter that, we would discard all the rows, except the first 3\n\nThis is not very efficient. If you consider that the table can have millions of rows, backed by a\nbig data system like Spark or Impala, this may not even be possible (not enough memory to load all the data).\nThe solution is to use lazy mode. In lazy mode, instead of obtaining the results after each operation,\nwe build an expression (a graph) of all the operations that need to be done. After all the operations\nare recorded, the graph is sent to the backend which will perform the operation in an efficient way - only\nmoving to memory the required data.\nYou can think of this as writing a shopping list and requesting someone to go to the supermarket and buy\neverything you need once the list is complete. As opposed as getting someone to bring all the products of\nthe supermarket to your home and then return everything you don't want.\nLet's continue with our example, save the expression in a variable countries_expression, and check its type.",
"countries_expression = countries['name', 'continent', 'population'].limit(3)\ntype(countries_expression)",
"The type is an Ibis TableExpr, since the result is a table (in a broad way, you can consider it a dataframe).\nNow we have our query instructions (our expression, fetching only 3 columns and 3 rows) in the variable countries_expression.\nAt this point, nothing has been requested from the database. We have defined what we want to extract, but we didn't\nrequest it from the database yet. We can continue building our expression if we haven't finished yet. Or once we\nare done, we can simply request it from the database using the method .execute().",
"countries_expression.execute()",
"We can build other types of expressions, for example, one that instead of returning a table,\nreturns a columns.",
"population_in_millions = (countries['population'] / 1_000_000).name('population_in_millions')\npopulation_in_millions",
"If we check its type, we can see how it is a FloatingColumn expression.",
"type(population_in_millions)",
"We can combine the previous expression to be a column of a table expression.",
"countries['name', 'continent', population_in_millions].limit(3)",
"Since we are in lazy mode (not interactive), those expressions don't request any data from the database\nunless explicitly requested with .execute().\nLogging queries\nFor SQL backends (and for others when it makes sense), the query sent to the database can be logged.\nThis can be done by setting the verbose option to True.",
"ibis.options.verbose = True\n\ncountries['name', 'continent', population_in_millions].limit(3).execute()",
"By default, the logging is done to the terminal, but we can process the query with a custom function.\nThis allows us to save executed queries to a file, save to a database, send them to a web service, etc.\nFor example, to save queries to a file, we can write a custom function that given a query, saves it to a\nlog file.",
"import os\nimport datetime\n\ndef log_query_to_file(query):\n \"\"\"\n Log queries to `data/tutorial_queries.log`.\n \n Each file is a query. Line breaks in the query are represented with the string '\\n'.\n \n A timestamp of when the query is executed is added.\n \"\"\"\n fname = os.path.join('data', 'tutorial_queries.log')\n query_in_a_single_line = query.replace('\\n', r'\\n')\n with open(fname, 'a') as f:\n f.write(f'{datetime.datetime.now()} - {query_in_a_single_line}\\n')",
"Then we can set the verbose_log option to the custom function, execute one query,\nwait one second, and execute another query.",
"import time\n\nibis.options.verbose_log = log_query_to_file\n\ncountries.execute()\ntime.sleep(1.)\ncountries['name', 'continent', population_in_millions].limit(3).execute()",
"This has created a log file in data/tutorial_queries.log where the executed queries have been logged.",
"!cat data/tutorial_queries.log"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
transcranial/keras-js | notebooks/layers/convolutional/ZeroPadding2D.ipynb | mit | [
"import numpy as np\nfrom keras.models import Model\nfrom keras.layers import Input\nfrom keras.layers.convolutional import ZeroPadding2D\nfrom keras import backend as K\nimport json\nfrom collections import OrderedDict\n\ndef format_decimal(arr, places=6):\n return [round(x * 10**places) / 10**places for x in arr]\n\nDATA = OrderedDict()",
"ZeroPadding2D\n[convolutional.ZeroPadding2D.0] padding (1,1) on 3x5x2 input, data_format='channels_last'",
"data_in_shape = (3, 5, 2)\nL = ZeroPadding2D(padding=(1, 1), data_format='channels_last')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(250)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.ZeroPadding2D.0'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.ZeroPadding2D.1] padding (1,1) on 3x5x2 input, data_format='channels_first'",
"data_in_shape = (3, 5, 2)\nL = ZeroPadding2D(padding=(1, 1), data_format='channels_first')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(251)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.ZeroPadding2D.1'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.ZeroPadding2D.2] padding (3,2) on 2x6x4 input, data_format='channels_last'",
"data_in_shape = (2, 6, 4)\nL = ZeroPadding2D(padding=(3, 2), data_format='channels_last')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(252)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.ZeroPadding2D.2'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.ZeroPadding2D.3] padding (3,2) on 2x6x4 input, data_format='channels_first'",
"data_in_shape = (2, 6, 4)\nL = ZeroPadding2D(padding=(3, 2), data_format='channels_first')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(253)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.ZeroPadding2D.3'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.ZeroPadding2D.4] padding ((1,2),(3,4)) on 2x6x4 input, data_format='channels_last'",
"data_in_shape = (2, 6, 4)\nL = ZeroPadding2D(padding=((1,2),(3,4)), data_format='channels_last')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(254)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.ZeroPadding2D.4'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.ZeroPadding2D.5] padding 2 on 2x6x4 input, data_format='channels_last'",
"data_in_shape = (2, 6, 4)\nL = ZeroPadding2D(padding=2, data_format='channels_last')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(255)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.ZeroPadding2D.5'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"export for Keras.js tests",
"import os\n\nfilename = '../../../test/data/layers/convolutional/ZeroPadding2D.json'\nif not os.path.exists(os.path.dirname(filename)):\n os.makedirs(os.path.dirname(filename))\nwith open(filename, 'w') as f:\n json.dump(DATA, f)\n\nprint(json.dumps(DATA))"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
matousc89/PPSI | podklady/notebooks/skriptovani_v_pythonu.ipynb | mit | [
"Skriptování v Pythonu\nV tomto tutoriálu jsou představeny základní funkce a objekty potřebné pro nejzákladnější skriptování v Pythonu.\nPoznámky:\n* Komentáře v Pythonu se se dělají pomocí #. Cokoliv za tímto znakem až do konce řádku je komentář",
"a = 1 # tohle je komentar",
"V Jupyter notebook (nástroj v kterém je tento tutoriál vytvořen) každý blok kódu ukončení proměnnou nebo výrazem vytiskne obsah této proměnné. Toho je v tomto tutoriálu využívano. Následuje příklad:",
"a = 1 # tady si vytvorim promennou\na # tohle vytiskne hodnotu bez uziti prikazu print",
"List\nList je pravděpodobně nejznámější kontejner na data v jazyce Python. Položky v listu se můžou opakovat a mají pořadí položek dané při vytvoření listu. Položky v listu je možné měnit, mazat a přidávat. Následují příklady.",
"[1, 1, 2] # list celych cisel, polozka 1 se opakukuje\n\n[\"abc\", 1, 0.5] # list obsahujici ruzne datove typy\n\n[] # prazdny list\n\n[[1,2], \"abc\", {1, \"0\", 3}] # list obsahujici take dalsi listy\n\n[1, \"a\", 2] + [5, 3, 5] # spojeni dvou listu\n\n[1, 2, 3]*5 # opakovani listu",
"Indexování a porcování\nV Pythonu se pro indexování užívají hranaté závorky []. Symbol : zastupuje všechny položky v daném rozsahu. Indexuje se od 0. Indexování a porcování listu je ukázáno na nasledujících příkladech.",
"a = [\"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\"] # ukázkový list\n\na[3] # vrati objekt z indexem 3 (ctvrty objekt)\n\na[:2] # vrati prvni dva objekty\n\na[3:] # vrati vse od objektu 3 dal\n\na[2:5] # vse mezi objekty s indexy 2 a 5\n\na[-3] # treti objekt od konce\n\na[0:-1:2] # kazdy druhy objekt od zacatku do konce\n\na[::2] # kratsi ekvivalent predchoziho prikladu\n\nb = [[1, 2, 3], [4, 5, 6]] # priklad vnorenych listu\n\nb[1] # vraci druhy list\n\nb[0][0:2] # vraci prvni dve polozky z druheho listu",
"Přepisování, přidávání, vkládání a mazání položek z listu\nUkázáno na následujících příkladech.",
"a = [\"a\", \"b\", \"c\", \"d\"]\n\na[2] = \"x\" # prepsani objektu s indexem 2\nprint(a)\n\na.append(\"h\") # pridani objektu h na konec\nprint(a)\n\na.insert(2, \"y\") # pridani objektu y na pozici 2\nprint(a)\n\ndel a[2] # odebere objekt na pozici 2\nprint (a)",
"Podmínka If, else, elif\nPodmínky slouží k implementaci logiky. Logika operuje s proměnnou bool, která nabývá pouze hodnot True nebo False.\nVýrazy a jejich vyhodnocení\nNásleduje ukázka vyhodnocení pravdivosti několika výrazů.",
"a = 1\na == 1\n\na == 1\n\nnot a == 1\n\na > 1\n\na >= 1\n\n1 in [1, 2]\n\nnot (1 in [1, 2]) == (not 1 > 0)",
"Podmínky a jejich vyhodnocení\nPodmínka if testuje, zda výraz pravdivé hodnoty - pokud ano, podmínka vykoná svůj kód. Následuje příklad.",
"fruit = \"apple\"\ncolor = \"No color\"\n\nif fruit == \"apple\":\n color = \"green\"\n \ncolor",
"Podmínka else umožňuje nastavit alternativní kód pro případ kdy podmínka if není splněna. Příklad následuje.",
"fruit = \"orange\"\n\nif fruit == \"apple\":\n color = \"red\"\nelse:\n color = \"orange\"\n\ncolor",
"Podmínka elif umožňuje zadat více podmínek pro případ nesplnění podmínky if. Podmínek elif je možné umístit více za jednu podmínku if. Příklad následuje.",
"fruit = \"apple\"\n\nif fruit == \"apple\":\n color = \"red\"\nelif fruit == \"orange\":\n color = \"orange\"\nelif fruit == \"pear\":\n color = \"green\"\nelse:\n color = \"yellow\"\n \ncolor",
"Smyčky\nIterace je jedna z nejčastější operací v programování. Následující ukázky se vztahují k rovnici\n$\\forall i \\in {2,\\ldots,9}.\\ a_i = a_{i-1} + a_{i-2}$.\nFor smyčka\nFor smyčka je navržena pro iterování přes předem daný iterovatelný objekt. Příklad následuje.",
"a = [] # list na vysledky\na.append(1) # prvni pocatecni podminka\na.append(1) # druha pocatecni podminka\nfor i in [2, 3, 4, 5, 6, 7, 8]: # rozsah pres ktery iterovat\n a.append(a[i-1] + a[i-2]) # pridavani vypoctenych polozek do listu\nprint(a)",
"Vylepšení předchozího příkladu následuje.",
"a = [0]*9 # list na vysledky\na[0:2] = [1, 1] # pocatecni podminky\nfor i in range(2,9): # fukce range\n a[i] = a[i-1] + a[i-2] # realizace vypoctu\nprint(a)",
"V případě že je potřeba přerušit smyčku před koncem, je možné použít příkaz break.\nWhile smyčka\nTato smyčka iteruje dokud není splněna podmínka. Příklad následuje.",
"a = [0]*9 \na[0:2] = [1, 1]\ni = 2 # nastaveni pomocne promenne\nwhile i < 9: # iteruj dokud pomocna promenna nesplni podminku\n a[i] = a[i-1] + a[i-2]\n i += 1 # pridej 1 k pomocne promenne\nprint(a)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/asl-ml-immersion | notebooks/launching_into_ml/labs/supplemental/decision_trees_and_random_Forests_in_Python.ipynb | apache-2.0 | [
"Decision Trees and Random Forests in Python\nLearning Objectives\n\nExplore and analyze data using a Pairplot\nTrain a single Decision Tree\nPredict and evaluate the Decision Tree\nCompare the Decision Tree model to a Random Forest\n\nIntroduction\nIn this lab, you explore and analyze data using a Pairplot, train a single Decision Tree, predict and evaluate the Decision Tree, and compare the Decision Tree model to a Random Forest. Recall that the Decision Tree algorithm belongs to the family of supervised learning algorithms. Unlike other supervised learning algorithms, the decision tree algorithm can be used for solving both regression and classification problems too. Simply, the goal of using a Decision Tree is to create a training model that can use to predict the class or value of the target variable by learning simple decision rules inferred from prior data(training data).\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook\nLoad necessary libraries\nWe will start by importing the necessary libraries for this lab.",
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\n%matplotlib inline",
"Get the Data",
"df = pd.read_csv(\"../kyphosis.csv\")\n\ndf.head()",
"Exploratory Data Analysis\nLab Task #1: Check a pairplot for this small dataset.",
"# TODO 1\n# TODO -- Your code here.",
"Train Test Split\nLet's split up the data into a training set and a test set!",
"from sklearn.model_selection import train_test_split\n\nX = df.drop(\"Kyphosis\", axis=1)\ny = df[\"Kyphosis\"]\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)",
"Decision Trees\nLab Task #2: Train a single decision tree.",
"from sklearn.tree import DecisionTreeClassifier\n\ndtree = DecisionTreeClassifier()\n\n# TODO 2\n# TODO -- Your code here.",
"Prediction and Evaluation\nLet's evaluate our decision tree.",
"predictions = dtree.predict(X_test)\n\nfrom sklearn.metrics import classification_report, confusion_matrix\n\n# TODO 3a\n# TODO -- Your code here.\n\n# TODO 3b\nprint(confusion_matrix(y_test, predictions))",
"Tree Visualization\nScikit learn actually has some built-in visualization capabilities for decision trees, you won't use this often and it requires you to install the pydot library, but here is an example of what it looks like and the code to execute this:",
"import pydot\nfrom IPython.display import Image\nfrom six import StringIO\nfrom sklearn.tree import export_graphviz\n\nfeatures = list(df.columns[1:])\nfeatures\n\ndot_data = StringIO()\nexport_graphviz(\n dtree, out_file=dot_data, feature_names=features, filled=True, rounded=True\n)\n\ngraph = pydot.graph_from_dot_data(dot_data.getvalue())\nImage(graph[0].create_png())",
"Random Forests\nLab Task #4: Compare the decision tree model to a random forest.",
"from sklearn.ensemble import RandomForestClassifier\n\nrfc = RandomForestClassifier(n_estimators=100)\nrfc.fit(X_train, y_train)\n\nrfc_pred = rfc.predict(X_test)\n\n# TODO 4a\n# TODO -- Your code here.\n\n# TODO 4b\n# TODO -- Your code here.",
"Copyright 2021 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
JoseGuzman/myIPythonNotebooks | Optimization/Maximum_likelihood_estimation.ipynb | gpl-2.0 | [
"<H2>Parameter estimation by maximum likelihood method<H2>",
"%pylab inline\n\nfrom scipy.stats import norm\nfrom lmfit import minimize, Parameters",
"<H2> Create a normally distributed random variable</H2>",
"# create some data\nmymean = 28.74\nmysigma = 8.33 # standard deviation!\nrv_norm = norm(loc = mymean, scale = mysigma)\ndata = rv_norm.rvs(size = 150)\nplt.hist(data, bins=20, facecolor='red', alpha=.3);\n\nplt.ylabel('Number of events');\nplt.xlim(0,100);",
"<H2> Define a model function</H2>",
"def mynorm(x, params):\n mu, sigma = params\n # scipy implementation\n mynorm = norm(loc = mu, scale = sigma)\n return mynorm.pdf(x)\n\nmynorm(0, [0,1]) # 0.39",
"<H2> Loglikelihood function to be minimize </H2>",
"def loglikelihood(params, data):\n mu = params['mean'].value\n sigma = params['std'].value\n \n l1 = np.log( mynorm(data, [mu, sigma]) ).sum()\n return(-l1) # return negative loglikelihood to minimize\n\nmyfoo = Parameters()\nmyfoo.add('mean', value = 20)\nmyfoo.add('std', value = 5.0)\nloglikelihood(myfoo, data)\n\nmyparams = Parameters()\nmyparams.add('mean', value = 20.3)\nmyparams.add('std', value = 5.0)\n\nout = minimize(fcn = loglikelihood, params=myparams, method='nelder', args=(data,))\nprint(out.userfcn(myparams, data)) # ~523.631337424\n\nfrom lmfit import report_errors\nreport_errors(myparams)",
"The estimated mean and standard deviation should be identical to the mean\nand the standard deviation of the sample population",
"np.mean(data), np.std(data)",
"<H2> Plot histogram and model together </H2>",
"# Compute binwidth\ncounts, binedge = np.histogram(data, bins=20);\n\nbincenter = [0.5 * (binedge[i] + binedge[i+1]) for i in xrange(len(binedge)-1)]\nbinwidth = (max(bincenter) - min(bincenter)) / len(bincenter) \n\n# Adjust PDF function to data\nivar = np.linspace(0, 100, 100)\nparams = [ myparams['mean'].value, myparams['std'].value ]\nmynormpdf = mynorm(ivar, params)*binwidth*len(data)\n\n# Plot everything together\nplt.hist(data, bins=20, facecolor='white', histtype='stepfilled');\n\nplt.plot(ivar, mynormpdf);\nplt.ylabel('Number of events');"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
goddoe/CADL | session-5/session-5-part-1-new.ipynb | apache-2.0 | [
"Session 5: Generative Networks\nAssignment: Generative Adversarial Networks and Recurrent Neural Networks\n<p class=\"lead\">\n<a href=\"https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info\">Creative Applications of Deep Learning with Google's Tensorflow</a><br />\n<a href=\"http://pkmital.com\">Parag K. Mital</a><br />\n<a href=\"https://www.kadenze.com\">Kadenze, Inc.</a>\n</p>\n\nTable of Contents\n<!-- MarkdownTOC autolink=\"true\" autoanchor=\"true\" bracket=\"round\" -->\n\n\nOverview\nLearning Goals\nPart 1 - Generative Adversarial Networks (GAN) / Deep Convolutional GAN (DCGAN)\nIntroduction\nBuilding the Encoder\nBuilding the Discriminator for the Training Samples\nBuilding the Decoder\nBuilding the Generator\nBuilding the Discriminator for the Generated Samples\nGAN Loss Functions\nBuilding the Optimizers w/ Regularization\nLoading a Dataset\nTraining\nEquilibrium\nPart 2 - Variational Auto-Encoding Generative Adversarial Network (VAEGAN)\nBatch Normalization\nBuilding the Encoder\nBuilding the Variational Layer\nBuilding the Decoder\nBuilding VAE/GAN Loss Functions\nCreating the Optimizers\nLoading the Dataset\nTraining\nPart 3 - Latent-Space Arithmetic\nLoading the Pre-Trained Model\nExploring the Celeb Net Attributes\nFind the Latent Encoding for an Attribute\nLatent Feature Arithmetic\nExtensions\nPart 4 - Character-Level Language Model\nPart 5 - Pretrained Char-RNN of Donald Trump\nGetting the Trump Data\nBasic Text Analysis\nLoading the Pre-trained Trump Model\nInference: Keeping Track of the State\nProbabilistic Sampling\nInference: Temperature\nInference: Priming\n\n\nAssignment Submission\n\n<!-- /MarkdownTOC -->\n\n<a name=\"overview\"></a>\nOverview\nThis is certainly the hardest session and will require a lot of time and patience to complete. Also, many elements of this session may require further investigation, including reading of the original papers and additional resources in order to fully grasp their understanding. The models we cover are state of the art and I've aimed to give you something between a practical and mathematical understanding of the material, though it is a tricky balance. I hope for those interested, that you delve deeper into the papers for more understanding. And for those of you seeking just a practical understanding, that these notebooks will suffice.\nThis session covered two of the most advanced generative networks: generative adversarial networks and recurrent neural networks. During the homework, we'll see how these work in more details and try building our own. I am not asking you train anything in this session as both GANs and RNNs take many days to train. However, I have provided pre-trained networks which we'll be exploring. We'll also see how a Variational Autoencoder can be combined with a Generative Adversarial Network to allow you to also encode input data, and I've provided a pre-trained model of this type of model trained on the Celeb Faces dataset. We'll see what this means in more details below.\nAfter this session, you are also required to submit your final project which can combine any of the materials you have learned so far to produce a short 1 minute clip demonstrating any aspect of the course you want to invesitgate further or combine with anything else you feel like doing. This is completely open to you and to encourage your peers to share something that demonstrates creative thinking. 
Be sure to keep the final project in mind while browsing through this notebook!\n<a name=\"learning-goals\"></a>\nLearning Goals\n\nLearn to build the components of a Generative Adversarial Network and how it is trained\nLearn to combine the Variational Autoencoder with a Generative Adversarial Network\nLearn to use latent space arithmetic with a pre-trained VAE/GAN network\nLearn to build the components of a Character Recurrent Neural Network and how it is trained\nLearn to sample from a pre-trained CharRNN model",
"# First check the Python version\nimport sys\nif sys.version_info < (3,4):\n print('You are running an older version of Python!\\n\\n',\n 'You should consider updating to Python 3.4.0 or',\n 'higher as the libraries built for this course',\n 'have only been tested in Python 3.4 and higher.\\n')\n print('Try installing the Python 3.5 version of anaconda'\n 'and then restart `jupyter notebook`:\\n',\n 'https://www.continuum.io/downloads\\n\\n')\n\n# Now get necessary libraries\ntry:\n import os\n import numpy as np\n import matplotlib.pyplot as plt\n from skimage.transform import resize\n from skimage import data\n from scipy.misc import imresize\n from scipy.ndimage.filters import gaussian_filter\n import IPython.display as ipyd\n import tensorflow as tf\n from libs import utils, gif, datasets, dataset_utils, nb_utils\nexcept ImportError as e:\n print(\"Make sure you have started notebook in the same directory\",\n \"as the provided zip file which includes the 'libs' folder\",\n \"and the file 'utils.py' inside of it. You will NOT be able\",\n \"to complete this assignment unless you restart jupyter\",\n \"notebook inside the directory created by extracting\",\n \"the zip file or cloning the github repo.\")\n print(e)\n\n# We'll tell matplotlib to inline any drawn figures like so:\n%matplotlib inline\nplt.style.use('ggplot')\n\n# Bit of formatting because I don't like the default inline code style:\nfrom IPython.core.display import HTML\nHTML(\"\"\"<style> .rendered_html code { \n padding: 2px 4px;\n color: #c7254e;\n background-color: #f9f2f4;\n border-radius: 4px;\n} </style>\"\"\")",
"<a name=\"part-1---generative-adversarial-networks-gan--deep-convolutional-gan-dcgan\"></a>\nPart 1 - Generative Adversarial Networks (GAN) / Deep Convolutional GAN (DCGAN)\n<a name=\"introduction\"></a>\nIntroduction\nRecall from the lecture that a Generative Adversarial Network is two networks, a generator and a discriminator. The \"generator\" takes a feature vector and decodes this feature vector to become an image, exactly like the decoder we built in Session 3's Autoencoder. The discriminator is exactly like the encoder of the Autoencoder, except it can only have 1 value in the final layer. We use a sigmoid to squash this value between 0 and 1, and then interpret the meaning of it as: 1, the image you gave me was real, or 0, the image you gave me was generated by the generator, it's a FAKE! So the discriminator is like an encoder which takes an image and then perfoms lie detection. Are you feeding me lies? Or is the image real? \nConsider the AE and VAE we trained in Session 3. The loss function operated partly on the input space. It said, per pixel, what is the difference between my reconstruction and the input image? The l2-loss per pixel. Recall at that time we suggested that this wasn't the best idea because per-pixel differences aren't representative of our own perception of the image. One way to consider this is if we had the same image, and translated it by a few pixels. We would not be able to tell the difference, but the per-pixel difference between the two images could be enormously high.\nThe GAN does not use per-pixel difference. Instead, it trains a distance function: the discriminator. The discriminator takes in two images, the real image and the generated one, and learns what a similar image should look like! That is really the amazing part of this network and has opened up some very exciting potential future directions for unsupervised learning. Another network that also learns a distance function is known as the siamese network. We didn't get into this network in this course, but it is commonly used in facial verification, or asserting whether two faces are the same or not.\nThe GAN network is notoriously a huge pain to train! For that reason, we won't actually be training it. Instead, we'll discuss an extension to this basic network called the VAEGAN which uses the VAE we created in Session 3 along with the GAN. We'll then train that network in Part 2. For now, let's stick with creating the GAN.\nLet's first create the two networks: the discriminator and the generator. We'll first begin by building a general purpose encoder which we'll use for our discriminator. Recall that we've already done this in Session 3. What we want is for the input placeholder to be encoded using a list of dimensions for each of our encoder's layers. In the case of a convolutional network, our list of dimensions should correspond to the number of output filters. We also need to specify the kernel heights and widths for each layer's convolutional network.\nWe'll first need a placeholder. This will be the \"real\" image input to the discriminator and the discrimintator will encode this image into a single value, 0 or 1, saying, yes this is real, or no, this is not real.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# We'll keep a variable for the size of our image.\nn_pixels = 32\nn_channels = 3\ninput_shape = [None, n_pixels, n_pixels, n_channels]\n\n# And then create the input image placeholder\nX = tf.placeholder(dtype=tf.float32, name='X', shape=[None, n_pixels, n_pixels,n_channels])",
"<a name=\"building-the-encoder\"></a>\nBuilding the Encoder\nLet's build our encoder just like in Session 3. We'll create a function which accepts the input placeholder, a list of dimensions describing the number of convolutional filters in each layer, and a list of filter sizes to use for the kernel sizes in each convolutional layer. We'll also pass in a parameter for which activation function to apply.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"def encoder(x, channels, filter_sizes, activation=tf.nn.tanh, reuse=None):\n # Set the input to a common variable name, h, for hidden layer\n h = x\n\n # Now we'll loop over the list of dimensions defining the number\n # of output filters in each layer, and collect each hidden layer\n hs = []\n for layer_i in range(len(channels)):\n \n with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):\n # Convolve using the utility convolution function\n # This requirs the number of output filter,\n # and the size of the kernel in `k_h` and `k_w`.\n # By default, this will use a stride of 2, meaning\n # each new layer will be downsampled by 2.\n h, W = utils.conv2d(h, channels[layer_i], k_h = filter_sizes[layer_i], k_w=filter_sizes[layer_i],reuse=reuse)\n\n\n # Now apply the activation function\n h = activation(h)\n \n # Store each hidden layer\n hs.append(h)\n\n # Finally, return the encoding.\n return h, hs",
"<a name=\"building-the-discriminator-for-the-training-samples\"></a>\nBuilding the Discriminator for the Training Samples\nFinally, let's take the output of our encoder, and make sure it has just 1 value by using a fully connected layer. We can use the libs/utils module's, linear layer to do this, which will also reshape our 4-dimensional tensor to a 2-dimensional one prior to using the fully connected layer.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"def discriminator(X,\n channels=[50, 50, 50, 50],\n filter_sizes=[4, 4, 4, 4],\n activation=utils.lrelu,\n reuse=None):\n\n # We'll scope these variables to \"discriminator_real\"\n with tf.variable_scope('discriminator', reuse=reuse):\n # Encode X:\n H, Hs = encoder(X, channels, filter_sizes, activation, reuse)\n \n # Now make one last layer with just 1 output. We'll\n # have to reshape to 2-d so that we can create a fully\n # connected layer:\n shape = H.get_shape().as_list()\n H = tf.reshape(H, [-1, shape[1] * shape[2] * shape[3]])\n \n # Now we can connect our 2D layer to a single neuron output w/\n # a sigmoid activation:\n D, W = utils.linear(H, 1, activation=tf.nn.sigmoid, reuse=reuse, name='FCNN')\n return D",
"Now let's create the discriminator for the real training data coming from X:",
"D_real = discriminator(X)",
"And we can see what the network looks like now:",
"graph = tf.get_default_graph()\nnb_utils.show_graph(graph.as_graph_def())",
"<a name=\"building-the-decoder\"></a>\nBuilding the Decoder\nNow we're ready to build the Generator, or decoding network. This network takes as input a vector of features and will try to produce an image that looks like our training data. We'll send this synthesized image to our discriminator which we've just built above.\nLet's start by building the input to this network. We'll need a placeholder for the input features to this network. We have to be mindful of how many features we have. The feature vector for the Generator will eventually need to form an image. What we can do is create a 1-dimensional vector of values for each element in our batch, giving us [None, n_features]. We can then reshape this to a 4-dimensional Tensor so that we can build a decoder network just like in Session 3.\nBut how do we assign the values from our 1-d feature vector (or 2-d tensor with Batch number of them) to the 3-d shape of an image (or 4-d tensor with Batch number of them)? We have to go from the number of features in our 1-d feature vector, let's say n_latent to height x width x channels through a series of convolutional transpose layers. One way to approach this is think of the reverse process. Starting from the final decoding of height x width x channels, I will use convolution with a stride of 2, so downsample by 2 with each new layer. So the second to last decoder layer would be, height // 2 x width // 2 x ?. If I look at it like this, I can use the variable n_pixels denoting the height and width to build my decoder, and set the channels to whatever I want.\nLet's start with just our 2-d placeholder which will have None x n_features, then convert it to a 4-d tensor ready for the decoder part of the network (a.k.a. the generator).",
"# We'll need some variables first. This will be how many\n# channels our generator's feature vector has. Experiment w/\n# this if you are training your own network.\nn_code = 16\n\n# And in total how many feature it has, including the spatial dimensions.\nn_latent = (n_pixels // 16) * (n_pixels // 16) * n_code\n\n# Let's build the 2-D placeholder, which is the 1-d feature vector for every\n# element in our batch. We'll then reshape this to 4-D for the decoder.\nZ = tf.placeholder(name='Z', shape=[None, n_latent], dtype=tf.float32)\n\n# Now we can reshape it to input to the decoder. Here we have to\n# be mindful of the height and width as described before. We need\n# to make the height and width a factor of the final height and width\n# that we want. Since we are using strided convolutions of 2, then\n# we can say with 4 layers, that first decoder's layer should be:\n# n_pixels / 2 / 2 / 2 / 2, or n_pixels / 16:\nZ_tensor = tf.reshape(Z, [-1, n_pixels // 16, n_pixels // 16, n_code])",
"Now we'll build the decoder in much the same way as we built our encoder. And exactly as we've done in Session 3! This requires one additional parameter \"channels\" which is how many output filters we want for each net layer. We'll interpret the dimensions as the height and width of the tensor in each new layer, the channels is how many output filters we want for each net layer, and the filter_sizes is the size of the filters used for convolution. We'll default to using a stride of two which will downsample each layer. We're also going to collect each hidden layer h in a list. We'll end up needing this for Part 2 when we combine the variational autoencoder w/ the generative adversarial network.",
"def decoder(z, dimensions, channels, filter_sizes,\n activation=tf.nn.relu, reuse=None):\n h = z\n hs = []\n for layer_i in range(len(dimensions)):\n with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):\n h, W = utils.deconv2d(x=h,\n n_output_h=dimensions[layer_i],\n n_output_w=dimensions[layer_i],\n n_output_ch=channels[layer_i],\n k_h=filter_sizes[layer_i],\n k_w=filter_sizes[layer_i],\n reuse=reuse)\n h = activation(h)\n hs.append(h)\n return h, hs",
"<a name=\"building-the-generator\"></a>\nBuilding the Generator\nNow we're ready to use our decoder to take in a vector of features and generate something that looks like our training images. We have to ensure that the last layer produces the same output shape as the discriminator's input. E.g. we used a [None, 64, 64, 3] input to the discriminator, so our generator needs to also output [None, 64, 64, 3] tensors. In other words, we have to ensure the last element in our dimensions list is 64, and the last element in our channels list is 3.",
"# Explore these parameters.\ndef generator(Z,\n dimensions=[n_pixels//8, n_pixels//4, n_pixels//2, n_pixels],\n channels=[50, 50, 50, n_channels],\n filter_sizes=[4, 4, 4, 4],\n activation=utils.lrelu):\n\n with tf.variable_scope('generator'):\n G, Hs = decoder(Z_tensor, dimensions, channels, filter_sizes, activation)\n\n return G",
"Now let's call the generator function with our input placeholder Z. This will take our feature vector and generate something in the shape of an image.",
"G = generator(Z)\n\ngraph = tf.get_default_graph()\nnb_utils.show_graph(graph.as_graph_def())",
"<a name=\"building-the-discriminator-for-the-generated-samples\"></a>\nBuilding the Discriminator for the Generated Samples\nLastly, we need another discriminator which takes as input our generated images. Recall the discriminator that we have made only takes as input our placeholder X which is for our actual training samples. We'll use the same function for creating our discriminator and reuse the variables we already have. This is the crucial part! We aren't making new trainable variables, but reusing the ones we have. We just create a new set of operations that takes as input our generated image. So we'll have a whole new set of operations exactly like the ones we have created for our first discriminator. But we are going to use the exact same variables as our first discriminator, so that we optimize the same values.",
"D_fake = discriminator(G, reuse=True)",
"Now we can look at the graph and see the new discriminator inside the node for the discriminator. You should see the original discriminator and a new graph of a discriminator within it, but all the weights are shared with the original discriminator.",
"nb_utils.show_graph(graph.as_graph_def())",
"<a name=\"gan-loss-functions\"></a>\nGAN Loss Functions\nWe now have all the components to our network. We just have to train it. This is the notoriously tricky bit. We will have 3 different loss measures instead of our typical network with just a single loss. We'll later connect each of these loss measures to two optimizers, one for the generator and another for the discriminator, and then pin them against each other and see which one wins! Exciting times!\nRecall from Session 3's Supervised Network, we created a binary classification task: music or speech. We again have a binary classification task: real or fake. So our loss metric will again use the binary cross entropy to measure the loss of our three different modules: the generator, the discriminator for our real images, and the discriminator for our generated images.\nTo find out the loss function for our generator network, answer the question, what makes the generator successful? Successfully fooling the discriminator. When does that happen? When the discriminator for the fake samples produces all ones. So our binary cross entropy measure will measure the cross entropy with our predicted distribution and the true distribution which has all ones.",
"with tf.variable_scope('loss/generator'):\n loss_G = tf.reduce_mean(utils.binary_cross_entropy(D_fake, tf.ones_like(D_fake)))",
"What we've just written is a loss function for our generator. The generator is optimized when the discriminator for the generated samples produces all ones. In contrast to the generator, the discriminator will have 2 measures to optimize. One which is the opposite of what we have just written above, as well as 1 more measure for the real samples. Try writing these two losses and we'll combine them using their average. We want to optimize the Discriminator for the real samples producing all 1s, and the Discriminator for the fake samples producing all 0s:\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"with tf.variable_scope('loss/discriminator/real'):\n loss_D_real = utils.binary_cross_entropy(D_real, tf.zeros_like(D_real))\nwith tf.variable_scope('loss/discriminator/fake'):\n loss_D_fake = utils.binary_cross_entropy(D_fake, tf.ones_like(D_fake))\nwith tf.variable_scope('loss/discriminator'):\n loss_D = tf.reduce_mean((loss_D_real + loss_D_fake) / 2)\n\nnb_utils.show_graph(graph.as_graph_def())",
"With our loss functions, we can create an optimizer for the discriminator and generator:\n<a name=\"building-the-optimizers-w-regularization\"></a>\nBuilding the Optimizers w/ Regularization\nWe're almost ready to create our optimizers. We just need to do one extra thing. Recall that our loss for our generator has a flow from the generator through the discriminator. If we are training both the generator and the discriminator, we have two measures which both try to optimize the discriminator, but in opposite ways: the generator's loss would try to optimize the discriminator to be bad at its job, and the discriminator's loss would try to optimize it to be good at its job. This would be counter-productive, trying to optimize opposing losses. What we want is for the generator to get better, and the discriminator to get better. Not for the discriminator to get better, then get worse, then get better, etc... The way we do this is when we optimize our generator, we let the gradient flow through the discriminator, but we do not update the variables in the discriminator. Let's try and grab just the discriminator variables and just the generator variables below:\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# Grab just the variables corresponding to the discriminator\n# and just the generator:\nvars_d = [v for v in tf.trainable_variables()\n if v.name.startswith('discriminator')]\nprint('Training discriminator variables:')\n[print(v.name) for v in tf.trainable_variables()\n if v.name.startswith('discriminator')]\n\nvars_g = [v for v in tf.trainable_variables()\n if v.name.startswith('generator')\nprint('Training generator variables:')\n[print(v.name) for v in tf.trainable_variables()\n if v.name.startswith('generator')]",
"We can also apply regularization to our network. This will penalize weights in the network for growing too large.",
"d_reg = tf.contrib.layers.apply_regularization(\n tf.contrib.layers.l2_regularizer(1e-6), vars_d)\ng_reg = tf.contrib.layers.apply_regularization(\n tf.contrib.layers.l2_regularizer(1e-6), vars_g)",
"The last thing you may want to try is creating a separate learning rate for each of your generator and discriminator optimizers like so:",
"learning_rate = 0.0001\n\nlr_g = tf.placeholder(tf.float32, shape=[], name='learning_rate_g')\nlr_d = tf.placeholder(tf.float32, shape=[], name='learning_rate_d')",
"Now you can feed the placeholders to your optimizers. If you run into errors creating these, then you likely have a problem with your graph's definition! Be sure to go back and reset the default graph and check the sizes of your different operations/placeholders.\nWith your optimizers, you can now train the network by \"running\" the optimizer variables with your session. You'll need to set the var_list parameter of the minimize function to only train the variables for the discriminator and same for the generator's optimizer:\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"opt_g = tf.train.AdamOptimizer(learning_rate=lr_g).minimize(loss_G + g_reg, var_list=vars_g)\n\nopt_d = tf.train.AdamOptimizer(learning_rate=lr_d).minimize(loss_D + d_reg, var_list=vars_d)",
"<a name=\"loading-a-dataset\"></a>\nLoading a Dataset\nLet's use the Celeb Dataset just for demonstration purposes. In Part 2, you can explore using your own dataset. This code is exactly the same as we did in Session 3's homework with the VAE.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# You'll want to change this to your own data if you end up training your own GAN.\nbatch_size = 64\nn_epochs = 1\ncrop_shape = [n_pixels, n_pixels, 3]\ncrop_factor = 0.8\ninput_shape = [218, 178, 3]\n\nfiles = datasets.CELEB()\nbatch = dataset_utils.create_input_pipeline(\n files=files,\n batch_size=batch_size,\n n_epochs=n_epochs,\n crop_shape=crop_shape,\n crop_factor=crop_factor,\n shape=input_shape)",
"<a name=\"training\"></a>\nTraining\nWe'll now go through the setup of training the network. We won't actually spend the time to train the network but just see how it would be done. This is because in Part 2, we'll see an extension to this network which makes it much easier to train.",
"ckpt_name = './gan.ckpt'\n\nsess = tf.Session()\nsaver = tf.train.Saver()\nsess.run(tf.global_variables_initializer())\ncoord = tf.train.Coordinator()\ntf.get_default_graph().finalize()\nthreads = tf.train.start_queue_runners(sess=sess, coord=coord)\n\nif os.path.exists(ckpt_name + '.index') or os.path.exists(ckpt_name):\n saver.restore(sess, ckpt_name)\n print(\"VAE model restored.\")\n\nn_examples = 10\n\nzs = np.random.uniform(0.0, 1.0, [4, n_latent]).astype(np.float32)\nzs = utils.make_latent_manifold(zs, n_examples)",
"<a name=\"equilibrium\"></a>\nEquilibrium\nEquilibrium is at 0.693. Why? Consider what the cost is measuring, the binary cross entropy. If we have random guesses, then we have as many 0s as we have 1s. And on average, we'll be 50% correct. The binary cross entropy is:\n\\begin{align}\n\\sum_i \\text{X}_i * \\text{log}(\\tilde{\\text{X}}_i) + (1 - \\text{X}_i) * \\text{log}(1 - \\tilde{\\text{X}}_i)\n\\end{align}\nWhich is written out in tensorflow as:\npython\n(-(x * tf.log(z) + (1. - x) * tf.log(1. - z)))\nWhere x is the discriminator's prediction of the true distribution, in the case of GANs, the input images, and z is the discriminator's prediction of the generated images corresponding to the mathematical notation of $\\tilde{\\text{X}}$. We sum over all features, but in the case of the discriminator, we have just 1 feature, the guess of whether it is a true image or not. If our discriminator guesses at chance, i.e. 0.5, then we'd have something like:\n\\begin{align}\n0.5 * \\text{log}(0.5) + (1 - 0.5) * \\text{log}(1 - 0.5) = -0.693\n\\end{align}\nSo this is what we'd expect at the start of learning and from a game theoretic point of view, where we want things to remain. So unlike our previous networks, where our loss continues to drop closer and closer to 0, we want our loss to waver around this value as much as possible, and hope for the best.",
"equilibrium = 0.693\nmargin = 0.2",
"When we go to train the network, we switch back and forth between each optimizer, feeding in the appropriate values for each optimizer. The opt_g optimizer only requires the Z and lr_g placeholders, while the opt_d optimizer requires the X, Z, and lr_d placeholders.\nDon't train this network for very long because GANs are a huge pain to train and require a lot of fiddling. They very easily get stuck in their adversarial process, or get overtaken by one or the other, resulting in a useless model. What you need to develop is a steady equilibrium that optimizes both. That will likely take two weeks just trying to get the GAN to train and not have enough time for the rest of the assignment. They require a lot of memory/cpu and can take many days to train once you have settled on an architecture/training process/dataset. Just let it run for a short time and then interrupt the kernel (don't restart!), then continue to the next cell.\nFrom there, we'll go over an extension to the GAN which uses a VAE like we used in Session 3. By using this extra network, we can actually train a better model in a fraction of the time and with much more ease! But the network's definition is a bit more complicated. Let's see how the GAN is trained first and then we'll train the VAE/GAN network instead. While training, the \"real\" and \"fake\" cost will be printed out. See how this cost wavers around the equilibrium and how we enforce it to try and stay around there by including a margin and some simple logic for updates. This is highly experimental and the research does not have a good answer for the best practice on how to train a GAN. I.e., some people will set the learning rate to some ratio of the performance between fake/real networks, others will have a fixed update schedule but train the generator twice and the discriminator only once.",
"t_i = 0\nbatch_i = 0\nepoch_i = 0\nn_files = len(files)\n\nif not os.path.exists('imgs'):\n os.makedirs('imgs')\n \nwhile epoch_i < n_epochs:\n\n batch_i += 1\n batch_xs = sess.run(batch) / 255.0\n batch_zs = np.random.uniform(\n 0.0, 1.0, [batch_size, n_latent]).astype(np.float32)\n\n real_cost, fake_cost = sess.run([\n loss_D_real, loss_D_fake],\n feed_dict={\n X: batch_xs,\n Z: batch_zs})\n real_cost = np.mean(real_cost)\n fake_cost = np.mean(fake_cost)\n \n if (batch_i % 20) == 0:\n print(batch_i, 'real:', real_cost, '/ fake:', fake_cost)\n\n gen_update = True\n dis_update = True\n\n if real_cost > (equilibrium + margin) or \\\n fake_cost > (equilibrium + margin):\n gen_update = False\n\n if real_cost < (equilibrium - margin) or \\\n fake_cost < (equilibrium - margin):\n dis_update = False\n\n if not (gen_update or dis_update):\n gen_update = True\n dis_update = True\n\n if gen_update:\n sess.run(opt_g,\n feed_dict={\n Z: batch_zs,\n lr_g: learning_rate})\n if dis_update:\n sess.run(opt_d,\n feed_dict={\n X: batch_xs,\n Z: batch_zs,\n lr_d: learning_rate})\n\n if batch_i % (n_files // batch_size) == 0:\n batch_i = 0\n epoch_i += 1\n print('---------- EPOCH:', epoch_i)\n \n # Plot example reconstructions from latent layer\n recon = sess.run(G, feed_dict={Z: zs})\n\n recon = np.clip(recon, 0, 1)\n m1 = utils.montage(recon.reshape([-1] + crop_shape),\n 'imgs/manifold_%08d.png' % t_i)\n\n recon = sess.run(G, feed_dict={Z: batch_zs})\n\n recon = np.clip(recon, 0, 1)\n m2 = utils.montage(recon.reshape([-1] + crop_shape),\n 'imgs/reconstructions_%08d.png' % t_i)\n \n fig, axs = plt.subplots(1, 2, figsize=(15, 10))\n axs[0].imshow(m1)\n axs[1].imshow(m2)\n plt.show()\n t_i += 1\n\n # Save the variables to disk.\n save_path = saver.save(sess, \"./\" + ckpt_name,\n global_step=batch_i,\n write_meta_graph=False)\n print(\"Model saved in file: %s\" % save_path)\n\n# Tell all the threads to shutdown.\ncoord.request_stop()\n\n# Wait until all threads have finished.\ncoord.join(threads)\n\n# Clean up the session.\nsess.close()",
"<a name=\"part-2---variational-auto-encoding-generative-adversarial-network-vaegan\"></a>\nPart 2 - Variational Auto-Encoding Generative Adversarial Network (VAEGAN)\nIn our definition of the generator, we started with a feature vector, Z. This feature vector was not connected to anything before it. Instead, we had to randomly create its values using a random number generator of its n_latent values from -1 to 1, and this range was chosen arbitrarily. It could have been 0 to 1, or -3 to 3, or 0 to 100. In any case, the network would have had to learn to transform those values into something that looked like an image. There was no way for us to take an image, and find the feature vector that created it. In other words, it was not possible for us to encode an image.\nThe closest thing to an encoding we had was taking an image and feeding it to the discriminator, which would output a 0 or 1. But what if we had another network that allowed us to encode an image, and then we used this network for both the discriminator and generative parts of the network? That's the basic idea behind the VAEGAN: https://arxiv.org/abs/1512.09300. It is just like the regular GAN, except we also use an encoder to create our feature vector Z.\nWe then get the best of both worlds: a GAN that looks more or less the same, but uses the encoding from an encoder instead of an arbitrary feature vector; and an autoencoder that can model an input distribution using a trained distance function, the discriminator, leading to nicer encodings/decodings.\nLet's try to build it! Refer to the paper for the intricacies and a great read. Luckily, by building the encoder and decoder functions, we're almost there. We just need a few more components and will change these slightly.\nLet's reset our graph and recompose our network as a VAEGAN:",
"tf.reset_default_graph()",
"<a name=\"batch-normalization\"></a>\nBatch Normalization\nYou may have noticed from the VAE code that I've used something called \"batch normalization\". This is a pretty effective technique for regularizing the training of networks by \"reducing internal covariate shift\". The basic idea is that given a minibatch, we optimize the gradient for this small sample of the greater population. But this small sample may have different characteristics than the entire population's gradient. Consider the most extreme case, a minibatch of 1. In this case, we overfit our gradient to optimize the gradient of the single observation. If our minibatch is too large, say the size of the entire population, we aren't able to manuvuer the loss manifold at all and the entire loss is averaged in a way that doesn't let us optimize anything. What we want to do is find a happy medium between a too-smooth loss surface (i.e. every observation), and a very peaky loss surface (i.e. a single observation). Up until now we only used mini-batches to help with this. But we can also approach it by \"smoothing\" our updates between each mini-batch. That would effectively smooth the manifold of the loss space. Those of you familiar with signal processing will see this as a sort of low-pass filter on the gradient updates.\nIn order for us to use batch normalization, we need another placeholder which is a simple boolean: True or False, denoting when we are training. We'll use this placeholder to conditionally update batch normalization's statistics required for normalizing our minibatches. Let's create the placeholder and then I'll get into how to use this.",
"# placeholder for batch normalization\nis_training = tf.placeholder(tf.bool, name='istraining')",
"The original paper that introduced the idea suggests to use batch normalization \"pre-activation\", meaning after the weight multipllication or convolution, and before the nonlinearity. We can use the tensorflow.contrib.layers.batch_norm module to apply batch normalization to any input tensor give the tensor and the placeholder defining whether or not we are training. Let's use this module and you can inspect the code inside the module in your own time if it interests you.",
"from tensorflow.contrib.layers import batch_norm\nhelp(batch_norm)",
"<a name=\"building-the-encoder-1\"></a>\nBuilding the Encoder\nWe can now change our encoder to accept the is_training placeholder and apply batch_norm just before the activation function is applied:",
"def encoder(x, is_training, channels, filter_sizes, activation=tf.nn.tanh, reuse=None):\n # Set the input to a common variable name, h, for hidden layer\n h = x\n\n print('encoder/input:', h.get_shape().as_list())\n # Now we'll loop over the list of dimensions defining the number\n # of output filters in each layer, and collect each hidden layer\n hs = []\n for layer_i in range(len(channels)):\n \n with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):\n # Convolve using the utility convolution function\n # This requirs the number of output filter,\n # and the size of the kernel in `k_h` and `k_w`.\n # By default, this will use a stride of 2, meaning\n # each new layer will be downsampled by 2.\n h, W = utils.conv2d(h, channels[layer_i],\n k_h=filter_sizes[layer_i],\n k_w=filter_sizes[layer_i],\n d_h=2,\n d_w=2,\n reuse=reuse)\n \n h = batch_norm(h, is_training=is_training)\n\n # Now apply the activation function\n h = activation(h)\n print('layer:', layer_i, ', shape:', h.get_shape().as_list())\n \n # Store each hidden layer\n hs.append(h)\n\n # Finally, return the encoding.\n return h, hs",
"Let's now create the input to the network using a placeholder. We can try a slightly larger image this time. But be careful experimenting with much larger images as this is a big network.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"n_pixels = 64\nn_channels = 3\ninput_shape = [None, n_pixels, n_pixels, n_channels]\n\n# placeholder for the input to the network\nX = tf.placeholder(...)",
"And now we'll connect the input to an encoder network. We'll also use the tf.nn.elu activation instead. Explore other activations but I've found this to make the training much faster (e.g. 10x faster at least!). See the paper for more details: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)\n\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"channels = [64, 64, 64]\nfilter_sizes = [5, 5, 5]\nactivation = tf.nn.elu\nn_hidden = 128\n\nwith tf.variable_scope('encoder'):\n H, Hs = encoder(...\n Z = utils.linear(H, n_hidden)[0]",
"<a name=\"building-the-variational-layer\"></a>\nBuilding the Variational Layer\nIn Session 3, we introduced the idea of Variational Bayes when we used the Variational Auto Encoder. The variational bayesian approach requires a richer understanding of probabilistic graphical models and bayesian methods which we we're not able to go over in this course (it requires a few courses all by itself!). For that reason, please treat this as a \"black box\" in this course.\nFor those of you that are more familiar with graphical models, Variational Bayesian methods attempt to model an approximate joint distribution of $Q(Z)$ using some distance function to the true distribution $P(X)$. Kingma and Welling show how this approach can be used in a graphical model resembling an autoencoder and can be trained using KL-Divergence, or $KL(Q(Z) || P(X))$. The distribution Q(Z) is the variational distribution, and attempts to model the lower-bound of the true distribution $P(X)$ through the minimization of the KL-divergence. Another way to look at this is the encoder of the network is trying to model the parameters of a known distribution, the Gaussian Distribution, through a minimization of this lower bound. We assume that this distribution resembles the true distribution, but it is merely a simplification of the true distribution. To learn more about this, I highly recommend picking up the book by Christopher Bishop called \"Pattern Recognition and Machine Learning\" and reading the original Kingma and Welling paper on Variational Bayes.\nNow back to coding, we'll create a general variational layer that does exactly the same thing as our VAE in session 3. Treat this as a black box if you are unfamiliar with the math. It takes an input encoding, h, and an integer, n_code defining how many latent Gaussians to use to model the latent distribution. In return, we get the latent encoding from sampling the Gaussian layer, z, the mean and log standard deviation, as well as the prior loss, loss_z.",
"def variational_bayes(h, n_code):\n # Model mu and log(\\sigma)\n z_mu = tf.nn.tanh(utils.linear(h, n_code, name='mu')[0])\n z_log_sigma = 0.5 * tf.nn.tanh(utils.linear(h, n_code, name='log_sigma')[0])\n\n # Sample from noise distribution p(eps) ~ N(0, 1)\n epsilon = tf.random_normal(tf.stack([tf.shape(h)[0], n_code]))\n\n # Sample from posterior\n z = z_mu + tf.multiply(epsilon, tf.exp(z_log_sigma))\n\n # Measure loss\n loss_z = -0.5 * tf.reduce_sum(\n 1.0 + 2.0 * z_log_sigma - tf.square(z_mu) - tf.exp(2.0 * z_log_sigma),\n 1)\n\n return z, z_mu, z_log_sigma, loss_z",
"Let's connect this layer to our encoding, and keep all the variables it returns. Treat this as a black box if you are unfamiliar with variational bayes!\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# Experiment w/ values between 2 - 100\n# depending on how difficult the dataset is\nn_code = 32\n\nwith tf.variable_scope('encoder/variational'):\n Z, Z_mu, Z_log_sigma, loss_Z = variational_bayes(h=Z, n_code=n_code)",
"<a name=\"building-the-decoder-1\"></a>\nBuilding the Decoder\nIn the GAN network, we built a decoder and called it the generator network. Same idea here. We can use these terms interchangeably. Before we connect our latent encoding, Z to the decoder, we'll implement batch norm in our decoder just like we did with the encoder. This is a simple fix: add a second argument for is_training and then apply batch normalization just after the deconv2d operation and just before the nonlinear activation.",
"def decoder(z, is_training, dimensions, channels, filter_sizes,\n activation=tf.nn.elu, reuse=None):\n h = z\n for layer_i in range(len(dimensions)):\n with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):\n h, W = utils.deconv2d(x=h,\n n_output_h=dimensions[layer_i],\n n_output_w=dimensions[layer_i],\n n_output_ch=channels[layer_i],\n k_h=filter_sizes[layer_i],\n k_w=filter_sizes[layer_i],\n reuse=reuse)\n h = batch_norm(h, is_training=is_training)\n h = activation(h)\n return h",
"Now we'll build a decoder just like in Session 3, and just like our Generator network in Part 1. In Part 1, we created Z as a placeholder which we would have had to feed in as random values. However, now we have an explicit coding of an input image in X stored in Z by having created the encoder network.",
"dimensions = [n_pixels // 8, n_pixels // 4, n_pixels // 2, n_pixels]\nchannels = [30, 30, 30, n_channels]\nfilter_sizes = [4, 4, 4, 4]\nactivation = tf.nn.elu\nn_latent = n_code * (n_pixels // 16)**2\n\nwith tf.variable_scope('generator'):\n Z_decode = utils.linear(\n Z, n_output=n_latent, name='fc', activation=activation)[0]\n Z_decode_tensor = tf.reshape(\n Z_decode, [-1, n_pixels//16, n_pixels//16, n_code], name='reshape')\n G = decoder(\n Z_decode_tensor, is_training, dimensions,\n channels, filter_sizes, activation)",
"Now we need to build our discriminators. We'll need to add a parameter for the is_training placeholder. We're also going to keep track of every hidden layer in the discriminator. Our encoder already returns the Hs of each layer. Alternatively, we could poll the graph for each layer in the discriminator and ask for the correspond layer names. We're going to need these layers when building our costs.",
"def discriminator(X,\n is_training,\n channels=[50, 50, 50, 50],\n filter_sizes=[4, 4, 4, 4],\n activation=tf.nn.elu,\n reuse=None):\n\n # We'll scope these variables to \"discriminator_real\"\n with tf.variable_scope('discriminator', reuse=reuse):\n H, Hs = encoder(\n X, is_training, channels, filter_sizes, activation, reuse)\n shape = H.get_shape().as_list()\n H = tf.reshape(\n H, [-1, shape[1] * shape[2] * shape[3]])\n D, W = utils.linear(\n x=H, n_output=1, activation=tf.nn.sigmoid, name='fc', reuse=reuse)\n return D, Hs",
"Recall the regular GAN and DCGAN required 2 discriminators: one for the generated samples in Z, and one for the input samples in X. We'll do the same thing here. One discriminator for the real input data, X, which the discriminator will try to predict as 1s, and another discriminator for the generated samples that go from X through the encoder to Z, and finally through the decoder to G. The discriminator will be trained to try and predict these as 0s, whereas the generator will be trained to try and predict these as 1s.",
"D_real, Hs_real = discriminator(X, is_training)\nD_fake, Hs_fake = discriminator(G, is_training, reuse=True)",
"<a name=\"building-vaegan-loss-functions\"></a>\nBuilding VAE/GAN Loss Functions\nLet's now see how we can compose our loss. We have 3 losses for our discriminator. Along with measuring the binary cross entropy between each of them, we're going to also measure each layer's loss from our two discriminators using an l2-loss, and this will form our loss for the log likelihood measure. The details of how these are constructed are explained in more details in the paper: https://arxiv.org/abs/1512.09300 - please refer to this paper for more details that are way beyond the scope of this course! One parameter within this to pay attention to is gamma, which the authors of the paper suggest control the weighting between content and style, just like in Session 4's Style Net implementation.",
"with tf.variable_scope('loss'):\n # Loss functions\n loss_D_llike = 0\n for h_real, h_fake in zip(Hs_real, Hs_fake):\n loss_D_llike += tf.reduce_sum(tf.squared_difference(\n utils.flatten(h_fake), utils.flatten(h_real)), 1)\n\n eps = 1e-12\n loss_real = tf.log(D_real + eps)\n loss_fake = tf.log(1 - D_fake + eps)\n loss_GAN = tf.reduce_sum(loss_real + loss_fake, 1)\n \n gamma = 0.75\n loss_enc = tf.reduce_mean(loss_Z + loss_D_llike)\n loss_dec = tf.reduce_mean(gamma * loss_D_llike - loss_GAN)\n loss_dis = -tf.reduce_mean(loss_GAN)\n\nnb_utils.show_graph(tf.get_default_graph().as_graph_def())",
"<a name=\"creating-the-optimizers\"></a>\nCreating the Optimizers\nWe now have losses for our encoder, decoder, and discriminator networks. We can connect each of these to their own optimizer and start training! Just like with Part 1's GAN, we'll ensure each network's optimizer only trains its part of the network: the encoder's optimizer will only update the encoder variables, the generator's optimizer will only update the generator variables, and the discriminator's optimizer will only update the discriminator variables.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"learning_rate = 0.0001\n\nopt_enc = tf.train.AdamOptimizer(\n learning_rate=learning_rate).minimize(\n loss_enc,\n var_list=[var_i for var_i in tf.trainable_variables()\n if ...])\n\nopt_gen = tf.train.AdamOptimizer(\n learning_rate=learning_rate).minimize(\n loss_dec,\n var_list=[var_i for var_i in tf.trainable_variables()\n if ...])\n\nopt_dis = tf.train.AdamOptimizer(\n learning_rate=learning_rate).minimize(\n loss_dis,\n var_list=[var_i for var_i in tf.trainable_variables()\n if var_i.name.startswith('discriminator')])",
"<a name=\"loading-the-dataset\"></a>\nLoading the Dataset\nWe'll now load our dataset just like in Part 1. Here is where you should explore with your own data!\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"from libs import datasets, dataset_utils\n\nbatch_size = 64\nn_epochs = 100\ncrop_shape = [n_pixels, n_pixels, n_channels]\ncrop_factor = 0.8\ninput_shape = [218, 178, 3]\n\n# Try w/ CELEB first to make sure it works, then explore w/ your own dataset.\nfiles = datasets.CELEB()\nbatch = dataset_utils.create_input_pipeline(\n files=files,\n batch_size=batch_size,\n n_epochs=n_epochs,\n crop_shape=crop_shape,\n crop_factor=crop_factor,\n shape=input_shape)",
"We'll also create a latent manifold just like we've done in Session 3 and Part 1. This is a random sampling of 4 points in the latent space of Z. We then interpolate between them to create a \"hyper-plane\" and show the decoding of 10 x 10 points on that hyperplane.",
"n_samples = 10\nzs = np.random.uniform(\n -1.0, 1.0, [4, n_code]).astype(np.float32)\nzs = utils.make_latent_manifold(zs, n_samples)",
"Now create a session and create a coordinator to manage our queues for fetching data from the input pipeline and start our queue runners:",
"# We create a session to use the graph\nsess = tf.Session()\ninit_op = tf.global_variables_initializer()\n\nsaver = tf.train.Saver()\ncoord = tf.train.Coordinator()\nthreads = tf.train.start_queue_runners(sess=sess, coord=coord)\nsess.run(init_op)",
"Load an existing checkpoint if it exists to continue training.",
"if os.path.exists(\"vaegan.ckpt\"):\n saver.restore(sess, \"vaegan.ckpt\")\n print(\"GAN model restored.\")",
"We'll also try resynthesizing a test set of images. This will help us understand how well the encoder/decoder network is doing:",
"n_files = len(files)\ntest_xs = sess.run(batch) / 255.0\nif not os.path.exists('imgs'):\n os.mkdir('imgs')\nm = utils.montage(test_xs, 'imgs/test_xs.png')\nplt.imshow(m)",
"<a name=\"training-1\"></a>\nTraining\nAlmost ready for training. Let's get some variables which we'll need. These are the same as Part 1's training process. We'll keep track of t_i which we'll use to create images of the current manifold and reconstruction every so many iterations. And we'll keep track of the current batch number within the epoch and the current epoch number.",
"t_i = 0\nbatch_i = 0\nepoch_i = 0\nckpt_name = './vaegan.ckpt'",
"Just like in Part 1, we'll train trying to maintain an equilibrium between our Generator and Discriminator networks. You should experiment with the margin depending on how the training proceeds.",
"equilibrium = 0.693\nmargin = 0.4",
"Now we'll train! Just like Part 1, we measure the real_cost and fake_cost. But this time, we'll always update the encoder. Based on the performance of the real/fake costs, then we'll update generator and discriminator networks. This will take a long time to produce something nice, but not nearly as long as the regular GAN network despite the additional parameters of the encoder and variational networks. Be sure to monitor the reconstructions to understand when your network has reached the capacity of its learning! For reference, on Celeb Net, I would use about 5 layers in each of the Encoder, Generator, and Discriminator networks using as input a 100 x 100 image, and a minimum of 200 channels per layer. This network would take about 1-2 days to train on an Nvidia TITAN X GPU.",
"while epoch_i < n_epochs:\n if batch_i % (n_files // batch_size) == 0:\n batch_i = 0\n epoch_i += 1\n print('---------- EPOCH:', epoch_i)\n\n batch_i += 1\n batch_xs = sess.run(batch) / 255.0\n real_cost, fake_cost, _ = sess.run([\n loss_real, loss_fake, opt_enc],\n feed_dict={\n X: batch_xs,\n is_training: True})\n real_cost = -np.mean(real_cost)\n fake_cost = -np.mean(fake_cost)\n\n gen_update = True\n dis_update = True\n\n if real_cost > (equilibrium + margin) or \\\n fake_cost > (equilibrium + margin):\n gen_update = False\n\n if real_cost < (equilibrium - margin) or \\\n fake_cost < (equilibrium - margin):\n dis_update = False\n\n if not (gen_update or dis_update):\n gen_update = True\n dis_update = True\n\n if gen_update:\n sess.run(opt_gen, feed_dict={\n X: batch_xs,\n is_training: True})\n if dis_update:\n sess.run(opt_dis, feed_dict={\n X: batch_xs,\n is_training: True})\n\n if batch_i % 50 == 0:\n print('real:', real_cost, '/ fake:', fake_cost)\n\n # Plot example reconstructions from latent layer\n recon = sess.run(G, feed_dict={\n Z: zs,\n is_training: False})\n\n recon = np.clip(recon, 0, 1)\n m1 = utils.montage(recon.reshape([-1] + crop_shape),\n 'imgs/manifold_%08d.png' % t_i)\n\n # Plot example reconstructions\n recon = sess.run(G, feed_dict={\n X: test_xs,\n is_training: False})\n recon = np.clip(recon, 0, 1)\n m2 = utils.montage(recon.reshape([-1] + crop_shape),\n 'imgs/reconstruction_%08d.png' % t_i)\n \n fig, axs = plt.subplots(1, 2, figsize=(15, 10))\n axs[0].imshow(m1)\n axs[1].imshow(m2)\n plt.show()\n \n t_i += 1\n \n\n if batch_i % 200 == 0:\n # Save the variables to disk.\n save_path = saver.save(sess, \"./\" + ckpt_name,\n global_step=batch_i,\n write_meta_graph=False)\n print(\"Model saved in file: %s\" % save_path)\n\n# One of the threads has issued an exception. So let's tell all the\n# threads to shutdown.\ncoord.request_stop()\n\n# Wait until all threads have finished.\ncoord.join(threads)\n\n# Clean up the session.\nsess.close()",
"<a name=\"part-3---latent-space-arithmetic\"></a>\nPart 3 - Latent-Space Arithmetic\n<a name=\"loading-the-pre-trained-model\"></a>\nLoading the Pre-Trained Model\nWe're now going to work with a pre-trained VAEGAN model on the Celeb Net dataset. Let's load this model:",
"tf.reset_default_graph()\n\nfrom libs import celeb_vaegan as CV\n\nnet = CV.get_celeb_vaegan_model()",
"We'll load the graph_def contained inside this dictionary. It follows the same idea as the inception, vgg16, and i2v pretrained networks. It is a dictionary with the key graph_def defined, with the graph's pretrained network. It also includes labels and a preprocess key. We'll have to do one additional thing which is to turn off the random sampling from variational layer. This isn't really necessary but will ensure we get the same results each time we use the network. We'll use the input_map argument to do this. Don't worry if this doesn't make any sense, as we didn't cover the variational layer in any depth. Just know that this is removing a random process from the network so that it is completely deterministic. If we hadn't done this, we'd get slightly different results each time we used the network (which may even be desirable for your purposes).",
"sess = tf.Session()\ng = tf.get_default_graph()\ntf.import_graph_def(net['graph_def'], name='net', input_map={\n 'encoder/variational/random_normal:0': np.zeros(512, dtype=np.float32)})\nnames = [op.name for op in g.get_operations()]\nprint(names)",
"Now let's get the relevant parts of the network: X, the input image to the network, Z, the input image's encoding, and G, the decoded image. In many ways, this is just like the Autoencoders we learned about in Session 3, except instead of Y being the output, we have G from our generator! And the way we train it is very different: we use an adversarial process between the generator and discriminator, and use the discriminator's own distance measure to help train the network, rather than pixel-to-pixel differences.",
"X = g.get_tensor_by_name('net/x:0')\nZ = g.get_tensor_by_name('net/encoder/variational/z:0')\nG = g.get_tensor_by_name('net/generator/x_tilde:0')",
"Let's get some data to play with:",
"files = datasets.CELEB()\nimg_i = 50\nimg = plt.imread(files[img_i])\nplt.imshow(img)",
"Now preprocess the image, and see what the generated image looks like (i.e. the lossy version of the image through the network's encoding and decoding).",
"p = CV.preprocess(img)\nsynth = sess.run(G, feed_dict={X: p[np.newaxis]})\n\nfig, axs = plt.subplots(1, 2, figsize=(10, 5))\naxs[0].imshow(p)\naxs[1].imshow(synth[0] / synth.max())",
"So we lost a lot of details but it seems to be able to express quite a bit about the image. Our inner most layer, Z, is only 512 values yet our dataset was 200k images of 64 x 64 x 3 pixels (about 2.3 GB of information). That means we're able to express our nearly 2.3 GB of information with only 512 values! Having some loss of detail is certainly expected!\n<a name=\"exploring-the-celeb-net-attributes\"></a>\nExploring the Celeb Net Attributes\nLet's now try and explore the attributes of our dataset. We didn't train the network with any supervised labels, but the Celeb Net dataset has 40 attributes for each of its 200k images. These are already parsed and stored for you in the net dictionary:",
"net.keys()\n\nlen(net['labels'])\n\nnet['labels']",
"Let's see what attributes exist for one of the celeb images:",
"plt.imshow(img)\n[net['labels'][i] for i, attr_i in enumerate(net['attributes'][img_i]) if attr_i]",
"<a name=\"find-the-latent-encoding-for-an-attribute\"></a>\nFind the Latent Encoding for an Attribute\nThe Celeb Dataset includes attributes for each of its 200k+ images. This allows us to feed into the encoder some images that we know have a specific attribute, e.g. \"smiling\". We store what their encoding is and retain this distribution of encoded values. We can then look at any other image and see how it is encoded, and slightly change the encoding by adding the encoded of our smiling images to it! The result should be our image but with more smiling. That is just insane and we're going to see how to do it. First lets inspect our latent space:",
"Z.get_shape()",
"We have 512 features that we can encode any image with. Assuming our network is doing an okay job, let's try to find the Z of the first 100 images with the 'Bald' attribute:",
"bald_label = net['labels'].index('Bald')\n\nbald_label",
"Let's get all the bald image indexes:",
"bald_img_idxs = np.where(net['attributes'][:, bald_label])[0]\n\nbald_img_idxs",
"Now let's just load 100 of their images:",
"bald_imgs = [plt.imread(files[bald_img_i])[..., :3]\n for bald_img_i in bald_img_idxs[:100]]",
"Let's see if the mean image looks like a good bald person or not:",
"plt.imshow(np.mean(bald_imgs, 0).astype(np.uint8))",
"Yes that is definitely a bald person. Now we're going to try to find the encoding of a bald person. One method is to try and find every other possible image and subtract the \"bald\" person's latent encoding. Then we could add this encoding back to any new image and hopefully it makes the image look more bald. Or we can find a bunch of bald people's encodings and then average their encodings together. This should reduce the noise from having many different attributes, but keep the signal pertaining to the baldness.\nLet's first preprocess the images:",
"bald_p = np.array([CV.preprocess(bald_img_i) for bald_img_i in bald_imgs])",
"Now we can find the latent encoding of the images by calculating Z and feeding X with our bald_p images:\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"bald_zs = sess.run(Z, feed_dict=...",
"Now let's calculate the mean encoding:",
"bald_feature = np.mean(bald_zs, 0, keepdims=True)\n\nbald_feature.shape",
"Let's try and synthesize from the mean bald feature now and see how it looks:\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"bald_generated = sess.run(G, feed_dict=...\n\nplt.imshow(bald_generated[0] / bald_generated.max())",
"<a name=\"latent-feature-arithmetic\"></a>\nLatent Feature Arithmetic\nLet's now try to write a general function for performing everything we've just done so that we can do this with many different features. We'll then try to combine them and synthesize people with the features we want them to have...",
"def get_features_for(label='Bald', has_label=True, n_imgs=50):\n label_i = net['labels'].index(label)\n label_idxs = np.where(net['attributes'][:, label_i] == has_label)[0]\n label_idxs = np.random.permutation(label_idxs)[:n_imgs]\n imgs = [plt.imread(files[img_i])[..., :3]\n for img_i in label_idxs]\n preprocessed = np.array([CV.preprocess(img_i) for img_i in imgs])\n zs = sess.run(Z, feed_dict={X: preprocessed})\n return np.mean(zs, 0)",
"Let's try getting some attributes positive and negative features. Be sure to explore different attributes! Also try different values of n_imgs, e.g. 2, 3, 5, 10, 50, 100. What happens with different values?\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# Explore different attributes\nz1 = get_features_for('Male', True, n_imgs=10)\nz2 = get_features_for('Male', False, n_imgs=10)\nz3 = get_features_for('Smiling', True, n_imgs=10)\nz4 = get_features_for('Smiling', False, n_imgs=10)\n\nb1 = sess.run(G, feed_dict={Z: z1[np.newaxis]})\nb2 = sess.run(G, feed_dict={Z: z2[np.newaxis]})\nb3 = sess.run(G, feed_dict={Z: z3[np.newaxis]})\nb4 = sess.run(G, feed_dict={Z: z4[np.newaxis]})\n\nfig, axs = plt.subplots(1, 4, figsize=(15, 6))\naxs[0].imshow(b1[0] / b1.max()), axs[0].set_title('Male'), axs[0].grid('off'), axs[0].axis('off')\naxs[1].imshow(b2[0] / b2.max()), axs[1].set_title('Not Male'), axs[1].grid('off'), axs[1].axis('off')\naxs[2].imshow(b3[0] / b3.max()), axs[2].set_title('Smiling'), axs[2].grid('off'), axs[2].axis('off')\naxs[3].imshow(b4[0] / b4.max()), axs[3].set_title('Not Smiling'), axs[3].grid('off'), axs[3].axis('off')",
"Now let's interpolate between the \"Male\" and \"Not Male\" categories:",
"notmale_vector = z2 - z1\nn_imgs = 5\namt = np.linspace(0, 1, n_imgs)\nzs = np.array([z1 + notmale_vector*amt_i for amt_i in amt])\ng = sess.run(G, feed_dict={Z: zs})\n\nfig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))\nfor i, ax_i in enumerate(axs):\n ax_i.imshow(np.clip(g[i], 0, 1))\n ax_i.grid('off')\n ax_i.axis('off')",
"And the same for smiling:",
"smiling_vector = z3 - z4\namt = np.linspace(0, 1, n_imgs)\nzs = np.array([z4 + smiling_vector*amt_i for amt_i in amt])\ng = sess.run(G, feed_dict={Z: zs})\nfig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))\nfor i, ax_i in enumerate(axs):\n ax_i.imshow(np.clip(g[i] / g[i].max(), 0, 1))\n ax_i.grid('off')",
"There's also no reason why we have to be within the boundaries of 0-1. We can extrapolate beyond, in, and around the space.",
"n_imgs = 5\namt = np.linspace(-1.5, 2.5, n_imgs)\nzs = np.array([z4 + smiling_vector*amt_i for amt_i in amt])\ng = sess.run(G, feed_dict={Z: zs})\nfig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))\nfor i, ax_i in enumerate(axs):\n ax_i.imshow(np.clip(g[i], 0, 1))\n ax_i.grid('off')\n ax_i.axis('off')",
"<a name=\"extensions\"></a>\nExtensions\nTom White, Lecturer at Victoria University School of Design, also recently demonstrated an alternative way of interpolating using a sinusoidal interpolation. He's created some of the most impressive generative images out there and luckily for us he has detailed his process in the arxiv preprint: https://arxiv.org/abs/1609.04468 - as well, be sure to check out his twitter bot, https://twitter.com/smilevector - which adds smiles to people :) - Note that the network we're using is only trained on aligned faces that are frontally facing, though this twitter bot is capable of adding smiles to any face. I suspect that he is running a face detection algorithm such as AAM, CLM, or ASM, cropping the face, aligning it, and then running a similar algorithm to what we've done above. Or else, perhaps he has trained a new model on faces that are not aligned. In any case, it is well worth checking out!\nLet's now try and use sinusoidal interpolation using his implementation in plat which I've copied below:",
"def slerp(val, low, high):\n \"\"\"Spherical interpolation. val has a range of 0 to 1.\"\"\"\n if val <= 0:\n return low\n elif val >= 1:\n return high\n omega = np.arccos(np.dot(low/np.linalg.norm(low), high/np.linalg.norm(high)))\n so = np.sin(omega)\n return np.sin((1.0-val)*omega) / so * low + np.sin(val*omega)/so * high\n\namt = np.linspace(0, 1, n_imgs)\nzs = np.array([slerp(amt_i, z1, z2) for amt_i in amt])\ng = sess.run(G, feed_dict={Z: zs})\nfig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))\nfor i, ax_i in enumerate(axs):\n ax_i.imshow(np.clip(g[i], 0, 1))\n ax_i.grid('off')\n ax_i.axis('off')",
"It's certainly worth trying especially if you are looking to explore your own model's latent space in new and interesting ways.\nLet's try and load an image that we want to play with. We need an image as similar to the Celeb Dataset as possible. Unfortunately, we don't have access to the algorithm they used to \"align\" the faces, so we'll need to try and get as close as possible to an aligned face image. One way you can do this is to load up one of the celeb images and try and align an image to it using e.g. Photoshop or another photo editing software that lets you blend and move the images around. That's what I did for my own face...",
"img = plt.imread('parag.png')[..., :3]\nimg = CV.preprocess(img, crop_factor=1.0)[np.newaxis]",
"Let's see how the network encodes it:",
"img_ = sess.run(G, feed_dict={X: img})\nfig, axs = plt.subplots(1, 2, figsize=(10, 5))\naxs[0].imshow(img[0]), axs[0].grid('off')\naxs[1].imshow(np.clip(img_[0] / np.max(img_), 0, 1)), axs[1].grid('off')",
"Notice how blurry the image is. Tom White's preprint suggests one way to sharpen the image is to find the \"Blurry\" attribute vector:",
"z1 = get_features_for('Blurry', True, n_imgs=25)\nz2 = get_features_for('Blurry', False, n_imgs=25)\nunblur_vector = z2 - z1\n\nz = sess.run(Z, feed_dict={X: img})\n\nn_imgs = 5\namt = np.linspace(0, 1, n_imgs)\nzs = np.array([z[0] + unblur_vector * amt_i for amt_i in amt])\ng = sess.run(G, feed_dict={Z: zs})\nfig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))\nfor i, ax_i in enumerate(axs):\n ax_i.imshow(np.clip(g[i] / g[i].max(), 0, 1))\n ax_i.grid('off')\n ax_i.axis('off')",
"Notice that the image also gets brighter and perhaps other features than simply the bluriness of the image changes. Tom's preprint suggests that this is due to the correlation that blurred images have with other things such as the brightness of the image, possibly due biases in labeling or how photographs are taken. He suggests that another way to unblur would be to synthetically blur a set of images and find the difference in the encoding between the real and blurred images. We can try it like so:",
"from scipy.ndimage import gaussian_filter\n\nidxs = np.random.permutation(range(len(files)))\nimgs = [plt.imread(files[idx_i]) for idx_i in idxs[:100]]\nblurred = []\nfor img_i in imgs:\n img_copy = np.zeros_like(img_i)\n for ch_i in range(3):\n img_copy[..., ch_i] = gaussian_filter(img_i[..., ch_i], sigma=3.0)\n blurred.append(img_copy)\n\n# Now let's preprocess the original images and the blurred ones\nimgs_p = np.array([CV.preprocess(img_i) for img_i in imgs])\nblur_p = np.array([CV.preprocess(img_i) for img_i in blurred])\n\n# And then compute each of their latent features\nnoblur = sess.run(Z, feed_dict={X: imgs_p})\nblur = sess.run(Z, feed_dict={X: blur_p})\n\nsynthetic_unblur_vector = np.mean(noblur - blur, 0)\n\nn_imgs = 5\namt = np.linspace(0, 1, n_imgs)\nzs = np.array([z[0] + synthetic_unblur_vector * amt_i for amt_i in amt])\ng = sess.run(G, feed_dict={Z: zs})\nfig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))\nfor i, ax_i in enumerate(axs):\n ax_i.imshow(np.clip(g[i], 0, 1))\n ax_i.grid('off')\n ax_i.axis('off')",
"For some reason, it also doesn't like my glasses very much. Let's try and add them back.",
"z1 = get_features_for('Eyeglasses', True)\nz2 = get_features_for('Eyeglasses', False)\nglass_vector = z1 - z2\n\nz = sess.run(Z, feed_dict={X: img})\n\nn_imgs = 5\namt = np.linspace(0, 1, n_imgs)\nzs = np.array([z[0] + glass_vector * amt_i + unblur_vector * amt_i for amt_i in amt])\ng = sess.run(G, feed_dict={Z: zs})\nfig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))\nfor i, ax_i in enumerate(axs):\n ax_i.imshow(np.clip(g[i], 0, 1))\n ax_i.grid('off')\n ax_i.axis('off')",
"Well, more like sunglasses then. Let's try adding everything in there now!",
"n_imgs = 5\namt = np.linspace(0, 1.0, n_imgs)\nzs = np.array([z[0] + glass_vector * amt_i + unblur_vector * amt_i + amt_i * smiling_vector for amt_i in amt])\ng = sess.run(G, feed_dict={Z: zs})\nfig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))\nfor i, ax_i in enumerate(axs):\n ax_i.imshow(np.clip(g[i], 0, 1))\n ax_i.grid('off')\n ax_i.axis('off')",
"Well it was worth a try anyway. We can also try with a lot of images and create a gif montage of the result:",
"n_imgs = 5\namt = np.linspace(0, 1.5, n_imgs)\nz = sess.run(Z, feed_dict={X: imgs_p})\nimgs = []\nfor amt_i in amt:\n zs = z + synthetic_unblur_vector * amt_i + amt_i * smiling_vector\n g = sess.run(G, feed_dict={Z: zs})\n m = utils.montage(np.clip(g, 0, 1))\n imgs.append(m)\n\ngif.build_gif(imgs, saveto='celeb.gif')\n\nipyd.Image(url='celeb.gif?i={}'.format(\n np.random.rand()), height=1000, width=1000)",
"Exploring multiple feature vectors and applying them to images from the celeb dataset to produce animations of a face, saving it as a GIF. Recall you can store each image frame in a list and then use the gif.build_gif function to create a gif. Explore your own syntheses and then include a gif of the different images you create as \"celeb.gif\" in the final submission. Perhaps try finding unexpected synthetic latent attributes in the same way that we created a blur attribute. You can check the documentation in scipy.ndimage for some other image processing techniques, for instance: http://www.scipy-lectures.org/advanced/image_processing/ - and see if you can find the encoding of another attribute that you then apply to your own images. You can even try it with many images and use the utils.montage function to create a large grid of images that evolves over your attributes. Or create a set of expressions perhaps. Up to you just explore!\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"imgs = []\n\n... DO SOMETHING AWESOME ! ...\n\ngif.build_gif(imgs=imgs, saveto='vaegan.gif')",
"<a name=\"part-4---character-level-recurrent-neural-network\"></a>\nPart 4 - Character Level Recurrent Neural Network\nPlease visit session-5-part2.ipynb for the rest of the homework!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
evanmiltenburg/python-for-text-analysis | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | apache-2.0 | [
"Chapter 19 - More about Natural Language Processing Tools (spaCy)\nText data is unstructured. But if you want to extract information from text, then you often need to process that data into a more structured representation. The common idea for all Natural Language Processing (NLP) tools is that they try to structure or transform text in some meaningful way. You have already learned about four basic NLP steps: sentence splitting, tokenization, POS-tagging and lemmatization. For all of these, we have used the NLTK library, which is widely used in the field of NLP. However, there are some competitors out there that are worthwhile to have a look at. One of them is spaCy, which is fast and accurate and supports multiple languages. \nAt the end of this chapter, you will be able to:\n- work with spaCy\n- find some additional NLP tools\n1. The NLP pipeline\nThere are many tools and libraries designed to solve NLP problems. In Chapter 15, we have already seen the NLTK library for tokenization, sentence splitting, part-of-speech tagging and lemmatization. However, there are many more NLP tasks and off-the-shelf tools to perform them. These tasks often depend on each other and are therefore put into a sequence; such a sequence of NLP tasks is called an NLP pipeline. Some of the most common NLP tasks are:\n\nTokenization: splitting texts into individual words\nSentence splitting: splitting texts into sentences\nPart-of-speech (POS) tagging: identifying the parts of speech of words in context (verbs, nouns, adjectives, etc.)\nMorphological analysis: separating words into morphemes and identifying their classes (e.g. tense/aspect of verbs)\nStemming: identifying the stems of words in context by removing inflectional/derivational affixes, such as 'troubl' for 'trouble/troubling/troubled'\nLemmatization: identifying the lemmas (dictionary forms) of words in context, such as 'go' for 'go/goes/going/went'\nWord Sense Disambiguation (WSD): assigning the correct meaning to words in context\nStop words recognition: identifying commonly used words (such as 'the', 'a(n)', 'in', etc.) in text, possibly to ignore them in other tasks\nNamed Entity Recognition (NER): identifying people, locations, organizations, etc. in text\nConstituency/dependency parsing: analyzing the grammatical structure of a sentence\nSemantic Role Labeling (SRL): analyzing the semantic structure of a sentence (who does what to whom, where and when)\nSentiment Analysis: determining whether a text is mostly positive or negative\nWord Vectors (or Word Embeddings) and Semantic Similarity: representating the meaning of words as rows of real valued numbers where each point captures a dimension of the word's meaning and where semantically similar words have similar vectors (very popular these days)\n\nYou don't always need all these modules. But it's important to know that they are\nthere, so that you can use them when the need arises.\n1.1 How can you use these modules?\nLet's be clear about this: you don't always need to use Python for this. There are\nsome very strong NLP programs out there that don't rely on Python. You can typically\ncall these programs from the command line. Some examples are:\n\n\nTreetagger is a POS-tagger\n and lemmatizer in one. It provides support for many different languages. 
If you want to\n call Treetagger from Python, use treetaggerwrapper.\n Treetagger-python also works, but is much slower.\n\n\nStanford's CoreNLP is a very powerful system\n that is able to process English, German, Spanish, French, Chinese and Arabic. (Each to\n a different extent, though. The pipeline for English is most complete.) There are also\n Python wrappers available, such as py-corenlp.\n\n\nThe Maltparser has models for English, Swedish, French, and Spanish.\n\n\nHaving said that, there are many NLP-tools that have been developed for Python:\n\nNatural Language ToolKit (NLTK): Incredibly versatile library with a bit of everything.\n The only downside is that it's not the fastest library out there, and it lags behind the\n state-of-the-art.\nAccess to several corpora.\nCreate a POS-tagger. (Some of these are actually state-of-the-art if you have enough training data.)\nPerform corpus analyses.\nInterface with WordNet.\n\n\nPattern: A module that describes itself as a 'web mining module'. Implements a\n tokenizer, tagger, parser, and sentiment analyzer for multiple different languages.\n Also provides an API for Google, Twitter, Wikipedia and Bing.\nTextblob: Another general NLP library that builds on the NLTK and Pattern.\nGensim: For building vector spaces and topic models.\nCorpkit is a module for corpus building and corpus management. Includes an interface to the Stanford CoreNLP parser.\nSpaCy: Tokenizer, POS-tagger, parser and named entity recogniser for English, German, Spanish, Portugese, French, Italian and Dutch (more languages in progress). It can also predict similarity using word embeddings.\n\n2. spaCy\nspaCy provides a rather complete NLP pipeline: it takes a raw document and performs tokenization, POS-tagging, stop word recognition, morphological analysis, lemmatization, sentence splitting, dependency parsing and Named Entity Recognition (NER). It also supports similarity prediction, but that is outside of the scope of this notebook. The advantage of SpaCy is that it is really fast, and it has a good accuracy. In addition, it currently supports multiple languages, among which: English, German, Spanish, Portugese, French, Italian and Dutch. \nIn this notebook, we will show you the basic usage. If you want to learn more, please visit spaCy's website; it has extensive documentation and provides excellent user guides. \n2.1 Installing and loading spaCy\nTo install spaCy, check out the instructions here. On this page, it is explained exactly how to install spaCy for your operating system, package manager and desired language model(s). Simply run the suggested commands in your terminal or cmd. Alternatively, you can probably also just run the following cells in this notebook:",
"pip install -U spacy\n\n%%bash\npython -m spacy download en_core_web_sm",
"Now, let's first load spaCy. We import the spaCy module and load the English tokenizer, tagger, parser, NER and word vectors.",
"import spacy\nnlp = spacy.load('en_core_web_sm') # other languages: de, es, pt, fr, it, nl",
"nlp is now a Python object representing the English NLP pipeline that we can use to process a text. \nEXTRA: Larger models\nFor English, there are three models ranging from 'small' to 'large':\n\nen_core_web_sm\nen_core_web_md\nen_core_web_lg\n\nBy default, the smallest one is loaded. Larger models should have a better accuracy, but take longer to load. If you like, you can use them instead. You will first need to download them.",
"#%%bash\n#python -m spacy download en_core_web_md\n\n#%%bash\n#python -m spacy download en_core_web_lg\n\n# uncomment one of the lines below if you want to load the medium or large model instead of the small one\n# nlp = spacy.load('en_core_web_md') \n# nlp = spacy.load('en_core_web_lg') ",
"2.2 Using spaCy\nParsing a text with spaCy after loading a language model is as easy as follows:",
"doc = nlp(\"I have an awesome cat. It's sitting on the mat that I bought yesterday.\")",
"doc is now a Python object of the class Doc. It is a container for accessing linguistic annotations and a sequence of Token objects.\nDoc, Token and Span objects\nAt this point, there are three important types of objects to remember:\n\nA Doc is a sequence of Token objects.\nA Token object represents an individual token — i.e. a word, punctuation symbol, whitespace, etc. It has attributes representing linguistic annotations. \nA Span object is a slice from a Doc object and a sequence of Token objects.\n\nSince Doc is a sequence of Token objects, we can iterate over all of the tokens in the text as shown below, or select a single token from the sequence:",
"# Iterate over the tokens\nfor token in doc:\n print(token)\nprint()\n\n# Select one single token by index\nfirst_token = doc[0]\nprint(\"First token:\", first_token)",
"Please note that even though these look like strings, they are not:",
"for token in doc:\n print(token, \"\\t\", type(token))",
"These Token objects have many useful methods and attributes, which we can list by using dir(). We haven't really talked about attributes during this course, but while methods are operations or activities performed by that object, attributes are 'static' features of the objects. Methods are called using parantheses (as we have seen with str.upper(), for instance), while attributes are indicated without parantheses. We will see some examples below.\nYou can find more detailed information about the token methods and attributes in the documentation.",
"dir(first_token)",
"Let's inspect some of the attributes of the tokens. Can you figure out what they mean? Feel free to try out a few more.",
"# Print attributes of tokens\nfor token in doc:\n print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_, token.shape_)",
"Notice that some of the attributes end with an underscore. For example, tokens have both lemma and lemma_ attributes. The lemma attribute represents the id of the lemma (integer), while the lemma_ attribute represents the unicode string representation of the lemma. In practice, you will mostly use the lemma_ attribute.",
"for token in doc:\n print(token.lemma, token.lemma_)",
"You can also use spacy.explain to find out more about certain labels:",
"# try out some more, such as NN, ADP, PRP, VBD, VBP, VBZ, WDT, aux, nsubj, pobj, dobj, npadvmod\nspacy.explain(\"VBZ\")",
"You can create a Span object from the slice doc[start : end]. For instance, doc[2:5] produces a span consisting of tokens 2, 3 and 4. Stepped slices (e.g. doc[start : end : step]) are not supported, as Span objects must be contiguous (cannot have gaps). You can use negative indices and open-ended ranges, which have their normal Python semantics.",
"# Create a Span\na_slice = doc[2:5]\nprint(a_slice, type(a_slice))\n\n# Iterate over Span\nfor token in a_slice:\n print(token.lemma_, token.pos_)",
"Text, sentences and noun_chunks\nIf you call the dir() function on a Doc object, you will see that it has a range of methods and attributes. You can read more about them in the documentation. Below, we highlight three of them: text, sents and noun_chunks.",
"dir(doc)",
"First of all, text simply gives you the whole document as a string:",
"print(doc.text)\nprint(type(doc.text))",
"sents can be used to get all the sentences. Notice that it will create a so-called 'generator'. For now, you don't have to understand exactly what a generator is (if you like, you can read more about them online). Just remember that we can use generators to iterate over an object in a fast and efficient way.",
"# Get all the sentences as a generator \nprint(doc.sents, type(doc.sents))\n\n# We can use the generator to loop over the sentences; each sentence is a span of tokens\nfor sentence in doc.sents:\n print(sentence, type(sentence))",
"If you find this difficult to comprehend, you can also simply convert it to a list and then loop over the list. Remember that this is less efficient, though.",
"# You can also store the sentences in a list and then loop over the list \nsentences = list(doc.sents)\nfor sentence in sentences:\n print(sentence, type(sentence))",
"The benefit of converting it to a list is that we can use indices to select certain sentences. For example, in the following we only print some information about the tokens in the second sentence.",
"# Print some information about the tokens in the second sentence.\nsentences = list(doc.sents)\nfor token in sentences[1]:\n data = '\\t'.join([token.orth_,\n token.lemma_,\n token.pos_,\n token.tag_,\n str(token.i), # Turn index into string\n str(token.idx)]) # Turn index into string\n print(data)",
"Similarly, noun_chunks can be used to create a generator for all noun chunks in the text.",
"# Get all the noun chunks as a generator \nprint(doc.noun_chunks, type(doc.noun_chunks))\n\n# You can loop over a generator; each noun chunk is a span of tokens\nfor chunk in doc.noun_chunks:\n print(chunk, type(chunk))\n print()",
"Named Entities\nFinally, we can also very easily access the Named Entities in a text using ents. As you can see below, it will create a tuple of the entities recognized in the text. Each entity is again a span of tokens, and you can access the type of the entity with the label_ attribute of Span.",
"# Here's a slightly longer text, from the Wikipedia page about Harry Potter.\nharry_potter = \"Harry Potter is a series of fantasy novels written by British author J. K. Rowling.\\\nThe novels chronicle the life of a young wizard, Harry Potter, and his friends Hermione Granger and Ron Weasley,\\\nall of whom are students at Hogwarts School of Witchcraft and Wizardry.\\\nThe main story arc concerns Harry's struggle against Lord Voldemort, a dark wizard who intends to become immortal,\\\noverthrow the wizard governing body known as the Ministry of Magic, and subjugate all wizards and Muggles.\"\n\ndoc = nlp(harry_potter)\nprint(doc.ents)\nprint(type(doc.ents))\n\n# Each entity is a span of tokens and is labeled with the type of entity\nfor entity in doc.ents:\n print(entity, \"\\t\", entity.label_, \"\\t\", type(entity))",
"Pretty cool, but what does NORP mean? Again, you can use spacy.explain() to find out:\n3. EXTRA: Stanford CoreNLP\nAnother very popular NLP pipeline is Stanford CoreNLP. You can use the tool from the command line, but there are also some useful Python wrappers that make use of the Stanford CoreNLP API, such as pycorenlp. As you might want to use this in the future, we will provide you with a quick start guide. To use the code below, you will have to do the following:\n\nDownload Stanford CoreNLP here.\nInstall pycorenlp (run pip install pycorenlp in your terminal, or simply run the cell below).\nOpen a terminal and run the following commands (replace with the correct directory names):\ncd LOCATION_OF_CORENLP/stanford-corenlp-full-2018-02-27\njava -mx4g -cp \"*\" edu.stanford.nlp.pipeline.StanfordCoreNLPServer\n This step you will always have to do if you want to use the Stanford CoreNLP API.",
"%%bash\npip install pycorenlp\n\nfrom pycorenlp import StanfordCoreNLP\nnlp = StanfordCoreNLP('http://localhost:9000')",
"Next, you will want to define which annotators to use and which output format should be produced (text, json, xml, conll, conllu, serialized). Annotating the document then is very easy. Note that Stanford CoreNLP uses some large models that can take a long time to load. You can read more about it here.",
"harry_potter = \"Harry Potter is a series of fantasy novels written by British author J. K. Rowling.\\\nThe novels chronicle the life of a young wizard, Harry Potter, and his friends Hermione Granger and Ron Weasley,\\\nall of whom are students at Hogwarts School of Witchcraft and Wizardry.\\\nThe main story arc concerns Harry's struggle against Lord Voldemort, a dark wizard who intends to become immortal,\\\noverthrow the wizard governing body known as the Ministry of Magic, and subjugate all wizards and Muggles.\"\n\n# Define annotators and output format\nproperties= {'annotators': 'tokenize, ssplit, pos, lemma, parse',\n 'outputFormat': 'json'}\n\n# Annotate the string with CoreNLP\ndoc = nlp.annotate(harry_potter, properties=properties)",
"In the next cells, we will simply show some examples of how to access the linguistic annotations if you use the properties as shown above. If you'd like to continue working with Stanford CoreNLP in the future, you will likely have to experiment a bit more.",
"doc.keys()\n\nsentences = doc[\"sentences\"]\nfirst_sentence = sentences[0]\nfirst_sentence.keys()\n\nfirst_sentence[\"parse\"]\n\nfirst_sentence[\"basicDependencies\"]\n\nfirst_sentence[\"tokens\"]\n\nfor sent in doc[\"sentences\"]:\n for token in sent[\"tokens\"]:\n word = token[\"word\"]\n lemma = token[\"lemma\"]\n pos = token[\"pos\"]\n print(word, lemma, pos)\n\n# find out what the entity label 'NORP' means\nspacy.explain(\"NORP\")",
"4. NLTK vs. spaCy vs. CoreNLP\nThere might be different reasons why you want to use NLTK, spaCy or Stanford CoreNLP. There are differences in efficiency, quality, user friendliness, functionalities, output formats, etc. At this moment, we advise you to go with spaCy because of its ease in use and high quality performance.\nHere's an example of both NLTK and spaCy in action. \n\nThe example text is a case in point. What goes wrong here?\nTry experimenting with the text to see what the differences are.",
"import nltk\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')\n\ntext = \"I like cheese very much\"\n\nprint(\"NLTK results:\")\nnltk_tagged = nltk.pos_tag(text.split())\nprint(nltk_tagged)\n\nprint()\n\nprint(\"spaCy results:\")\ndoc = nlp(text)\nspacy_tagged = []\nfor token in doc:\n tag_data = (token.orth_, token.tag_,)\n spacy_tagged.append(tag_data)\nprint(spacy_tagged)",
"Do you want to learn more about the differences between NLTK, spaCy and CoreNLP? Here are some links:\n- Facts & Figures (spaCy)\n- About speed (CoreNLP vs. spaCy)\n- NLTK vs. spaCy: Natural Language Processing in Python \n- What are the advantages of Spacy vs NLTK? \n- 5 Heroic Python NLP Libraries\n5. Some other useful modules for cleaning and preprocessing\nData is often messy, noisy or includes irrelevant information. Therefore, chances are big that you will need to do some cleaning before you can start with your analysis. This is especially true for social media texts, such as tweets, chats, and emails. Typically, these texts are informal and notoriously noisy. Normalising them to be able to process them with NLP tools is a NLP challenge in itself and fully discussing it goes beyond the scope of this course. However, you may find the following modules useful in your project:\n\ntweet-preprocessor: This library makes it easy to clean, parse or tokenize the tweets. It supports cleaning, tokenizing and parsing of URLs, hashtags, reserved words, mentions, emojis and smileys.\nemot: Emot is a python library to extract the emojis and emoticons from a text (string). All the emojis and emoticons are taken from a reliable source, i.e. Wikipedia.org.\nautocorrect: Spelling corrector (Python 3).\nhtml: Can be used to remove HTML tags.\nchardet: Universal encoding detector for Python 2 and 3.\nftfy: Fixes broken unicode strings.\n\nIf you are interested in reading more about these topic, these papers discuss preprocessing and normalization:\n\nAssessing the Consequences of Text Preprocessing Decisions (Denny & Spirling 2016). This paper is a bit long, but it provides a nice discussion of common preprocessing steps and their potential effects.\nWhat to do about bad language on the internet (Eisenstein 2013). This is a quick read that we recommend everyone to at least look through.\n\nAnd here is a nice blog about character encoding.\nExercises",
"import spacy\nnlp = spacy.load('en_core_web_sm')",
"Exercise 1:\n\n\nWhat is the difference between token.pos_ and token.tag_? Read the docs to find out.\n\n\nWhat do the different labels mean? Use space.explain to inspect some of them. You can also refer to this page for a complete overview.",
"doc = nlp(\"I have an awesome cat. It's sitting on the mat that I bought yesterday.\")\nfor token in doc:\n print(token.pos_, token.tag_)\n\nspacy.explain(\"PRON\")",
"Exercise 2:\nLet's practice a bit with processing files. Open the file charlie.txt for reading and use read() to read its content as a string. Then use spaCy to annotate this string and print the information below. Remember: you can use dir() to remind yourself of the attributes.\nFor each token in the text:\n1. Text \n2. Lemma\n3. POS tag\n4. Whether it's a stopword or not\n5. Whether it's a punctuation mark or not\nFor each sentence in the text:\n1. The complete text\n2. The number of tokens\n3. The complete text in lowercase letters\n4. The text, lemma and POS of the first word\nFor each noun chunk in the text:\n1. The complete text\n2. The number of tokens\n3. The complete text in lowercase letters\n4. The text, lemma and POS of the first word\nFor each named entity in the text:\n1. The complete text\n2. The number of tokens\n3. The complete text in lowercase letters\n4. The text, lemma and POS of the first word",
"filename = \"../Data/Charlie/charlie.txt\"\n\n# read the file and process with spaCy\n\n# print all information about the tokens\n\n# print all information about the sentences\n\n# print all information about the noun chunks\n\n# print all information about the entities",
"Exercise 3:\nRemember how we can use the os and glob modules to process multiple files? For example, we can read all .txt files in the dreams folder like this:",
"import glob\nfilenames = glob.glob(\"../Data/dreams/*.txt\")\nprint(filenames)",
"Now create a function called get_vocabulary that takes one positional parameter filenames. It should read in all filenames and return a set called unique_words, that contains all unique words in the files.",
"def get_vocabulary(filenames):\n # your code here\n\n# test your function here\nunique_words = get_vocabulary(filenames)\nprint(unique_words, len(unique_words))\nassert len(unique_words) == 415 # if your code is correct, this should not raise an error",
"Exercise 4:\nCreate a function called get_sentences_with_keyword that takes one positional parameter filenames and one keyword parameter filenames with default value None. It should read in all filenames and return a list called sentences that contains all sentences (the complete texts) with the keyword. \nHints:\n- It's best to check for the lemmas of each token\n- Lowercase both your keyword and the lemma",
"import glob\nfilenames = glob.glob(\"../Data/dreams/*.txt\")\nprint(filenames)\n\ndef get_sentences_with_keyword(filenames, keyword=None):\n #your code here\n\n# test your function here\nsentences = get_sentences_with_keyword(filenames, keyword=\"toy\")\nprint(sentences)\nassert len(sentences) == 4 # if your code is correct, this should not raise an error"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
opesci/devito | examples/seismic/abc_methods/02_damping.ipynb | mit | [
"2 - Damping\n2.1 - Introduction\nIn this notebook we describe a simple method for reduction of reflections at the computational boundaries of the domain $\\Omega$ when we simulate the acoustic wave equation. This method, called Damping, has been proposed by Sochaki. It adds a damping term, modifying the original wave equation at a boundary layer. We saw in the notebook <a href=\"01_introduction.ipynb\">Introduction to Acoustic Problem</a> that the (artificial) wave reflections on the computational boundaries lead to a very noisy solution of the acoustic problem. \nWe describe this method in the next Sections, omitting information already discussed in the notebook <a href=\"01_introduction.ipynb\">Introduction to Acoustic Problem</a>, highlighting only the new elements necessary to apply Damping.\n2.2 - Acoustic Problem with Damping\nWe define an extension of the spatial domain $\\Omega=\\left[x_{I}-L_{x},x_{F}+L_{x}\\right] \\times\\left[z_{I},z_{F}+L_{z}\\right]$, in which we added an absorption region to the previous spatial domain\n$\\Omega_{0}=\\left[x_{I},x_{F}\\right]\\times\\left[z_{I},z_{F}\\right]$.\nThe absorption region is composed by two bands of length $L_{x}$ at the beginning and end of the domain in the direction $x$ and of a band of length $L_{z}$ at the end of the domain in the $z$ direction. Again, $\\partial\\Omega$ denotes the boundary of $\\Omega$. The figure below shows the extended domain $\\Omega$, with the absorption region highlighted in blue.\n<img src='domain2.png' width=500>\nThe damping acoustic problem equation is given by:\n\\begin{equation}\nu_{tt}(x,z,t)+c^2(x,z)\\zeta(x,z)u_t(x,z,t)-c^2(x,z)\\Delta(u(x,z,t))=c^2(x,z)f(x,z,t),\n\\end{equation}\nwhere $u(x,z,t)$, $f(x,z,t)$ and $c(x,z)$ are as before. The wave equation has been modified by the introduction of the damping term $c^2(x,z)\\zeta(x,z)u_t(x,z,t)$, where $\\zeta$ is different from zero only in the absorption region, growing smoothly along the absorption bands from zero to its maximum at the outer boundary. The actual form of\n$\\zeta$ used in this notebook will be given ahead. We still use the same initial conditions\n\\begin{equation}\nu(x,z,0) = 0.0 \\hspace{.5cm} \\mbox{ and } \\hspace{.5cm} u_t(x,z,0)= 0.0.\n\\end{equation}\nand Dirichlet null boundary conditions at the (outer) bottom and lateral boundaries. At the surface we\nuse a zero Neumman boundary condition.\nThe source term and the velocity field are defined as before.\n2.3 - Finite Difference Operators and Discretization of Spatial and Temporal Domains\nThe only difference with respect to the discretization used in the notebook <a href=\"01_introduction.ipynb\">Introduction to Acoustic Problem</a> is the extra damping term. The temporal derivative of $u$ is approximated by a centered difference:\n$$ u_t(x_i,z_j,t_k) = \\frac{u_{i,j,k+1}-u_{i,j,k-1}}{2\\Delta t} $$. 
All the other terms are discretized as before.\n2.4 - Standard Problem\nRecalling the Standard Problem definitions discussed on the notebook <a href=\"01_introduction.ipynb\">Introduction to Acoustic Problem</a> we have that:\n\n$x_{I}$ = 0.0 Km;\n$x_{F}$ = 1.0 Km = 1000 m;\n$z_{I}$ = 0.0 Km;\n$z_{F}$ = 1.0 Km = 1000 m;\n$L_x$ and $L_z$ will be defined ahead;\n\nThe spatial discretization parameters are given by:\n- $\\Delta x$ = 0.01 km = 10m;\n- $\\Delta z$ = 0.01 km = 10m;\nLet's consider a time domain $I$ with the following limits:\n\n$t_{I}$ = 0 s = 0 ms;\n$t_{F}$ = 1 s = 1000 ms;\n\nThe temporal discretization parameters are given by:\n\n$\\Delta t$ $\\approx$ 0.0016 s = 1.6 ms;\n$NT$ = 626.\n\nWith respect to the $f(x,z,t)$ external force term, we will consider a Ricker source with the following properties:\n\nPosition at $x:$ $\\bar{x} = 500 m = 0.5 Km$;\nPosition at $z:$ $\\bar{z} = 10 m = 0.01 Km$;\nPeak frequency: $f_{0} = 10 Hz = 0.01 Khz$;\n\nThe graph of $f(\\bar{x},\\bar{z},t)$ will be generated when building the code. We will use a velocity profile $c(x, z)$ with the following properties:\n\nMinimum propagation velocity: $v_{min} = 1500 m/s = 1.5 Km/s$;\nMaximum propagation velocity: $v_{max} = 2500 m/s = 2.5 Km/s$;\n\nThe figure of the velocity profile will be generated when building the code. We introduce receivers along the $x$ direction, that is, at all discrete points between $0.0$ Km and $1.0$ Km, at depth $z=0.01$ Km to generate the seismogram.\n2.5 - Damping Functions\nSochaki proposed various forms for the damping function $\\zeta$, including linear, cubic or exponential functions. In general, the damping functions have a similar characteristic: they are zero in the \"interior\" domain $\\Omega_{0}$ and increase toward the outer boundary $\\partial\\Omega$. \nOur particular damping function will be chosen as follows.\n We define the pair of functions $\\zeta_{1}(x,z)$ and $\\zeta_{2}(x,z)$ given, respectively, by:\n\\begin{equation}\n\\zeta_{1}(x,z)=\\left\\{ \\begin{array}{ll}\n0, & \\textrm{if $x\\in \\left(x_{I},x_{F}\\right)$,}\\\\\n\\bar{\\zeta}_{1}(x,z)\\left(\\displaystyle\\frac{\\vert x-x_{I} \\vert}{L_{x}}-\\displaystyle\\frac{1}{2\\pi}\\sin\\left(\\displaystyle\\frac{2\\pi\\vert x-x_{I} \\vert}{L_{x}}\\right)\\right) , & \\textrm{if $x_{I}-L_{x}\\leq x \\leq x_{I}$,}\\\\\n\\bar{\\zeta}_{1}(x,z)\\left(\\displaystyle\\frac{\\vert x-x_{F} \\vert}{L_{x}}-\\displaystyle\\frac{1}{2\\pi}\\sin\\left(\\displaystyle\\frac{2\\pi\\vert x-x_{F} \\vert}{L_{x}}\\right)\\right) , & \\textrm{if $x_{F}\\leq x \\leq x_{F}+L_{x}$.}\\end{array}\\right.\n\\end{equation} \n\\begin{equation}\n\\zeta_{2}(x,z)=\\left\\{ \\begin{array}{ll}\n0, & \\textrm{if $z\\in \\left(z_{I},z_{F}\\right)$,}\\\\\n\\bar{\\zeta}_{2}(x,z)\\left(\\displaystyle\\frac{\\vert z-z_{F} \\vert}{L_{z}}-\\displaystyle\\frac{1}{2\\pi}\\sin\\left(\\displaystyle\\frac{2\\pi\\vert z-z_{F} \\vert}{L_{z}}\\right)\\right) , & \\textrm{if $z_{F}\\leq z \\leq z_{F}+L_{z}$.}\\end{array}\\right.\n\\end{equation} \nThus, we define the function $\\zeta(x,z)$ as being the following function:\n\\begin{equation}\n\\zeta(x,z) = \\displaystyle\\frac{1}{v_{max}}\\left(\\displaystyle\\frac{\\zeta_{1}(x,z)}{\\Delta x}+\\displaystyle\\frac{\\zeta_{2}(x,z)}{\\Delta z} \\right) ,\n\\end{equation}\nwhere $v_{max}$ denotes the maximum velocity of propagation of $c(x,z)$. Below we display the shape of the function $\\zeta_1(x,z)$ with $\\bar{\\zeta_{1}}(x,z)=0.26$ at the left band of the domain. It is similar at the other ones. 
The figures of the damping profiles will be generated when building the code.\n2.6 - Numerical Simulations\nIn the numerical simulations we import the following Python and Devito packages:",
"# NBVAL_IGNORE_OUTPUT\n\nimport numpy as np\nimport matplotlib.pyplot as plot\nimport math as mt\nimport matplotlib.ticker as mticker \nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nfrom matplotlib import cm",
"From Devito's library of examples we import the following structures:",
"# NBVAL_IGNORE_OUTPUT\n\n%matplotlib inline\nfrom examples.seismic import TimeAxis\nfrom examples.seismic import RickerSource\nfrom examples.seismic import Receiver\nfrom devito import SubDomain, Grid, NODE, TimeFunction, Function, Eq, solve, Operator",
"The mesh parameters define the domain $\\Omega_{0}$. The absorption region will be included bellow.",
"nptx = 101\nnptz = 101\nx0 = 0.\nx1 = 1000. \ncompx = x1-x0\nz0 = 0.\nz1 = 1000.\ncompz = z1-z0;\nhxv = (x1-x0)/(nptx-1)\nhzv = (z1-z0)/(nptz-1)",
"Observation: In this code we need to work with symbolic values and the real values of $\\Delta x$ and $\\Delta z$, then the numerica values of $\\Delta x$ and $\\Delta z$ are represented by hxv and hzv, respectively. The symbolic values of $\\Delta x$ and $\\Delta z$ will be given after.\nIn this case, we need to define the size of the bands $L_{x}$ and $L_{z}$ that extend the domain $\\Omega_{0}$ for $\\Omega$. The code that we will implement will build the values $L_{x}$ and $L_{z}$ from choosing a certain amount of points in each direction. Without loss of generality, we say that the size $L_{x}$ is such that:\n\n$L_{x}$ = npmlx*$\\Delta x$;\n0<npmlx<nptx;\n\nSimilarly, we have $L_{z}$ such that:\n\n$L_{z}$ = npmlz*$\\Delta z$;\n0<npmlz<nptz; \n\nSo, we can explicitly define the lengths $L_{x}$ and $L_{z}$ depending on the number of points npmlx and npmlz. Thus, we choose these values as being:",
"npmlx = 20\nnpmlz = 20",
"And we define $L_{x}$ and $L_{z}$ as beeing:",
"lx = npmlx*hxv\nlz = npmlz*hzv",
"Thus, from the nptx points, the first and the last npmlx points are in the absorption region of the x direction. Similarly, from the nptz points, the last npmlz points are in the absorption region of the z direction. Considering the construction of grid, we also have the following elements:",
"nptx = nptx + 2*npmlx\nnptz = nptz + 1*npmlz\nx0 = x0 - hxv*npmlx\nx1 = x1 + hxv*npmlx\ncompx = x1-x0\nz0 = z0\nz1 = z1 + hzv*npmlz\ncompz = z1-z0\norigin = (x0,z0)\nextent = (compx,compz)\nshape = (nptx,nptz)\nspacing = (hxv,hzv)",
"The $\\zeta(x,z)$ function is non zero only in the blue region in the figure that represents the domain. In this way, the wave equation can be divided into 2 situations:\n\nIn the region in blue:\n\n\\begin{equation}\nu_{tt}(x,z,t)+c^2(x,z)\\zeta(x,z)u_t(x,z,t)-c^2(x,z)^\\Delta(u(x,z,t))=c^2(x,z)f(x,z,t),\n\\end{equation}\n\nIn the white region:\n\n\\begin{equation}\nu_{tt}(x,z,t)-c^2(x,z)^\\Delta(u(x,z,t))=c^2(x,z)f(x,z,t),\n\\end{equation}\nFor this reason, we use the structure of the subdomains to represent the white region and the blue region.\nObservation: Note that we can describe the blue region in different ways, that is, the way we choose here is not the only possible discretization for that region.\nFirst, we define the white region, naming this region as d0, which is defined by the following pairs of points $(x,z)$:\n\n$x\\in{npmlx,nptx-npmlx}$ and $z\\in{0,nptz-npmlz}$.\n\nIn the language of subdomains *d0 it is written as:",
"class d0domain(SubDomain):\n name = 'd0'\n def define(self, dimensions):\n x, z = dimensions\n return {x: ('middle', npmlx, npmlx), z: ('middle', 0, npmlz)}\nd0_domain = d0domain()",
"The blue region will be the union of the following regions:\n\nd1 represents the left range in the direction x, where the pairs $(x,z)$ satisfy: $x\\in{0,npmlx}$ and $z\\in{0,nptz}$;\nd2 represents the rigth range in the direction x, where the pairs $(x,z)$ satisfy: $x\\in{nptx-npmlx,nptx}$ and $z\\in{0,nptz}$;\nd3 represents the left range in the direction y, where the pairs $(x,z)$ satisfy: $x\\in{npmlx,nptx-npmlx}$ and $z\\in{nptz-npmlz,nptz}$;\n\nThus, the regions d1, d2 and d3 are described as follows in the language of subdomains:",
"class d1domain(SubDomain):\n name = 'd1'\n def define(self, dimensions):\n x, z = dimensions\n return {x: ('left',npmlx), z: z}\nd1_domain = d1domain()\n\nclass d2domain(SubDomain):\n name = 'd2'\n def define(self, dimensions):\n x, z = dimensions\n return {x: ('right',npmlx), z: z}\nd2_domain = d2domain()\n\nclass d3domain(SubDomain):\n name = 'd3'\n def define(self, dimensions):\n x, z = dimensions\n return {x: ('middle', npmlx, npmlx), z: ('right',npmlz)}\nd3_domain = d3domain()",
"The figure below represents the division of domains that we did previously:\n<img src='domain2.png' width=500>\nThe advantage of dividing into regions is that the equations will be calculated where they actually operate and thus we gain computational efficiency, as we decrease the number of operations to be done. After defining the spatial parameters and constructing the subdomains, we set the spatial grid with the following command:",
"grid = Grid(origin=origin, extent=extent, shape=shape, subdomains=(d0_domain,d1_domain,d2_domain,d3_domain), dtype=np.float64)",
"Again, we use a velocity field given by a binary file. The reading and scaling of the velocity field for the Devito work units is done with the following commands:",
"v0 = np.zeros((nptx,nptz)) \nX0 = np.linspace(x0,x1,nptx)\nZ0 = np.linspace(z0,z1,nptz)\n \nx10 = x0+lx\nx11 = x1-lx\n \nz10 = z0\nz11 = z1 - lz\n\nxm = 0.5*(x10+x11)\nzm = 0.5*(z10+z11)\n \npxm = 0\npzm = 0\n \nfor i in range(0,nptx):\n if(X0[i]==xm): pxm = i\n \nfor j in range(0,nptz):\n if(Z0[j]==zm): pzm = j\n \np0 = 0 \np1 = pzm\np2 = nptz\n \nv0[0:nptx,p0:p1] = 1.5\nv0[0:nptx,p1:p2] = 2.5",
"Previously we introduce the local variables x10,x11,z10,z11,xm,zm,pxm and pzm that help us to create a specific velocity field, where we consider the whole domain (including the absorpion region). Below we include a routine to plot the velocity field.",
"def graph2dvel(vel):\n plot.figure()\n plot.figure(figsize=(16,8))\n fscale = 1/10**(3)\n scale = np.amax(vel[npmlx:-npmlx,0:-npmlz])\n extent = [fscale*(x0+lx),fscale*(x1-lx), fscale*(z1-lz), fscale*(z0)]\n fig = plot.imshow(np.transpose(vel[npmlx:-npmlx,0:-npmlz]), vmin=0.,vmax=scale, cmap=cm.seismic, extent=extent)\n plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))\n plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))\n plot.title('Velocity Profile')\n plot.grid()\n ax = plot.gca()\n divider = make_axes_locatable(ax)\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n cbar = plot.colorbar(fig, cax=cax, format='%.2e')\n cbar.set_label('Velocity [km/s]')\n plot.show()",
"Below we include the plot of velocity field.",
"# NBVAL_IGNORE_OUTPUT\n\ngraph2dvel(v0)",
"Time parameters are defined and constructed by the following sequence of commands:",
"t0 = 0.\ntn = 1000. \nCFL = 0.4\nvmax = np.amax(v0) \ndtmax = np.float64((min(hxv,hzv)*CFL)/(vmax))\nntmax = int((tn-t0)/dtmax)+1\ndt0 = np.float64((tn-t0)/ntmax)",
"With the temporal parameters, we generate the time informations with TimeAxis as follows:",
"time_range = TimeAxis(start=t0,stop=tn,num=ntmax+1)\nnt = time_range.num - 1",
"The symbolic values associated with the spatial and temporal grids that are used in the composition of the equations are given by:",
"(hx,hz) = grid.spacing_map \n(x, z) = grid.dimensions \nt = grid.stepping_dim\ndt = grid.stepping_dim.spacing",
"We chose a single Ricker source, whose frequency is $ 0.005Khz $. This source is positioned at $\\bar{x}$ = 35150m and $\\bar{z}$ = 32m. We then defined the following variables that represents our choice:",
"f0 = 0.01\nnsource = 1\nxposf = 0.5*(compx-2*npmlx*hxv)\nzposf = hzv",
"As we know, Ricker's source is generated by the RickerSource command. Using the parameters listed above, we generate and position the Ricker source with the following sequence of commands:",
"src = RickerSource(name='src',grid=grid,f0=f0,npoint=nsource,time_range=time_range,staggered=NODE,dtype=np.float64)\nsrc.coordinates.data[:, 0] = xposf\nsrc.coordinates.data[:, 1] = zposf",
"Below we include the plot of Ricker source.",
"# NBVAL_IGNORE_OUTPUT\n\nsrc.show()",
"With respect to receivers, the number of receivers is the same number of discrete points in the $x$ direction. So, we position these receivers along the direction $x$, at height $\\bar{z}$ = 10m. In this way, our variables are chosen as:",
"nrec = nptx\nnxpos = np.linspace(x0,x1,nrec)\nnzpos = hzv",
"As we know, receivers are generated by the command Receiver. Thus, we use the parameters listed above and using the Receiver command, we create and position the receivers:",
"rec = Receiver(name='rec',grid=grid,npoint=nrec,time_range=time_range,staggered=NODE,dtype=np.float64)\nrec.coordinates.data[:, 0] = nxpos\nrec.coordinates.data[:, 1] = nzpos",
"The displacement field u is a second order field in time and space, which uses points of type non-staggered. In this way, we construct the displacement field u with the command TimeFunction:",
"u = TimeFunction(name=\"u\",grid=grid,time_order=2,space_order=2,staggered=NODE,dtype=np.float64)",
"The velocity field, the source term and receivers are defined as in the previous notebook:",
"vel0 = Function(name=\"vel0\",grid=grid,space_order=2,staggered=NODE,dtype=np.float64)\nvel0.data[:,:] = v0[:,:]\n\nsrc_term = src.inject(field=u.forward,expr=src*dt**2*vel0**2)\n\nrec_term = rec.interpolate(expr=u)",
"The next step is to create the sequence of structures that reproduce the function $\\zeta(x,z)$. Initially, we define the region $\\Omega_{0}$, since the damping function uses the limits of that region. We previously defined the limits of the $\\Omega$ region to be x0, x1, z0 and z1. Now, we define the limits of the region $\\Omega_{0}$ as: x0pml and x1pml in the direction $x$ and z0pml and z1pml in the direction $z$. These points satisfy the following relationships with the lengths $L_{x}$ and $L_{z}$:\n\nx0pml = x0 + $L_{x}$;\nx1pml = x1 - $L_{x}$;\nz0pml = z0;\nz1pml = z1 - $L_{z}$;\n\nIn terms of program variables, we have the following definitions:",
"x0pml = x0 + npmlx*hxv \nx1pml = x1 - npmlx*hxv \nz0pml = z0 \nz1pml = z1 - npmlz*hzv ",
"Having built the $\\Omega$ limits, we then create a function, which we will call fdamp, which computationally represents the $\\zeta(x,z)$ function. In the fdamp function, we highlight the following elements:\n\nquibar represents a constant choice for $\\bar{\\zeta_{1}}(x,z)$ and $\\bar{\\zeta_{2}}(x,z)$, satisfying $\\bar{\\zeta_{1}}(x,z)=\\bar{\\zeta_{2}}(x,z)$;\nadamp denotes the function $\\zeta_{1}(x,z)$;\nbdamp denotes the function $\\zeta_{2}(x,z)$;\nThe terms a and b locate the $(x,z)$ points that are passed as an argument to the fdamp function.\n\nThe fdamp function is defined using the following structure:",
"def fdamp(x,z):\n\n quibar = 1.5*np.log(1.0/0.001)/(40)\n cte = 1./vmax\n \n a = np.where(x<=x0pml,(np.abs(x-x0pml)/lx),np.where(x>=x1pml,(np.abs(x-x1pml)/lx),0.))\n b = np.where(z<=z0pml,(np.abs(z-z0pml)/lz),np.where(z>=z1pml,(np.abs(z-z1pml)/lz),0.))\n adamp = quibar*(a-(1./(2.*np.pi))*np.sin(2.*np.pi*a))/hxv\n bdamp = quibar*(b-(1./(2.*np.pi))*np.sin(2.*np.pi*b))/hzv\n fdamp = cte*(adamp+bdamp)\n\n return fdamp",
"Created the damping function, we define an array that loads the damping information in the entire domain $\\Omega$. The objective is to assign this array to a Function and use it in the composition of the equations. To generate this array, we will use the function generatemdamp. In summary, this function generates a non-staggered grid and evaluates that grid in the fdamp function. At the end, we generate an array that we call D0 and which will be responsible for providing the damping value at each of the $\\Omega$ points. The generatemdamp function is expressed as follows:",
"def generatemdamp():\n \n X0 = np.linspace(x0,x1,nptx) \n Z0 = np.linspace(z0,z1,nptz) \n X0grid,Z0grid = np.meshgrid(X0,Z0) \n D0 = np.zeros((nptx,nptz)) \n D0 = np.transpose(fdamp(X0grid,Z0grid))\n \n return D0",
"Built the function generatemdamp we will execute it using the command:",
"D0 = generatemdamp();",
"Below we include a routine to plot the damping field.",
"def graph2damp(D): \n plot.figure()\n plot.figure(figsize=(16,8))\n fscale = 1/10**(-3)\n fscale = 10**(-3)\n scale = np.amax(D)\n extent = [fscale*x0,fscale*x1, fscale*z1, fscale*z0]\n fig = plot.imshow(np.transpose(D), vmin=0.,vmax=scale, cmap=cm.seismic, extent=extent)\n plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))\n plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))\n plot.title('Absorbing Layer Function')\n plot.grid()\n ax = plot.gca()\n divider = make_axes_locatable(ax)\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n cbar = plot.colorbar(fig, cax=cax, format='%.2e')\n cbar.set_label('Damping')\n plot.show()",
"Below we include the plot of damping field.",
"# NBVAL_IGNORE_OUTPUT\n\ngraph2damp(D0)",
"Like the velocity function $c(x,z)$, the damping function $\\zeta(x,z)$ is constant in time. Therefore, the damping function will be a second-order Function in space, which uses points of the non-staggered type and which we will evaluate with the D0 array. The symbolic name damp will be assigned to this field.",
"damp = Function(name=\"damp\",grid=grid,space_order=2,staggered=NODE,dtype=np.float64)\ndamp.data[:,:] = D0",
"The expressions for the acoustic equation with damping can be separeted between the white and blue regions.\nTranslating these expressions in terms of an eq that can be inserted in a Devito code, we have that in the white region the equation takes the form:\n\neq1 = u.dt2 - vel0 * vel0 * u.laplace,\n\nand in the blue region we have the following equation:\n\neq2 = u.dt2 + vel0 * vel0 * damp * u.dtc - vel0 * vel0 * u.laplace.\n\nHere u.dtc represents the centered derivative with respect to the variable $t$ for the field u. Then, we set the two pdes for the two regions",
"pde0 = Eq(u.dt2 - u.laplace*vel0**2)\npde1 = Eq(u.dt2 - u.laplace*vel0**2 + vel0**2*damp*u.dtc)",
"As we did on the notebook <a href=\"introduction.ipynb\">Introduction to Acoustic Problem</a>, we define the stencils for each of the pdes that we created previously. In the case of pde0 it is defined only in the white region, which is represented by subdomain d0. Then, we define the stencil0 which resolves pde0 in d0 and it is defined as follows:",
"stencil0 = Eq(u.forward, solve(pde0,u.forward),subdomain = grid.subdomains['d0'])",
"The pde1 will be applied in the blue region, the union of the subdomains d1, d2 and d3. In this way, we create a vector called subds that comprises these three subdomains, and we are ready to set the corresponding stencil",
"subds = ['d1','d2','d3']\n\nstencil1 = [Eq(u.forward, solve(pde1,u.forward),subdomain = grid.subdomains[subds[i]]) for i in range(0,len(subds))]",
"The boundary conditions of the problem are kept the same as the notebook <a href=\"1_introduction.ipynb\">Introduction to Acoustic Problem</a>. So these are placed in the term bc and have the following form:",
"bc = [Eq(u[t+1,0,z],0.),Eq(u[t+1,nptx-1,z],0.),Eq(u[t+1,x,nptz-1],0.),Eq(u[t+1,x,0],u[t+1,x,1])]",
"We then define the operator (op) that join the acoustic equation, source term, boundary conditions and receivers.\n\n\n\nThe acoustic wave equation in the d0 region: [stencil0];\n\n\n\n\nThe acoustic wave equation in the d1, d2 and d3 region: [stencil1];\n\n\n\n\nSource term: src_term;\n\n\n\n\nBoundary conditions: bc;\n\n\n\n\nReceivers: rec_term;",
"# NBVAL_IGNORE_OUTPUT\n\nop = Operator([stencil0,stencil1] + src_term + bc + rec_term,subs=grid.spacing_map)",
"We reset the field u:",
"u.data[:] = 0.",
"We assign in op the number of time steps it must execute and the size of the time step in the local variables time and dt, respectively.",
"# NBVAL_IGNORE_OUTPUT\n\nop(time=nt,dt=dt0)",
"To view the result of the displacement field at the end time, we use the graph2d routine given by:",
"def graph2d(U): \n plot.figure()\n plot.figure(figsize=(16,8))\n fscale = 1/10**(3)\n scale = np.amax(U[npmlx:-npmlx,0:-npmlz])/10.\n extent = [fscale*x0pml,fscale*x1pml,fscale*z1pml,fscale*z0pml]\n fig = plot.imshow(np.transpose(U[npmlx:-npmlx,0:-npmlz]),vmin=-scale, vmax=scale, cmap=cm.seismic, extent=extent)\n plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))\n plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))\n plot.axis('equal')\n plot.title('Map - Acoustic Problem with Devito')\n plot.grid()\n ax = plot.gca()\n divider = make_axes_locatable(ax)\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n cbar = plot.colorbar(fig, cax=cax, format='%.2e')\n cbar.set_label('Displacement [km]')\n plot.draw()\n plot.show()\n\n# NBVAL_IGNORE_OUTPUT\n\ngraph2d(u.data[0,:,:])",
"Note that the solution obtained here has a reduction in noise when compared to the results displayed on the notebook <a href=\"01_introduction.ipynb\">Introduction to Acoustic Problem</a>. To plot the result of the Receivers we use the graph2drec routine.",
"def graph2drec(rec): \n plot.figure()\n plot.figure(figsize=(16,8))\n fscaled = 1/10**(3)\n fscalet = 1/10**(3)\n scale = np.amax(rec[:,npmlx:-npmlx])/10.\n extent = [fscaled*x0pml,fscaled*x1pml, fscalet*tn, fscalet*t0]\n fig = plot.imshow(rec[:,npmlx:-npmlx], vmin=-scale, vmax=scale, cmap=cm.seismic, extent=extent)\n plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))\n plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f s'))\n plot.axis('equal')\n plot.title('Receivers Signal Profile with Damping - Devito')\n ax = plot.gca()\n divider = make_axes_locatable(ax)\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n cbar = plot.colorbar(fig, cax=cax, format='%.2e')\n plot.show()\n\n# NBVAL_IGNORE_OUTPUT\n\ngraph2drec(rec.data)\n\nassert np.isclose(np.linalg.norm(rec.data), 990, rtol=1)",
"2.6 - Conclusions\n\nThe damping strategy is a simple way to reduce artificial wave reflections coming from the computational boundaries, leading to a solution with less noise at the end of the simulation, when compared to the results of the notebook <a href=\"01_introduction.ipynb\">Introduction to Acoustic Problem</a>. However, the level of artificial reflections on the boundaries is still high. In the following notebooks we present methods which are more effective.\n\n2.7 - Reference\n\nSochaki, J., Kubichek, R., George, J., Fletcher, W.R. and Smithson, S. (1987). \"Absorbing boundary conditions and surface waves,\" Geophysics, 52(1), 60-71. DOI: 10.1190/1.1442241. <a href=\"https://library.seg.org/doi/abs/10.1190/1.1442241\">Reference Link.</a>"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kazuhirokomoda/deep_learning | fast.ai/lesson1/dogscats_run.ipynb | mit | [
"lesson1: Convolutional Neural Networks with dogscats\nLet's classify images using deep learning and submit the result to Kaggle!\nPrerequisite\nThis notebook assumes Keras with Theano backend.\n- TODO: make TensorFlow version as another notebook\nIt also assumes that you will run it on either one of these two cases:\n- Floydhub (--env theano:py2 -> Theano rel-0.8.2 + Keras 1.2.2 on Python2)\n- local conda virtual environment (Theano 0.9.0 + Keras 2.0.4 on Python3)\nRefer to this FloydHub document for available FloydHub environments.\nSetup\nMake sure to have these files in the parent directory of the directory where you execute this notebook.\n\navailable in the official repo for Keras1 on Python2 (rename from original files)\nutils_keras1.py\nvgg16_keras1.py\nvgg16bn_keras1.py\n\n\navailable in the unofficial repo for Keras2 on Python3\nutils.py\nvgg16.py\nvgg16bn.py\n\n\n\nThe directory structure looks like this. Please modifiy the symlinks according to your environment.\n\n(*) only for FloydHub\n(**) only for local\n\nfloyd_requirements.txt (*)\nfloydhub.data.unzip/ (*)\nfloydhub.data.zipped/ (*)\n dogscats.zip\nlesson1/\n data/ (**)\n redux/\n train/\n cat.437.jpg\n dog.9924.jpg\n ...\n test/\n 231.jpg\n 325.jpg\n ...\n dogscats_run.ipynb\n floyd_requirements.txt -> ../floyd_requirements.txt (*)\n utils.py -> ../utils(_keras1).py\n vgg16.py -> ../vgg16(_keras1).py\n vgg16bn.py -> ../vgg16bn(_keras1).py\nutils.py\nutils_keras1.py\nvgg16.py\nvgg16_keras1.py\nvgg16bn.py\nvgg16bn_keras1.py\nPrepare data\nThe details of data preparation largely depends on which dataset you use. In this section, we will use a pre-organized dataset from http://files.fast.ai/files/dogscats.zip\nFor another example of data preparation, please refer to this notebook\nHow the dataset looks like\nAfter extracting the dogscats.zip file, the directory structure look like this.\ndogscats/\n models/\n sample/\n train/\n cats/\n cat.394.jpg\n ... (8 items)\n dogs/\n dog.1402.jpg\n ... (8 items)\n valid/\n cats/\n cat.10435.jpg\n ... (4 items)\n dogs/\n dog.10459.jpg\n ... (4 items)\n features.npy\n labels.npy\n test1/\n 1.jpg\n 10.jpg\n 100.jpg\n ... (12500 items)\n train/\n cats/\n cat.0.jpg\n cat.1.jpg\n cat.3.jpg\n ... (11500 items)\n dogs/\n cat.0.jpg\n cat.1.jpg\n cat.2.jpg\n cat.4.jpg\n ... (11500 items)\n valid/\n cats/\n cat.2.jpg\n cat.5.jpg\n ... (1000 item. these are copied from train/cats/ directory)\n dogs/\n dog.3.jpg\n dog.9.jpg\n ... (1000 item. 
these are copied from train/dogs/ directory)\nFloydHub\nThe cell below shows how to update data to FloydHub.\n```\nfrom the directory which this notebook is executed\ncd ../floydhub.data.zipped/; pwd\nexpected: empty\nls -l\nwget http://files.fast.ai/files/dogscats.zip\nupload the zipped dataset to floydnet, and create a floydnet dataset\nfloyd data init dogscats.zipped\nfloyd data upload\n```\nUsing the data we have just uploaded to FloydHub, let's unzip it on FloydHub.\n```\nfrom the directory which this notebook is executed\ncd ../floydhub.fast.ai.data.unzip/; pwd\nexpected: empty\nls -l\nfloyd init dogscats.unzip\nfloyd run --gpu --data [data ID of the uploaded zip] \"unzip /input/dogscats.zip -d /output\"\n```\nPlease note:\n- the data ID should be the one you see from the above step\n- the mounted data is available in /input/ directory, and you need to direct the unzipped files to /output/ directory\nlocal\nTODO\nRun the notebook\nNow let's run the notebook in the environment of your choice.\n```\nfrom the directory which this notebook is executed\ncd ./; pwd\nFloydHub\nfloyd init dogscats\nfloyd run --mode jupyter --data [data ID of unzipped data] --env theano:py2 --gpu\nalternatively, for local\njupyter notebook\n```\nand check ~/.keras/keras.json\n```\nmkdir ~/.keras\nFloydHub (Keras1)\necho '{\n \"image_dim_ordering\": \"th\",\n \"epsilon\": 1e-07,\n \"floatx\": \"float32\",\n \"backend\": \"theano\"\n}' > ~/.keras/keras.json\nalternatively, for local (Keras2)\necho '{\n\"image_data_format\": \"channels_first\",\n\"backend\": \"theano\",\n\"floatx\": \"float32\",\n\"epsilon\": 1e-07\n}' > ~/.keras/keras.json\n```\nFinally, let's start running the notebook.",
"# make some Python3 functions available on Python2\nfrom __future__ import division, print_function\n\nimport sys\nprint(sys.version_info)\n\nimport theano\nprint(theano.__version__)\n\nimport keras\nprint(keras.__version__)\n\n# FloydHub: check data\n%ls /input/dogscats/\n\n# check current directory\n%pwd\n%ls\n\n# see some files are loaded fine\n%cat floyd_requirements.txt\n\n# check no Keras2 specific function is used (when Keras1 is used)\n%cat utils.py\n\n#Create references to important directories we will use over and over\nimport os, sys\ncurrent_dir = os.getcwd()\n\nLESSON_HOME_DIR = current_dir\n\n# FloydHub\nDATA_HOME_DIR = \"/input/dogscats/\"\nOUTPUT_HOME_DIR = \"/output/\"\n\n# alternatively, for local\n#DATA_HOME_DIR = current_dir+'/data/redux'\n\n#import modules\nfrom utils import *\nfrom vgg16 import Vgg16\n\n#Instantiate plotting tool\n#In Jupyter notebooks, you will need to run this command before doing any plotting\n%matplotlib inline",
"Finetuning and Training",
"%cd $DATA_HOME_DIR\n\n#Set path to sample/ path if desired\npath = DATA_HOME_DIR + '/' #'/sample/'\ntest_path = DATA_HOME_DIR + '/test1/' #We use all the test data\n\n# FloydHub\n# data needs to be output under /output\n# if results_path cannot be created, execute mkdir directly in the terminal\nresults_path = OUTPUT_HOME_DIR + '/results/'\n%mkdir results_path\n\ntrain_path = path + '/train/'\nvalid_path = path + '/valid/'",
"Use a pretrained VGG model with our Vgg16 class",
"# As large as you can, but no larger than 64 is recommended.\n#batch_size = 8\nbatch_size = 64\n\nno_of_epochs=3",
"The original pre-trained Vgg16 class classifies images into one of the 1000 categories. This number of categories depends on the dataset which Vgg16 was trained with. (http://image-net.org/challenges/LSVRC/2014/browse-synsets)\nIn order to classify images into the categories which we prepare (2 categories of dogs/cats, in this notebook), fine-tuning technology is useful. It:\n- keeps the most weights from the pre-trained Vgg16 model, but modifies only a few parts of the weights\n- changes the dimension of the output layer (from 1000 to 2, in this notebook)",
"vgg = Vgg16()\n\n# Grab a few images at a time for training and validation.\nbatches = vgg.get_batches(train_path, batch_size=batch_size)\nval_batches = vgg.get_batches(valid_path, batch_size=batch_size*2)\n\n# Finetune: note that the vgg model is compiled inside the finetune method.\nvgg.finetune(batches)\n\n# Fit: note that we are passing in the validation dataset to the fit() method\n# For each epoch we test our model against the validation set\nlatest_weights_filename = None\n\n# FloydHub (Keras1)\nfor epoch in range(no_of_epochs):\n print(\"Running epoch: %d\" % epoch)\n vgg.fit(batches, val_batches, nb_epoch=1)\n latest_weights_filename = 'ft%d.h5' % epoch\n vgg.model.save_weights(results_path+latest_weights_filename)\nprint(\"Completed %s fit operations\" % no_of_epochs)\n\n# alternatively, for local (Keras2)\n\"\"\"\nfor epoch in range(no_of_epochs):\n print(\"Running epoch: %d\" % epoch)\n vgg.fit(batches, val_batches, batch_size, nb_epoch=1)\n latest_weights_filename = 'ft%d.h5' % epoch\n vgg.model.save_weights(results_path+latest_weights_filename)\nprint(\"Completed %s fit operations\" % no_of_epochs)\n\"\"\"",
"Generate Predictions",
"# OUTPUT_HOME_DIR, not DATA_HOME_DIR due to FloydHub restriction\n%cd $OUTPUT_HOME_DIR\n%mkdir -p test1/unknown\n\n%cd $OUTPUT_HOME_DIR/test1\n%cp $test_path/*.jpg unknown/\n\n# rewrite test_path\ntest_path = OUTPUT_HOME_DIR + '/test1/' #We use all the test data\n\nbatches, preds = vgg.test(test_path, batch_size = batch_size*2)\n\nprint(preds[:5])\n\nfilenames = batches.filenames\nprint(filenames[:5])\n\n# You can verify the column ordering by viewing some images\nfrom PIL import Image\nImage.open(test_path + filenames[2])\n\n#Save our test results arrays so we can use them again later\nsave_array(results_path + 'test_preds.dat', preds)\nsave_array(results_path + 'filenames.dat', filenames)",
"Validate Predictions\nCalculate predictions on validation set, so we can find correct and incorrect examples:",
"vgg.model.load_weights(results_path+latest_weights_filename)\n\nval_batches, probs = vgg.test(valid_path, batch_size = batch_size)\n\nfilenames = val_batches.filenames\nexpected_labels = val_batches.classes #0 or 1\n\n#Round our predictions to 0/1 to generate labels\nour_predictions = probs[:,0]\nour_labels = np.round(1-our_predictions)",
"(TODO) look at data to improve model\nconfusion matrix",
"from sklearn.metrics import confusion_matrix\ncm = confusion_matrix(expected_labels, our_labels)\n\nplot_confusion_matrix(cm, val_batches.class_indices)",
"Submit Predictions to Kaggle!\nThis section also depends on which dataset you use (and which Kaggle competition you are participating)",
"#Load our test predictions from file\npreds = load_array(results_path + 'test_preds.dat')\nfilenames = load_array(results_path + 'filenames.dat')\n\n#Grab the dog prediction column\nisdog = preds[:,1]\nprint(\"Raw Predictions: \" + str(isdog[:5]))\nprint(\"Mid Predictions: \" + str(isdog[(isdog < .6) & (isdog > .4)]))\nprint(\"Edge Predictions: \" + str(isdog[(isdog == 1) | (isdog == 0)]))\n\n# sneaky trick to round down our edge predictions\n# Swap all ones with .95 and all zeros with .05\nisdog = isdog.clip(min=0.05, max=0.95)\n\n#Extract imageIds from the filenames in our test/unknown directory \nfilenames = batches.filenames\nids = np.array([int(f[8:f.find('.')]) for f in filenames])\n\nsubm = np.stack([ids,isdog], axis=1)\nsubm[:5]\n\n# FloydHub\n%cd $OUTPUT_HOME_DIR\n\n# alternatively, for local\n#%cd $DATA_HOME_DIR\n\nsubmission_file_name = 'submission1.csv'\nnp.savetxt(submission_file_name, subm, fmt='%d,%.5f', header='id,label', comments='')\n\nfrom IPython.display import FileLink\n\n# FloydHub\n%cd $OUTPUT_HOME_DIR\nFileLink(submission_file_name)\n\n# alternatively, for local\n#%cd $LESSON_HOME_DIR\n#FileLink('data/redux/'+submission_file_name)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/tf-estimator-tutorials | 01_Regression/04.0 - TF Regression Model - Dataset Input.ipynb | apache-2.0 | [
"import tensorflow as tf\nfrom tensorflow import data\nimport shutil\nimport math\nfrom datetime import datetime\nfrom tensorflow.python.feature_column import feature_column\n\nfrom tensorflow.contrib.learn import learn_runner\nfrom tensorflow.contrib.learn import make_export_strategy\n\nprint(tf.__version__)",
"Steps to use the TF Experiment APIs\n\nDefine dataset metadata\nDefine data input function to read the data from csv files + feature processing\nCreate TF feature columns based on metadata + extended feature columns\nDefine an estimator (DNNRegressor) creation function with the required feature columns & parameters\nDefine a serving function to export the model\nRun an Experiment with learn_runner to train, evaluate, and export the model\nEvaluate the model using test data\nPerform predictions",
"MODEL_NAME = 'reg-model-03'\n\nTRAIN_DATA_FILES_PATTERN = 'data/train-*.csv'\nVALID_DATA_FILES_PATTERN = 'data/valid-*.csv'\nTEST_DATA_FILES_PATTERN = 'data/test-*.csv'\n\nRESUME_TRAINING = False\nPROCESS_FEATURES = True\nEXTEND_FEATURE_COLUMNS = True\nMULTI_THREADING = True",
"1. Define Dataset Metadata\n\nCSV file header and defaults\nNumeric and categorical feature names\nTarget feature name\nUnused columns",
"HEADER = ['key','x','y','alpha','beta','target']\nHEADER_DEFAULTS = [[0], [0.0], [0.0], ['NA'], ['NA'], [0.0]]\n\nNUMERIC_FEATURE_NAMES = ['x', 'y'] \n\nCATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY = {'alpha':['ax01', 'ax02'], 'beta':['bx01', 'bx02']}\nCATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.keys())\n\nFEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES\n\nTARGET_NAME = 'target'\n\nUNUSED_FEATURE_NAMES = list(set(HEADER) - set(FEATURE_NAMES) - {TARGET_NAME})\n\nprint(\"Header: {}\".format(HEADER))\nprint(\"Numeric Features: {}\".format(NUMERIC_FEATURE_NAMES))\nprint(\"Categorical Features: {}\".format(CATEGORICAL_FEATURE_NAMES))\nprint(\"Target: {}\".format(TARGET_NAME))\nprint(\"Unused Features: {}\".format(UNUSED_FEATURE_NAMES))",
"2. Define Data Input Function\n\nInput csv files name pattern\nUse TF Dataset APIs to read and process the data\nParse CSV lines to feature tensors\nApply feature processing\nReturn (features, target) tensors\n\na. parsing and preprocessing logic",
"def parse_csv_row(csv_row):\n \n columns = tf.decode_csv(csv_row, record_defaults=HEADER_DEFAULTS)\n features = dict(zip(HEADER, columns))\n \n for column in UNUSED_FEATURE_NAMES:\n features.pop(column)\n \n target = features.pop(TARGET_NAME)\n\n return features, target\n\ndef process_features(features):\n\n features[\"x_2\"] = tf.square(features['x'])\n features[\"y_2\"] = tf.square(features['y'])\n features[\"xy\"] = tf.multiply(features['x'], features['y']) # features['x'] * features['y']\n features['dist_xy'] = tf.sqrt(tf.squared_difference(features['x'],features['y']))\n \n return features",
"b. data pipeline input function",
"def csv_input_fn(files_name_pattern, mode=tf.estimator.ModeKeys.EVAL, \n skip_header_lines=0, \n num_epochs=None, \n batch_size=200):\n \n shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False\n \n print(\"\")\n print(\"* data input_fn:\")\n print(\"================\")\n print(\"Input file(s): {}\".format(files_name_pattern))\n print(\"Batch size: {}\".format(batch_size))\n print(\"Epoch Count: {}\".format(num_epochs))\n print(\"Mode: {}\".format(mode))\n print(\"Shuffle: {}\".format(shuffle))\n print(\"================\")\n print(\"\")\n \n file_names = tf.matching_files(files_name_pattern)\n\n dataset = data.TextLineDataset(filenames=file_names)\n dataset = dataset.skip(skip_header_lines)\n \n if shuffle:\n dataset = dataset.shuffle(buffer_size=2 * batch_size + 1)\n\n #useful for distributed training when training on 1 data file, so it can be shareded\n #dataset = dataset.shard(num_workers, worker_index)\n \n dataset = dataset.batch(batch_size)\n dataset = dataset.map(lambda csv_row: parse_csv_row(csv_row))\n \n if PROCESS_FEATURES:\n dataset = dataset.map(lambda features, target: (process_features(features), target))\n \n #dataset = dataset.batch(batch_size) #??? very long time\n dataset = dataset.repeat(num_epochs)\n iterator = dataset.make_one_shot_iterator()\n \n features, target = iterator.get_next()\n return features, target\n\nfeatures, target = csv_input_fn(files_name_pattern=\"\")\nprint(\"Feature read from CSV: {}\".format(list(features.keys())))\nprint(\"Target read from CSV: {}\".format(target))",
"3. Define Feature Columns\nThe input numeric columns are assumed to be normalized (or have the same scale). Otherise, a normlizer_fn, along with the normlisation params (mean, stdv) should be passed to tf.feature_column.numeric_column() constructor.",
"def extend_feature_columns(feature_columns):\n \n # crossing, bucketizing, and embedding can be applied here\n \n feature_columns['alpha_X_beta'] = tf.feature_column.crossed_column(\n [feature_columns['alpha'], feature_columns['beta']], 4)\n \n return feature_columns\n\ndef get_feature_columns():\n \n CONSTRUCTED_NUMERIC_FEATURES_NAMES = ['x_2', 'y_2', 'xy', 'dist_xy']\n all_numeric_feature_names = NUMERIC_FEATURE_NAMES.copy() \n \n if PROCESS_FEATURES:\n all_numeric_feature_names += CONSTRUCTED_NUMERIC_FEATURES_NAMES\n\n numeric_columns = {feature_name: tf.feature_column.numeric_column(feature_name)\n for feature_name in all_numeric_feature_names}\n\n categorical_column_with_vocabulary = \\\n {item[0]: tf.feature_column.categorical_column_with_vocabulary_list(item[0], item[1])\n for item in CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.items()}\n \n feature_columns = {}\n\n if numeric_columns is not None:\n feature_columns.update(numeric_columns)\n\n if categorical_column_with_vocabulary is not None:\n feature_columns.update(categorical_column_with_vocabulary)\n \n if EXTEND_FEATURE_COLUMNS:\n feature_columns = extend_feature_columns(feature_columns)\n \n return feature_columns\n\nfeature_columns = get_feature_columns()\nprint(\"Feature Columns: {}\".format(feature_columns))",
"4. Define an Estimator Creation Function\n\nGet dense (numeric) columns from the feature columns\nConvert categorical columns to indicator columns\nCreate Instantiate a DNNRegressor estimator given dense + indicator feature columns + params",
"def create_estimator(run_config, hparams):\n \n feature_columns = list(get_feature_columns().values())\n \n dense_columns = list(\n filter(lambda column: isinstance(column, feature_column._NumericColumn),\n feature_columns\n )\n )\n\n categorical_columns = list(\n filter(lambda column: isinstance(column, feature_column._VocabularyListCategoricalColumn) |\n isinstance(column, feature_column._BucketizedColumn),\n feature_columns)\n )\n\n indicator_columns = list(\n map(lambda column: tf.feature_column.indicator_column(column),\n categorical_columns)\n )\n \n \n estimator = tf.estimator.DNNRegressor(\n \n feature_columns= dense_columns + indicator_columns ,\n hidden_units= hparams.hidden_units,\n \n optimizer= tf.train.AdamOptimizer(),\n activation_fn= tf.nn.elu,\n dropout= hparams.dropout_prob,\n \n config= run_config\n )\n\n print(\"\")\n print(\"Estimator Type: {}\".format(type(estimator)))\n print(\"\")\n \n return estimator",
"5. Define Serving Funcion",
"def csv_serving_input_fn():\n \n SERVING_HEADER = ['x','y','alpha','beta']\n SERVING_HEADER_DEFAULTS = [[0.0], [0.0], ['NA'], ['NA']]\n\n rows_string_tensor = tf.placeholder(dtype=tf.string,\n shape=[None],\n name='csv_rows')\n \n receiver_tensor = {'csv_rows': rows_string_tensor}\n\n row_columns = tf.expand_dims(rows_string_tensor, -1)\n columns = tf.decode_csv(row_columns, record_defaults=SERVING_HEADER_DEFAULTS)\n features = dict(zip(SERVING_HEADER, columns))\n\n return tf.estimator.export.ServingInputReceiver(\n process_features(features), receiver_tensor)",
"6. Run Experiment\na. Define Experiment Function",
"def generate_experiment_fn(**experiment_args):\n\n def _experiment_fn(run_config, hparams):\n\n train_input_fn = lambda: csv_input_fn(\n files_name_pattern=TRAIN_DATA_FILES_PATTERN,\n mode = tf.contrib.learn.ModeKeys.TRAIN,\n num_epochs=hparams.num_epochs,\n batch_size=hparams.batch_size\n )\n\n eval_input_fn = lambda: csv_input_fn(\n files_name_pattern=VALID_DATA_FILES_PATTERN,\n mode=tf.contrib.learn.ModeKeys.EVAL,\n num_epochs=1,\n batch_size=hparams.batch_size\n )\n\n estimator = create_estimator(run_config, hparams)\n\n return tf.contrib.learn.Experiment(\n estimator,\n train_input_fn=train_input_fn,\n eval_input_fn=eval_input_fn,\n eval_steps=None,\n **experiment_args\n )\n\n return _experiment_fn",
"b. Set HParam and RunConfig",
"TRAIN_SIZE = 12000\nNUM_EPOCHS = 1000\nBATCH_SIZE = 500\nNUM_EVAL = 10\nCHECKPOINT_STEPS = int((TRAIN_SIZE/BATCH_SIZE) * (NUM_EPOCHS/NUM_EVAL))\n\nhparams = tf.contrib.training.HParams(\n num_epochs = NUM_EPOCHS,\n batch_size = BATCH_SIZE,\n hidden_units=[8, 4], \n dropout_prob = 0.0)\n\nmodel_dir = 'trained_models/{}'.format(MODEL_NAME)\n\nrun_config = tf.contrib.learn.RunConfig(\n save_checkpoints_steps=CHECKPOINT_STEPS,\n tf_random_seed=19830610,\n model_dir=model_dir\n)\n\nprint(hparams)\nprint(\"Model Directory:\", run_config.model_dir)\nprint(\"\")\nprint(\"Dataset Size:\", TRAIN_SIZE)\nprint(\"Batch Size:\", BATCH_SIZE)\nprint(\"Steps per Epoch:\",TRAIN_SIZE/BATCH_SIZE)\nprint(\"Total Steps:\", (TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS)\nprint(\"Required Evaluation Steps:\", NUM_EVAL) \nprint(\"That is 1 evaluation step after each\",NUM_EPOCHS/NUM_EVAL,\" epochs\")\nprint(\"Save Checkpoint After\",CHECKPOINT_STEPS,\"steps\")",
"c. Run Experiment via learn_runner",
"if not RESUME_TRAINING:\n print(\"Removing previous artifacts...\")\n shutil.rmtree(model_dir, ignore_errors=True)\nelse:\n print(\"Resuming training...\") \n\n\ntf.logging.set_verbosity(tf.logging.INFO)\n\ntime_start = datetime.utcnow() \nprint(\"Experiment started at {}\".format(time_start.strftime(\"%H:%M:%S\")))\nprint(\".......................................\") \n\n\nlearn_runner.run(\n experiment_fn=generate_experiment_fn(\n\n export_strategies=[make_export_strategy(\n csv_serving_input_fn,\n exports_to_keep=1\n )]\n ),\n run_config=run_config,\n schedule=\"train_and_evaluate\",\n hparams=hparams\n)\n\ntime_end = datetime.utcnow() \nprint(\".......................................\")\nprint(\"Experiment finished at {}\".format(time_end.strftime(\"%H:%M:%S\")))\nprint(\"\")\ntime_elapsed = time_end - time_start\nprint(\"Experiment elapsed time: {} seconds\".format(time_elapsed.total_seconds()))\n ",
"7. Evaluate the Model",
"TRAIN_SIZE = 12000\nVALID_SIZE = 3000\nTEST_SIZE = 5000\n\ntrain_input_fn = lambda: csv_input_fn(files_name_pattern= TRAIN_DATA_FILES_PATTERN, \n mode= tf.estimator.ModeKeys.EVAL,\n batch_size= TRAIN_SIZE)\n\nvalid_input_fn = lambda: csv_input_fn(files_name_pattern= VALID_DATA_FILES_PATTERN, \n mode= tf.estimator.ModeKeys.EVAL,\n batch_size= VALID_SIZE)\n\ntest_input_fn = lambda: csv_input_fn(files_name_pattern= TEST_DATA_FILES_PATTERN, \n mode= tf.estimator.ModeKeys.EVAL,\n batch_size= TEST_SIZE)\n\nestimator = create_estimator(run_config, hparams)\n\ntrain_results = estimator.evaluate(input_fn=train_input_fn, steps=1)\ntrain_rmse = round(math.sqrt(train_results[\"average_loss\"]),5)\nprint()\nprint(\"############################################################################################\")\nprint(\"# Train RMSE: {} - {}\".format(train_rmse, train_results))\nprint(\"############################################################################################\")\n\nvalid_results = estimator.evaluate(input_fn=valid_input_fn, steps=1)\nvalid_rmse = round(math.sqrt(valid_results[\"average_loss\"]),5)\nprint()\nprint(\"############################################################################################\")\nprint(\"# Valid RMSE: {} - {}\".format(valid_rmse,valid_results))\nprint(\"############################################################################################\")\n\ntest_results = estimator.evaluate(input_fn=test_input_fn, steps=1)\ntest_rmse = round(math.sqrt(test_results[\"average_loss\"]),5)\nprint()\nprint(\"############################################################################################\")\nprint(\"# Test RMSE: {} - {}\".format(test_rmse, test_results))\nprint(\"############################################################################################\")",
"8. Prediction",
"import itertools\n\npredict_input_fn = lambda: csv_input_fn(files_name_pattern=TEST_DATA_FILES_PATTERN, \n mode= tf.estimator.ModeKeys.PREDICT,\n batch_size= 5)\n\npredictions = estimator.predict(input_fn=predict_input_fn)\nvalues = list(map(lambda item: item[\"predictions\"][0],list(itertools.islice(predictions, 5))))\nprint()\nprint(\"Predicted Values: {}\".format(values))",
"What can we improve?\n\n\nUse .tfrecords files instead of CSV - TFRecord files are optimised for tensorflow.\n\n\nBuild a Custom Estimator - Custom Estimator APIs give you the flexibility to build custom models in a simple and standard way"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ctralie/TUMTopoTimeSeries2016 | SlidingWindow1-Basics.ipynb | apache-2.0 | [
"Geometry of Sliding Window Embeddings\nIn this module, you will interactively explore how various parameters of sliding window embeddings of 1 dimensional periodic time series data affect the geometry of their embeddings. In each code box, press ctrl-enter (or cmd-enter on a Mac) to execute the code. Progress through the lab sequentially. As you examine the plots in each experiment, answer the questions that follow.\nThis first box imports all of the necessary Python packages to run the code in this module. There will be a similar box at the beginning of every module in this series.",
"#Do all of the imports and setup inline plotting\nimport numpy as np\nfrom ripser import ripser\n\n%matplotlib notebook\nimport matplotlib.pyplot as plt\nfrom matplotlib import gridspec\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom sklearn.decomposition import PCA\n\nfrom scipy.interpolate import InterpolatedUnivariateSpline\n\nimport ipywidgets as widgets\nfrom IPython.display import display\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\nfrom IPython.display import clear_output",
"Pure Sinusoid Sliding Window\nIn this first experiment, you will alter the extent of the sliding window of a pure sinusoid and examine how the geometry of a 2-D embedding changes. \nFirst, setup and plot a pure sinusoid in NumPy:",
"# Step 1: Setup the signal\nT = 40 # The period in number of samples\nNPeriods = 4 # How many periods to go through\nN = T*NPeriods #The total number of samples\nt = np.linspace(0, 2*np.pi*NPeriods, N+1)[:N] # Sampling indices in time\nx = np.cos(t) # The final signal\nplt.plot(x);",
"Sliding Window Code\nThe code below performs a sliding window embedding on a 1D signal. The parameters are as follows:\n| | |\n|:-:|---|\n|$x$ | The 1-D signal (numpy array) |\n|dim|The dimension of the embedding|\n|$\\tau$ | The skip between samples in a given window |\n|$dT$ | The distance to slide from one window to the next |\nThat is, along the signal given by the array $x$, the first three windows will will be $$\\begin{bmatrix} x(\\tau)\\ x(2\\tau) \\ \\ldots \\ x((\\mbox{dim}-1)\\cdot\\tau)\\end{bmatrix}, \\begin{bmatrix} x(dT + \\tau)\\ x(dT +2\\tau) \\ \\ldots \\ x(dT +(\\mbox{dim}-1)\\cdot\\tau)\\end{bmatrix}, \\begin{bmatrix} x(2dT + \\tau)\\ x(2dT +2\\tau) \\ \\ldots \\ x(2dT +(\\mbox{dim}-1)\\cdot\\tau)\\end{bmatrix}$$\nSpline interpolation is used to fill in information between signal samples, which is necessary for certain combinations of parameters, such as a non-integer $\\tau$ or $dT$.\nThe function getSlidingWindow below creates an array $X$ containing the windows as its columns.",
"def getSlidingWindow(x, dim, Tau, dT):\n \"\"\"\n Return a sliding window of a time series,\n using arbitrary sampling. Use linear interpolation\n to fill in values in windows not on the original grid\n Parameters\n ----------\n x: ndarray(N)\n The original time series\n dim: int\n Dimension of sliding window (number of lags+1)\n Tau: float\n Length between lags, in units of time series\n dT: float\n Length between windows, in units of time series\n Returns\n -------\n X: ndarray(N, dim)\n All sliding windows stacked up\n \"\"\"\n N = len(x)\n NWindows = int(np.floor((N-dim*Tau)/dT))\n if NWindows <= 0:\n print(\"Error: Tau too large for signal extent\")\n return np.zeros((3, dim))\n X = np.zeros((NWindows, dim))\n spl = InterpolatedUnivariateSpline(np.arange(N), x)\n for i in range(NWindows):\n idxx = dT*i + Tau*np.arange(dim)\n start = int(np.floor(idxx[0]))\n end = int(np.ceil(idxx[-1]))+2\n # Only take windows that are within range\n if end >= len(x):\n X = X[0:i, :]\n break\n X[i, :] = spl(idxx)\n return X",
"Sliding Window Result\nWe will now perform a sliding window embedding with various choices of parameters. Principal component analysis will be performed to project the result down to 2D for visualization. \nThe first two eigenvalues computed by PCA will be printed. The closer these eigenvalues are to each other, the rounder and more close to a circle the 2D projection of the embedding is. A red vertical line will be drawn to show the product of $\\tau$ and the dimension, or \"extent\" (window length).\nAn important note: we choose to project the results to 2D (or later, to 3D). Nothing in particular tells us that this is the best choice of dimension. We merely make this choice to enable visualization. In general, when doing PCA, we want to choose enough eigenvalues to account for a significant portion of explained variance.\n Exercise: Execute the code. Using the sliders, play around with the parameters of the sliding window embedding and examine the results. Then answer the questions below.",
"def on_value_change(change):\n execute_computation1()\n \ndimslider = widgets.IntSlider(min=1,max=40,value=20,description='Dimension:',continuous_update=False)\ndimslider.observe(on_value_change, names='value')\n\nTauslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=1,description=r'\\(\\tau :\\)' ,continuous_update=False)\nTauslider.observe(on_value_change, names='value')\n\ndTslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=0.5,description='dT: ',continuous_update=False)\ndTslider.observe(on_value_change, names='value')\n\ndisplay(widgets.HBox(( dimslider,Tauslider,dTslider)))\n\nplt.figure(figsize=(9.5, 3))\ndef execute_computation1(): \n plt.clf()\n # Step 1: Setup the signal again in case x was lost\n T = 40 # The period in number of samples\n NPeriods = 4 # How many periods to go through\n N = T*NPeriods # The total number of samples\n t = np.linspace(0, 2*np.pi*NPeriods, N+1)[0:N] # Sampling indices in time\n x = np.cos(t) # The final signal\n \n # Get slider values\n dim = dimslider.value\n Tau = Tauslider.value\n dT = dTslider.value\n \n #Step 2: Do a sliding window embedding\n X = getSlidingWindow(x, dim, Tau, dT)\n extent = Tau*dim\n\n #Step 3: Perform PCA down to 2D for visualization\n pca = PCA(n_components = 2)\n Y = pca.fit_transform(X)\n eigs = pca.explained_variance_\n print(\"lambda1 = %g, lambda2 = %g\"%(eigs[0], eigs[1]))\n\n #Step 4: Plot original signal and PCA of the embedding\n ax = plt.subplot(121)\n ax.plot(x)\n ax.set_ylim((-2*max(x), 2*max(x)))\n ax.set_title(\"Original Signal\")\n ax.set_xlabel(\"Sample Number\")\n yr = np.max(x)-np.min(x)\n yr = [np.min(x)-0.1*yr, np.max(x)+0.1*yr]\n ax.plot([extent, extent], yr, 'r')\n ax.plot([0, 0], yr, 'r') \n ax.plot([0, extent], [yr[0]]*2, 'r')\n ax.plot([0, extent], [yr[1]]*2, 'r')\n ax2 = plt.subplot(122)\n ax2.set_title(\"PCA of Sliding Window Embedding\")\n ax2.scatter(Y[:, 0], Y[:, 1])\n ax2.set_aspect('equal', 'datalim')\n plt.tight_layout()\n \nexecute_computation1()",
"Questions\n\nFor fixed $\\tau$:\nWhat does varying the dimension do to the extent (the length of the window)?\nwhat dimensions give eigenvalues nearest each other? (Note: dimensions! Plural!) Explain why this is the case. Explain how you might use this information to deduce the period of a signal.\n<br><br>\nWhat does varying $dT$ do to the PCA embedding? Explain this in terms of the definition of sliding windows above.\n<br><br>\nThe command \npython\nnp.random.randn(pts)\ngenerates an array of length pts filled with random values drawn from a standard normal distribution ($\\mu=0$, $\\sigma=1$). Modify the code above to add random noise to signal. \nCan you still detect the period visually by inspecting the plot of the signal?\nDoes your method of detecting the period from the first question still work?\nHow does adding noise change the geometry of the PCA embedding? \nModify the amplitude of the noise (for example, by multiplying the noise-generating command by a constant) and examine the 2D projection. What feature of the 2D projection appears to imply that the signal is periodic? At what noise amplitude does this feature appear to vanish?\n\nNon-Periodic Signal Sliding Window\nFor a contrasting example, we will now examine the sliding window embedding of a non-periodic signal which is a linear function plus Gaussian noise. The code below sets up the signal and then does the sliding window embedding, as before. \n Exercise: Execute the code. Using the sliders, play around with the parameters of the sliding window embedding and examine the results. Then answer the questions below.",
"noise = 0.05*np.random.randn(400)\n\ndef on_value_change(change):\n execute_computation2()\n \ndimslider = widgets.IntSlider(min=1,max=40,value=20,description='Dimension:',continuous_update=False)\ndimslider.observe(on_value_change, names='value')\n\nTauslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=1,description='Tau: ',continuous_update=False)\nTauslider.observe(on_value_change, names='value')\n\ndTslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=0.5,description='dT: ',continuous_update=False)\ndTslider.observe(on_value_change, names='value')\n\ndisplay(widgets.HBox(( dimslider,Tauslider,dTslider)))\n\nplt.figure(figsize=(9.5, 3))\n\ndef execute_computation2(): \n plt.clf()\n # Step 1: Set up the signal\n x = np.arange(400)\n x = x/float(len(x))\n x = x + noise # Add some noise\n \n # Get slider values\n dim = dimslider.value\n Tau = Tauslider.value\n dT = dTslider.value\n \n #Step 2: Do a sliding window embedding\n X = getSlidingWindow(x, dim, Tau, dT)\n extent = Tau*dim\n\n #Step 3: Perform PCA down to 2D for visualization\n pca = PCA(n_components = 2)\n Y = pca.fit_transform(X)\n eigs = pca.explained_variance_\n print(\"lambda1 = %g, lambda2 = %g\"%(eigs[0], eigs[1]))\n\n #Step 4: Plot original signal and PCA of the embedding\n gs = gridspec.GridSpec(1, 2)\n ax = plt.subplot(gs[0,0])\n ax.plot(x)\n ax.set_ylim((-2, 2))\n ax.set_title(\"Original Signal\")\n ax.set_xlabel(\"Sample Number\")\n yr = np.max(x)-np.min(x)\n yr = [np.min(x)-0.1*yr, np.max(x)+0.1*yr]\n ax.plot([extent, extent], yr, 'r')\n ax.plot([0, 0], yr, 'r') \n ax.plot([0, extent], [yr[0]]*2, 'r')\n ax.plot([0, extent], [yr[1]]*2, 'r') \n ax2 = plt.subplot(gs[0, 1])\n ax2.set_title(\"PCA of Sliding Window Embedding\")\n ax2.scatter(Y[:, 0], Y[:, 1])\n ax2.set_aspect('equal', 'datalim')\n\nexecute_computation2()",
"Questions\n\nNotice how changing the window extent doesn't have the same impact as it did in the periodic example above. Why might this be?\n<br><br>\nWhy is the second eigenvalue always tiny?\n\nMultiple Sines Sliding Window\nWe will now go back to periodic signals, but this time we will increase the complexity by adding two sines together. If the ratio between the two sinusoids is a rational number, then they are called harmonics of each other. For example, $\\sin(t)$ and $\\sin(3t)$ are harmonics of each other. By contrast, if the ratio of the two frequencies is irrational, then the sinusoids are called incommensurate. For example, $\\sin(t)$ and $\\sin(\\pi t)$ are incommensurate.\nThe plots below are initialized with \n$$f(t) = \\sin(\\omega t) + \\sin(3\\omega t),$$\na sum of two harmonics. \nThis time, the eigenvalues of PCA will be plotted (up to the first 10), in addition to the red line showing the extent of the window. Also, 3D PCA will be displayed instead of 2D PCA, and you can click and drag your mouse to view it from different angles. Colors will be drawn to indicate the position of the window in time, with cool colors (greens and yellows) indicating earlier windows and hot colors (oranges and reds) indicating later windows (using the \"jet\" colormap).\n Exercise: Execute the code. Then play with the sliders, as well as the embedding dimension (note that for the 3-D projection, you can change the view by dragging around. Do!). Then, try changing the second sinusoid to be another multiple of the first. Try both harmonic and incommensurate values. Once you have gotten a feel for the geometries and the eigenvalues, answer the questions below.",
"def on_value_change(change):\n execute_computation3()\n\nembeddingdimbox = widgets.Dropdown(options=[2, 3],value=3,description='Embedding Dimension:',disabled=False)\nembeddingdimbox.observe(on_value_change,names='value')\n\nsecondfreq = widgets.Dropdown(options=[2, 3, np.pi],value=3,description='Second Frequency:',disabled=False)\nsecondfreq.observe(on_value_change,names='value')\n\nnoiseampslider = widgets.FloatSlider(min=0,max=6,step=0.5,value=0,description='Noise Amplitude',continuous_update=False)\nnoiseampslider.observe(on_value_change, names='value')\n\ndimslider = widgets.IntSlider(min=1,max=100,value=30,description='Dimension:',continuous_update=False)\ndimslider.observe(on_value_change, names='value')\n\nTauslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=1,description='Tau: ',continuous_update=False)\nTauslider.observe(on_value_change, names='value')\n\ndTslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=0.5,description='dT: ',continuous_update=False)\ndTslider.observe(on_value_change, names='value')\n\ndisplay(widgets.HBox(( secondfreq,embeddingdimbox,noiseampslider)))\ndisplay(widgets.HBox((dimslider,Tauslider,dTslider)))\n\nnoise = np.random.randn(10000)\n\nfig = plt.figure(figsize=(9.5, 4))\ndef execute_computation3():\n plt.clf()\n \n # Step 1: Setup the signal\n T1 = 20 # The period of the first sine in number of samples\n T2 = T1*secondfreq.value # The period of the second sine in number of samples\n NPeriods = 10 # How many periods to go through, relative to the second sinusoid\n N = T2*NPeriods # The total number of samples\n t = np.arange(N) # Time indices\n x = np.cos(2*np.pi*(1.0/T1)*t) # The first sinusoid\n x += np.cos(2*np.pi*(1.0/T2)*t) # Add the second sinusoid\n x += noiseampslider.value*noise[:len(x)]\n \n # Get widget values\n dim = dimslider.value\n Tau = Tauslider.value\n dT = dTslider.value\n embeddingdim = embeddingdimbox.value\n \n # Step 2: Do a sliding window embedding\n X = getSlidingWindow(x, dim, Tau, dT)\n extent = Tau*dim\n\n # Step 3: Perform PCA down to dimension chosen for visualization\n pca = PCA(n_components = 10)\n Y = pca.fit_transform(X)\n eigs = pca.explained_variance_\n\n # Step 4: Plot original signal and PCA of the embedding \n gs = gridspec.GridSpec(2, 2,width_ratios=[1, 2])\n \n # Plot the signal\n ax = plt.subplot(gs[0,0])\n ax.plot(x)\n yr = np.max(x)-np.min(x)\n yr = [np.min(x)-0.1*yr, np.max(x)+0.1*yr]\n ax.plot([extent, extent], yr, 'r')\n ax.plot([0, 0], yr, 'r') \n ax.plot([0, extent], [yr[0]]*2, 'r')\n ax.plot([0, extent], [yr[1]]*2, 'r')\n ax.set_title(\"Original Signal\")\n ax.set_xlabel(\"Sample Number\")\n\n c = plt.get_cmap('jet')\n C = c(np.array(np.round(np.linspace(0, 255, Y.shape[0])), dtype=np.int32))\n C = C[:, 0:3]\n\n # Plot the PCA embedding\n if embeddingdim == 3:\n ax2 = plt.subplot(gs[:,1],projection='3d')\n ax2.scatter(Y[:, 0], Y[:, 1], Y[:, 2], c=C)\n ax2.set_aspect('equal', 'datalim')\n else:\n ax2 = plt.subplot(gs[:,1])\n ax2.scatter(Y[:, 0], Y[:, 1],c=C)\n\n ax2.set_title(\"PCA of Sliding Window Embedding\")\n ax2.set_aspect('equal', 'datalim')\n\n # Plot the eigenvalues as bars\n ax3 = plt.subplot(gs[1,0])\n eigs = eigs[0:min(len(eigs), 10)]\n ax3.bar(np.arange(len(eigs)), eigs)\n ax3.set_xlabel(\"Eigenvalue Number\")\n ax3.set_ylabel(\"Eigenvalue\")\n ax3.set_title(\"PCA Eigenvalues\")\n\n plt.tight_layout()\n\n plt.show();\n\nexecute_computation3()",
"Questions\n\nComment on the relationship between the eigenvalues and the extent (width) of the window.\n<br><br>\nWhen are the eigenvalues near each other? When are they not?\n<br><br>\nComment on the change in geometry when the second sinusoid is incommensurate to the first. Specifically, comment on the intrinsic dimension of the object in the projection. Can you name the shape in the 3-D projection in the incommensurate case?\n<br><br>\nTry adding noise in like you did in the single frequency case. \nCan you distinguish in the projection between the incommensurate case and the noisy, but harmonic one with second frequency 3? Explain.\nWhat can you say about the eigenvalues in the two cases? Explain your answer.\n\nIt seems reasonable to ask what the ideal dimension to embed into is. While that question may be answerable, it would be better to bypass the question altogether. Similarly, it seems that beyond detecting the largest period, these tools are limited in detecting the secondary ones. Topological tools that we will see beginning in the next lab will allow us to make some progress toward that goal.\nPower Spectrum\nWe saw above that for a rather subtle change in frequency changing the second sinusoid from harmonic to noncommensurate, there is a marked change in the geometry. By contrast, the power spectral density functions are very close between the two, as shown below. Hence, it appears that geometric tools are more appropriate for telling the difference between these two types of signals",
"T = 20 #The period of the first sine in number of samples\nNPeriods = 10 #How many periods to go through, relative to the faster sinusoid\nN = T*NPeriods*3 #The total number of samples\nt = np.arange(N) #Time indices\n\n#Make the harmonic signal cos(t) + cos(3t)\nxH = np.cos(2*np.pi*(1.0/T)*t) + np.cos(2*np.pi*(1.0/(3*T)*t))\n \n#Make the incommensurate signal cos(t) + cos(pi*t)\nxNC = np.cos(2*np.pi*(1.0/T)*t) + np.cos(2*np.pi*(1.0/(np.pi*T)*t))\n\nplt.figure()\nP1 = np.abs(np.fft.fft(xH))**2\nP2 = np.abs(np.fft.fft(xNC))**2\nplt.plot(np.arange(len(P1)), P1)\nplt.plot(np.arange(len(P2)), P2)\nplt.xlabel(\"Frequency Index\")\nplt.legend({\"Harmonic\", \"Noncommensurate\"})\nplt.xlim([0, 50])\nplt.show();",
"Summary\n\nSignals can be transformed into geometric objects via embeddings.\n<br><br>\nSignal properties are captured by the geometry of the sliding window embedding. Periodicity corresponds to circularity, period length over window size corresponds to roundness, number of incommensurate frequencies corresponds to intrinsic dimension.\n<br><br>\nThe window extent is one of the most important parameters for determining roundness.\n<br><br>\nAdding noise makes things a little trickier to see what's going on by inspection of projections."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pulinagrawal/nupic | src/nupic/frameworks/viz/examples/Demo.ipynb | agpl-3.0 | [
"Visualizing Networks\nThe following demonstrates basic use of nupic.frameworks.viz.NetworkVisualizer to visualize a network.\nBefore you begin, you will need to install the otherwise optional dependencies. From the root of nupic repository:\npip install --user .[viz]\nSetup a simple network so we have something to work with:",
"from nupic.engine import Network, Dimensions\n\n# Create Network instance\nnetwork = Network()\n\n# Add three TestNode regions to network\nnetwork.addRegion(\"region1\", \"TestNode\", \"\")\nnetwork.addRegion(\"region2\", \"TestNode\", \"\")\nnetwork.addRegion(\"region3\", \"TestNode\", \"\")\n\n# Set dimensions on first region\nregion1 = network.getRegions().getByName(\"region1\")\nregion1.setDimensions(Dimensions([1, 1]))\n\n# Link regions\nnetwork.link(\"region1\", \"region2\", \"UniformLink\", \"\")\nnetwork.link(\"region2\", \"region1\", \"UniformLink\", \"\")\nnetwork.link(\"region1\", \"region3\", \"UniformLink\", \"\")\nnetwork.link(\"region2\", \"region3\", \"UniformLink\", \"\")\n\n# Initialize network\nnetwork.initialize()",
"Render with nupic.frameworks.viz.NetworkVisualizer, which takes as input any nupic.engine.Network instance:",
"from nupic.frameworks.viz import NetworkVisualizer\n\n# Initialize Network Visualizer\nviz = NetworkVisualizer(network)\n\n# Render to dot (stdout)\nviz.render()",
"That's interesting, but not necessarily useful if you don't understand dot. Let's capture that output and do something else:",
"from nupic.frameworks.viz import DotRenderer\nfrom io import StringIO\n\noutp = StringIO()\nviz.render(renderer=lambda: DotRenderer(outp))",
"outp now contains the rendered output, render to an image with graphviz:",
"# Render dot to image\nfrom graphviz import Source\nfrom IPython.display import Image\n\nImage(Source(outp.getvalue()).pipe(\"png\"))",
"In the example above, each three-columned rectangle is a discrete region, the user-defined name for which is in the middle column. The left-hand and right-hand columns are respective inputs and outputs, the names for which, e.g. \"bottumUpIn\" and \"bottomUpOut\", are specific to the region type. The arrows indicate links between outputs from one region to the input of another.\nI know what you're thinking. That's a cool trick, but nobody cares about your contrived example. I want to see something real!\nContinuing below, I'll instantiate a CLA model and visualize it. In this case, I'll use one of the \"hotgym\" examples.",
"from nupic.frameworks.opf.modelfactory import ModelFactory\n\n# Note: parameters copied from examples/opf/clients/hotgym/simple/model_params.py\nmodel = ModelFactory.create({'aggregationInfo': {'hours': 1, 'microseconds': 0, 'seconds': 0, 'fields': [('consumption', 'sum')], 'weeks': 0, 'months': 0, 'minutes': 0, 'days': 0, 'milliseconds': 0, 'years': 0}, 'model': 'CLA', 'version': 1, 'predictAheadTime': None, 'modelParams': {'sensorParams': {'verbosity': 0, 'encoders': {'timestamp_timeOfDay': {'type': 'DateEncoder', 'timeOfDay': (21, 1), 'fieldname': u'timestamp', 'name': u'timestamp_timeOfDay'}, u'consumption': {'resolution': 0.88, 'seed': 1, 'fieldname': u'consumption', 'name': u'consumption', 'type': 'RandomDistributedScalarEncoder'}, 'timestamp_weekend': {'type': 'DateEncoder', 'fieldname': u'timestamp', 'name': u'timestamp_weekend', 'weekend': 21}}, 'sensorAutoReset': None}, 'spParams': {'columnCount': 2048, 'spVerbosity': 0, 'spatialImp': 'cpp', 'synPermConnected': 0.1, 'seed': 1956, 'numActiveColumnsPerInhArea': 40, 'globalInhibition': 1, 'inputWidth': 0, 'synPermInactiveDec': 0.005, 'synPermActiveInc': 0.04, 'potentialPct': 0.85, 'boostStrength': 3.0}, 'spEnable': True, 'clParams': {'implementation': 'cpp', 'alpha': 0.1, 'verbosity': 0, 'steps': '1,5', 'regionName': 'SDRClassifierRegion'}, 'inferenceType': 'TemporalMultiStep', 'tmEnable': True, 'tmParams': {'columnCount': 2048, 'activationThreshold': 16, 'pamLength': 1, 'cellsPerColumn': 32, 'permanenceInc': 0.1, 'minThreshold': 12, 'verbosity': 0, 'maxSynapsesPerSegment': 32, 'outputType': 'normal', 'initialPerm': 0.21, 'globalDecay': 0.0, 'maxAge': 0, 'permanenceDec': 0.1, 'seed': 1960, 'newSynapseCount': 20, 'maxSegmentsPerCell': 128, 'temporalImp': 'cpp', 'inputWidth': 2048}, 'trainSPNetOnlyIfRequested': False}})",
"Same deal as before, create a NetworkVisualizer instance, render to a buffer, then to an image, and finally display it inline.",
"# New network, new NetworkVisualizer instance\nviz = NetworkVisualizer(model._netInfo.net)\n\n# Render to Dot output to buffer\noutp = StringIO()\nviz.render(renderer=lambda: DotRenderer(outp))\n\n# Render Dot to image, display inline\nImage(Source(outp.getvalue()).pipe(\"png\"))",
"In these examples, I'm using graphviz to render an image from the dot document in Python, but you may want to do something else. dot is a generic and flexible graph description language and there are many tools for working with dot files."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
RittmanResearch/maybrain | docs/01 - Simple Usage.ipynb | apache-2.0 | [
"Simple Usage\nBasic Principles\nThe purpose of Maybrain is to allow easy visualisation of brain connectome and related data and perform various analyses.\nThe code is built around the class Brain. It contains all the information about the brain and numerous functions to change, measure and highlight those. At its heart is a Networkx object, Brain.G, via which all Networkx functions are available.\nBesides this main class, you have four packages with other modules, algorithms, plotting, utils and resources, which will be explained throughout these notebooks.\nData Input\nSeveral types of data can be input. The basic connectome is made up of two files: a coordinate file and an adjacency matrix. In fact only the second of these is strictly required.\nThe coordinate file defines the position of each node. It is a text file where each line has four entries: the node index, x, y and z coordinates. e.g.:\n0 0.0 3.1 4.4\n1 5.3 7.6 8.4\n2 3.2 4.4 3.1\nThe adjacency matrix defines the strength of connection between each pair of nodes. For n nodes it is an n × n text matrix. Nodes in maybrain are labelled 0,1,... and the order of the rows and columns in the adjacency matrix is assumed to correspond to this. Entering an adjacency matrix with the wrong dimensions will lead to certain doom.\nImporting Maybrain\nThe main class of maybrain is contained in the brain module and can easily be achieved with a normal import.",
"from maybrain import brain as mbt",
"The constants module\nBefore going further in explaining Maybrain's functionalities, it is important to briefly refer the constants module. This module has some constants which can be used elsewhere, rather than writing the values by hand everywhere being prone to typos. \nIn further notebooks you will see this module being used in practice, but for now, also just a normal import is required:",
"from maybrain import constants as ct\n# Printing some of the constants\nprint(ct.WEIGHT)\nprint(ct.ANAT_LABEL)",
"The resources package\nMaybrain also have another package that can be useful for different things. In its essence, it is just a package with access to files like matrices, properties, etc. When importing this package, you will have access to different variables in the path for the file in your system.\nFarther in the documentation you will see this package being used in practice, but for now, also just a normal import is required:",
"from maybrain import resources as rr",
"Importing an Adjacency Matrix\nFirstly, create a Brain object:",
"a = mbt.Brain()\nprint(\"Nodes: \", a.G.nodes())\nprint(\"Edges: \", a.G.edges())\nprint(\"Adjacency matrix: \", a.adjMat)",
"This creates a brain object, where a graph (from the package NetworkX) is stored as a.G, initially empty. \nThen import the adjacency matrix. The import_adj_file() function imports the adjacency matrix to form the nodes of your graph, but does not create any edges (connections), as you can check from the following outputs.\nNote the use of the resources package. In maybrain you can access a dummy adjacency matrix (500x500) for various reasons; in this case, just for testing purposes.",
"a.import_adj_file(rr.DUMMY_ADJ_FILE_500)\n\nprint(\"Number of nodes:\\n\", a.G.number_of_nodes())\nprint(\"First 5 nodes (notice labelling starting with 0):\\n\", list(a.G.nodes())[0:5])\nprint(\"Edges:\\n\", a.G.edges())\nprint(\"Size of Adjacency matrix:\\n\", a.adjMat.shape)",
"If you wish to create a fully connected graph with all the available values in the adjacency matrix, it is necessary to threshold it, which is explained in the next section.\nThresholding\nThere are a few ways to apply a threshold, either using an absolute threshold across the whole graph to preserve a specified number of edges or percentage of total possible edges; or to apply a local thresholding that begins with the minimum spanning tree and adds successive n-nearest neighbour graphs. The advantage of local thresholding is that the graph will always be fully connected, which is necessary to collect some graph measures.\nFor an absolute threshold you have several possibilities. Note that our adjacency matrix (a.adjMat) always stays the same so we can apply all the thresholds we want to create our graph (a.G) accordingly. Also notice that in this specific case of an undirected graph, are dealing with a symmetric adjacency matrix, so although a.adjMat will always have the size of 500x500, the a.G will not.",
"# Bring everything from the adjacency matrix to a.G\na.apply_threshold()\nprint(\"Number of edges (notice it corresponds to the upper half edges of adjacency matrix):\\n\", a.G.number_of_edges())\nprint(\"Size of Adjacency matrix after 1st threshold:\\n\", a.adjMat.shape)\n\n# Retain the most strongly connected 1000 edges\na.apply_threshold(threshold_type= \"totalEdges\", value = 1000) \nprint(\"\\nNumber of edges after 2nd threshold:\\n\", a.G.number_of_edges())\nprint(\"Size of Adjacency matrix after 2nd threshold:\\n\", a.adjMat.shape)\n\n# Retain the 5% most connected edges as a percentage of the total possible number of edges\na.apply_threshold(threshold_type = \"edgePC\", value = 5) \nprint(\"\\nNumber of edges after 3rd threshold:\\n\", a.G.number_of_edges())\nprint(\"Size of Adjacency matrix after 3rd threshold:\\n\", a.adjMat.shape)\n\n# Retain edges with a weight greater than 0.3\na.apply_threshold(threshold_type= \"tVal\", value = 0.3) \nprint(\"\\nNumber of edges after 4th threshold:\\n\", a.G.number_of_edges())\nprint(\"Size of Adjacency matrix after 4th threshold:\\n\", a.adjMat.shape)\n",
"The options for local thresholding are similar. Note that a local thresholding always yield a connected graph, and in the case where no arguments are passed, the graph will be the Minimum Spanning Tree. Local thresholding can be very slow for bigger matrices because in each step it is adding successive N-nearest neighbour degree graphs.",
"a.local_thresholding()\nprint(\"Is the graph connected? \", mbt.nx.is_connected(a.G))\n\na.local_thresholding(threshold_type=\"edgePC\", value = 5)\nprint(\"Is the graph connected? \", mbt.nx.is_connected(a.G))\n\na.local_thresholding(threshold_type=\"totalEdges\", value = 10000)\nprint(\"Is the graph connected? \", mbt.nx.is_connected(a.G))",
"Absolute Thresholding\nIn a real brain network, an edge with high negative value is as strong as an edge with a high positive value. So, if you want to threshold in order to get the most strongly connected edges (both negative and positive), you just have to pass an argument use_absolute=True to apply_threshold().\nIn the case of the brain that we are using in this notebook there are not many negative edges. Thus, we have to threshold the 80% most strongly connected edges in order to see a difference (notice the use of the module constants (ct) to access the weight property of each edge):",
"# Thresholding the 80% most strongly connected edges\na.apply_threshold(threshold_type=\"edgePC\", value=80)\nfor e in a.G.edges(data=True):\n # Printing the edges with negative weight\n if e[2][ct.WEIGHT] < 0:\n print(e) # This line is never executed because a negative weighted edge is not strong enough\n\n# Absolute thresholding of the 70% most strongly connected edges \nprint(\"Edges with negative weight which belong to the 70% strongest ones:\")\na.apply_threshold(threshold_type=\"edgePC\", value=70, use_absolute=True)\nfor e in a.G.edges(data=True):\n if e[2][ct.WEIGHT] < 0:\n print(e)\n \n# Absolute thresholding of the 80% most strongly connected edges \nprint(\"\\nEdges with negative weight which belong to the 80% strongest ones:\")\na.apply_threshold(threshold_type=\"edgePC\", value=80, use_absolute=True)\nfor e in a.G.edges(data=True):\n if e[2][ct.WEIGHT] < 0:\n print(e)",
"Binary and Absolute Graphs\nIf necessary the graph can be binarised so that weights are removed. You can see that essentially this means that each edge will have a weight of 1.",
"a.binarise()\nprint(\"Do all the edges have weight of 1?\", all(e[2][ct.WEIGHT] == 1 for e in a.G.edges(data=True)))",
"Also, you can make all the weights to have an absolute value, instead of negative and positive values:",
"# Applying threshold again because of last changes\na.apply_threshold()\nprint(\"Do all the edges have a positive weight before?\", all(e[2][ct.WEIGHT] >= 0 for e in a.G.edges(data=True)))\na.make_edges_absolute()\nprint(\"Do all the edges have a positive weight?\", all(e[2][ct.WEIGHT] >= 0 for e in a.G.edges(data=True)))",
"Importing 3D Spatial Information\nYou can add spatial info to each node of your graph. You need this information if you want to use the visualisation tools of Maybrain.\nTo do so, provide Maybrain with a file that has 4 columns: an anatomical label, and x, y and z coordinates. e.g.:\n0 70.800000 30.600000 53.320000\n1 32.064909 62.154158 69.707911\n2 59.870968 92.230014 41.552595\n3 19.703504 66.398922 52.878706\nIdeally these values would be in MNI space (this makes it easier to import background images for plotting and for some other functions), but this is not absolutely necessary.\nWe are using the resources package again to get an already prepated text file with spatial information for a brain with 500 regions in the MNI template:",
"# Initially, you don't have anatomical/spatial attributes in each node:\nprint(\"Attributes: \", mbt.nx.get_node_attributes(a.G, ct.ANAT_LABEL), \"/\", mbt.nx.get_node_attributes(a.G, ct.XYZ))\n\n#After calling import_spatial_info(), you can see the node's attributes\na.import_spatial_info(rr.MNI_SPACE_COORDINATES_500)\nprint(\"Attributes of one node: \", \n mbt.nx.get_node_attributes(a.G, ct.ANAT_LABEL)[0], \n \"/\", \n mbt.nx.get_node_attributes(a.G, ct.XYZ)[0])",
"Properties in Nodes and Edges\nWe have seen already that nodes can have properties about spatial information after calling import_spatial_info(), and edges can have properties about weight after calling applying thresholds. \nYou can add properties\nto nodes or edges from a text file. The format should be as follows:\nproperty\nnode1 value\nnode2 value2\n(...)\nnode1 node2 value1\nnode3 node4 value2\n(...)\nLet's give a specific example. Imagine that you want to add properties about colours. You can use this file, which is transcribed here:\ncolour\n1 red\n3 red\n6 green\n0 blue\n1 3 green\n1 2 green\n1 0 grey\n2 3 green\n2 0 red\n3 0 green\nNote that the first line contains the property name. Subsequent lines refer to edges if they contain 3 terms and nodes if they contain 2. The above will give node 1 the property 'colour' with value 'red' and node 6 the property 'colour' with value 'green'. Nodes 0 and 3 will also have the property 'colour' but with value 'blue' and 'red', respectively.\nThe edge connecting nodes 1 and 3 will have the same property with value 'green'. All the other 5 edges will have the same property but with different values. These properties are stored in the G object from networkx.\nIn order to be easier to see the properties features, we will be importing a shorter matrix with just 4 nodes (link here).\nFrom the following code you can see that a warning is printed because we tried to add a property to a node 6, which doesn't exist. However, the other properties are added. \nNote the fact that as the brain is not directed, adding the property to the edge (1,0) is considered as adding to the edge (0,1). The same thing happens with edges (2,0) and (3,0). No property was imported to node 2 because it is not specified in the properties file.",
"# Creating a new Brain and importing the shorter adjacency matrix\nb = mbt.Brain()\nb.import_adj_file(\"data/3d_grid_adj.txt\")\nb.apply_threshold()\n\nprint(\"Edges and nodes information:\")\nfor e in b.G.edges(data=True):\n print(e)\nfor n in b.G.nodes(data=True):\n print(n)\n\n# Importing properties and showing again edges and nodes\nprint(\"\\nImporting properties...\")\nb.import_properties(\"data/3d_grid_properties.txt\")\n\nprint(\"\\nEdges and nodes information after importing properties:\")\nfor e in b.G.edges(data=True):\n print(e)\nfor n in b.G.nodes(data=True):\n print(n)\n",
"You can notice that if we threshold our brain again, edges are created from scratch and thus properties are lost. The same doesn't happen with nodes as they are always present in our G object.\nBy default, properties of the edges are not imported everytime you threshold the brain. However, you can change that behaviour by setting the field update_props_after_threshold to True.",
"# Rethresholding the brain, thus loosing information\nb.apply_threshold(threshold_type=\"totalEdges\", value=0)\nb.apply_threshold()\n\nprint(\"Edges information:\")\nfor e in b.G.edges(data=True):\n print(e)\n\n# Setting field to allow automatic importing of properties after a threshold\nprint(\"\\nSetting b.update_properties_after_threshold and rethresholding again...\")\nb.apply_threshold(threshold_type=\"totalEdges\", value=0)\nb.update_props_after_threshold = True\nb.apply_threshold() # Now, warning is thrown just like before\n\nprint(\"\\nEdges information again:\")\nfor e in b.G.edges(data=True):\n print(e)",
"You can also import the properties from a dictionary, both for nodes and edges. In the following example there are two dictionaries being created with the values of a certain property, named own_property, that will be added to brain:",
"nodes_props = {0: \"val1\", 1: \"val2\"}\nedges_props = {(0, 1): \"edge_val1\", (2,3): \"edge_val2\"}\n\nb.import_edge_props_from_dict(\"own_property\", edges_props)\nb.import_node_props_from_dict(\"own_property\", nodes_props)\n\nprint(\"\\nEdges information:\")\nfor e in b.G.edges(data=True):\n print(e)\n \nprint(\"\\nNodes information:\")\nfor n in b.G.nodes(data=True):\n print(n)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/asl-ml-immersion | notebooks/launching_into_ml/labs/supplemental/python.BQ_explore_data.ipynb | apache-2.0 | [
"Exploratory Data Analysis Using Python and BigQuery\nLearning Objectives\n\nAnalyze a Pandas Dataframe\nCreate Seaborn plots for Exploratory Data Analysis in Python \nWrite a SQL query to pick up specific fields from a BigQuery dataset\nExploratory Analysis in BigQuery\n\nIntroduction\nThis lab is an introduction to linear regression using Python and Scikit-Learn. This lab serves as a foundation for more complex algorithms and machine learning models that you will encounter in the course. We will train a linear regression model to predict housing price.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook. \nImport Libraries",
"import os\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\n# Seaborn is a Python data visualization library based on matplotlib.\nimport seaborn as sns\nfrom google.cloud import bigquery\n\n%matplotlib inline",
"Load the Dataset\nHere, we create a directory called usahousing. This directory will hold the dataset that we copy from Google Cloud Storage.",
"if not os.path.isdir(\"../data/explore\"):\n os.makedirs(\"../data/explore\")",
"Next, we copy the Usahousing dataset from Google Cloud Storage.",
"!gsutil cp gs://cloud-training-demos/feat_eng/housing/housing_pre-proc.csv ../data/explore",
"Then we use the \"ls\" command to list files in the directory. This ensures that the dataset was copied.",
"!ls -l ../data/explore",
"Next, we read the dataset into a Pandas dataframe.",
"df_USAhousing = # TODO 1: Your code goes here",
"Inspect the Data",
"# Show the first five row.\n\ndf_USAhousing.head()",
"Let's check for any null values.",
"df_USAhousing.isnull().sum()\n\ndf_stats = df_USAhousing.describe()\ndf_stats = df_stats.transpose()\ndf_stats\n\ndf_USAhousing.info()",
"Let's take a peek at the first and last five rows of the data for all columns.",
"print(\"Rows : \", df_USAhousing.shape[0])\nprint(\"Columns : \", df_USAhousing.shape[1])\nprint(\"\\nFeatures : \\n\", df_USAhousing.columns.tolist())\nprint(\"\\nMissing values : \", df_USAhousing.isnull().sum().values.sum())\nprint(\"\\nUnique values : \\n\", df_USAhousing.nunique())",
"Explore the Data\nLet's create some simple plots to check out the data!",
"_ = sns.heatmap(df_USAhousing.corr())",
"Create a distplot showing \"median_house_value\".",
"# TODO 2a: Your code goes here\n\nsns.set_style(\"whitegrid\")\ndf_USAhousing[\"median_house_value\"].hist(bins=30)\nplt.xlabel(\"median_house_value\")\n\nx = df_USAhousing[\"median_income\"]\ny = df_USAhousing[\"median_house_value\"]\n\nplt.scatter(x, y)\nplt.show()",
"Create a jointplot showing \"median_income\" versus \"median_house_value\".",
"# TODO 2b: Your code goes here\n\nsns.countplot(x=\"ocean_proximity\", data=df_USAhousing)\n\n# takes numeric only?\n# plt.figure(figsize=(20,20))\ng = sns.FacetGrid(df_USAhousing, col=\"ocean_proximity\")\n_ = g.map(plt.hist, \"households\")\n\n# takes numeric only?\n# plt.figure(figsize=(20,20))\ng = sns.FacetGrid(df_USAhousing, col=\"ocean_proximity\")\n_ = g.map(plt.hist, \"median_income\")",
"You can see below that this is the state of California!",
"x = df_USAhousing[\"latitude\"]\ny = df_USAhousing[\"longitude\"]\n\nplt.scatter(x, y)\nplt.show()",
"Explore and create ML datasets\nIn this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected.\nLearning Objectives\n\nAccess and explore a public BigQuery dataset on NYC Taxi Cab rides\nVisualize your dataset using the Seaborn library\n\n<h3> Extract sample data from BigQuery </h3>\n\nThe dataset that we will use is <a href=\"https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips\">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows.\nLet's write a SQL query to pick up interesting fields from the dataset. It's a good idea to get the timestamp in a predictable format.",
"%%bigquery\nSELECT\n FORMAT_TIMESTAMP(\n \"%Y-%m-%d %H:%M:%S %Z\", pickup_datetime) AS pickup_datetime,\n pickup_longitude, pickup_latitude, dropoff_longitude,\n dropoff_latitude, passenger_count, trip_distance, tolls_amount, \n fare_amount, total_amount \n# TODO 3: Set correct BigQuery public dataset for nyc-tlc yellow taxi cab trips\n# Tip: For projects with hyphens '-' be sure to escape with backticks ``\nFROM \nLIMIT 10",
"Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,000 records -- because there are 1 billion records in the data, we should get back approximately 10,000 records if we do this.\nWe will also store the BigQuery result in a Pandas dataframe named \"trips\"",
"%%bigquery trips\nSELECT\n FORMAT_TIMESTAMP(\n \"%Y-%m-%d %H:%M:%S %Z\", pickup_datetime) AS pickup_datetime,\n pickup_longitude, pickup_latitude, \n dropoff_longitude, dropoff_latitude,\n passenger_count,\n trip_distance,\n tolls_amount,\n fare_amount,\n total_amount\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1\n\nprint(len(trips))\n\n# We can slice Pandas dataframes as if they were arrays\ntrips[:10]",
"<h3> Exploring data </h3>\n\nLet's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.",
"# TODO 4: Visualize your dataset using the Seaborn library.\n# Plot the distance of the trip as X and the fare amount as Y.\nax = sns.regplot(\n x=\"\",\n y=\"\",\n fit_reg=False,\n ci=None,\n truncate=True,\n data=trips,\n)\nax.figure.set_size_inches(10, 8)",
"Hmm ... do you see something wrong with the data that needs addressing?\nIt appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).\nNote the extra WHERE clauses.",
"%%bigquery trips\nSELECT\n FORMAT_TIMESTAMP(\n \"%Y-%m-%d %H:%M:%S %Z\", pickup_datetime) AS pickup_datetime,\n pickup_longitude, pickup_latitude, \n dropoff_longitude, dropoff_latitude,\n passenger_count,\n trip_distance,\n tolls_amount,\n fare_amount,\n total_amount\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1\n # TODO 4a: Filter the data to only include non-zero distance trips and fares above $2.50\n AND \n\nprint(len(trips))\n\nax = sns.regplot(\n x=\"trip_distance\",\n y=\"fare_amount\",\n fit_reg=False,\n ci=None,\n truncate=True,\n data=trips,\n)\nax.figure.set_size_inches(10, 8)",
"What's up with the streaks around 45 dollars and 50 dollars? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable.\nLet's also examine whether the toll amount is captured in the total amount.",
"tollrides = trips[trips[\"tolls_amount\"] > 0]\ntollrides[tollrides[\"pickup_datetime\"] == \"2012-02-27 09:19:10 UTC\"]\n\nnotollrides = trips[trips[\"tolls_amount\"] == 0]\nnotollrides[notollrides[\"pickup_datetime\"] == \"2012-02-27 09:19:10 UTC\"]",
"Looking at a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool.\nLet's also look at the distribution of values within the columns.",
"trips.describe()",
"Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
namco1992/algorithms_in_python | algorithms/tree.ipynb | mit | [
"Tree\n定义\n一棵二叉树的定义如下。key可以存储任意的对象,亦即每棵树也可以是其他树的子树。",
"class BinaryTree():\n def __init__(self, root_obj):\n self.key = root_obj\n self.left_child = None\n self.right_child = None\n \n def insert_left(self, new_node):\n # if the tree do not have a left child\n # then create a node: one tree without children\n if self.left_child is None:\n self.left_child = BinaryTree(new_node)\n # if there is a child, then concat the child\n # under the node we inserted\n else:\n t = BinaryTree(new_node)\n t.left_child = self.left_child\n self.left_child = t\n\n def insert_right(self, new_node):\n # if the tree do not have a right child\n # then create a node: one tree without children\n if self.right_child is None:\n self.right_child = BinaryTree(new_node)\n # if there is a child, then concat the child\n # under the node we inserted\n else:\n t = BinaryTree(new_node)\n t.right_child = self.right_child\n self.right_child = t\n \n def get_right_child(self):\n return self.right_child\n \n def get_left_child(self):\n return self.left_child\n \n def set_root(self, obj):\n self.key = obj\n \n def get_root(self):\n return self.key\n\nr = BinaryTree('a')\nprint r.get_root()\nprint r.get_left_child()\nr.insert_left('b')\nprint r.get_left_child().get_root()\n\n\n",
"遍历\n\n前序\n中序\n后序",
"def preorder(tree):\n if tree:\n print tree.get_root()\n preorder(tree.get_left_child())\n preorder(tree.get_right_child())\n\ndef postorder(tree):\n if tree:\n postorder(tree.get_left_child())\n postorder(tree.get_right_child())\n print tree.get_root()\n\ndef inorder(tree):\n if tree:\n inorder(tree.get_left_child())\n print tree.get_root()\n inorder(tree.get_right_child())\n\nr = BinaryTree('root')\nr.insert_left('l1')\nr.insert_left('l2')\nr.insert_right('r1')\nr.insert_right('r2')\nr.get_left_child().insert_right('r3')\n\n\npreorder(r)\n\n\n\n",
"二叉堆实现优先队列\n二叉堆是队列的一种实现方式。\n二叉堆可以用完全二叉树来实现。所谓完全二叉树(complete binary tree),有定义如下:\n\nA complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible.\n除叶节点外,所有层都是填满的,叶节点则按照从左至右的顺序填满。\n\n完全二叉树的一个重要性质:\n当以列表表示完全二叉树时,位置 p 的父节点,其 left child 位于 2p 位置,其 right child 位于 2p+1 的位置。\n为了满足使用列表表示的性质,列表中第一个位置list[0]由 0 填充,树从list[1]开始。\n\nOperations\n\nBinaryHeap() creates a new, empty, binary heap.\ninsert(k) adds a new item to the heap.\nfindMin() returns the item with the minimum key value, leaving item in the heap.\ndelMin() returns the item with the minimum key value, removing the item from the heap.\nisEmpty() returns true if the heap is empty, false otherwise.\nsize() returns the number of items in the heap.\nbuildHeap(list) builds a new heap from a list of keys.",
"class BinHeap(object):\n def __init__(self):\n self.heap_list = [0]\n self.current_size = 0",
"二叉搜索树 Binary Search Trees\n其性质与字典非常相近。\nOperations\n\nMap() Create a new, empty map.\nput(key,val) Add a new key-value pair to the map. If the key is already in the map then replace the old value with the new value.\nget(key) Given a key, return the value stored in the map or None otherwise.\ndel Delete the key-value pair from the map using a statement of the form del map[key].\nlen() Return the number of key-value pairs stored in the map.\nin Return True for a statement of the form key in map, if the given key is in the map.",
"class BinarySearchTree(object):\n def __init__(self):\n self.root = None\n self.size = 0\n \n def length(self):\n return self.size\n \n def __len__(self):\n return self.size\n \n def __iter__(self):\n return self.root.__iter__()\n \n def put(self, key, val):\n if self.root:\n self._put(key, val, self.root)\n else:\n self.root = TreeNode(key, val)\n self.size += 1\n \n def _put(key, val, current_node):\n if key < current_node:\n if current_node.has_left_child():\n _put(key, val, current_node.left_child)\n else:\n current_node.left_child = TreeNode(key, val, parent=current_node)\n else:\n if current_node.has_right_child():\n _put(key, val, current_node.right_child)\n else:\n current_node.right_child = TreeNode(key, val, parent=current_node)\n \n def __setitem__(self, k, v):\n self.put(k, v)\n\nclass TreeNode(object):\n def __init__(self, key, val, left=None, right=None, parent=None):\n self.key = key\n self.payload = val\n self.left_child = left\n self.right_child = right\n self.parent = parent\n \n def has_left_child(self):\n return self.left_child\n \n def has_right_child(self):\n return self.right_child\n \n def is_root(self):\n return not self.parent\n \n def is_leaf(self):\n return not (self.right_child or self.left_child)\n \n def has_any_children(self):\n return self.right_child or self.left_child\n \n def has_both_children(self):\n return self.right_child and self.right_child\n \n def replace_node_data(self, key, value, lc, rc):\n self.key = key\n self.payload = value\n self.left_child = lc\n self.right_child = rc\n if self.has_left_child():\n self.left_child.parent = self\n if self.has_right_child():\n self.right_child.parent = self\n \n ",
"平衡二叉搜索树 Balanced Binary Search Tree\n又名 AVL 树。避免出现最坏情况下 O(n) 的复杂度。AVL 的搜索复杂度稳定在 O(logN)。\nbalanceFactor=height(leftSubTree)−height(rightSubTree)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
opalytics/opalytics-ticdat | examples/amplpy/netflow/netflow_other_data_sources.ipynb | bsd-2-clause | [
"Using ticdat to build modular engines\nThe goal of the ticdat package is to facilitate solve engines that are modular and robust. For example, the multicommodity netflow.py engine can read and write from a variety of file types when run from the the command line. It can also be run from a Python script that contains embedded static data, or from a script that reads and writes from a system-of-record data source such as an ERP system. \nWith regards to the latter, we should note that Python is one of the most popular \"glue\" languages. The market has recognized that Python scripts are easy to write, manage data with intuitive programming syntax, and can be connected to nearly any data source.\nThe ticdat package can easily be used in any Python glue script. One way to do this is to exploit ticdat's ability to recognize data tables as list-of-lists. The inner lists contain data values in the field order defined by by the PanDatFactory (i.e. netflow.input_schema).\nFor example, suppose the netflow engine needs to connect to an Oracle database for a daily automated solve. The integration engineer can use the cx_Oracle package (or something equivalent) to turn system data into a list-of-lists for each input table. These data structures can then be used to create a PanDat object that can be passed as input data to netflow.solve. The solution PanDat object returned by netflow.solve can then be converted back into a list-of-lists representation of each solution report table. (The list-of-lists strategy is just one approach. It might make sense to convert system-of-record data into pandas.DataFrame objects, and then use these DataFrames to build the PanDat object.)\nWe demonstrate this approach without explicit references to cx_Oracle. By demonstrating that ticdat is compatible with list-of-list/DataFrame table representations we thus show that ticdat is compatible with any data source that can be connected to Python, and also with human readable static data.",
"commodities = [['Pencils', 0.5], ['Pens', 0.2125]]\n\n# a one column table can just be a simple list \nnodes = ['Boston', 'Denver', 'Detroit', 'New York', 'Seattle']\n\ncost = [['Pencils', 'Denver', 'Boston', 10.0],\n ['Pencils', 'Denver', 'New York', 10.0],\n ['Pencils', 'Denver', 'Seattle', 7.5],\n ['Pencils', 'Detroit', 'Boston', 2.5],\n ['Pencils', 'Detroit', 'New York', 5.0],\n ['Pencils', 'Detroit', 'Seattle', 15.0],\n ['Pens', 'Denver', 'Boston', 15.0],\n ['Pens', 'Denver', 'New York', 17.5],\n ['Pens', 'Denver', 'Seattle', 7.5],\n ['Pens', 'Detroit', 'Boston', 5.0],\n ['Pens', 'Detroit', 'New York', 5.0],\n ['Pens', 'Detroit', 'Seattle', 20.0]]\n\ninflow = [['Pencils', 'Boston', -200],\n ['Pencils', 'Denver', 240],\n ['Pencils', 'Detroit', 200],\n ['Pencils', 'New York', -200],\n ['Pencils', 'Seattle', -40],\n ['Pens', 'Boston', -160],\n ['Pens', 'Denver', 160],\n ['Pens', 'Detroit', 240],\n ['Pens', 'New York', -120],\n ['Pens', 'Seattle', -120]]",
"An integration engineer might prefer to copy system-of-records data into pandas.DataFrame objects. Note that pandas is itself capable of reading directly from various SQL databases, although it usually needs a supporting package like cx_Oracle.",
"from pandas import DataFrame\narcs = DataFrame({\"Source\": [\"Denver\", \"Denver\", \"Denver\", \"Detroit\", \"Detroit\", \"Detroit\",], \n \"Destination\": [\"Boston\", \"New York\", \"Seattle\", \"Boston\", \"New York\", \n \"Seattle\"], \n \"Capacity\": [120, 120, 120, 100, 80, 120]})\n# PanDatFactory doesn't require the fields to be in order so long as the field names are supplied\narcs = arcs[[\"Destination\", \"Source\", \"Capacity\"]]\narcs",
"Next we create a PanDat input data object from the list-of-lists/DataFrame representations.",
"%env PATH = PATH:/Users/petercacioppi/ampl/ampl\nfrom netflow import input_schema, solve, solution_schema\ndat = input_schema.PanDat(commodities=commodities, nodes=nodes, cost=cost, arcs=arcs, \n inflow=inflow)",
"We now create a PanDat solution data object by calling solve.",
"sln = solve(dat)",
"We now create a list-of-lists representation of the solution data object.",
"sln_lists = {t: list(map(list, getattr(sln, t).itertuples(index=False))) \n for t in solution_schema.all_tables}",
"Here we demonstrate that sln_lists is a dictionary mapping table name to list-of-lists of solution report data.",
"import pprint\nfor sln_table_name, sln_table_data in sln_lists.items():\n print \"\\n\\n**\\nSolution Table %s\\n**\"%sln_table_name\n pprint.pprint(sln_table_data)",
"Of course the solution data object itself contains DataFrames, if that representation is preferred.",
"sln.flow",
"Using ticdat to build robust engines\nThe preceding section demonstrated how we can use ticdat to build modular engines. We now demonstrate how we can use ticdat to build engines that check solve pre-conditions, and are thus robust with respect to data integrity problems.\nFirst, lets violate our (somewhat artificial) rule that the commodity volume must be positive.",
"dat.commodities.loc[dat.commodities[\"Name\"] == \"Pencils\", \"Volume\"] = 0\ndat.commodities",
"The input_schema can not only flag this problem, but give us a useful data structure to examine.",
"data_type_failures = input_schema.find_data_type_failures(dat)\ndata_type_failures\n\ndata_type_failures['commodities', 'Volume']",
"Next, lets add a Cost record for a non-existent commodity and see how input_schema flags this problem.",
"dat.cost = dat.cost.append({'Commodity':'Crayons', 'Source': 'Detroit', \n 'Destination': 'Seattle', 'Cost': 10}, \n ignore_index=True)\nfk_failures = input_schema.find_foreign_key_failures(dat, verbosity=\"Low\")\nfk_failures\n\nfk_failures['cost', 'commodities', ('Commodity', 'Name')]",
"In real life, data integrity failures can typically be grouped into a small number of categories. However, the number of failures in each category might be quite large. ticdat creates data structures for each of these categories that can themselves be examined programmatically. As a result, an analyst can leverage the power of Python and pandas to detect patterns in the data integrity problems."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
keras-team/keras-io | examples/generative/ipynb/vae.ipynb | apache-2.0 | [
"Variational AutoEncoder\nAuthor: fchollet<br>\nDate created: 2020/05/03<br>\nLast modified: 2020/05/03<br>\nDescription: Convolutional Variational AutoEncoder (VAE) trained on MNIST digits.\nSetup",
"import numpy as np\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers",
"Create a sampling layer",
"\nclass Sampling(layers.Layer):\n \"\"\"Uses (z_mean, z_log_var) to sample z, the vector encoding a digit.\"\"\"\n\n def call(self, inputs):\n z_mean, z_log_var = inputs\n batch = tf.shape(z_mean)[0]\n dim = tf.shape(z_mean)[1]\n epsilon = tf.keras.backend.random_normal(shape=(batch, dim))\n return z_mean + tf.exp(0.5 * z_log_var) * epsilon\n",
"Build the encoder",
"latent_dim = 2\n\nencoder_inputs = keras.Input(shape=(28, 28, 1))\nx = layers.Conv2D(32, 3, activation=\"relu\", strides=2, padding=\"same\")(encoder_inputs)\nx = layers.Conv2D(64, 3, activation=\"relu\", strides=2, padding=\"same\")(x)\nx = layers.Flatten()(x)\nx = layers.Dense(16, activation=\"relu\")(x)\nz_mean = layers.Dense(latent_dim, name=\"z_mean\")(x)\nz_log_var = layers.Dense(latent_dim, name=\"z_log_var\")(x)\nz = Sampling()([z_mean, z_log_var])\nencoder = keras.Model(encoder_inputs, [z_mean, z_log_var, z], name=\"encoder\")\nencoder.summary()",
"Build the decoder",
"latent_inputs = keras.Input(shape=(latent_dim,))\nx = layers.Dense(7 * 7 * 64, activation=\"relu\")(latent_inputs)\nx = layers.Reshape((7, 7, 64))(x)\nx = layers.Conv2DTranspose(64, 3, activation=\"relu\", strides=2, padding=\"same\")(x)\nx = layers.Conv2DTranspose(32, 3, activation=\"relu\", strides=2, padding=\"same\")(x)\ndecoder_outputs = layers.Conv2DTranspose(1, 3, activation=\"sigmoid\", padding=\"same\")(x)\ndecoder = keras.Model(latent_inputs, decoder_outputs, name=\"decoder\")\ndecoder.summary()",
"Define the VAE as a Model with a custom train_step",
"\nclass VAE(keras.Model):\n def __init__(self, encoder, decoder, **kwargs):\n super(VAE, self).__init__(**kwargs)\n self.encoder = encoder\n self.decoder = decoder\n self.total_loss_tracker = keras.metrics.Mean(name=\"total_loss\")\n self.reconstruction_loss_tracker = keras.metrics.Mean(\n name=\"reconstruction_loss\"\n )\n self.kl_loss_tracker = keras.metrics.Mean(name=\"kl_loss\")\n\n @property\n def metrics(self):\n return [\n self.total_loss_tracker,\n self.reconstruction_loss_tracker,\n self.kl_loss_tracker,\n ]\n\n def train_step(self, data):\n with tf.GradientTape() as tape:\n z_mean, z_log_var, z = self.encoder(data)\n reconstruction = self.decoder(z)\n reconstruction_loss = tf.reduce_mean(\n tf.reduce_sum(\n keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)\n )\n )\n kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))\n kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))\n total_loss = reconstruction_loss + kl_loss\n grads = tape.gradient(total_loss, self.trainable_weights)\n self.optimizer.apply_gradients(zip(grads, self.trainable_weights))\n self.total_loss_tracker.update_state(total_loss)\n self.reconstruction_loss_tracker.update_state(reconstruction_loss)\n self.kl_loss_tracker.update_state(kl_loss)\n return {\n \"loss\": self.total_loss_tracker.result(),\n \"reconstruction_loss\": self.reconstruction_loss_tracker.result(),\n \"kl_loss\": self.kl_loss_tracker.result(),\n }\n",
"Train the VAE",
"(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()\nmnist_digits = np.concatenate([x_train, x_test], axis=0)\nmnist_digits = np.expand_dims(mnist_digits, -1).astype(\"float32\") / 255\n\nvae = VAE(encoder, decoder)\nvae.compile(optimizer=keras.optimizers.Adam())\nvae.fit(mnist_digits, epochs=30, batch_size=128)",
"Display a grid of sampled digits",
"import matplotlib.pyplot as plt\n\n\ndef plot_latent_space(vae, n=30, figsize=15):\n # display a n*n 2D manifold of digits\n digit_size = 28\n scale = 1.0\n figure = np.zeros((digit_size * n, digit_size * n))\n # linearly spaced coordinates corresponding to the 2D plot\n # of digit classes in the latent space\n grid_x = np.linspace(-scale, scale, n)\n grid_y = np.linspace(-scale, scale, n)[::-1]\n\n for i, yi in enumerate(grid_y):\n for j, xi in enumerate(grid_x):\n z_sample = np.array([[xi, yi]])\n x_decoded = vae.decoder.predict(z_sample)\n digit = x_decoded[0].reshape(digit_size, digit_size)\n figure[\n i * digit_size : (i + 1) * digit_size,\n j * digit_size : (j + 1) * digit_size,\n ] = digit\n\n plt.figure(figsize=(figsize, figsize))\n start_range = digit_size // 2\n end_range = n * digit_size + start_range\n pixel_range = np.arange(start_range, end_range, digit_size)\n sample_range_x = np.round(grid_x, 1)\n sample_range_y = np.round(grid_y, 1)\n plt.xticks(pixel_range, sample_range_x)\n plt.yticks(pixel_range, sample_range_y)\n plt.xlabel(\"z[0]\")\n plt.ylabel(\"z[1]\")\n plt.imshow(figure, cmap=\"Greys_r\")\n plt.show()\n\n\nplot_latent_space(vae)",
"Display how the latent space clusters different digit classes",
"\ndef plot_label_clusters(vae, data, labels):\n # display a 2D plot of the digit classes in the latent space\n z_mean, _, _ = vae.encoder.predict(data)\n plt.figure(figsize=(12, 10))\n plt.scatter(z_mean[:, 0], z_mean[:, 1], c=labels)\n plt.colorbar()\n plt.xlabel(\"z[0]\")\n plt.ylabel(\"z[1]\")\n plt.show()\n\n\n(x_train, y_train), _ = keras.datasets.mnist.load_data()\nx_train = np.expand_dims(x_train, -1).astype(\"float32\") / 255\n\nplot_label_clusters(vae, x_train, y_train)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
MathiasRiechert/BigDataPapers | 1-number of papers over time/Creating overview bar-plots.ipynb | gpl-3.0 | [
"Number of papers over time\ndata source\nWe load the data from the Competence Centre for Bibliometrics: http://www.bibliometrie.info/.\nThey licence access to the Web of Science and Scopus bibliometric databases, spanning a high proportion of all peer-reviewed research literature. The Competence Centre for Bibliometrics further processes both databases' data, so that it can be queried with SQL.\nload libraries:",
"import cx_Oracle #ensure that OS, InstantClient (Basic, ODBC, SDK) and cx_Oracle are all 64 bit. Install with \"pip install cx_Oracle\". Add link to InstantClient in Path variable!\nimport pandas as pd\nimport re\nimport plotly.plotly as py\nimport plotly.graph_objs as go",
"set parameter",
"#parameter:\nsearchterm=\"big data\" #lowecase!\ncolorlist=[\"#01be70\",\"#586bd0\",\"#c0aa12\",\"#0183e6\",\"#f69234\",\"#0095e9\",\"#bd8600\",\"#007bbe\",\"#bb7300\",\"#63bcfc\",\"#a84a00\",\"#01bedb\",\"#82170e\",\"#00c586\",\"#a22f1f\",\"#3fbe57\",\"#3e4681\",\"#9bc246\",\"#9a9eec\",\"#778f00\",\"#00aad9\",\"#fc9e5e\",\"#01aec1\",\"#832c1e\",\"#55c99a\",\"#dd715b\",\"#017c1c\",\"#ff9b74\",\"#009556\",\"#83392a\",\"#00b39b\",\"#8e5500\",\"#50a7c6\",\"#f4a268\",\"#02aca7\",\"#532b00\",\"#67c4bd\",\"#5e5500\",\"#f0a18f\",\"#007229\",\"#d2b073\",\"#005d3f\",\"#a5be6b\",\"#2a4100\",\"#8cb88c\",\"#2f5c00\",\"#007463\",\"#5b7200\",\"#787c48\",\"#3b7600\"]",
"load data from SQL database:",
"dsn_tns=cx_Oracle.makedsn('127.0.0.1','6025',service_name='bibliodb01.fiz.karlsruhe') #due to licence requirements,\n# access is only allowed for members of the competence center of bibliometric and cooperation partners. You can still \n# continue with the resulting csv below.\n #open connection:\ndb=cx_Oracle.connect(<username>, <password>, dsn_tns)\nprint(db.version)\n\n#%% define sql-query function:\ndef read_query(connection, query):\n cursor = connection.cursor()\n try:\n cursor.execute( query )\n names = [ x[0] for x in cursor.description]\n rows = cursor.fetchall()\n return pd.DataFrame( rows, columns=names)\n finally:\n if cursor is not None:\n cursor.close()\n\n#%% load paper titles from WOSdb:\ndatabase=\"wos_B_2016\" \n \ncommand=\"\"\"SELECT DISTINCT(ARTICLE_TITLE), PUBYEAR \n FROM \"\"\"+database+\"\"\".KEYWORDS, \"\"\"+database+\"\"\".ITEMS_KEYWORDS, \"\"\"+database+\"\"\".ITEMS \n WHERE\n \"\"\"+database+\"\"\".ITEMS_KEYWORDS.FK_KEYWORDS=\"\"\"+database+\"\"\".KEYWORDS.PK_KEYWORDS\n AND \"\"\"+database+\"\"\".ITEMS.PK_ITEMS=\"\"\"+database+\"\"\".ITEMS_KEYWORDS.FK_ITEMS \n AND (lower(\"\"\"+database+\"\"\".KEYWORDS.KEYWORD) LIKE '%\"\"\"+searchterm+\"\"\"%' OR lower(ARTICLE_TITLE) LIKE '%\"\"\"+searchterm+\"\"\"%')\n\"\"\"\n\ndfWOS=read_query(db,command)\ndfWOS['wos']=True #to make the source identifyable\ndfWOS.to_csv(\"all_big_data_titles_year_wos.csv\", sep=';')\n\n\n#%% load paper titles from SCOPUSdb:\ndatabase=\"SCOPUS_B_2016\" \n \ncommand=\"\"\"SELECT DISTINCT(ARTICLE_TITLE), PUBYEAR \n FROM \"\"\"+database+\"\"\".KEYWORDS, \"\"\"+database+\"\"\".ITEMS_KEYWORDS, \"\"\"+database+\"\"\".ITEMS \n WHERE\n \"\"\"+database+\"\"\".ITEMS_KEYWORDS.FK_KEYWORDS=\"\"\"+database+\"\"\".KEYWORDS.PK_KEYWORDS\n AND \"\"\"+database+\"\"\".ITEMS.PK_ITEMS=\"\"\"+database+\"\"\".ITEMS_KEYWORDS.FK_ITEMS \n AND (lower(\"\"\"+database+\"\"\".KEYWORDS.KEYWORD) LIKE '%\"\"\"+searchterm+\"\"\"%' OR lower(ARTICLE_TITLE) LIKE '%\"\"\"+searchterm+\"\"\"%')\n\"\"\"\n\ndfSCOPUS=read_query(db,command)\ndfSCOPUS['scopus']=True #to make the source identifyable\ndfSCOPUS.to_csv(\"all_big_data_titles_year_scopus.csv\", sep=';')\n\n#this takes some time, we will work with the exported CSV from here on",
"merging data",
"dfWOS=pd.read_csv(\"all_big_data_titles_year_wos.csv\",sep=\";\")\ndfSCOPUS=pd.read_csv(\"all_big_data_titles_year_scopus.csv\",sep=\";\")\n\ndf=pd.merge(dfWOS,dfSCOPUS,on='ARTICLE_TITLE',how='outer')\n#get PUBYEAR in one column:\ndf.loc[df['wos'] == 1, 'PUBYEAR_y'] = df['PUBYEAR_x']\n#save resulting csv again:\ndf=df[['ARTICLE_TITLE','PUBYEAR_y','wos','scopus']]\ndf.to_csv(\"all_big_data_titles_with_year.csv\", sep=';')\ndf\n",
"grouping data",
"grouped=df.groupby(['PUBYEAR_y']) \ndf2=grouped.agg('count').reset_index()\ndf2",
"visualize with plotly:\nwe make three diagrams:\n1) a horizontal bar plot comparing the overall papers per db\n2) a vertical bar plot differentiating time and db\n3) a vertical bar plot differentiating tima and db with a logarithmic y-scale (allows for better\ninspection of smaller numbers)",
"#set data for horizontal bar plot:\ndata = [go.Bar(\n x=[pd.DataFrame.sum(df2)['wos'],pd.DataFrame.sum(df2)['scopus'],pd.DataFrame.sum(df2)['ARTICLE_TITLE']],\n y=['Web of Science', 'Scopus', 'Total'],\n orientation = 'h',\n marker=dict(\n color=colorlist\n )\n)]\n#py.plot(data, filename='big_data_papers_horizontal') #for uploading to plotly\npy.iplot(data, filename='horizontal-bar')\n\n#set data for stacked bar plot:\ntrace1 = go.Bar(\n x=df2['PUBYEAR_y'],\n y=df2['wos'],\n name='Web of Science',\n marker=dict(\n color=colorlist[0]\n )\n)\ntrace2 = go.Bar(\n x=df2['PUBYEAR_y'],\n y=df2['scopus'],\n name='Scopus',\n marker=dict(\n color=colorlist[1]\n )\n\n)\ntrace3 = go.Bar(\n x=df2['PUBYEAR_y'],\n y=df2['ARTICLE_TITLE'],\n name='All Papers',\n marker=dict(\n color=colorlist[2]\n )\n)\ndata = [trace1, trace2,trace3]\n\n#set layout for stacked bar chart with logarithmic y scale:\n\n#set layout for stacked bar chart with normal y scale:\nlayout_no_log = go.Layout(\n title='Big data papers over time',\n barmode='group',\n xaxis=dict(\n title='year',\n titlefont=dict(\n family='Arial, sans-serif',\n size=14,\n color='lightgrey'\n ),\n tickfont=dict(\n family='Arial, sans-serif',\n size=10,\n color='black'\n ),\n showticklabels=True,\n dtick=1,\n tickangle=45,\n )\n)\n#plot:\nfig1 = go.Figure(data=data, layout=layout_no_log)\npy.iplot(fig1, filename='big_data_papers_no_log')\n\n\nlayout_log = go.Layout(\n title='Big data papers over time (log y-scale)',\n barmode='group',\n xaxis=dict(\n title='year',\n titlefont=dict(\n family='Arial, sans-serif',\n size=14,\n color='lightgrey'\n ),\n tickfont=dict(\n family='Arial, sans-serif',\n size=10,\n color='black'\n ),\n showticklabels=True,\n dtick=1,\n tickangle=45,\n ),\n yaxis=dict(\n type='log'\n )\n )\nfig2 = go.Figure(data=data, layout=layout_log)\npy.iplot(fig2, filename='big_data_papers_log')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io | 0.19/_downloads/ff83425ee773d1d588a6994e5560c06c/plot_mne_dspm_source_localization.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Source localization with MNE/dSPM/sLORETA/eLORETA\nThe aim of this tutorial is to teach you how to compute and apply a linear\ninverse method such as MNE/dSPM/sLORETA/eLORETA on evoked/raw/epochs data.",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse",
"Process MEG data",
"data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\n\nraw = mne.io.read_raw_fif(raw_fname) # already has an average reference\nevents = mne.find_events(raw, stim_channel='STI 014')\n\nevent_id = dict(aud_l=1) # event trigger and conditions\ntmin = -0.2 # start of each epoch (200ms before the trigger)\ntmax = 0.5 # end of each epoch (500ms after the trigger)\nraw.info['bads'] = ['MEG 2443', 'EEG 053']\nbaseline = (None, 0) # means from the first instant to t = 0\nreject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)\n\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=('meg', 'eog'), baseline=baseline, reject=reject)",
"Compute regularized noise covariance\nFor more details see tut_compute_covariance.",
"noise_cov = mne.compute_covariance(\n epochs, tmax=0., method=['shrunk', 'empirical'], rank=None, verbose=True)\n\nfig_cov, fig_spectra = mne.viz.plot_cov(noise_cov, raw.info)",
"Compute the evoked response\nLet's just use MEG channels for simplicity.",
"evoked = epochs.average().pick('meg')\nevoked.plot(time_unit='s')\nevoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag',\n time_unit='s')\n\n# Show whitening\nevoked.plot_white(noise_cov, time_unit='s')\n\ndel epochs # to save memory",
"Inverse modeling: MNE/dSPM on evoked and raw data",
"# Read the forward solution and compute the inverse operator\nfname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'\nfwd = mne.read_forward_solution(fname_fwd)\n\n# make an MEG inverse operator\ninfo = evoked.info\ninverse_operator = make_inverse_operator(info, fwd, noise_cov,\n loose=0.2, depth=0.8)\ndel fwd\n\n# You can write it to disk with::\n#\n# >>> from mne.minimum_norm import write_inverse_operator\n# >>> write_inverse_operator('sample_audvis-meg-oct-6-inv.fif',\n# inverse_operator)",
"Compute inverse solution",
"method = \"dSPM\"\nsnr = 3.\nlambda2 = 1. / snr ** 2\nstc, residual = apply_inverse(evoked, inverse_operator, lambda2,\n method=method, pick_ori=None,\n return_residual=True, verbose=True)",
"Visualization\nView activation time-series",
"plt.figure()\nplt.plot(1e3 * stc.times, stc.data[::100, :].T)\nplt.xlabel('time (ms)')\nplt.ylabel('%s value' % method)\nplt.show()",
"Examine the original data and the residual after fitting:",
"fig, axes = plt.subplots(2, 1)\nevoked.plot(axes=axes)\nfor ax in axes:\n ax.texts = []\n for line in ax.lines:\n line.set_color('#98df81')\nresidual.plot(axes=axes)",
"Here we use peak getter to move visualization to the time point of the peak\nand draw a marker at the maximum peak vertex.",
"vertno_max, time_max = stc.get_peak(hemi='rh')\n\nsubjects_dir = data_path + '/subjects'\nsurfer_kwargs = dict(\n hemi='rh', subjects_dir=subjects_dir,\n clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',\n initial_time=time_max, time_unit='s', size=(800, 800), smoothing_steps=5)\nbrain = stc.plot(**surfer_kwargs)\nbrain.add_foci(vertno_max, coords_as_verts=True, hemi='rh', color='blue',\n scale_factor=0.6, alpha=0.5)\nbrain.add_text(0.1, 0.9, 'dSPM (plus location of maximal activation)', 'title',\n font_size=14)",
"Morph data to average brain",
"# setup source morph\nmorph = mne.compute_source_morph(\n src=inverse_operator['src'], subject_from=stc.subject,\n subject_to='fsaverage', spacing=5, # to ico-5\n subjects_dir=subjects_dir)\n# morph data\nstc_fsaverage = morph.apply(stc)\n\nbrain = stc_fsaverage.plot(**surfer_kwargs)\nbrain.add_text(0.1, 0.9, 'Morphed to fsaverage', 'title', font_size=20)\ndel stc_fsaverage",
"Dipole orientations\nThe pick_ori parameter of the\n:func:mne.minimum_norm.apply_inverse function controls\nthe orientation of the dipoles. One useful setting is pick_ori='vector',\nwhich will return an estimate that does not only contain the source power at\neach dipole, but also the orientation of the dipoles.",
"stc_vec = apply_inverse(evoked, inverse_operator, lambda2,\n method=method, pick_ori='vector')\nbrain = stc_vec.plot(**surfer_kwargs)\nbrain.add_text(0.1, 0.9, 'Vector solution', 'title', font_size=20)\ndel stc_vec",
"Note that there is a relationship between the orientation of the dipoles and\nthe surface of the cortex. For this reason, we do not use an inflated\ncortical surface for visualization, but the original surface used to define\nthe source space.\nFor more information about dipole orientations, see\ntut-dipole-orientations.\nNow let's look at each solver:",
"for mi, (method, lims) in enumerate((('dSPM', [8, 12, 15]),\n ('sLORETA', [3, 5, 7]),\n ('eLORETA', [0.75, 1.25, 1.75]),)):\n surfer_kwargs['clim']['lims'] = lims\n stc = apply_inverse(evoked, inverse_operator, lambda2,\n method=method, pick_ori=None)\n brain = stc.plot(figure=mi, **surfer_kwargs)\n brain.add_text(0.1, 0.9, method, 'title', font_size=20)\n del stc"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
zzsza/Datascience_School | 12. 추정 및 검정/02. 검정과 유의 확률.ipynb | mit | [
"검정과 유의 확률\n검정(testing)은 데이터 뒤에 숨어있는 확률 변수의 분포와 모수에 대한 가설의 진위를 정량적(quantitatively)으로 증명하는 작업을 말한다.\n[[school_notebook:87b67dafd7544e3380af278ff4f22d77]]\n예를 들어 다음과 같은 문제가 주어졌다면 어떻게 풀겠는가?\n\n문제1\n\n<blockquote> \n어떤 동전을 15번 던졌더니 12번이 앞면이 나왔다. 이 동전은 휘어지지 않은 공정한 동전(fair coin)인가?\n</blockquote>\n\n\n문제2\n\n<blockquote> \n어떤 트레이더의 일주일 수익률은 다음과 같다.:<br>\n-2.5%, -5%, 4.3%, -3.7% -5.6% <br>\n이 트레이더는 돈을 벌어다 줄 사람인가, 아니면 돈을 잃을 사람인가? \n</blockquote>\n\n이러한 문제들을 데이터 분석의 방법론으로 푼다면 다음과 같이 풀 수 있다.\n\n\n데이터가 어떤 고정된(fixed) 확률 분포를 가지는 확률 변수라고 가정한다. 동전은 베르누이 분포를 따르는 확률 변수의 표본이며 트레이더의 수익률은 정규 분포를 따르는 확률 변수의 표본이라고 가정한다.\n\n\n이 확률 분포의 모수값이 특정한 값을 가지는지 혹은 특정한 값보다 크거나 같은지 알고자 한다. 동전이 공정한 동전이라고 주정하는 것은 그 뒤의 베르누이 확률 분포의 모수 $\\theta$의 값이 0.5 이라고 주장하는 것과 같다. 트레이더가 장기적으로 돈을 벌어다 줄 것이라고 주장하는 것은 그 뒤의 정규 분포의 기댓값 모수 $\\mu$ 가 0보다 크거나 같다고 주장하는 것이다.\n\n\n모수 값이 이러한 주장을 따른다고 가정하면 실제로 현실에 나타난 데이터가 나올 확률을 계산할 수 있다. 동전의 경우에는 공정한 동전임에도 불구하고 15번 중 12번이나 앞면이 나올 확률을 계산할 수 있으며 트레이더의 경우에는 정규 분포에서 해당 데이터가 나올 확률을 계산할 수 있다.\n\n\n이렇게 구한 확률의 값이 판단자가 정한 특정한 기준에 미치지 못한다면 이러한 주장이 틀렸다고 생각할 수 밖에 없다. 반대로 값이 기준보다 높다면 그 주장이 틀렸다고 판단할 증거가 부족한 것이다.\n\n\n가설\n이렇게 확률 분포에 대한 어떤 주장을 가설(hypothesis)이라고 하며 $H$로 표기하는 경우가 많다. 이 가설을 증명하는 행위를 통계적 가설 검정(statistical hypothesis testing) 줄여서 검정(testing)이라고 한다. 특히 확률 분포의 모수 값이 특정한 값을 가진다는 주장을 모수 검정 (parameter testing)이라고 한다. \n가장 일반적으로 사용되는 가설은 모수의 값이 0 이라는 가설이다. \n$$ H: \\theta = 0 $$\n이 가설은 회귀 분석(regression)에서 흔하게 사용되는데 회귀 계수의 값이 0 이면 종속 변수(target)가 해당 독립 변수(feature)의 영향을 받지 않는 다는 의미가 된다.\n검정 방법론\n가설 증명, 즉 검정의 기본적인 논리는 다음과 같다.\n\n\n만약 가설이 맞다면 즉, 모수 값이 특정한 조건을 만족한다면 해당 확률 변수로부터 만들어진 표본(sample) 데이터들은 어떤 규칙을 따르게 된다.\n\n\n해당 규칙에 따라 표본 데이터 집합에서 어떤 숫자를 계산하면 계산된 숫자는 특정한 확률 분포를 따르게 된다. 이 숫자를 검정 통계치(test statistics)라고 하며 확률 분포를 검정 통계 분포(test statistics distribution)라고 한다. 검정 통계 분포의 종류 및 모수의 값은 처음에 정한 가설에 의해 결정된다. 이렇게 검정 통계 분포를 결정하는 최초의 가설을 귀무 가설(Null hypothesis)이라고 한다.\n\n\n데이터에 의해서 실제로 계산된 숫자, 즉, 검정 통계치가 해당 검정 통계 분포에서 나올 수 있는 확률을 계산한다. 이를 유의 확률(p-value)라고 한다.\n\n\n만약 유의 확률이 미리 정한 특정한 기준값보다 작은 경우를 생각하자. 이 기준값을 유의 수준(significance level)이라고 하는 데 보통 1% 혹은 5% 정도의 작은 값을 지정한다. 유의 확률이 유의 수준으로 정한 값(예 1%)보다도 작다는 말은 해당 검정 통계 분포에서 이 검정 통계치가 나올 수 있는 확률이 아주 작다는 의미이므로 가장 근본이 되는 가설 즉, 귀무 가설이 틀렸다는 의미이다. 따라서 이 경우에는 귀무 가설을 기각(reject)한다.\n\n\n만약 유의 확률이 유의 수준보다 크다면 해당 검정 통계 분포에서 이 검정 통계치가 나오는 것이 불가능하지만은 않다는 의미이므로 귀무 가설을 기각할 수 없다. 따라서 이 경우에는 귀무 가설을 채택(accept)한다.\n\n\n귀무 가설과 대립 가설\n검정 작업을 하기 위해서는 기각 혹은 채택하고자 하는 가설을 만들어야 한다. 이러한 가설을 귀무 가설(Null Hypothesis)이라고 하며 $H_0$ 로 표기한다. 일반적으로 검정에서 그냥 가설이라고 하면 귀무가설을 가리킨다. 귀무 가설이 사실이라고 증명되면 채택(accept)하고 거짓이라고 증명되면 기각(reject)한다.\n귀무 가설이 기각되면 채택할 수도 있는 가설을 대립 가설(Alternative Hypothesis)이라고 하며 보통 $H_a$ 로 표기한다. \n예를 들어 귀무 가설 $H_0$가 다음과 같다고 가정하면,\n$$ H_0: \\theta = 0 $$\n다음 가설들은 이 귀무 가설에 대한 대립 가설이 될 수 있다.\n$$ H_a: \\theta \\neq 0 $$\n$$ H_a: \\theta > 0 $$\n$$ H_a: \\theta < 0 $$\n첫번째와 같은 형태의 대립 가설을 가지는 경우를 양측 검정(two-tailed testing), 두번째나 세번째와 같은 형태의 대립 가설을 가지는 경우를 단측 검정(one-tailed testing)이라고 한다.\n검정 통계량\n검정을 하려면 즉, 귀무 가설이 맞거나 틀린 것을 증명하려면 어떤 증거가 있어야 한다. 이 증거에 해당하는 숫자를 검정 통계량(test statistics)라고 한다. \n비유를 들어보자.\n\"어떤 병에 걸렸다\"라는 가설을 증명하려면 환자의 혈액을 채취하여 혈액 내의 특정한 성분의 수치를 측정해야 한다고 가정하자. 이 때 해당 수치가 바로 검정 통계량이 된다.\n\"어떤 학생이 우등 상장을 받을 수 있는 우등생이다\"라는 가설을 증명하려면 시험(test)에 대한 성적을 측정하면 된다. 이 시험 성적을 검정 통계량이라고 부를 수 있다.\n데이터 분석의 경우 검정 통계량은 데이터로부터 계산되는 일종의 함수이다.\n$$\n\\text{test statistics } t = f(x_1, x_2, \\ldots, x_n)\n$$\n예를 들어 동전을 $N$번 던질 경우 앞면이 나온 횟수가 $n$ 자체가 검정 통계량이 될 수 있다. \n정규 분포를 따르는 수익률의 경우라면 $N$개의 수익률 데이터 $x_1, \\ldots, x_N$에서 다음 수식으로 계산한 값도 검정 통계량이 된다. 
\n$$\nt = \\dfrac{m}{\\frac{s}{\\sqrt{N}}}\n$$\n여기에서\n$$\nm = \\dfrac{1}{N}\\sum_{i=1}^{N} x_i\n$$\n$$\ns^2 = \\dfrac{1}{N}\\sum_{i=1}^{N} (x_i-m)^2\n$$\n검정 통계량은 표본 자료에서 계산된 함수값이므로 표본처럼 확률적(random)이다. 즉, 경우에 따라 표본 값이 달라질 수 있는 것처럼 달라진 표본값에 의해 검정 통계량도 달라진다. 따라서 검정 통계량 $t$ 도 검정 통계량 확률 변수 $T$ 라는 확률 변수의 표본으로 볼 수 있다.\n데이터에 대한 아무 함수나 검정 통계량이 될 수 있는 것이 아닌다. 어떤 함수가 검정 통계량이 되려면 귀무 가설이 사실일 경우 표본에서 계산된 검정 통계량이 따르는 검정 통계량 확률 변수 $T$의 확률 분포를 귀무 가설로부터 알 수 있어야만 한다.\n예를 들어 \"어떤 병에 걸렸다\"는 가설을 혈액 성분 수치로부터 판단하려면 병에 걸린 환자의 성분 수치가 어떤 분포를 따르는지 알 수 있어야 한다. 현실에서는 실제로 병에 걸린 다수의 환자의 혈액 성분 수치를 사용하여 검정 통계량 분포를 구한다. 또한 \"어떤 학생이 우등생이다\"라는 가설을 시험 성적으로부터 판단하라면 우등생인 모든 학생의 시험 성적에 대한 분포를 구해야 한다.\n데이터 분석에서는 어떤 귀무 가설을 만족하는 표본을 입력 변수로 놓고 특정한 함수로 계산한 검정 통계량이 특정한 분포를 따른다는 것을 수학적인 증명을 통해 보이는 것이 일반적이다. 통계학자들의 중요한 업적 중의 하나가 특정한 귀무 가설에 대해 어떤 검정 통계량 함수가 어떤 검정 통계량 분포를 따른 다는 것을 증명해 준 것이다. \n검정 통계량의 예\n일반적으로 많이 사용되는 검정 통계량에는 다음과 같은 것들이 있다.\n1. 베르누이 분포 확률 변수\n모수 $\\theta$를 가지는 베르누이 분포 확률 변수에 대해서는 전체 시도 횟수 $N$ 번 중 성공한 횟수 $n$ 자체를 검정 통계량으로 쓸 수 있다. 이 검정 통계량은 자유도 $N$과 모수 $\\theta$를 가지는 이항 분포를 따른다.\n$$ x \\sim \\text{Ber} \\;\\; \\rightarrow \\;\\; t = \\sum x \\sim \\text{Bin} $$\n2. 카테고리 분포 확률 변수\n모수 벡터 $\\alpha$를 가지는 카테고리 분포 확률 변수에 대해서는 전체 시도 횟수 $N$ 번 중 성공한 횟수 벡터 $x$ 자체를 검정 통계량으로 쓸 수 있다. 이 검정 통계량은 자유도 $N$과 모수 벡터 $\\alpha$를 가지는 다항 분포를 따른다.\n$$ x \\sim \\text{Cat} \\;\\; \\rightarrow \\;\\; t = \\sum x \\sim \\text{Mul} $$\n3. 분산 $\\sigma^2$ 값을 알고 있는 정규 분포 확률 변수\n분산 모수 $\\sigma^2$의 값을 알고 있는 정규 분포 확률 변수에 대해서는 다음과 같이 샘플 평균을 정규화(nomarlize)한 값을 검정 통계량으로 쓴다. 이 검정 통계량은 표준 정규 분포를 따른다. 이 검정 통계량은 특별히 $z$라고 부른다.\n$$\nx \\sim \\mathcal{N}(\\mu, \\sigma^2) \\;\\; \\rightarrow \\;\\; z = \\dfrac{m-\\mu}{\\frac{\\sigma}{\\sqrt{N}}} \\sim \\mathcal{N}(z;0,1)\n$$\n여기에서 $m$은 샘플 평균\n$$\nm = \\dfrac{1}{N}\\sum_{i=1}^{N} x_i\n$$\n4. 분산 $\\sigma^2$ 값을 모르는 정규 분포 확률 변수\n이번에는 분산 모수 $\\sigma^2$의 값을 모르는 정규 분포 확률 변수를 고려하자.\n평균 모수 $\\mu$ 에 대한 검정을 할 때는 다음과 같이 샘플 평균을 샘플 분산으로 정규화(nomarlize)한 값을 검정 통계량으로 쓴다. 이 검정 통계량은 자유도가 $N-1$인 표준 student-t 분포를 따른다. $N$은 데이터의 수이다.\n$$\nx \\sim \\mathcal{N}(\\mu, \\sigma^2) \\;\\; \\rightarrow \\;\\; t = \\dfrac{m-\\mu}{\\frac{s}{\\sqrt{N}}} \\sim t(t;0,1,N-1)\n$$\n여기에서 $m$은 샘플 평균\n$$\nm = \\dfrac{1}{N}\\sum_{i=1}^{N} x_i\n$$\n$s^2$은 샘플 분산이다.\n$$\ns^2 = \\dfrac{1}{N-1}\\sum_{i=1}^{N} (x_i-m)^2\n$$\n분산 모수 $\\sigma^2$에 대한 검정을 할 때는 다음과 같이 샘플 분산을 정규화(normalize)한 값을 검정 통계량으로 쓴다. 이 검정 통계량은 자유도가 $N-1$인 카이 제곱 분포를 따른다. $N$은 데이터의 수이다.\n$$\nx \\sim \\mathcal{N}(\\mu, \\sigma^2) \\;\\; \\rightarrow \\;\\; t = (N-1)\\dfrac{s^2}{\\sigma^2} \\sim \\chi^2 (t;N-1)\n$$\n유의 확률 p-value\n귀무 가설이 사실이라는 가정하에 검정 통계량이 따르는 검정 통계량 분포를 알고 있다면 실제 데이터에서 계산한 검정 통계량 숫자가 분포에서 어느 부분쯤에 위치해 있는지를 알 수 있다. 이 위치를 나타내는 값이 바로 유의 확률(p-value) 이다.\n검정 통계량의 유의 확률은 검정 통계량 숫자보다 더 희귀한(rare) 값이면서 대립 가설을 따르는 값이 나올 수 있는 확률을 말한다. 이 확률은 검정 통계 확률 분포 밀도 함수(pdf)에서 양 끝의 꼬리(tail)부분에 해당하는 영역의 면적으로 계산한다. 실제로는 누적 확률 분포 함수를 사용한다.\n유의 확률은 같은 귀무 가설에 대해서도 대립 가설이 어떤 것인가에 따라 달라질 수 있다.",
"xx1 = np.linspace(-4, 4, 100)\nxx2 = np.linspace(-4, -2, 100)\nxx3 = np.linspace(2, 4, 100)\n\nplt.subplot(3, 1, 1)\nplt.fill_between(xx1, sp.stats.norm.pdf(xx1), facecolor='green', alpha=0.1)\nplt.fill_between(xx2, sp.stats.norm.pdf(xx2), facecolor='blue', alpha=0.35)\nplt.fill_between(xx3, sp.stats.norm.pdf(xx3), facecolor='blue', alpha=0.35)\nplt.text(-3, 0.1, \"p-value=%5.3f\" % (2*sp.stats.norm.cdf(-2)), horizontalalignment='center')\nplt.title(r\"Test statistics = 2. Two-tailed test. $H_a: \\mu \\neq 0$\")\n\nplt.subplot(3, 1, 2)\nplt.fill_between(xx1, sp.stats.norm.pdf(xx1), facecolor='green', alpha=0.1)\nplt.fill_between(xx3, sp.stats.norm.pdf(xx3), facecolor='blue', alpha=0.35)\nplt.text(3, 0.1, \"p-value=%5.3f\" % (sp.stats.norm.cdf(-2)), horizontalalignment='center')\nplt.title(r\"Test statistics = 2. One-tailed test. $H_a: \\mu > 0$\")\n\nplt.subplot(3, 1, 3)\nplt.fill_between(xx1, sp.stats.norm.pdf(xx1), facecolor='green', alpha=0.1)\nplt.fill_between(xx2, sp.stats.norm.pdf(xx2), facecolor='blue', alpha=0.35)\nplt.text(-3, 0.1, \"p-value=%5.3f\" % (sp.stats.norm.cdf(-2)), horizontalalignment='center')\nplt.title(r\"Test statistics = -2. One-tailed test. $H_a: \\mu < 0$\")\n\nplt.tight_layout()\nplt.show()",
"유의 확률의 값이 아주 작으면 귀무 가설이 맞다는 가정하에 계산된 검정 통계량이 나올 가능성이 희귀하다는 의미이다. \n다시 예를 들자면 \"어떤 병에 걸렸다\"는 귀무 가설을 증명하기 위한 검정에서 혈액 검사를 사용하여 계산한 유의확률이 0.02%라는 의미는 실제로 병에 걸린 환자들 중 혈액 검사 수치가 해당 환자의 혈액 검사 수치보다 낮은 사람은 0.02% 뿐이었다는 뜻이고 \"어떤 학생이 우등생이다.\"라는 귀무사설을 증명하기 위한 검정에서 시험 성적을 사용하여 계산한 유의확률이 0.3%라는 의미는 실제로 우등생의 성적을 분석해 보면 실수로 시험을 잘 못치른 경우를 포함해도 해당 점수보다 나쁜 경우는 0.3%에 지나지 않는다는 뜻이다.\n따라서 이렇게 유의 확률의 값이 아주 작은 숫자가 나오면 해당 귀무 가설을 기각할 수 있다.\n유의 수준과 기각역\n계산된 유의 확률 값에 대해 귀무 가설을 기각하는지 채택하는지를 결정할 수 있는 기준 값을 유의 수준(level of significance)라고 한다. 일반적으로 사용되는 유의 수준은 1%, 5%, 10% 등이다.\n검정 통계량이 나오면 확률 밀도 함수(또는 누적 확률 함수)를 사용하여 유의 확률을 계산할 수 있는 것처럼 반대로 특정한 유의 확률 값에 대해 해당하는 검정 통계량을 계산할 수도 있다. 유의 수준에 대해 계산된 검정 통계량을 기각역(critical value)라고 한다.\n기각역 값을 알고 있다면 유의 확률을 유의 수준과 비교하는 것이 아니라 검정 통계량을 직접 기각역과 비교하여 기각/채택 여부를 판단할 수도 있다.\n검정의 예\n이제 서두에서 제기한 문제를 다시 풀어보자.\n\n문제1\n\n<blockquote> \n어떤 동전을 15번 던졌더니 12번이 앞면이 나왔다. 이 동전은 휘어지지 않은 공정한 동전(fair coin)인가?\n</blockquote>\n\n동전의 앞면이 나오는 것을 숫자 1, 뒷면이 나오는 것을 숫자 0으로 나타낸다면 이 문제는 베르누이 확률 변수의 모수 검정 문제로 생각할 수 있다. 판단하고자하는 귀무 가설은 베르누이 확률 분포 모수 $\\theta = 0.5$이다. \n이 문제에 대한 검정 통계량은 15번 던져 앞면이 나온 횟수가 12이고 이 값은 자유도가 15인 이항 분포를 따른다. 이 경우의 유의 확률을 계산하면 \n1.76% 이다.\n$$ \\text{Bin}(n \\geq 12;N=15) = 0.017578125 $$",
"1 - sp.stats.binom(15, 0.5).cdf(12-1)",
"이 값은 5% 보다는 작고 1% 보다는 크기 때문에 유의 수준이 5% 라면 기각할 수 있으며(즉 공정한 동전이 아니라고 말할 수 있다.) 유의 수준이 1% 라면 기각할 수 없다.(즉, 공정한 동전이 아니라고 말할 수 없다.)\n\n문제2\n\n<blockquote> \n어떤 트레이더의 일주일 수익률은 다음과 같다.:<br>\n-2.5%, -5%, 4.3%, -3.7% -5.6% <br>\n이 트레이더는 돈을 벌어다 줄 사람인가, 아니면 돈을 잃을 사람인가? \n</blockquote>\n\n수익률이 정규 분포를 따른 다고 가정하면 이 트레이더의 검정통계량은 다음과 같이 계산된다.\n$$ t = \\dfrac{m}{\\frac{s}{\\sqrt{N}}} = -1.4025 $$\n이 검정 통계량에 대한 유의 확률은 11.67%이다.\n$$ F(t=-1.4025;4) = 0.1167 $$",
"x = np.array([-0.025, -0.05, 0.043, -0.037, -0.056])\nt = x.mean()/x.std(ddof=1)*np.sqrt(len(x))\nt, sp.stats.t(df=4).cdf(t)",
"만약 유의 수준이 10%라면 유의 확률이 이보다 크기 때문에 귀무 가설을 기각할 수 없다. 즉, 정규 분포의 기댓값이 0 보다 작다고 말할수 없다. 따라서 해당 트레이더가 장기적으로 손실을 보는 트레이더라고 말할 수 있는 증거가 부족하다는 의미이다."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
maxalbert/tohu | notebooks/v6/Primitive_generators.ipynb | mit | [
"Primitive generators\nThis notebook contains tests for tohu's primitive generators.",
"import tohu\nfrom tohu.v6.primitive_generators import *\nfrom tohu.v6.generator_dispatch import *\nfrom tohu.v6.utils import print_generated_sequence\n\nprint(f'Tohu version: {tohu.__version__}')",
"Constant\nConstant simply returns the same, constant value every time.",
"g = Constant('quux')\n\nprint_generated_sequence(g, num=10, seed=12345)",
"Boolean\nBoolean returns either True or False, optionally with different probabilities.",
"g1 = Boolean()\ng2 = Boolean(p=0.8)\n\nprint_generated_sequence(g1, num=20, seed=12345)\nprint_generated_sequence(g2, num=20, seed=99999)",
"Incremental\nIncremental returns a sequence of numbers that increase in regular steps.",
"g = Incremental(start=200, step=4)\n\nprint_generated_sequence(g, num=20, seed=12345)",
"Integer\nInteger returns a random integer between low and high (both inclusive).",
"g = Integer(low=100, high=200)\n\nprint_generated_sequence(g, num=20, seed=12345)",
"Float\nFloat returns a random float between low and high (both inclusive).",
"g = Float(low=2.3, high=4.2)\n\nprint_generated_sequence(g, num=10, sep='\\n', fmt='.12f', seed=12345)",
"CharString",
"g = CharString(length=15)\nprint_generated_sequence(g, num=5, seed=12345)\nprint_generated_sequence(g, num=5, seed=99999)",
"It is possible to explicitly specify the character set.",
"g = CharString(length=12, charset=\"ABCDEFG\")\nprint_generated_sequence(g, num=5, sep='\\n', seed=12345)",
"There are also a few pre-defined character sets.",
"g1 = CharString(length=12, charset=\"<lowercase>\")\ng2 = CharString(length=12, charset=\"<alphanumeric_uppercase>\")\nprint_generated_sequence(g1, num=5, sep='\\n', seed=12345); print()\nprint_generated_sequence(g2, num=5, sep='\\n', seed=12345)",
"DigitString\nDigitString is the same as CharString with charset='0123456789'.",
"g = DigitString(length=15)\nprint_generated_sequence(g, num=5, seed=12345)\nprint_generated_sequence(g, num=5, seed=99999)",
"Sequential\nGenerates a sequence of sequentially numbered strings with a given prefix.",
"g = Sequential(prefix='Foo_', digits=3)",
"Calling reset() on the generator makes the numbering start from 1 again.",
"g.reset()\nprint_generated_sequence(g, num=5)\nprint_generated_sequence(g, num=5)\nprint()\ng.reset()\nprint_generated_sequence(g, num=5)",
"Note that the method Sequential.reset() supports the seed argument for consistency with other generators, but its value is ignored - the generator is simply reset to its initial value. This is illustrated here:",
"g.reset(seed=12345); print_generated_sequence(g, num=5)\ng.reset(seed=99999); print_generated_sequence(g, num=5)",
"HashDigest\nHashDigest returns hex strings representing hash digest values (or alternatively raw bytes).\nHashDigest hex strings (uppercase)",
"g = HashDigest(length=6)\n\nprint_generated_sequence(g, num=10, seed=12345)",
"HashDigest hex strings (lowercase)",
"g = HashDigest(length=6, uppercase=False)\n\nprint_generated_sequence(g, num=10, seed=12345)",
"HashDigest byte strings",
"g = HashDigest(length=10, as_bytes=True)\n\nprint_generated_sequence(g, num=5, seed=12345, sep='\\n')",
"NumpyRandomGenerator\nThis generator can produce random numbers using any of the random number generators supported by numpy.",
"g1 = NumpyRandomGenerator(method=\"normal\", loc=3.0, scale=5.0)\ng2 = NumpyRandomGenerator(method=\"poisson\", lam=30)\ng3 = NumpyRandomGenerator(method=\"exponential\", scale=0.3)\n\ng1.reset(seed=12345); print_generated_sequence(g1, num=4)\ng2.reset(seed=12345); print_generated_sequence(g2, num=15)\ng3.reset(seed=12345); print_generated_sequence(g3, num=4)",
"FakerGenerator\nFakerGenerator gives access to any of the methods supported by the faker module. Here are a couple of examples.\nExample: random names",
"g = FakerGenerator(method='name')\n\nprint_generated_sequence(g, num=8, seed=12345)",
"Example: random addresses",
"g = FakerGenerator(method='address')\n\nprint_generated_sequence(g, num=8, seed=12345, sep='\\n---\\n')",
"Timestamp",
"g = Timestamp(start=\"2018-01-01 11:22:33\", end=\"2018-02-13 12:23:34\")\n\ntype(next(g))\n\nprint_generated_sequence(g, num=10, seed=12345, sep='\\n')\n\ng = Timestamp(start=\"2018-01-01 11:22:33\", end=\"2018-02-13 12:23:34\").strftime(\"%-d %b %Y, %H:%M (%a)\")\n\ntype(next(g))\n\nprint_generated_sequence(g, num=10, seed=12345, sep='\\n')",
"Date",
"g = Date(start=\"2018-01-01\", end=\"2018-02-13\")\n\ntype(next(g))\n\nprint_generated_sequence(g, num=10, seed=12345, sep='\\n')\n\ng = Date(start=\"2018-01-01\", end=\"2018-02-13\").strftime(\"%-d %b %Y\")\n\ntype(next(g))\n\nprint_generated_sequence(g, num=10, seed=12345, sep='\\n')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ProfessorKazarinoff/staticsite | content/code/matplotlib_plots/stress_strain_curves/stress_strain_curve_with_python.ipynb | gpl-3.0 | [
"In this post, we'll use data from a tensile test to build a stress strain curve with Python and Matplotlib.\nA tensile test is a type of mechanical test performed by engineers used to determine the mechanical properties of a material. Engineering metal alloys such as steel and aluminum alloys are tensile tested in order to determine their strength and stiffness. Tensile tests are performed in a piece of equipment called a mechanical test frame.\n\nAfter a tensile test is complete, a set of data is produced by the mechanical test frame. Using the data acquired during a tensile test, a stress-strain curve can be produced. \nIn this post, we will create a stress-strain curve (a plot) from a set of tensile test data of a steel 1045 sample and an aluminum 6061 sample. The stress strain curve we construct will have the following features:\n\nA descriptive title\nAxes labels with units\nTwo lines on the same plot. One line for steel 1045 and one line for aluminum 6061\nA legend\n\nInstall Python\nWe are going to build our stress strain curve with Python and a Jupyter notebook. I suggest engineers and problem-solvers download and install the Anaconda distribution of Python. See this post to learn how to install Anaconda on your computer. Alternatively, you can download Python form Python.org or download Python the Microsoft Store.\nInstall Jupyter, NumPy, Pandas, and Matplotlib\nOnce Python is installed, the next thing we need to do is install a couple of Python packages. If you are using the Anaconda distribution of Python, the packages we are going to use to build the plot: Jupyter, NumPy, Pandas, and Matplotlib come pre-installed and no additional installation steps are necessary. \nHowever, if you downloaded Python from Python.org or installed Python using the Microsoft Store, you will need to install install Jupyter, NumPy, Pandas, and Matplotlib separately. You can install Jupyter, NumPy, Pandas, and Matplotlib with pip (the Python package manager) or install theses four packages with the Anaconda Prompt.\nIf you are using a terminal and pip, type:\n```text\n\npip install jupyter numpy pandas matplotlib \n```\n\nIf you have Anaconda installed and use the Anaconda Prompt, type:\n```text\n\nconda install jupyter numpy pandas matplotlib \n```\n\nOpen a Jupyter notebook\nWe will construct our stress strain curve using a Jupyter notebook. See this post to see how to open a Jupyter notebook. \nMake sure to save your Jupyter notebook with a recognizable name.\nDownload the data and move the data into the same folder as the Jupyter notebook\nNext, we need to download the two data files that we will use to build our stress-strain curve. You can download sample data using the links below:\nsteel1045.xls\naluminum6061.xls\nAfter these .xls files are downloaded, both .xls files need to be moved into the same folder as our Jupyter notebook.\nImport NumPy, Pandas, and Matplotlib\nNow that our Jupyter notebook is open and the two .xls data files are in the same folder as the Jupyter notebook, we can start coding and build our plot.\nAt the top of the Jupyter notebook, import NumPy, Pandas and Matplotlib. The command %matplotlib inline is included so that our plot will display directly inside our Jupyter notebook. If you are using a .py file instead of a Jupyter notebook, make sure to comment out %matplotlib inline as this line is not valid Python code.\nWe will also print out the versions of our NumPy and Pandas packages using the .__version__ attribute. 
If the versions of NumPy and Pandas prints out, that means that NumPy and Pandas are installed and we can use these packages in our code.",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nprint(\"NumPy version:\",np.__version__)\nprint(\"Pandas version:\",pd.__version__)",
"Ensure the two .xls data files are in the same folder as the Jupyter notebook\nBefore we proceed, let's make sure the two .xls data files are in the same folder as our running Jupyter notebook. We'll use a Jupyter notebook magic command to print out the contents of the folder that our notebook is in. The %ls command lists the contents of the current folder.",
"%ls",
"We can see our Jupyter notebook stress_strain_curve_with_python.ipynb as well as the two .xls data files aluminum6061.xls and steel1045.xls are in our current folder. \nNow that we are sure the two .xls data files are in the same folder as our notebook, we can import the data in the two two .xls files using Panda's pd.read_excel() function. The data from the two excel files will be stored in two Pandas dataframes called steel_df and al_df.",
"steel_df = pd.read_excel(\"steel1045.xls\")\nal_df = pd.read_excel(\"aluminum6061.xls\")",
"We can use Pandas .head() method to view the first five rows of each dataframe.",
"steel_df.head()\n\nal_df.head()",
"We see a number of columns in each dataframe. The columns we are interested in are FORCE, EXT, and CH5. Below is a description of what these columns mean.\n\nFORCE Force measurements from the load cell in pounds (lb), force in pounds\nEXT Extension measurements from the mechanical extensometer in percent (%), strain in percent\nCH5 Extension readings from the laser extensometer in percent (%), strain in percent\n\nCreate stress and strain series from the FORCE, EXT, and CH5 columns\nNext we'll create a four Pandas series from the ['CH5'] and ['FORCE'] columns of our al_df and steel_df dataframes. The equations below show how to calculate stress, $\\sigma$, and strain, $\\epsilon$, from force $F$ and cross-sectional area $A$. Cross-sectional area $A$ is the formula for the area of a circle. For the steel and aluminum samples we tested, the diameter $d$ was $0.506 \\ in$.\n$$ \\sigma = \\frac{F}{A_0} $$\n$$ F \\ (kip) = F \\ (lb) \\times 0.001 $$\n$$ A_0 = \\pi (d/2)^2 $$\n$$ d = 0.506 \\ in $$\n$$ \\epsilon \\ (unitless) = \\epsilon \\ (\\%) \\times 0.01 $$",
"strain_steel = steel_df['CH5']*0.01\nd_steel = 0.506 # test bar diameter = 0.506 inches\nstress_steel = (steel_df['FORCE']*0.001)/(np.pi*((d_steel/2)**2))\n\nstrain_al = al_df['CH5']*0.01\nd_al = 0.506 # test bar diameter = 0.506 inches\nstress_al = (al_df['FORCE']*0.001)/(np.pi*((d_al/2)**2))",
"Build a quick plot\nNow that we have the data from the tensile test in four series, we can build a quick plot using Matplotlib's plt.plot() method. The first x,y pair we pass to plt.plot() is strain_steel,stress_steel and the second x,y pair we pass in is strain_al,stress_al. The command plt.show() shows the plot.",
"plt.plot(strain_steel,stress_steel,strain_al,stress_al)\n\nplt.show()",
"We see a plot with two lines. One line represents the steel sample and one line represents the aluminum sample. We can improve our plot by adding axis labels with units, a title and a legend.\nAdd axis labels, title and a legend\nAxis labels, titles and a legend are added to our plot with three Matplotlib methods. The methods are summarized in the table below.\n| Matplotlib method | description | example |\n| --- | --- | --- |\n| plt.xlabel() | x-axis label | plt.xlabel('strain (in/in)') |\n| plt.ylabel() | y-axis label | plt.ylabel('stress (ksi)') |\n| plt.title() | plot title | plt.title('Stress Strain Curve') |\n| plt.legend() | legend | plt.legend(['steel','aluminum']) |\nThe code cell below shows these four methods in action and produces a plot.",
"plt.plot(strain_steel,stress_steel,strain_al,stress_al)\nplt.xlabel('strain (in/in)')\nplt.ylabel('stress (ksi)')\nplt.title('Stress Strain Curve of Steel 1045 and Aluminum 6061 in tension')\nplt.legend(['Steel 1045','Aluminum 6061'])\n\nplt.show()",
"The plot we see has two lines, axis labels, a title and a legend. Next we'll save the plot to a .png image file.\nSave the plot as a .png image\nNow we can save the plot as a .png image using Matplotlib's plt.savefig() method. The code cell below builds the plot and saves an image file called stress-strain_curve.png. The argument dpi=300 inside of Matplotlib's plt.savefig() method specifies the resolution of our saved image. The image stress-strain_curve.png will be saved in the same folder as our running Jupyter notebook.",
"plt.plot(strain_steel,stress_steel,strain_al,stress_al)\nplt.xlabel('strain (in/in)')\nplt.ylabel('stress (ksi)')\nplt.title('Stress Strain Curve of Steel 1045 and Aluminum 6061 in tension')\nplt.legend(['Steel 1045','Aluminum 6061'])\n\nplt.savefig('stress-strain_curve.png', dpi=300, bbox_inches='tight')\nplt.show()",
"Our complete stress strain curve contains two lines, one for steel and one for aluminum. The plot has axis labels with units, a title and a legend. A copy of the plot is now saved as stress-strain_curve.png in the same folder as our Jupyter notebook.\nSummary\nIn this post, we built a stress strain curve using Python. First we installed Python and made sure that NumPy, Pandas, Matplotlib and Jupyter were installed. Next we opened a Jupyter notebook and moved our .xls data files into the same folder as the Jupyter notebook. Inside the Jupyter notebook we entered code into a couple different code cells. \nIn the first Jupyter notebook code cell, we imported NumPy, Pandas, and Matplotlib and printed our their versions. In the next code cell, we saved the data from two .xls data files into two Pandas dataframes. In the third code cell, we created Pandas series for stress and strain from the columns in the dataframes. In the final code cell we built our stress strain curve with Matplotlib and saved the plot to a .png image file."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
scikit-multilearn/scikit-multilearn | docs/source/multilabelembeddings.ipynb | bsd-2-clause | [
"Multi-label embedding-based classification\nMulti-label embedding techniques emerged as a response the need to cope with a large label space, but with the rise of computing power they became a method of improving classification quality. Typically the embedding-based multi-label classification starts with embedding the label matrix of the training set in some way, training a regressor for unseen samples to predict their embeddings, and a classifier (sometimes very simple ones) to correct the regression error. Scikit-multilearn provides several multi-label embedders alongisde a general regressor-classifier classification class. \nCurrently available embedding strategies include: \n\nLabel Network Embeddings via OpenNE network embedding library, as in the LNEMLC paper\nCost-Sensitive Label Embedding with Multidimensional Scaling, as in the CLEMS paper\nscikit-learn based embeddings such as PCA or manifold learning approaches\n\nLet's start with loading some data:",
"import numpy\nimport sklearn.metrics as metrics\nfrom skmultilearn.dataset import load_dataset\n\nX_train, y_train, feature_names, label_names = load_dataset('emotions', 'train')\nX_test, y_test, _, _ = load_dataset('emotions', 'test')",
"Label Network Embeddings\nThe label network embeddings approaches require a working tensorflow installation and the OpenNE library. To install them, run the following code:\nbash\npip install networkx tensorflow\ngit clone https://github.com/thunlp/OpenNE/\npip install -e OpenNE/src\n\nFor an example we will use the LINE embedding method, one of the most efficient and well-performing state of the art approaches, for the meaning of parameters consult the OpenNE documentation. We select order = 3 which means that the method will take both first and second order proximities between labels for embedding. We select a dimension of 5 times the number of labels, as the linear embeddings tend to need more dimensions for best performance, normalize the label weights to maintain normalized distances in the network and agregate label embedings per sample by summation which is a classical approach.",
"from skmultilearn.embedding import OpenNetworkEmbedder \nfrom skmultilearn.cluster import LabelCooccurrenceGraphBuilder\n\ngraph_builder = LabelCooccurrenceGraphBuilder(weighted=True, include_self_edges=False)\nopenne_line_params = dict(batch_size=1000, order=3)\nembedder = OpenNetworkEmbedder(\n graph_builder, \n 'LINE', \n dimension = 5*y_train.shape[1], \n aggregation_function = 'add', \n normalize_weights=True, \n param_dict = openne_line_params\n)",
"We now need to select a regressor and a classifier, we use random forest regressors with MLkNN which is a well working combination often used for multi-label embedding:",
"from skmultilearn.embedding import EmbeddingClassifier\nfrom sklearn.ensemble import RandomForestRegressor\nfrom skmultilearn.adapt import MLkNN\n\nclf = EmbeddingClassifier(\n embedder,\n RandomForestRegressor(n_estimators=10),\n MLkNN(k=5)\n)\n\nclf.fit(X_train, y_train)\n\npredictions = clf.predict(X_test)",
"Cost-Sensitive Label Embedding with Multidimensional Scaling\nCLEMS is another well-perfoming method in multi-label embeddings. It uses weighted multi-dimensional scaling to embedd a cost-matrix of unique label combinations. The cost-matrix contains the cost of mistaking a given label combination for another, thus real-valued functions are better ideas than discrete ones. Also, the is_score parameter is used to tell the embedder if the cost function is a score (the higher the better) or a loss (the lower the better). Additional params can be also assigned to the weighted scaler. The most efficient parameter for the number of dimensions is equal to number of labels, and is thus enforced here.",
"from skmultilearn.embedding import CLEMS, EmbeddingClassifier\nfrom sklearn.ensemble import RandomForestRegressor\nfrom skmultilearn.adapt import MLkNN\n\ndimensional_scaler_params = {'n_jobs': -1}\n\nclf = EmbeddingClassifier(\n CLEMS(metrics.jaccard_similarity_score, is_score=True, params=dimensional_scaler_params),\n RandomForestRegressor(n_estimators=10, n_jobs=-1),\n MLkNN(k=1),\n regressor_per_dimension= True\n)\n\nclf.fit(X_train, y_train)\n\npredictions = clf.predict(X_test)",
"Scikit-learn based embedders\nAny scikit-learn embedder can be used for multi-label classification embeddings with scikit-multilearn, just select one and try, here's a spectral embedding approach with 10 dimensions of the embedding space:",
"from skmultilearn.embedding import SKLearnEmbedder, EmbeddingClassifier\nfrom sklearn.manifold import SpectralEmbedding\nfrom sklearn.ensemble import RandomForestRegressor\nfrom skmultilearn.adapt import MLkNN\n\nclf = EmbeddingClassifier(\n SKLearnEmbedder(SpectralEmbedding(n_components = 10)),\n RandomForestRegressor(n_estimators=10),\n MLkNN(k=5)\n)\n\nclf.fit(X_train, y_train)\n\npredictions = clf.predict(X_test)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rsignell-usgs/notebook | CSW/CSW_ISO_Queryables-IOOS.ipynb | mit | [
"CSW access with OWSLib using ISO queryables\nDemonstration of how to use the OGC Catalog Services for the Web (CSW) to search for find all datasets containing a specified variable that fall withing a specified date range and geospatial bounding box, and then use the data access service contained in the returned metadata to retrieve and visualize the data. <P> Here we are accessing a Geoportal Server CSW, but in the future we should be able to point it toward any another CSW service, such as the one provided by catalog.data.gov.",
"from pylab import *\nfrom owslib.csw import CatalogueServiceWeb\nfrom owslib.sos import SensorObservationService\nfrom owslib import fes\nimport netCDF4\nimport pandas as pd\nimport datetime as dt\nfrom IPython.core.display import HTML\n\nHTML('<iframe src=http://www.nodc.noaa.gov/geoportal/ width=950 height=400></iframe>')\n\n# connect to CSW, explore it's properties\n\nendpoint = 'http://www.ngdc.noaa.gov/geoportal/csw' # NGDC Geoportal\n\n#endpoint = 'http://www.nodc.noaa.gov/geoportal/csw' # NODC Geoportal: granule level\n#endpoint = 'http://data.nodc.noaa.gov/geoportal/csw' # NODC Geoportal: collection level \n#endpoint = 'http://geodiscover.cgdi.ca/wes/serviceManagerCSW/csw' # NRCAN CUSTOM\n#endpoint = 'http://geoport.whoi.edu/gi-cat/services/cswiso' # USGS Woods Hole GI_CAT\n#endpoint = 'http://cida.usgs.gov/gdp/geonetwork/srv/en/csw' # USGS CIDA Geonetwork\n#endpoint = 'http://cmgds.marine.usgs.gov/geonetwork/srv/en/csw' # USGS Coastal and Marine Program\n#endpoint = 'http://geoport.whoi.edu/geoportal/csw' # USGS Woods Hole Geoportal \n#endpoint = 'http://geo.gov.ckan.org/csw' # CKAN testing site for new Data.gov\n#endpoint = 'https://edg.epa.gov/metadata/csw' # EPA\n#endpoint = 'http://cwic.csiss.gmu.edu/cwicv1/discovery' # CWIC\n\ncsw = CatalogueServiceWeb(endpoint,timeout=60)\ncsw.version",
"On the GeoIDE Wiki they give some example CSW examples to illustrate the range possibilities. Here's one where to search for PACIOOS WMS services:",
"HTML('<iframe src=https://geo-ide.noaa.gov/wiki/index.php?title=ESRI_Geoportal#PacIOOS_WAF width=950 height=350></iframe>')\n",
"Also on the GEO-IDE Wiki we find the list of UUIDs for each region/provider, which we turn into a dictionary here:",
"regionids = {'AOOS':\t'{1E96581F-6B73-45AD-9F9F-2CC3FED76EE6}',\n'CENCOOS':\t'{BE483F24-52E7-4DDE-909F-EE8D4FF118EA}',\n'CARICOOS':\t'{0C4CA8A6-5967-4590-BFE0-B8A21CD8BB01}',\n'GCOOS':\t'{E77E250D-2D65-463C-B201-535775D222C9}',\n'GLOS':\t'{E4A9E4F4-78A4-4BA0-B653-F548D74F68FA}',\n'MARACOOS':\t'{A26F8553-798B-4B1C-8755-1031D752F7C2}',\n'NANOOS':\t'{C6F4754B-30DC-459E-883A-2AC79DA977AB}',\n'NAVY':\t'{FB160233-7C3B-4841-AD4B-EB5AD843E743}',\n'NDBC':\t'{B3F50F38-3DE4-4EC9-ABF8-955887829FCC}',\n'NERACOOS':\t'{E13C88D9-3FF3-4232-A379-84B6A1D7083E}',\n'NOS/CO-OPS':\t'{2F58127E-A139-4A45-83F2-9695FB704306}',\n'PacIOOS':\t'{78C0463E-2FCE-4AB2-A9C9-6A34BF261F52}',\n'SCCOOS':\t'{20A3408F-9EC4-4B36-8E10-BBCDB1E81BDF}',\n'SECOORA':\t'{E796C954-B248-4118-896C-42E6FAA6EDE9}',\n'USACE':\t'{4C080A33-F3C3-4F27-AF16-F85BF3095C41}',\n'USGS/CMGP': '{275DFB94-E58A-4157-8C31-C72F372E72E}'}\n\n[op.name for op in csw.operations]\n\ndef dateRange(start_date='1900-01-01',stop_date='2100-01-01',constraint='overlaps'):\n if constraint == 'overlaps':\n start = fes.PropertyIsLessThanOrEqualTo(propertyname='startDate', literal=stop_date)\n stop = fes.PropertyIsGreaterThanOrEqualTo(propertyname='endDate', literal=start_date)\n elif constraint == 'within':\n start = fes.PropertyIsGreaterThanOrEqualTo(propertyname='startDate', literal=start_date)\n stop = fes.PropertyIsLessThanOrEqualTo(propertyname='endDate', literal=stop_date)\n return start,stop\n\n# get specific ServiceType URL from records\ndef service_urls(records,service_string='urn:x-esri:specification:ServiceType:odp:url'):\n urls=[]\n for key,rec in records.iteritems():\n #create a generator object, and iterate through it until the match is found\n #if not found, gets the default value (here \"none\")\n url = next((d['url'] for d in rec.references if d['scheme'] == service_string), None)\n if url is not None:\n urls.append(url)\n return urls\n\n# Perform the CSW query, using Kyle's cool new filters on ISO queryables\n# find all datasets in a bounding box and temporal extent that have \n# specific keywords and also can be accessed via OPeNDAP \n\nbox=[-89.0, 30.0, -87.0, 31.0]\nstart_date='2013-08-21'\nstop_date='2013-08-30'\nstd_name = 'temperature'\nservice_type='SOS'\nregion_id = regionids['GCOOS']\n\n# convert User Input into FES filters\nstart,stop = dateRange(start_date,stop_date,constraint='overlaps')\nbbox = fes.BBox(box)\nkeywords = fes.PropertyIsLike(propertyname='anyText', literal=std_name)\nserviceType = fes.PropertyIsLike(propertyname='apiso:ServiceType', literal=('*%s*' % service_type))\nsiteid = fes.PropertyIsEqualTo(propertyname='sys.siteuuid', literal=region_id)\n\n# try simple query with serviceType and keyword first\ncsw.getrecords2(constraints=[[serviceType,keywords]],maxrecords=15,esn='full')\nfor rec,item in csw.records.iteritems():\n print item.title",
"The filters can be passed as a list to getrecords2, with AND or OR implied by syntax: \n<pre>\n[a,b,c] --> a || b || c\n\n[[a,b,c]] --> a && b && c\n\n[[a,b],[c],[d],[e]] or [[a,b],c,d,e] --> (a && b) || c || d || e\n</pre>",
"# try simple query with serviceType and keyword first\ncsw.getrecords2(constraints=[[serviceType,keywords]],maxrecords=15,esn='full')\nfor rec,item in csw.records.iteritems():\n print item.title\n\n# check out references for one of the returned records\ncsw.records['NOAA.NOS.CO-OPS SOS'].references\n\n# filter for GCOOS SOS data\ncsw.getrecords2(constraints=[[keywords,serviceType,siteid]],maxrecords=15,esn='full')\nfor rec,item in csw.records.iteritems():\n print item.title\n\n# filter for SOS data in BBOX\ncsw.getrecords2(constraints=[[keywords,serviceType,bbox]],maxrecords=15,esn='full')\nfor rec,item in csw.records.iteritems():\n print item.title\n\nurls = service_urls(csw.records,service_string='urn:x-esri:specification:ServiceType:sos:url')\nprint \"\\n\".join(urls)\n\nurls = [url for url in urls if 'oostethys' not in url]\nprint \"\\n\".join(urls)\n\nsos = SensorObservationService(urls[0])\n\ngetob = sos.get_operation_by_name('getobservation')\n\nprint getob.parameters\n\noff = sos.offerings[1]\nofferings = [off.name]\nresponseFormat = off.response_formats[0]\nobservedProperties = [off.observed_properties[0]]\n\nprint sos.offerings[0]"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mu4farooqi/deep-learning-projects | language-translation/dlnd_language_translation.ipynb | gpl-3.0 | [
"Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)",
"Explore the Data\nPlay around with view_sentence_range to view different parts of the data.",
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.\nYou can get the <EOS> word id by doing:\npython\ntarget_vocab_to_int['<EOS>']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.",
"def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n source_id_text = [[source_vocab_to_int[word] for word in line.split()] for line in source_text.split('\\n')]\n target_id_text = [[target_vocab_to_int[word] for word in line.split()] + [target_vocab_to_int['<EOS>']] for line in target_text.split('\\n')]\n return source_id_text, target_id_text\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)",
"Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()",
"Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoder_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\nTarget sequence length placeholder named \"target_sequence_length\" with rank 1\nMax target sequence length tensor named \"max_target_len\" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.\nSource sequence length placeholder named \"source_sequence_length\" with rank 1\n\nReturn the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)\nProcess Decoder Input\nImplement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.",
"def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.\n :return: Tuple (input, targets, learning rate, keep probability, target sequence length,\n max target sequence length, source sequence length)\n \"\"\"\n input_data = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None], name='targets')\n lr = tf.placeholder(tf.float32, name='learning_rate')\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n\n target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')\n max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')\n source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')\n \n return input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)\n\ndef process_decoder_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for encoding\n :param target_data: Target Placehoder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n return tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_encoding_input(process_decoder_input)",
"Encoding\nImplement encoding_layer() to create a Encoder RNN layer:\n * Embed the encoder input using tf.contrib.layers.embed_sequence\n * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper\n * Pass cell and embedded input to tf.nn.dynamic_rnn()",
"from imp import reload\nreload(tests)\n\ndef encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, \n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :param source_sequence_length: a list of the lengths of each sequence in the batch\n :param source_vocab_size: vocabulary size of source data\n :param encoding_embedding_size: embedding size of source data\n :return: tuple (RNN output, RNN state)\n \"\"\"\n rnn_inputs = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)\n lstm = lambda: tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size), output_keep_prob=keep_prob)\n return tf.nn.dynamic_rnn(tf.contrib.rnn.MultiRNNCell([lstm() for _ in range(num_layers)]), rnn_inputs, source_sequence_length, dtype=tf.float32)\n \n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)",
"Decoding - Training\nCreate a training decoding layer:\n* Create a tf.contrib.seq2seq.TrainingHelper \n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode",
"\ndef decoding_layer_train(encoder_state, dec_cell, dec_embed_input, \n target_sequence_length, max_summary_length, \n output_layer, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_summary_length: The length of the longest sequence in the batch\n :param output_layer: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing training logits and sample_id\n \"\"\"\n taining_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)\n decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, taining_helper, encoder_state, output_layer)\n output = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_summary_length)\n return output[0]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)",
"Decoding - Inference\nCreate inference decoder:\n* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper\n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode",
"def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,\n end_of_sequence_id, max_target_sequence_length,\n vocab_size, output_layer, batch_size, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS Id\n :param max_target_sequence_length: Maximum length of target sequences\n :param vocab_size: Size of decoder/target vocabulary\n :param decoding_scope: TenorFlow Variable Scope for decoding\n :param output_layer: Function to apply the output layer\n :param batch_size: Batch size\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing inference logits and sample_id\n \"\"\"\n \n start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')\n taining_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)\n decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, taining_helper, encoder_state, output_layer)\n output = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_target_sequence_length)\n return output[0]\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)",
"Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nEmbed the target sequences\nConstruct the decoder LSTM cell (just like you constructed the encoder cell above)\nCreate an output layer to map the outputs of the decoder to the elements of our vocabulary\nUse the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.",
"def decoding_layer(dec_input, encoder_state,\n target_sequence_length, max_target_sequence_length,\n rnn_size,\n num_layers, target_vocab_to_int, target_vocab_size,\n batch_size, keep_prob, decoding_embedding_size):\n \"\"\"\n Create decoding layer\n :param dec_input: Decoder input\n :param encoder_state: Encoder state\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_target_sequence_length: Maximum length of target sequences\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param target_vocab_size: Size of target vocabulary\n :param batch_size: The size of the batch\n :param keep_prob: Dropout keep probability\n :param decoding_embedding_size: Decoding embedding size\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n lstm = lambda: tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size), output_keep_prob=keep_prob)\n dec_cell = tf.contrib.rnn.MultiRNNCell([lstm() for _ in range(num_layers)])\n output_layer = Dense(target_vocab_size,\n kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))\n \n with tf.variable_scope('decoder'):\n dec_train = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, \n max_target_sequence_length, output_layer, keep_prob)\n \n with tf.variable_scope('decoder', reuse=True):\n dec_infer = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'],\n target_vocab_to_int['<EOS>'], max_target_sequence_length,\n target_vocab_size, output_layer, batch_size, keep_prob)\n \n return dec_train, dec_infer\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)",
"Build the Neural Network\nApply the functions you implemented above to:\n\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).\nProcess target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.\nDecode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.",
"def seq2seq_model(input_data, target_data, keep_prob, batch_size,\n source_sequence_length, target_sequence_length,\n max_target_sentence_length,\n source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size,\n rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param source_sequence_length: Sequence Lengths of source sequences in the batch\n :param target_sequence_length: Sequence Lengths of target sequences in the batch\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Decoder embedding size\n :param dec_embedding_size: Encoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n _, encoder_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, \n source_vocab_size, enc_embedding_size)\n dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)\n return decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sentence_length, \n rnn_size, num_layers, target_vocab_to_int, target_vocab_size, \n batch_size, keep_prob, dec_embedding_size)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)",
"Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability\nSet display_step to state how many steps between each debug output statement",
"# Number of Epochs\nepochs = 3\n# Batch Size\nbatch_size = 256\n# RNN Size\nrnn_size = 256\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 300\ndecoding_embedding_size = 300\n# Learning Rate\nlearning_rate = 0.01\n# Dropout Keep Probability\nkeep_probability = 0.75\ndisplay_step = 20",
"Build the Graph\nBuild the graph using the neural network you implemented.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_target_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()\n\n #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n\n train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),\n targets,\n keep_prob,\n batch_size,\n source_sequence_length,\n target_sequence_length,\n max_target_sequence_length,\n len(source_vocab_to_int),\n len(target_vocab_to_int),\n encoding_embedding_size,\n decoding_embedding_size,\n rnn_size,\n num_layers,\n target_vocab_to_int)\n\n\n training_logits = tf.identity(train_logits.rnn_output, name='logits')\n inference_logits = tf.identity(inference_logits.sample_id, name='predictions')\n\n masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')\n\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n training_logits,\n targets,\n masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)\n",
"Batch and pad the source and target sequences",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef pad_sentence_batch(sentence_batch, pad_int):\n \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same length\"\"\"\n max_sentence = max([len(sentence) for sentence in sentence_batch])\n return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n\n\ndef get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):\n \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n for batch_i in range(0, len(sources)//batch_size):\n start_i = batch_i * batch_size\n\n # Slice the right amount for the batch\n sources_batch = sources[start_i:start_i + batch_size]\n targets_batch = targets[start_i:start_i + batch_size]\n\n # Pad\n pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))\n\n # Need the lengths for the _lengths parameters\n pad_targets_lengths = []\n for target in pad_targets_batch:\n pad_targets_lengths.append(len(target))\n\n pad_source_lengths = []\n for source in pad_sources_batch:\n pad_source_lengths.append(len(source))\n\n yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths\n",
"Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1])],\n 'constant')\n\n return np.mean(np.equal(target, logits))\n\n# Split data to training and validation sets\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\nvalid_source = source_int_text[:batch_size]\nvalid_target = target_int_text[:batch_size]\n(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,\n valid_target,\n batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])) \nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(\n get_batches(train_source, train_target, batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])):\n\n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n target_sequence_length: targets_lengths,\n source_sequence_length: sources_lengths,\n keep_prob: keep_probability})\n\n\n if batch_i % display_step == 0 and batch_i > 0:\n\n\n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch,\n source_sequence_length: sources_lengths,\n target_sequence_length: targets_lengths,\n keep_prob: 1.0})\n\n\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_sources_batch,\n source_sequence_length: valid_sources_lengths,\n target_sequence_length: valid_targets_lengths,\n keep_prob: 1.0})\n\n train_acc = get_accuracy(target_batch, batch_train_logits)\n\n valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)\n\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')",
"Save Parameters\nSave the batch_size and save_path parameters for inference.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)",
"Checkpoint",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()",
"Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary, to the <UNK> word id.",
"def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)",
"Translate\nThis will translate translate_sentence from English to French.",
"translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('predictions:0')\n target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,\n target_sequence_length: [len(translate_sentence)*2]*batch_size,\n source_sequence_length: [len(translate_sentence)]*batch_size,\n keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in translate_logits]))\nprint(' French Words: {}'.format(\" \".join([target_int_to_vocab[i] for i in translate_logits])))\n",
"Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has more vocabulary and richer in topics discussed. However, this will take you days to train, so make sure you've a GPU and the neural network is performing well on dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nixphix/ml-projects | sentiment_analysis/twitter_sentiment_analysis-jallikattu/code/twitter_sentiment_analysis-jallikattu_FINAL.ipynb | mit | [
"Sentiment Analysis on \"Jallikattu\" with Twitter Data Feed <h3 style=\"color:red;\">#DataScienceForSocialCause</h3>\nTwitter is flooded with Jallikattu issue, let us find peoples sentiment with Data Science tools. Following is the approach\n* Register a Twitter API handle for data feed\n* Pull out tweets on search query 'jallikattu' \n* Using NLP packages find the sentiment of the tweet (Positive, Neutral or Negative)\n* Plot pie chart of the sentiment \n* Plot a masked word cloud of tags used\nFinall output we expect is a masked word cloud of popular tags used in twitter with font size propotional to the frequency of use. Let's dive in ...\nLoading necessary packages\nIn particular we will be using tweepy to register an api handle with twitter and get the data feed. Tweepy Document\nTextBlob package to determine the sentiment of the tweets. TextBlob Document",
"# import tweepy for twitter datastream and textblob for processing tweets\nimport tweepy\nimport textblob\n\n# wordcloud package is used to produce the cool masked tag cloud above\nfrom wordcloud import WordCloud\n\n# pickle to serialize/deserialize python objects\nimport pickle\n\n# regex package to extract hasttags from tweets\nimport re\n\n# os for loading files from local system, matplotlib, np and PIL for ploting\nfrom os import path\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom PIL import Image",
"We will create a Twitter API handle for fetching data\n\n\nInorder to qualify for a Twitter API handle you need to be a Phone Verified Twitter user. \n\nGoto Twitter settings page twitter.com/settings/account\nChoose Mobile tab on left pane, then enter your phone number and verify by OTP\nNow you should be able to register new API handle for your account for programmatic tweeting\n\n\n\nNow goto Twitter Application Management page\n\n\nClick Create New Appbutton \n\nEnter a Unique App name(global namespace), you might have to try few time to get it correct\nDescription can be anything you wish\nwebsite can be some <yourname>.com, you dont really have to own the domain\nLeave the callback URL empty, agree to the terms and condition unconditionally \nClick create\n\n\n\nYou can find the api credentials in Application Management consol\n\nChoose the App and goto keys and access tokens tab to get API_KEY, API_SECRET, ACCESS_TOKEN and ACCESS_TOKEN_SECRET\n\nRUN THE CODE BLOCK BELOW ONLY ON FIRST TIME YOU CONFIGURE THE TWITTER API",
"# make sure to exclued this folder in git ignore\npath_to_cred_file = path.abspath('../restricted/api_credentials.p')\n\n# we will store twitter handle credentials in a pickle file (object de-serialization)\n# code for pickling credentials need to be run only once during initial configuration\n# fill the following dictionary with your twitter credentials\ntwitter_credentials = {'api_key':'API_KEY', \\\n 'api_secret':'API_SECRET', \\\n 'access_token':'ACCESS_TOKEN', \\\n 'access_token_secret':'ACCESS_TOKEN_SECRET'}\npickle.dump(twitter_credentials,open(path_to_cred_file, \"wb\"))\nprint(\"Pickled credentials saved to :\\n\"+path_to_cred_file+\"\\n\")\nprint(\"\\n\".join([\"{:20} : {}\".format(key,value) for key,value in twitter_credentials.items()]))",
"From second run you can load the credentials securely form stored file\nIf you want to check the credentials uncomment the last line in below code block",
"# make sure to exclued this folder in git ignore\npath_to_cred_file = path.abspath('../restricted/api_credentials.p')\n\n# load saved twitter credentials\ntwitter_credentials = pickle.load(open(path_to_cred_file,'rb'))\n#print(\"\\n\".join([\"{:20} : {}\".format(key,value) for key,value in twitter_credentials.items()]))",
"Creating an Open Auth Instance\nWith the created api and token we will open an open auth instance to authenticate our twitter account.\nIf you feel that your twitter api credentials have been compromised you can just generate a new set of access token-secret pair, access token is like RSA to authenticate your api key.",
"# lets create an open authentication handler and initialize it with our twitter handlers api key\nauth = tweepy.OAuthHandler(twitter_credentials['api_key'],twitter_credentials['api_secret'])\n\n# access token is like password for the api key, \nauth.set_access_token(twitter_credentials['access_token'],twitter_credentials['access_token_secret'])",
"Twitter API Handle\nTweepy comes with a Twitter API wrapper class called 'API', passing the open auth instance to this API creates a live Twitter handle to our account.\nATTENTION: Please beware that this is a handle you your own account not any pseudo account, if you tweet something with this it will be your tweet This is the reason I took care not to expose my api credentials, if you expose anyone can mess up your Twitter account.\nLet's open the twitter handle and print the Name and Location of the twitter account owner, you should be seeing your name.",
"# lets create an instance of twitter api wrapper\napi = tweepy.API(auth)\n\n# lets do some self check\nuser = api.me()\nprint(\"{}\\n{}\".format(user.name,user.location))",
"Inspiration for this Project\nI drew inspiration for this project from the ongoing issue on traditional bull fighting AKA Jallikattu. Here I'm trying read pulse of the people based on tweets.\nWe are searching for key word Jallikattu in Twitters public tweets, in the retured search result we are taking 150 tweets to do our Sentiment Analysis. Please dont go for large number of tweets there is an upper limit of 450 tweets, for more on api rate limits checkout Twitter Developer Doc.",
"# now lets get some data to check the sentiment on it\n# lets search for key word jallikattu and check the sentiment on it\nquery = 'jallikattu'\ntweet_cnt = 150\npeta_tweets = api.search(q=query,count=tweet_cnt)",
"Processing Tweets\nOnce we get the tweets, we will iterate through the tweets and do following oprations\n1. Pass the tweet text to TextBlob to process the tweet\n2. Processed tweets will have two attributes \n * Polarity which is a numerical value between -1 to 1, the sentiment of the text can be infered from this.\n * Subjectivity this shows wheather the text is stated as a fact or an opinion, value ranges from 0 to 1\n3. For each tweet we will find sentiment of the text (positive, neutral or negative) and update a counter variable accordingly, this counter is later ploted as a pie chart.\n4. Then we pass the tweet text to a regular expression to extract hash tags, which we later use to create an awesome word cloud visualization.",
"# lets go over the tweets \nsentiment_polarity = [0,0,0]\ntags = []\n\nfor tweet in peta_tweets:\n processed_tweet = textblob.TextBlob(tweet.text)\n polarity = processed_tweet.sentiment.polarity\n upd_index = 0 if polarity > 0 else (1 if polarity == 0 else 2)\n sentiment_polarity[upd_index] = sentiment_polarity[upd_index]+1\n tags.extend(re.findall(r\"#(\\w+)\", tweet.text))\n #print(tweet.text)\n #print(processed_tweet.sentiment,'\\n')\n\nsentiment_label = ['Positive','Neutral','Negative']\n#print(\"\\n\".join([\"{:8} tweets count {}\".format(s,val) for s,val in zip(sentiment_label,sentiment_polarity)]))\n\n# plotting sentiment pie chart\ncolors = ['yellowgreen', 'gold', 'coral']\n\n# lets explode the positive sentiment for visual appeal\nexplode = (0.1, 0, 0)\nplt.pie(sentiment_polarity,labels=sentiment_label,colors=colors,explode=explode,shadow=True,autopct='%1.1f%%')\nplt.axis('equal')\nplt.legend(bbox_to_anchor=(1.3,1))\nplt.title('Twitter Sentiment on \\\"'+query+'\\\"')\nplt.show()",
"Sentiment Analysis\nWe can see that majority is neutral which is contributed by \n1. Tweets with media only(photo, video) \n2. Tweets in regional language. Textblob do not work on our indian languages.\n3. Some tweets contains only stop words or the words that do not give any positive or negative perspective. \n4. Polarity is calculated by the number of positive words like \"great, awesome, etc.\" or negative words like \"hate, bad, etc\"\nOne more point to note is that TextBlob is not a complete NLP package it does not do context aware search, such sophisticated deep learing abilities are available only with likes of Google.",
"# lets process the hash tags in the tweets and make a word cloud visualization\n# normalizing tags by converting all tags to lowercase\ntags = [t.lower() for t in tags]\n\n# get unique count of tags to take count for each\nuniq_tags = list(set(tags))\ntag_count = []\n\n# for each unique hash tag take frequency of occurance\nfor tag in uniq_tags:\n tag_count.append((tag,tags.count(tag)))\n\n# lets print the top five tags \ntag_count =sorted(tag_count,key=lambda x:-x[1])[:5]\nprint(\"\\n\".join([\"{:8} {}\".format(tag,val) for tag,val in tag_count]))",
"Simple Word Cloud with Twitter #tags\nLet us viualize the tags used in for Jallikattu by creating a tag cloud. The wordcloud package takes a single string of tags separated by whitespace. We will concatinate the tags and pass it to generate method to create a tag cloud image.",
"# we will create a vivid tag cloud visualization \n# creating a single string of texts from tags, the tag's font size is proportional to its frequency\ntext = \" \".join(tags)\n\n# this generates an image from the long string, if you wish you may save it to local\nwc = WordCloud().generate(text)\n\n# we will display the image with matplotlibs image show, removed x and y axis ticks\nplt.imshow(wc)\nplt.axis(\"off\")\nplt.show()",
"Masked Word Cloud\nThe tag cloud can be masked using a grascale stencil image the wordcloud package neatly arranges the word in side the mask image. I have supreimposed generated word cloud image on to the mask image to provide a detailing otherwise the background of the word cloud will be white and it will appeare like words are hanging in space instead.\nInorder to make the image superimposing work well, we need to manipulate image transparency using image alpha channel. If you look at the visual only fine detail of mask image is seen in the tag cloud this is bacause word cloud is layed on mask image and the transparency of word cloud image is 90% so only 10% of mask image is seen.",
"# we can also create a masked word cloud from the tags by using grayscale image as stencil\n# lets load the mask image from local\nbull_mask = np.array(Image.open(path.abspath('../asset/bull_mask_1.jpg')))\n\nwc_mask = WordCloud(background_color=\"white\", mask=bull_mask).generate(text)\nmask_image = plt.imshow(bull_mask, cmap=plt.cm.gray)\nword_cloud = plt.imshow(wc_mask,alpha=0.9)\nplt.axis(\"off\")\nplt.title(\"Twitter Hash Tag Word Cloud for \"+query)\nplt.show()",
"The tag cloud marks the key moments like the call for protest in Chennai Marina, Alanganallur. Also shows one of a leading actors support for the cause and calls for ban on peta.\nThis code will give different output over time as new tweet are added in timeline and old ones are pushed down, \nThank you for showing intrest in my work\nIf you liked it and want to be notified of my future work follow me on \nKnowme\n@iPrabakaran Twitter\nGitHub"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rflamary/POT | docs/source/auto_examples/plot_gromov.ipynb | mit | [
"%matplotlib inline",
"Gromov-Wasserstein example\nThis example is designed to show how to use the Gromov-Wassertsein distance\ncomputation in POT.",
"# Author: Erwan Vautier <[email protected]>\n# Nicolas Courty <[email protected]>\n#\n# License: MIT License\n\nimport scipy as sp\nimport numpy as np\nimport matplotlib.pylab as pl\nfrom mpl_toolkits.mplot3d import Axes3D # noqa\nimport ot",
"Sample two Gaussian distributions (2D and 3D)\nThe Gromov-Wasserstein distance allows to compute distances with samples that\ndo not belong to the same metric space. For demonstration purpose, we sample\ntwo Gaussian distributions in 2- and 3-dimensional spaces.",
"n_samples = 30 # nb samples\n\nmu_s = np.array([0, 0])\ncov_s = np.array([[1, 0], [0, 1]])\n\nmu_t = np.array([4, 4, 4])\ncov_t = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])\n\n\nxs = ot.datasets.make_2D_samples_gauss(n_samples, mu_s, cov_s)\nP = sp.linalg.sqrtm(cov_t)\nxt = np.random.randn(n_samples, 3).dot(P) + mu_t",
"Plotting the distributions",
"fig = pl.figure()\nax1 = fig.add_subplot(121)\nax1.plot(xs[:, 0], xs[:, 1], '+b', label='Source samples')\nax2 = fig.add_subplot(122, projection='3d')\nax2.scatter(xt[:, 0], xt[:, 1], xt[:, 2], color='r')\npl.show()",
"Compute distance kernels, normalize them and then display",
"C1 = sp.spatial.distance.cdist(xs, xs)\nC2 = sp.spatial.distance.cdist(xt, xt)\n\nC1 /= C1.max()\nC2 /= C2.max()\n\npl.figure()\npl.subplot(121)\npl.imshow(C1)\npl.subplot(122)\npl.imshow(C2)\npl.show()",
"Compute Gromov-Wasserstein plans and distance",
"p = ot.unif(n_samples)\nq = ot.unif(n_samples)\n\ngw0, log0 = ot.gromov.gromov_wasserstein(\n C1, C2, p, q, 'square_loss', verbose=True, log=True)\n\ngw, log = ot.gromov.entropic_gromov_wasserstein(\n C1, C2, p, q, 'square_loss', epsilon=5e-4, log=True, verbose=True)\n\n\nprint('Gromov-Wasserstein distances: ' + str(log0['gw_dist']))\nprint('Entropic Gromov-Wasserstein distances: ' + str(log['gw_dist']))\n\n\npl.figure(1, (10, 5))\n\npl.subplot(1, 2, 1)\npl.imshow(gw0, cmap='jet')\npl.title('Gromov Wasserstein')\n\npl.subplot(1, 2, 2)\npl.imshow(gw, cmap='jet')\npl.title('Entropic Gromov Wasserstein')\n\npl.show()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
relopezbriega/mi-python-blog | content/notebooks/ecuaciones-diferenciales.ipynb | gpl-2.0 | [
"<center><h1>Ecuaciones Diferenciales con Python</h1>\n<img alt=\"Ecuaciones Diferenciales con python\" title=\"Series de Taylor\" src=\"https://relopezbriega.github.io/images/diffeq.gif\" width=\"200px\" height=\"300px\">\n<a href=\"https://conf.scipyla.org/scipyla2016/\">SciPyLA 2016</a> - Florianópolis, Santa Catarina, Brasil - 20 de Mayo de 2016\n<img alt=\"SciPyLA\" title=\"SciPyLA\" src=\"https://conf.scipyla.org/scipyla2016/site_media/static/img/logo2-scipyla-path.svg\" width=\"200px\" height=\"120px\"></center> \nQué es una ecuación diferencial?\nUna Ecuación diferencial es una ecuación que involucra una variable dependiente y sus derivadas con respecto a una o más variables independientes. Muchas de las leyes de la naturaleza, en Física, Química, Biología, y Astronomía; encuentran la forma más natural de ser expresadas en el lenguaje de las Ecuaciones diferenciales. Estas ecuaciones no sólo tienen aplicaciones en la ciencias físicas, sino que también abundan sus aplicaciones en las ciencias aplicadas como ser Ingeniería, Finanzas y Economía. \nPor qué son importantes?\nEs fácil entender la razón detrás de esta amplia utilidad de las Ecuaciones diferenciales. Si recordamos que $y = f(x)$ es una función, entonces su derivada $dy / dx$ puede ser interpretada como el ritmo de cambio de $y$ con respecto a $x$. En muchos procesos naturales, las variables involucradas y su ritmo de cambio están conectados entre sí por medio de los principios científicos básicos que rigen el proceso. Cuando esta conexión es expresada matemáticamente, el resultado generalmente es una Ecuación diferencial.\n<center><img alt=\"Ritmo de cambio\" title=\"Diferenciación\" src=\"https://relopezbriega.github.io/images/Graph_of_sliding_derivative_line.gif\"></center>\nLa ley de enfriamiento de Newton\nLa ley del enfriamiento de Newton o enfriamiento newtoniano establece que la tasa de pérdida de calor de un cuerpo es proporcional a la diferencia de temperatura entre el cuerpo y su ambiente.\nLa ley del enfriamiento de Newton hace una declaración sobre una tasa instantánea de cambio de la temperatura. Por lo que si traducimos esto al lenguaje de las matemáticas, podemos arribar a una Ecuación diferencial.\nLo que la ley nos dice, es que el ritmo de cambio de la temperatura $\\frac{dT}{dt}$ es proporcional a la tempuratura de nuestro objeto $T(t)$ y la tempuratura del ambiente $T_{a}$, por lo tanto esto lo podemos expresar de la siguiente manera:\n$$\\frac{dT}{dt} = -k(T - T_{a})$$\nEn donde $k$ es una constante de proporcionalidad, $t$ es la variable tiempo, $T_{a}$ es la temperutara del ambiente y $T$ es la función incognita que queremos encontrar.\nTipos de Ecuaciones diferenciales\nA las Ecuaciones diferenciales las podemos clasificar en dos grandes grupos:\n\n\nEcuaciones diferencias ordinarias\n\n\nEcuaciones en derivadas parciales\n\n\nEcuaciones diferenciales ordinarias\nLa Ecuación diferencial de la ley del enfriamiento de Newton, es el caso típico de una Ecuación diferencial ordinaria, ya que todas las derivadas involucradas son tomadas con respecto a una única y misma variable independiente (en este caso, el tiempo). 
\nCuando una Ecuación diferencial contiene derivadas con respecto a única variable independiente, las denominamos Ecuaciones diferenciales ordinarias o EDO para abreviar.\nEjemplos de estas ecuaciones son:\n$$\\frac{d^2y}{dx^2} + \\frac{dy}{dx} + y = 0; \\hspace{2cm} \\frac{dy}{dx} = 2xy; $$\nEcuaciones en derivadas parciales\nUna Ecuación en derivadas parciales es una ecuación que, como su nombre lo indica, contiene derivadas parciales. A diferencia de lo que habíamos visto con las ecuaciones diferenciales ordinarias, en donde la función incógnita depende solo de una variable; en las Ecuaciones en derivadas parciales, o EDP para abreviar, la función incógnita va a depender de dos o más variables independientes $x, y, \\dots$. Generalmente a la función incógnita la vamos a expresar como $u(x, y, \\dots)$ y a sus derivadas parciales como $\\partial u / \\partial x = u_x$ o $\\partial u / \\partial y = u_y$ dependiendo de sobre que variable estemos derivando.\nEjemplos de estas ecuaciones son:\n$$\\frac{\\partial^2 w}{\\partial x^2} + \\frac{\\partial^2 w}{\\partial y^2} + \\frac{\\partial^2 w}{\\partial z^2} = 0 \\hspace{2cm} a^2\\left(\\frac{\\partial^2 w}{\\partial x^2} + \\frac{\\partial^2 w}{\\partial y^2} + \\frac{\\partial^2 w}{\\partial z^2}\\right) = \\frac{\\partial w}{\\partial t}$$\nClasificación de las Ecuaciones diferenciales\nLa clasificación de las Ecuaciones diferenciales es algo muy importante, ya que dependiendo del tipo de ecuación con el que estemos tratando, distintos seran los caminos que podemos utilizar para resolverlas.\nLas podemos clasificar de la siguiente manera:\nSegun su orden\nEl orden de una Ecuación diferencial va a ser igual al orden de la mayor derivada presente. Así, en nuestro primer ejemplo, la Ecuación diferencial de la ley del enfriamiento de Newton es de primer orden, ya que nos encontramos ante la primer derivada de la temperatura con respecto al tiempo. \nLa ecuación general de las ecuaciones diferenciales ordinarias de grado $n$ es la siguiente:\n$$F\\left(x, y, \\frac{dy}{dx}, \\frac{d^2y}{dx^2}, \\dots , \\frac{d^ny}{dx^n}\\right) = 0 $$\no utilizando la notación prima para las derivadas,\n$$F(x, y, y', y'', \\dots, y^{(n)}) = 0$$\nLa más simple de todas las ecuaciones diferenciales ordinarias es la siguiente ecuación de primer orden:\n$$ \\frac{dy}{dx} = f(x)$$ \ny para resolverla simplemente debemos calcular su integral indefinida:\n$$y = \\int f(x) dx + c$$.\nSegun si es separable\nUna ecuación separable es una ecuación diferencial de primer orden en la que la expresión para\n$dx / dy$ se puede factorizar como una función de x multiplicada por una función de y. En otras palabras, puede ser\nescrita en la forma:\n$$\\frac{dy}{dx} = f(x)g(y)$$\nEl nombre separable viene del hecho de que la expresión en el lado derecho se puede \"separar\" en una función de $x$ y una función de $y$.\nPara resolver este tipo de ecuaciones, podemos reescribirlas en la forma diferencial:\n$$\\frac{dy}{g(y)} = f(x)dx$$\ny luego podemos resolver la ecuación original integrando:\n$$\\int \\frac{dy}{g(y)} = \\int f(x) dx + c$$\nÉstas suelen ser las Ecuaciones diferenciales más fáciles de resolver, ya que el problema de resolverlas puede ser reducido a un problema de integración; a pesar de que igualmente muchas veces estas integrales pueden ser difíciles de calcular.\nSegun si son lineales o no lineales\nUno de los tipos más importantes de Ecuaciones diferenciales son las Ecuaciones diferenciales lineales. 
Este tipo de ecuaciones son muy comunes en varias ciencias y tienen la ventaja de que pueden llegar a ser resueltas en forma analítica ya que su ecuación diferencial de primer orden adopta la forma:\n$$\\frac{dy}{dx} + P(x)y = Q(x) $$\ndonde, $P$ y $Q$ son funciones continuas de $x$ en un determinado intervalo. Para resolver este tipo de ecuaciones de la forma $ y' + P(x)y = Q(x)$, debemos multiplicar los dos lados de la ecuación por el factor de integración $e^{\\int P(x) dx}$ y luego integrar ambos lados.\nOtras clasificaciones\nOtras clasificaciones que comúnmente se utilizan son:\nSegún el número de variables:\nEsta clasificación va a estar dada por la cantidad de variables independientes que contenga la Ecuación diferencial\nSegún sus coeficientes:\nAquí clasificamos a la Ecuación diferencial según sus coeficientes, si los mismos son <a href=\"https://es.wikipedia.org/wiki/Constante_(matem%C3%A1ticas)\">constantes</a>, se dice que la ecuación es de coeficientes constantes, en caso contrario será de coeficientes variables.\nSegún homogeneidad:\nUna Ecuación diferencial homogénea es una ecuación igual a cero en la que solo hay derivadas de $y$ y términos $y$, como por ejemplo la siguiente ecuación:\n$$\\frac{d^4y}{dx^4} + \\frac{d^2y}{dx^2} + y^2 = 0$$\nCondición inicial y de frontera\nLas Ecuaciones diferenciales pueden tener muchas soluciones, pero a nosotros nos va a interesar encontrar la solución para un caso particular; para lograr esto, debemos imponer unas condiciones auxiliares al problema original. Estas condiciones van a estar motivadas por la Física del problema que estemos analizando y pueden llevar a ser de dos tipos diferentes: condiciones iniciales y condiciones de frontera.\nCondición inicial\nLa condición inicial va a establecer el estado del problema al momento de tiempo cero, $t_0$. Por ejemplo para el problema de difusión, la condición inicial va a ser:\n$$u(x, t_0) = \\phi(x)$$\ndonde $\\phi(x)= \\phi(x, y, z)$ es una función que puede representar el estado de concentración inicial. Para el problema del flujo del calor, $\\phi(x)$ va a representar la temperatura inicial.\nCondición de frontera o contorno\nLa condición de frontera nos va a delimitar el dominio en el que nuestra EDP es válida. Así por ejemplo, volviendo al problema de difusión, el dominio en el que nuestra EDP es válida, puede estar delimitado por la superficie del objeto que contiene al líquido. 
Existen varios tipos de condiciones de frontera, de las cuales las más importantes son:\n\n\nLa condición de frontera de Dirichlet, en dónde los valores válidos de la función incógnita $u$ son especificados.\n\n\nLa condición de frontera de Neumann, en donde los valores válidos especificados son dados para alguna de las derivadas de $u$.\n\n\nLa condición de frontera de Robin, en donde los valores válidos son especificados por una combinación lineal de una función y las derivadas de $u$.\n\n\nSeries de Potencias\nCuando comenzamos a lidiar con las Ecuaciones diferenciales, veremos que existen un gran número de ellas que no pueden ser resueltas en forma analítica utilizando los principios del Cálculo integral y el Cálculo diferencial; pero sin embargo, tal vez podamos encontrar soluciones aproximadas para este tipo de ecuaciones en términos de Series de potencias.\n¿Qué es una serie de potencias?\nUna Serie de potencias es una serie, generalmente infinita, que posee la siguiente forma:\n$$\\sum_{n=0}^{\\infty} C_nX^n = C_0 + C_1X + C_2X^2 + C_3X^3 + \\dots $$\nEn dónde $X$ es una variable y las $C_n$ son constantes o los coeficientes de la serie. Una Serie de potencias puede converger para algunos valores de $X$ y divergir para otros valores de $X$. La suma de la serie es una función.\n$$f(x) = C_0 + C_1X + C_2X^2 + C_3X^3 + \\dots + C_nX^n + \\dots$$\nEl dominio de esta función va a estar dado por el conjunto de todos los $X$ para los que la serie converge.\nSeries de Taylor\nLas Series de Taylor son un caso especial de Series de potencias cuyos términos adoptan la forma $(x - a)^n$. Las Series de Taylor nos van a permitir aproximar funciones continuas que no pueden resolverse en forma analítica y se van a calcular a partir de las derivadas de estas funciones. Su definición matemática es la siguiente:\n$$f(x) = \\sum_{n=0}^{\\infty}\\frac{f^{(n)}(a)}{n!}(x - a)^n$$\nLo que es equivalente a decir:\n$$f(x) = f(a) + \\frac{f'(a)}{1!}(x -a) + \\frac{f''(a)}{2!}(x -a)^2 + \\frac{f'''(a)}{3!}(x -a)^3 + \\dots$$\nUna de las razones de que las Series de Taylor sean importantes es que nos permiten integrar funciones que de otra forma no podíamos manejar.\nSeries de Fourier\nLas Series de Fourier son series infinitas expresadas en términos de <a href=\"https://es.wikipedia.org/wiki/Seno_(trigonometr%C3%ADa)\">seno</a> y <a href=\"https://es.wikipedia.org/wiki/Coseno\">coseno</a> que convergen en una función periódica y continua. Así por ejemplo una función con un período de $2\\pi$, va a adaptar la forma de la siguiente serie:\n$$f(x) = \\frac{a_{0}}{2} + \\sum_{k=1}^{\\infty}(a_{k} \\cos kx + b_{k} \\sin kx)$$\nEl análisis de Fourier suele ser una herramienta muy útil para resolver Ecuaciones diferenciales.\n<center><h1>Soluciones analíticas con Python</h1>\n<br>\n<h2>SymPy<h2>\n<br>\n<a href=\"https://www.sympy.org/es/\" target=\"_blank\"><img src=\"https://www.sympy.org/static/images/logo.png\" title=\"SymPy\"></a>\n</center>\n\n\n\n# SymPy\n\n[SymPy](https://www.sympy.org/es/) es una librería de [Python](https://www.python.org/) para matemática simbólica. 
Tiene como objetivo convertirse en un [sistema completo de álgebra computacional](https://es.wikipedia.org/wiki/Sistema_algebraico_computacional).\n\nAlgunas de las características que posee son:\n\n* **Álgebra básica**: Simplificaciones, expansiones, sustituciones, aritmética, etc.\n* **Cálculo**: Límites, derivadas, integrales, series de taylor, etc.\n* **Resolución de Ecuaciones**: Ecuaciones algebraicas, ecuaciones polinomiales, ecuaciones diferenciales, sistemas de ecuaciones, etc.\n* **Matrices**: Determinantes, aritmética básica, eigenvalores, eigenvectores, etc.\n* **Física**: Mecánica, mecánica cuántica, óptica, álgebra de Pauli, etc\n* otras.\n\n# El problema a resolver\n\nSupongamos que hubo un asesinato y la policía lleva a la escena del crimen a las 2:41 am. El forense toma la temperatura de la víctima y encuentra que es de 34.5° C; luego vuelve a tomar la temperatura una hora después y la lectura marca 33.9° C. \n\nSuponiendo que ese día el clima estuvo templado y el ambiente hubo una temperatura constante de 15° C. ¿Cuál fue la hora del asesinato?\n\n<center><img title=\"hora de asesinato\" src=\"https://relopezbriega.github.io/images/murder.png\" width=\"300px\" height=\"200px\"></center>",
"import matplotlib.pyplot as plt\nimport matplotlib as mpl\nimport numpy as np\nimport sympy \nfrom scipy import integrate\n\n# imprimir con notación matemática.\nsympy.init_printing(use_latex='mathjax') \n\n# importando modulos de fenics\nimport dolfin\nimport mshr\n\n%matplotlib inline\n\ndolfin.parameters[\"reorder_dofs_serial\"] = False\ndolfin.parameters[\"allow_extrapolation\"] = True\n\ndef plot_direction_field(x, y_x, f_xy, x_lim=(-5, 5), y_lim=(-5, 5), ax=None):\n \"\"\"Esta función dibuja el campo de dirección de una EDO\"\"\"\n \n f_np = sympy.lambdify((x, y_x), f_xy, modules='numpy')\n x_vec = np.linspace(x_lim[0], x_lim[1], 20)\n y_vec = np.linspace(y_lim[0], y_lim[1], 20)\n \n if ax is None:\n _, ax = plt.subplots(figsize=(4, 4))\n \n dx = x_vec[1] - x_vec[0]\n dy = y_vec[1] - y_vec[0]\n \n for m, xx in enumerate(x_vec):\n for n, yy in enumerate(y_vec):\n Dy = f_np(xx, yy) * dx\n Dx = 0.8 * dx**2 / np.sqrt(dx**2 + Dy**2)\n Dy = 0.8 * Dy*dy / np.sqrt(dx**2 + Dy**2)\n ax.plot([xx - Dx/2, xx + Dx/2],\n [yy - Dy/2, yy + Dy/2], 'b', lw=0.5)\n \n ax.axis('tight')\n ax.set_title(r\"$%s$\" %\n (sympy.latex(sympy.Eq(y(x).diff(x), f_xy))),\n fontsize=18)\n \n return ax\n\ndef laplace_transform_derivatives(e):\n \"\"\"\n Evalua las transformadas de Laplace de derivadas de funciones sin evaluar.\n \"\"\"\n if isinstance(e, sympy.LaplaceTransform):\n if isinstance(e.args[0], sympy.Derivative):\n d, t, s = e.args \n n = len(d.args) - 1\n return ((s**n) * sympy.LaplaceTransform(d.args[0], t, s) -\n sum([s**(n-i) * sympy.diff(d.args[0], t, i-1).subs(t, 0)\n for i in range(1, n+1)]))\n \n if isinstance(e, (sympy.Add, sympy.Mul)):\n t = type(e) \n return t(*[laplace_transform_derivatives(arg) for arg in e.args])\n \n return e",
"Resolviendo el problema en forma analítica con SymPy\nPara resolver el problema, debemos utilizar la La Ecuación diferencial de la ley del enfriamiento de Newton. Los datos que tenemos son:\n\nTemperatura inicial = 34.5\nTemperatura 1 hora despues = 33.9\nTemperatura del ambiente = 15\nTemperatura normal promedio de un ser humano = 37",
"# defino las incognitas\nt, k = sympy.symbols('t k')\ny = sympy.Function('y')\n\n# expreso la ecuacion\nf = k*(y(t) -15)\nsympy.Eq(y(t).diff(t), f)\n\n# Resolviendo la ecuación\nedo_sol = sympy.dsolve(y(t).diff(t) - f)\nedo_sol",
"Ahora que tenemos la solución de la Ecuación diferencial, despejemos constante de integración utilizando la condición inicial.",
"# Condición inicial\nics = {y(0): 34.5}\n\nC_eq = sympy.Eq(edo_sol.lhs.subs(t, 0).subs(ics), edo_sol.rhs.subs(t, 0))\nC_eq\n\nC = sympy.solve(C_eq)[0]\nC",
"Ahora que ya sabemos el valor de C, podemos determinar el valor de $k$.",
"eq = sympy.Eq(y(t), C * sympy.E**(k*t) +15)\neq\n\nics = {y(1): 33.9}\nk_eq = sympy.Eq(eq.lhs.subs(t, 1).subs(ics), eq.rhs.subs(t, 1))\nkn = round(sympy.solve(k_eq)[0], 4)\nkn",
"Ahora que ya tenemos todos los datos, podemos determinar la hora aproximada de la muerte.",
"hmuerte = sympy.Eq(37, 19.5 * sympy.E**(kn*t) + 15)\nhmuerte\n\nt = round(sympy.solve(hmuerte)[0],2)\nt\n\nh, m = divmod(t*-60, 60)\nprint \"%d horas, %d minutos\" % (h, m)",
"Es decir, que pasaron aproximadamente 3 horas y 51 minutos desde que ocurrió el crimen, por lo tanto la hora del asesinato debio haber sido alredor de las 10:50 pm.\nTransformada de Laplace\nUn método alternativo que podemos utilizar para resolver en forma analítica Ecuaciones diferenciales ordinarias complejas, es utilizar la Transformada de Laplace, que es un tipo particular de transformada integral. La idea es que podemos utilizar esta técnica para transformar nuestra Ecuación diferencial en algo más simple, resolver esta ecuación más simple y, a continuación, invertir la transformación para recuperar la solución a la Ecuación diferencial original.\n¿Qué es una Transformada de Laplace?\nPara poder comprender la Transformada de Laplace, primero debemos revisar la definición general de la transformada integral, la cuál adapta la siguiente forma:\n$$T(f(t)) = \\int_{\\alpha}^{\\beta} K (s, t) \\ f(t) \\ dt = F(s) $$\nEn este caso, $f(t)$ es la función que queremos transformar, y $F(s)$ es la función transformada. Los límites de la integración, $\\alpha$ y $\\beta$, pueden ser cualquier valor entre $-\\infty$ y $+\\infty$ y $K(s, t)$ es lo que se conoce como el núcleo o kernel de la transformada, y podemos elegir el kernel que nos plazca. La idea es poder elegir un kernel que nos dé la oportunidad de simplificar la Ecuación diferencial con mayor facilidad.\nSi nos restringimos a Ecuaciones diferenciales con coeficientes constantes, entonces un kernel que resulta realmente útil es $e^{-st}$, ya que al diferenciar este kernel con respecto de $t$, terminamos obteniendo potencias de $s$, que podemos equiparar a los coeficientes constantes. De esta forma, podemos arribar a la definición de la Transformada de Laplace:\n$$\\mathcal{L}{f(t)}=\\int_0^{\\infty} e^{-st} \\ f(t) \\ dt$$\nTransformada de Laplace con SymPy\nLa principal ventaja de utilizar Transformadas de Laplace es que cambia la Ecuación diferencial en una ecuación algebraica, lo que simplifica el proceso para calcular su solución. La única parte complicada es encontrar las transformaciones y las inversas de las transformaciones de los varios términos de la Ecuación diferencial que queramos resolver. Aquí es donde nos podemos ayudar de SymPy.\nVamos a intentar resolver la siguiente ecuación:\n$$y'' + 3y' + 2y = 0$$\ncon las siguientes condiciones iniciales: $y(0) = 2$ y $y'(0) = -3$",
"# Ejemplo de transformada de Laplace\n# Defino las incognitas\nt = sympy.symbols(\"t\", positive=True)\ny = sympy.Function(\"y\")\n\n# simbolos adicionales.\ns, Y = sympy.symbols(\"s, Y\", real=True)\n\n# Defino la ecuación\nedo = y(t).diff(t, t) + 3*y(t).diff(t) + 2*y(t)\nsympy.Eq(edo)\n\n# Calculo la transformada de Laplace \nL_edo = sympy.laplace_transform(edo, t, s, noconds=True)\nL_edo_2 = laplace_transform_derivatives(L_edo)\n\n# reemplazamos la transfomada de Laplace de y(t) por la incognita Y\n# para facilitar la lectura de la ecuación.\nL_edo_3 = L_edo_2.subs(sympy.laplace_transform(y(t), t, s), Y)\nsympy.Eq(L_edo_3)",
"Aquí ya logramos convertir a la Ecuación diferencial en una ecuación algebraica. Ahora podemos aplicarle las condiciones iniciales para resolverla.",
"# Definimos las condiciones iniciales\nics = {y(0): 2, y(t).diff(t).subs(t, 0): -3}\nics\n\n# Aplicamos las condiciones iniciales\nL_edo_4 = L_edo_3.subs(ics)\n\n# Resolvemos la ecuación y arribamos a la Transformada de Laplace\n# que es equivalente a nuestra ecuación diferencial\nY_sol = sympy.solve(L_edo_4, Y)\nY_sol\n\n# Por último, calculamos al inversa de la Transformada de Laplace que \n# obtuvimos arriba, para obtener la solución de nuestra ecuación diferencial.\ny_sol = sympy.inverse_laplace_transform(Y_sol[0], s, t)\ny_sol\n\n# Comprobamos la solución.\ny_sol.subs(t, 0), sympy.diff(y_sol).subs(t, 0)",
"Las Transformadas de Laplace, pueden ser una buena alternativa para resolver Ecuaciones diferenciales en forma analítica. Pero aún así, siguen existiendo ecuaciones que se resisten a ser resueltas por medios analíticos, para estos casos, debemos recurrir a los métodos numéricos.\nSeries de potencias y campos de direcciones\nSupongamos ahora que queremos resolver con SymPy la siguiente Ecuación diferencial:\n$$\\frac{dy}{dx} = x^2 + y^2 -1$$\ncon una condición inicial de $y(0) = 0$.\nSi aplicamos lo que vimos hasta ahora, vamos a obtener el siguiente resultado:",
"# Defino incognitas\nx = sympy.symbols('x')\ny = sympy.Function('y')\n\n# Defino la función\nf = y(x)**2 + x**2 -1\n\n# Condición inicial\nics = {y(0): 0}\n\n# Resolviendo la ecuación diferencial\nedo_sol = sympy.dsolve(y(x).diff(x) - f, ics=ics)\nedo_sol",
"El resultado que nos da SymPy, es una aproximación con Series de potencias (una serie de Taylor); y el problema con las Series de potencias es que sus resultados sólo suelen válidos para un rango determinado de valores. Una herramienta que nos puede ayudar a visualizar el rango de validez de una aproximación con Series de potencias son los Campos de direcciones.\nCampos de direcciones\nLos Campos de direcciones es una técnica sencilla pero útil para visualizar posibles soluciones a las ecuaciones diferenciales de primer orden. Se compone de líneas cortas que muestran la pendiente de la función incógnita en el plano x-y. Este gráfico se puede producir fácilmente debido a que la pendiente de $y(x)$ en los puntos arbitrarios del plano x-y está dada por la definición misma de la Ecuación diferencial ordinaria:\n$$\\frac{dy}{dx} = f(x, y(x))$$\nEs decir, que sólo tenemos que iterar sobre los valores $x$ e $y$ en la grilla de coordenadas de interés y evaluar $f(x, y(x))$ para saber la pendiente de $y(x)$ en ese punto. Cuantos más segmentos de líneas trazamos en un Campo de dirección, más clara será la imagen. La razón por la cual el gráfico de Campos de direcciones es útil, es que las curvas suaves y continuos que son <a href=\"https://es.wikipedia.org/wiki/Tangente_(geometr%C3%ADa)\">tangentes</a> a las líneas de pendiente en cada punto del gráfico, son las posibles soluciones a la Ecuación diferencial ordinaria.\nPor ejemplo, el Campos de direcciones de la ecuación:\n$$\\frac{dy}{dx} = x^2 + y^2 -1$$\nes el siguiente:",
"# grafico de campo de dirección\nfig, axes = plt.subplots(1, 1, figsize=(7, 5))\ncampo_dir = plot_direction_field(x, y(x), f, ax=axes)",
"Rango de validez de la solución de series de potencia\nAhora que ya conocemos a los Campos de direcciones, volvamos a la solución aproximada con Series de potencias que habiamos obtenido anteriormente. Podemos graficar esa solución en el Campos de direcciones, y compararla con una solución por método númericos.\n<img title=\"Campo de direcciones\" src=\"https://relopezbriega.github.io/images/campo_direcciones.png\" width=\"600\" height=\"250\">\nEn el panel de la izquierda podemos ver el gráfico de la solución aproximada por la Serie de potencias. La solución aproximada se alinea bien con el campo de direcciones para los valores de $x$ entre $-1.5$ y $1.5$, luego comienza a desviarse, lo que nos indica que la solución aproximada ya no sería válida.",
"fig, axes = plt.subplots(1, 2, figsize=(10, 5))\n\n# panel izquierdo - solución aproximada por Serie de potencias\nplot_direction_field(x, y(x), f, ax=axes[0])\nx_vec = np.linspace(-3, 3, 100)\naxes[0].plot(x_vec, sympy.lambdify(x, edo_sol.rhs.removeO())(x_vec),\n 'b', lw=2)\n\n# panel derecho - Solución por método iterativo\nplot_direction_field(x, y(x), f, ax=axes[1])\nx_vec = np.linspace(-1, 1, 100)\naxes[1].plot(x_vec, sympy.lambdify(x, edo_sol.rhs.removeO())(x_vec),\n 'b', lw=2)\n\n# Resolviendo la EDO en forma iterativa \nedo_sol_m = edo_sol_p = edo_sol\ndx = 0.125\n\n# x positivos\nfor x0 in np.arange(1, 2., dx):\n x_vec = np.linspace(x0, x0 + dx, 100)\n ics = {y(x0): edo_sol_p.rhs.removeO().subs(x, x0)}\n edo_sol_p = sympy.dsolve(y(x).diff(x) - f, ics=ics, n=6)\n axes[1].plot(x_vec, sympy.lambdify(x, edo_sol_p.rhs.removeO())(x_vec),\n 'r', lw=2)\n\n# x negativos\nfor x0 in np.arange(1, 5, dx):\n x_vec = np.linspace(-x0-dx, -x0, 100)\n ics = {y(-x0): edo_sol_m.rhs.removeO().subs(x, -x0)}\n edo_sol_m = sympy.dsolve(y(x).diff(x) - f, ics=ics, n=6)\n axes[1].plot(x_vec, sympy.lambdify(x, edo_sol_m.rhs.removeO())(x_vec),\n 'r', lw=2)",
"<center><h1>Soluciones numéricas con Python</h1>\n<br>\n<h2>SciPy<h2>\n<br>\n<a href=\"https://scipy.org/\" target=\"_blank\"><img src=\"https://www2.warwick.ac.uk/fac/sci/moac/people/students/peter_cock/python/scipy_logo.png?maxWidth=175&maxHeight=61\" title=\"SciPy\"></a>\n</center>\n\n# SciPy\n\n[SciPy](https://www.scipy.org/) es un conjunto de paquetes donde cada uno ellos ataca un problema distinto dentro de la computación científica y el análisis numérico. Algunos de los paquetes que incluye, son:\n\n* **`scipy.integrate`**: que proporciona diferentes funciones para resolver problemas de integración numérica.\n* **`scipy.linalg`**: que proporciona funciones para resolver problemas de álgebra lineal.\n* **`scipy.optimize`**: para los problemas de optimización y minimización.\n* **`scipy.signal`**: para el análisis y procesamiento de señales.\n* **`scipy.sparse`**: para matrices dispersas y solucionar sistemas lineales dispersos\n* **`scipy.stats`**: para el análisis de estadística y probabilidades.\n\nPara resolver las [Ecuaciones diferenciales](https://relopezbriega.github.io/blog/2016/01/10/ecuaciones-diferenciales-con-python/), el paquete que nos interesa es `scipy.integrate`.\n\n## Resolviendo Ecuaciones diferenciales con SciPy\n\n[SciPy](https://www.scipy.org/) nos ofrece dos solucionadores de [ecuaciones diferenciales ordinarias](https://relopezbriega.github.io/blog/2016/01/10/ecuaciones-diferenciales-con-python/), `integrate.odeint` y `integrate.ode`. La principal diferencia entre ambos, es que `integrate.ode` es más flexible, ya que nos ofrece la posibilidad de elegir entre distintos *solucionadores*; aunque `integrate.odeint` es más fácil de utilizar.\n\nTratemos de resolver la siguiente ecuación:\n\n$$\\frac{dy}{dx} = x + y^2$$",
"# Defino la función\nf = y(x)**2 + x\nf\n\n# la convierto en una función ejecutable\nf_np = sympy.lambdify((y(x), x), f)\n\n# Definimos los valores de la condición inicial y el rango de x sobre los \n# que vamos a iterar para calcular y(x)\ny0 = 0\nxp = np.linspace(0, 1.9, 100)\n\n# Calculando la solución numerica para los valores de y0 y xp\nyp = integrate.odeint(f_np, y0, xp)\n\n# Aplicamos el mismo procedimiento para valores de x negativos\nxn = np.linspace(0, -5, 100)\nyn = integrate.odeint(f_np, y0, xn)",
"Los resultados son dos matrices unidimensionales de NumPy $yp$ y $yn$, de la misma longitud que las correspondientes matrices de coordenadas $xp$ y $xn$, que contienen las soluciones numéricas de la ecuación diferencial ordinaria para esos puntos específicos. Para visualizar la solución, podemos graficar las matrices $yp$ y $yn$, junto con su Campo de direcciones.",
"# graficando la solucion con el campo de direcciones\nfig, axes = plt.subplots(1, 1, figsize=(8, 6))\nplot_direction_field(x, y(x), f, ax=axes)\naxes.plot(xn, yn, 'b', lw=2)\naxes.plot(xp, yp, 'r', lw=2)\nplt.show()",
"Sistemas de ecuaciones diferenciales\nEn este ejemplo, solucionamos solo una ecuación. Generalmente, la mayoría de los problemas se presentan en la forma de sistemas de ecuaciones diferenciales ordinarias, es decir, que incluyen varias ecuaciones a resolver. Para ver como podemos utilizar a integrate.odeint para resolver este tipo de problemas, consideremos el siguiente sistema de ecuaciones diferenciales ordinarias, conocido el atractor de Lorenz:\n$$x'(t) = \\sigma(y -x), \\\ny'(t) = x(\\rho -z)-y, \\\nz'(t) = xy - \\beta z\n$$\nEstas ecuaciones son conocidas por sus soluciones caóticas, que dependen sensiblemente de los valores de los parámetros $\\sigma$, $\\rho$ y $\\beta$. Veamos como podemos resolverlas con la ayuda de Python.",
"# Definimos el sistema de ecuaciones\ndef f(xyz, t, sigma, rho, beta):\n x, y, z = xyz\n return [sigma * (y - x), \n x * (rho - z) - y,\n x * y - beta * z]\n\n# Asignamos valores a los parámetros\nsigma, rho, beta = 8, 28, 8/3.0\n\n# Condición inicial y valores de t sobre los que calcular\nxyz0 = [1.0, 1.0, 1.0]\nt = np.linspace(0, 25, 10000)\n\n# Resolvemos las ecuaciones\nxyz1 = integrate.odeint(f, xyz0, t, args=(sigma, rho, beta))\nxyz2 = integrate.odeint(f, xyz0, t, args=(sigma, rho, 0.6*beta))\nxyz3 = integrate.odeint(f, xyz0, t, args=(2*sigma, rho, 0.6*beta))\n\n# Graficamos las soluciones\nfrom mpl_toolkits.mplot3d.axes3d import Axes3D\nfig, (ax1,ax2,ax3) = plt.subplots(1, 3, figsize=(12, 4),\n subplot_kw={'projection':'3d'})\n\nfor ax, xyz, c in [(ax1, xyz1, 'r'), (ax2, xyz2, 'b'), (ax3, xyz3, 'g')]:\n ax.plot(xyz[:,0], xyz[:,1], xyz[:,2], c, alpha=0.5)\n ax.set_xlabel('$x$', fontsize=16)\n ax.set_ylabel('$y$', fontsize=16)\n ax.set_zlabel('$z$', fontsize=16)\n ax.set_xticks([-15, 0, 15])\n ax.set_yticks([-20, 0, 20])\n ax.set_zticks([0, 20, 40])",
"Ecuaciones en derivadas parciales\nLos casos que vimos hasta ahora, se trataron de ecuaciones diferenciales ordinarias, pero ¿cómo podemos hacer para resolver ecuaciones en derivadas parciales?\nEstas ecuaciones son mucho más difíciles de resolver, pero podes recurrir a la poderosa herramienta que nos proporciona el Método de los Elementos Finitos para resolverlas en forma numérica. \nMétodo de los elementos finitos\nLa idea general detrás del Método de los Elementos Finitos es la división de un continuo en un\nconjunto de pequeños elementos interconectados por una serie de puntos llamados nodos. \nLas ecuaciones que rigen el comportamiento del continuo regirán también el del elemento.\nDe esta forma se consigue pasar de un sistema continuo (infinitos grados de libertad), que\nes regido por una ecuación diferencial o un sistema de ecuaciones diferenciales, a un\nsistema con un número de grados de libertad finito cuyo comportamiento se modela por un\nsistema de ecuaciones, lineales o no. \nPor ejemplo, en siguiente imagen, podemos ver que en primer lugar tenemos una placa con un hueco en el centro, supongamos que queremos determinar su distribución de temperatura. Para realizar esto, deberíamos resolver la ecuación del calor para cada punto en la placa. El enfoque que utiliza el Método de los Elementos Finitos es el de dividir al objeto en elementos finitos conectados entre sí por nodos; como lo muestran la tercera y cuarta imagen. Este nuevo objeto, constituido por los elementos finitos (los triángulos de la segunda imagen) se llama malla y es una representación aproximada del objeto original. Mientras más nodos tengamos, más exacta será la solución.\n<img alt=\"Método de los Elementos Finitos con python\" title=\"Método de los Elementos Finitos con python\" src=\"https://relopezbriega.github.io/images/FEM.png\n\" >\nEl proyecto FEniCS\nEl proyecto FEniCS es un framework para resolver numéricamente problemas generales de ecuaciones en derivadas parciales utilizando el métodos de los elementos finitos. \nPodemos instalarlo en Ubuntu con los siguientes comandos:\nsudo add-apt-repository ppa:fenics-packages/fenics\nsudo apt-get update\nsudo apt-get install fenics\nLa interfaz principal que vamos a utilizar para trabajar con este framework nos la proporcionan las librerías dolfin y mshr; las cuales debemos importar para poder trabajar con el. Por ahora solo funciona con Python 2.\nProblema a resolver\nEl problema que vamos a resolver con la ayuda de FEniCS, va a ser la ecuación del calor en dos dimensiones en estado estacionario, definida por:\n$$u_{xx} + u_{yy} = f$$\ndonde f es la función fuente y donde tenemos las siguientes condiciones de frontera:\n$$u(x=0) = 3 ; \\ u(x=1)=-1 ; \\ u(y=0) = -5 ; \\ u(y=1) = 5$$ \nEl primer paso en la solución de una EDP utilizando el métodos de los elementos finitos, es definir una malla que describa la discretización del dominio del problema. Para este caso, vamos a utilizar la función RectangleMesh que nos ofrece FEniCS.",
"# Discretizando el problema\nN1 = N2 = 75\nmesh = dolfin.RectangleMesh(dolfin.Point(0, 0), dolfin.Point(1, 1), N1, N2)\n\n# grafico de la malla.\ndolfin.RectangleMesh(dolfin.Point(0, 0), dolfin.Point(1, 1), 10, 10)",
"El siguiente paso es definir una representación del espacio funcional para las funciones de ensayo y prueba. Para esto vamos a utilizar la clase FunctionSpace. El constructor de esta clase tiene al menos tres argumentos: un objeto de malla, el nombre del tipo de función base, y el grado de la función base. En este caso, vamos a utilizar la función de Lagrange.",
"# Funciones bases\nV = dolfin.FunctionSpace(mesh, 'Lagrange', 1)\nu = dolfin.TrialFunction(V)\nv = dolfin.TestFunction(V)",
"Ahora debemos definir a nuestra EDP en su formulación débil equivalente para poder tratarla como un problema de álgebra lineal que podamos resolver con el MEF.",
"# Formulación debil de la EDP\na = dolfin.inner(dolfin.nabla_grad(u), dolfin.nabla_grad(v)) * dolfin.dx\nf = dolfin.Constant(0.0)\nL = f * v * dolfin.dx",
"Y definimos las condiciones de frontera.",
"# Defino condiciones de frontera\ndef u0_top_boundary(x, on_boundary):\n return on_boundary and abs(x[1]-1) < 1e-8\n\ndef u0_bottom_boundary(x, on_boundary):\n return on_boundary and abs(x[1]) < 1e-8\n\ndef u0_left_boundary(x, on_boundary):\n return on_boundary and abs(x[0]) < 1e-8\n\ndef u0_right_boundary(x, on_boundary):\n return on_boundary and abs(x[0]-1) < 1e-8\n\n# Definiendo condiciones de frontera de Dirichlet\nbc_t = dolfin.DirichletBC(V, dolfin.Constant(5), u0_top_boundary)\nbc_b = dolfin.DirichletBC(V, dolfin.Constant(-5), u0_bottom_boundary)\nbc_l = dolfin.DirichletBC(V, dolfin.Constant(3), u0_left_boundary)\nbc_r = dolfin.DirichletBC(V, dolfin.Constant(-1), u0_right_boundary)\n\n# Lista de condiciones de frontera\nbcs = [bc_t, bc_b, bc_r, bc_l]",
"Ahora ya podemos resolver la EDP utilizando la función dolfin.solve. El vector resultante, luego lo podemos convertir a una matriz de NumPy y utilizarla para graficar la solución con Matplotlib.",
"# Resolviendo la EDP\nu_sol = dolfin.Function(V)\ndolfin.solve(a == L, u_sol, bcs)\n\n# graficando la solución\nu_mat = u_sol.vector().array().reshape(N1+1, N2+1)\n\nx = np.linspace(0, 1, N1+2)\ny = np.linspace(0, 1, N1+2)\nX, Y = np.meshgrid(x, y)\n\nfig, ax = plt.subplots(1, 1, figsize=(8, 6))\n\nc = ax.pcolor(X, Y, u_mat, vmin=-5, vmax=5, cmap=mpl.cm.get_cmap('RdBu_r'))\ncb = plt.colorbar(c, ax=ax)\nax.set_xlabel(r\"$x_1$\", fontsize=18)\nax.set_ylabel(r\"$x_2$\", fontsize=18)\ncb.set_label(r\"$u(x_1, x_2)$\", fontsize=18)\nfig.tight_layout()",
"Links útiles\n\nIntroducción al cálculo con python\nEcuaciones diferenciales con python\nEcuaciones en derivadas parciales con python\nSymPy\nSciPy\nproyecto FEniCS\n\n<center><h1>Muchas gracias!</h1></center>\n<center><h3>Raul E. Lopez Briega</h3></center>\nhttps://relopezbriega.github.io/\nhttps://relopezbriega.com.ar/\nTwitter: @relopezbriega\nSlides: https://relopezbriega.github.io/ecuaciones-diferenciales.html"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
letsgoexploring/linearsolve-package | examples/nk_model.ipynb | mit | [
"A New-Keynesian Model\nConsider the new-Keynesian business cycle model from Walsh (2017), chapter 8 expressed in log-linear terms:\n\\begin{align}\ny_t & = E_ty_{t+1} - \\sigma^{-1} (i_t - E_t\\pi_{t+1}) + g_t\\\n\\pi_t & = \\beta E_t\\pi_{t+1} + \\kappa y_t + u_t\\\ni_t & = \\phi_x y_t + \\phi_{\\pi} \\pi_t + v_t\\\nr_t & = i_t - E_t\\pi_{t+1}\\\ng_{t+1} & = \\rho_g g_{t} + \\epsilon_{t+1}^g\\\nu_{t+1} & = \\rho_u u_{t} + \\epsilon_{t+1}^u\\\nv_{t+1} & = \\rho_v v_{t} + \\epsilon_{t+1}^v\n\\end{align}\nwhere $y_t$ is the output gap (log-deviation of output from the natural rate), $\\pi_t$ is the quarterly rate of inflation between $t-1$ and $t$, $i_t$ is the nominal interest rate on funds moving between period $t$ and $t+1$, $r_t$ is the real interest rate, $g_t$ is the exogenous component of demand, $u_t$ is an exogenous component of inflation, and $v_t$ is the exogenous component of monetary policy.\nSince the model is already log-linear, there is no need to approximate the equilibrium conditions. We'll still use the .log_linear method to find the matrices $A$ and $B$, but we'll have to set the islinear option to True to avoid generating an error.\nImport requisite modules",
"# Import numpy, pandas, linearsolve, matplotlib.pyplot\nimport numpy as np\nimport pandas as pd\nimport linearsolve as ls\nimport matplotlib.pyplot as plt\nplt.style.use('classic')\n%matplotlib inline",
"Inintialize model and solve",
"# Input model parameters\nbeta = 0.99\nsigma= 1\neta = 1\nomega= 0.8\nkappa= (sigma+eta)*(1-omega)*(1-beta*omega)/omega\n\nrhor = 0.9\nphipi= 1.5\nphiy = 0\n\nrhog = 0.5\nrhou = 0.5\nrhov = 0.9\n\nSigma = 0.001*np.eye(3)\n\n# Store parameters\nparameters = pd.Series({\n 'beta':beta,\n 'sigma':sigma,\n 'eta':eta,\n 'omega':omega,\n 'kappa':kappa,\n 'rhor':rhor,\n 'phipi':phipi,\n 'phiy':phiy,\n 'rhog':rhog,\n 'rhou':rhou,\n 'rhov':rhov\n})\n\n\n# Define function that computes equilibrium conditions\ndef equations(variables_forward,variables_current,parameters):\n \n # Parameters \n p = parameters\n \n # Variables\n fwd = variables_forward\n cur = variables_current\n \n # Exogenous demand\n g_proc = p.rhog*cur.g - fwd.g\n \n # Exogenous inflation\n u_proc = p.rhou*cur.u - fwd.u\n \n # Exogenous monetary policy\n v_proc = p.rhov*cur.v - fwd.v\n \n # Euler equation\n euler_eqn = fwd.y -1/p.sigma*(cur.i-fwd.pi) + fwd.g - cur.y\n \n # NK Phillips curve evolution\n phillips_curve = p.beta*fwd.pi + p.kappa*cur.y + fwd.u - cur.pi\n \n # interest rate rule\n interest_rule = p.phiy*cur.y+p.phipi*cur.pi + fwd.v - cur.i\n \n # Fisher equation\n fisher_eqn = cur.i - fwd.pi - cur.r\n \n \n # Stack equilibrium conditions into a numpy array\n return np.array([\n g_proc,\n u_proc,\n v_proc,\n euler_eqn,\n phillips_curve,\n interest_rule,\n fisher_eqn\n ])\n\n# Initialize the nk model\nnk = ls.model(equations=equations,\n n_states=3,\n n_exo_states = 3,\n var_names=['g','u','v','i','r','y','pi'],\n parameters=parameters)\n\n# Set the steady state of the nk model\nnk.set_ss([0,0,0,0,0,0,0])\n\n# Find the log-linear approximation around the non-stochastic steady state\nnk.linear_approximation()\n\n# Solve the nk model\nnk.solve_klein(nk.a,nk.b)",
"Compute impulse responses and plot\nCompute impulse responses of the endogenous variables to a one percent shock to each exogenous variable.",
"# Compute impulse responses\nnk.impulse(T=11,t0=1,shocks=None)\n\n# Create the figure and axes\nfig = plt.figure(figsize=(12,12))\nax1 = fig.add_subplot(3,1,1)\nax2 = fig.add_subplot(3,1,2)\nax3 = fig.add_subplot(3,1,3)\n\n# Plot commands\nnk.irs['e_g'][['g','y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Demand shock',ax=ax1).legend(loc='upper right',ncol=5)\nnk.irs['e_u'][['u','y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Inflation shock',ax=ax2).legend(loc='upper right',ncol=5)\nnk.irs['e_v'][['v','y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Interest rate shock',ax=ax3).legend(loc='upper right',ncol=5)",
"Construct a stochastic simulation and plot\nContruct a 151 period stochastic simulation by first siumlating the model for 251 periods and then dropping the first 100 values. The seed for the numpy random number generator is set to 0.",
"# Compute stochastic simulation\nnk.stoch_sim(T=151,drop_first=100,cov_mat=Sigma,seed=0)\n\n# Create the figure and axes\nfig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(2,1,1)\nax2 = fig.add_subplot(2,1,2)\n\n# Plot commands\nnk.simulated[['y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Output, inflation, and interest rates',ax=ax1).legend(ncol=4)\nnk.simulated[['g','u','v']].plot(lw='5',alpha=0.5,grid=True,title='Exogenous demand, inflation, and policy',ax=ax2).legend(ncol=4,loc='lower right')\n\n# Plot simulated exogenous shocks\nnk.simulated[['e_g','g']].plot(lw='5',alpha=0.5,grid=True).legend(ncol=2)\nnk.simulated[['e_u','u']].plot(lw='5',alpha=0.5,grid=True).legend(ncol=2)\nnk.simulated[['e_v','v']].plot(lw='5',alpha=0.5,grid=True).legend(ncol=2)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
malnoxon/board-game-data-science | stage4/stage4_report.ipynb | gpl-3.0 | [
"Stage 4, Report\nhttps://github.com/anhaidgroup/py_entitymatching/blob/master/notebooks/vldb_demo/Demo_notebook_v6.ipynb",
"import py_entitymatching as em\nimport os\nimport pandas as pd\n\n# specify filepaths for tables A and B. \npath_A = 'newTableA.csv'\npath_B = 'tableB.csv'\n# read table A; table A has 'ID' as the key attribute\nA = em.read_csv_metadata(path_A, key='id')\n# read table B; table B has 'ID' as the key attribute\nB = em.read_csv_metadata(path_B, key='id')",
"Filling in Missing Values",
"# Impute missing values\n\n# Manually set metadata properties, as current py_entitymatching.impute_table()\n# requires 'fk_ltable', 'fk_rtable', 'ltable', 'rtable' properties\nem.set_property(A, 'fk_ltable', 'id')\nem.set_property(A, 'fk_rtable', 'id')\nem.set_property(A, 'ltable', A)\nem.set_property(A, 'rtable', A)\n\nA_all_attrs = list(A.columns.values)\nA_impute_attrs = ['year','min_num_players','max_num_players','min_gameplay_time','max_gameplay_time','min_age']\nA_exclude_attrs = list(set(A_all_attrs) - set(A_impute_attrs))\nA1 = em.impute_table(A, exclude_attrs=A_exclude_attrs, missing_val='NaN', strategy='most_frequent', axis=0, val_all_nans=0, verbose=True)\n\n# Compare number of missing values to check the results\nprint(sum(A['min_num_players'].isnull()))\nprint(sum(A1['min_num_players'].isnull()))\n\n# Do the same thing for B\nem.set_property(B, 'fk_ltable', 'id')\nem.set_property(B, 'fk_rtable', 'id')\nem.set_property(B, 'ltable', B)\nem.set_property(B, 'rtable', B)\n\nB_all_attrs = list(B.columns.values)\n# TODO: add 'min_age'\nB_impute_attrs = ['year','min_num_players','max_num_players','min_gameplay_time','max_gameplay_time']\nB_exclude_attrs = list(set(B_all_attrs) - set(B_impute_attrs))\nB1 = em.impute_table(B, exclude_attrs=B_exclude_attrs, missing_val='NaN', strategy='most_frequent', axis=0, val_all_nans=0, verbose=True)\n\n# Compare number of missing values to check the results\nprint(sum(B['min_num_players'].isnull()))\nprint(sum(B1['min_num_players'].isnull()))\n\n\n# Load the pre-labeled data\nS = em.read_csv_metadata('sample_labeled.csv', \n key='_id',\n ltable=A1, rtable=B1, \n fk_ltable='ltable_id', fk_rtable='rtable_id')\n\n# Split S into I an J\nIJ = em.split_train_test(S, train_proportion=0.75, random_state=35)\nI = IJ['train']\nJ = IJ['test']\n\n\ncorres = em.get_attr_corres(A1, B1)\nprint(corres)",
"Generating Features\nHere, we generate all the features we decided upon after our final iteration of cross validation and debugging. We only use the relevant subset of all these features in the reported iterations below.",
"# Generate a set of features\n#import pdb; pdb.set_trace();\nimport py_entitymatching.feature.attributeutils as au\nimport py_entitymatching.feature.simfunctions as sim\nimport py_entitymatching.feature.tokenizers as tok\n\nltable = A1\nrtable = B1\n\n# Get similarity functions for generating the features for matching\nsim_funcs = sim.get_sim_funs_for_matching()\n# Get tokenizer functions for generating the features for matching\ntok_funcs = tok.get_tokenizers_for_matching()\n\n# Get the attribute types of the input tables\nattr_types_ltable = au.get_attr_types(ltable)\nattr_types_rtable = au.get_attr_types(rtable)\n\n# Get the attribute correspondence between the input tables\nattr_corres = au.get_attr_corres(ltable, rtable)\nprint(attr_types_ltable['name'])\nprint(attr_types_rtable['name'])\nattr_types_ltable['name'] = 'str_bt_5w_10w'\nattr_types_rtable['name'] = 'str_bt_5w_10w'\n\n\n\n# Get the features\nF = em.get_features(ltable, rtable, attr_types_ltable,\n attr_types_rtable, attr_corres,\n tok_funcs, sim_funcs)\n\n#F = em.get_features_for_matching(A1, B1)\nprint(F['feature_name'])\n\n\n#TODO get name feature!\n#http://pradap-www.cs.wisc.edu/cs638/py_entitymatching/user-manual/_modules/py_entitymatching/feature/simfunctions.html#get_sim_funs_for_matching\n#name_feature = em.get_feature_fn('name', em.get_tokenizers_for_matching(), em.get_sim_funs_for_matching())\n#print(name_feature)\n#em.add_feature(F, 'name_dist', name_feature)\n#print(F['feature_name'])",
"Cross Validation Method",
"def cross_validation_eval(H):\n cv_iter = pd.DataFrame(columns=['Precision', 'Recall', 'F1'])\n\n # Matchers\n matchers = [em.DTMatcher(name='DecisionTree', random_state=0),\n em.RFMatcher(name='RandomForest', random_state=0),\n em.SVMMatcher(name='SVM', random_state=0),\n em.NBMatcher(name='NaiveBayes'),\n em.LogRegMatcher(name='LogReg', random_state=0),\n ]\n \n for m in matchers:\n prec_result = em.select_matcher([m], table=H, \n exclude_attrs=['_id', 'ltable_id', 'rtable_id','label'],\n k=5,\n target_attr='label', metric='precision', random_state=0)\n recall_result = em.select_matcher([m], table=H, \n exclude_attrs=['_id', 'ltable_id', 'rtable_id','label'],\n k=5,\n target_attr='label', metric='recall', random_state=0)\n f1_result = em.select_matcher([m], table=H, \n exclude_attrs=['_id', 'ltable_id', 'rtable_id','label'],\n k=5,\n target_attr='label', metric='f1', random_state=0)\n cv_iter = cv_iter.append(\n pd.DataFrame([\n [prec_result['cv_stats']['Mean score'][0],\n recall_result['cv_stats']['Mean score'][0],\n f1_result['cv_stats']['Mean score'][0],\n ]],\n index=[m.name],\n columns=['Precision', 'Recall', 'F1']))\n return cv_iter",
"Iteration 1: CV",
"# Subset of features we used on our first iteration\ninclude_features = [\n 'min_num_players_min_num_players_lev_dist',\n 'max_num_players_max_num_players_lev_dist',\n 'min_gameplay_time_min_gameplay_time_lev_dist',\n 'max_gameplay_time_max_gameplay_time_lev_dist',\n]\nF_1 = F.loc[F['feature_name'].isin(include_features)]\n\n# Convert the I into a set of feature vectors using F\nH_1 = em.extract_feature_vecs(I, feature_table=F_1, attrs_after='label', show_progress=False)\nH_1.head(10)\n\ncross_validation_eval(H_1)",
"Iteration 2: Debug",
"PQ = em.split_train_test(H_1, train_proportion=0.80, random_state=0)\nP = PQ['train']\nQ = PQ['test']\n\n\n# Convert the I into a set of feature vectors using F\n# Here, we add name edit distance as a feature\ninclude_features_2 = [\n 'min_num_players_min_num_players_lev_dist',\n 'max_num_players_max_num_players_lev_dist',\n 'min_gameplay_time_min_gameplay_time_lev_dist',\n 'max_gameplay_time_max_gameplay_time_lev_dist',\n 'name_name_lev_dist'\n]\nF_2 = F.loc[F['feature_name'].isin(include_features_2)]\nH_2 = em.extract_feature_vecs(I, feature_table=F_2, attrs_after='label', show_progress=False)\nH_2.head(10)\n# Split H into P and Q\nPQ = em.split_train_test(H_2, train_proportion=0.75, random_state=0)\nP = PQ['train']\nQ = PQ['test']\n\n",
"Iteration 3: CV",
"# Convert the I into a set of feature vectors using F\n# Here, we add name edit distance as a feature\ninclude_features_3 = [\n 'min_num_players_min_num_players_lev_dist',\n 'max_num_players_max_num_players_lev_dist',\n 'min_gameplay_time_min_gameplay_time_lev_dist',\n 'max_gameplay_time_max_gameplay_time_lev_dist',\n 'name_name_lev_dist'\n]\nF_3 = F.loc[F['feature_name'].isin(include_features_3)]\nH_3 = em.extract_feature_vecs(I, feature_table=F_3, attrs_after='label', show_progress=False)\n\ncross_validation_eval(H_3)",
"Iteration 4: CV",
"# Convert the I into a set of feature vectors using F\n# Here, we add name edit distance as a feature\ninclude_features_4 = [\n 'min_num_players_min_num_players_lev_dist',\n 'max_num_players_max_num_players_lev_dist',\n 'min_gameplay_time_min_gameplay_time_lev_dist',\n 'max_gameplay_time_max_gameplay_time_lev_dist',\n 'name_name_jac_qgm_3_qgm_3'\n]\nF_4 = F.loc[F['feature_name'].isin(include_features_4)]\nH_4 = em.extract_feature_vecs(I, feature_table=F_4, attrs_after='label', show_progress=False)\ncross_validation_eval(H_4)",
"Train-Test Set Accuracy",
"# Apply train, test set evaluation\nI_table = em.extract_feature_vecs(I, feature_table=F_2, attrs_after='label', show_progress=False)\nJ_table = em.extract_feature_vecs(J, feature_table=F_2, attrs_after='label', show_progress=False)\n\nmatchers = [\n #em.DTMatcher(name='DecisionTree', random_state=0),\n #em.RFMatcher(name='RF', random_state=0),\n #em.NBMatcher(name='NaiveBayes'),\n em.LogRegMatcher(name='LogReg', random_state=0),\n #em.SVMMatcher(name='SVM', random_state=0)\n]\n\nfor m in matchers:\n m.fit(table=I_table, exclude_attrs=['_id', 'ltable_id', 'rtable_id','label'], target_attr='label')\n J_table['prediction'] = m.predict(\n table=J_table, \n exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'], \n target_attr='label',\n )\n print(m.name)\n em.print_eval_summary(em.eval_matches(J_table, 'label', 'prediction'))\n J_table.drop('prediction', axis=1, inplace=True)\n print('')\n \nlog_matcher = matchers[0]\n\nJ_table['prediction'] = m.predict(\n table=J_table, \n exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'], \n target_attr='label',\n )\nprint(m.name)\nem.print_eval_summary(em.eval_matches(J_table, 'label', 'prediction'))\nJ_table.drop('prediction', axis=1, inplace=True)\nprint('')\ncandidate_set_C1.csv"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
pombredanne/pythran | docs/examples/Third Party Libraries.ipynb | bsd-3-clause | [
"Using third-party Native Libraries\nSometimes, the functionnality you need are onmy available in third-party native libraries. There's still an opportunity to use them from within Pythran, using Pythran support for capsules. \nPythran Code\nThe pythran code requires function pointers to the third-party functions, passed as parameters to your pythran routine, as in the following:",
"import pythran\n%load_ext pythran.magic\n\n%%pythran \n#pythran export pythran_cbrt(float64(float64), float64)\n\ndef pythran_cbrt(libm_cbrt, val):\n return libm_cbrt(val)",
"In that case libm_cbrt is expected to be a capsule containing the function pointer to libm's cbrt (cube root) function.\nThis capsule can be created using ctypes:",
"import ctypes\n\n# capsulefactory\nPyCapsule_New = ctypes.pythonapi.PyCapsule_New\nPyCapsule_New.restype = ctypes.py_object\nPyCapsule_New.argtypes = ctypes.c_void_p, ctypes.c_char_p, ctypes.c_void_p\n\n# load libm\nlibm = ctypes.CDLL('libm.so.6')\n\n# extract the proper symbol\ncbrt = libm.cbrt\n\n# wrap it\ncbrt_capsule = PyCapsule_New(cbrt, \"double(double)\".encode(), None)",
"The capsule is not usable from Python context (it's some kind of opaque box) but Pythran knows how to use it. beware, it does not try to do any kind of type verification. It trusts your #pythran export line.",
"pythran_cbrt(cbrt_capsule, 8.)",
"With Pointers\nNow, let's try to use the sincos function. It's C signature is void sincos(double, double*, double*). How do we pass that to Pythran?",
"%%pythran\n\n#pythran export pythran_sincos(None(float64, float64*, float64*), float64)\ndef pythran_sincos(libm_sincos, val):\n import numpy as np\n val_sin, val_cos = np.empty(1), np.empty(1)\n libm_sincos(val, val_sin, val_cos)\n return val_sin[0], val_cos[0]",
"There is some magic happening here:\n\n\nNone is used to state the function pointer does not return anything.\n\n\nIn order to create pointers, we actually create empty one-dimensional array and let pythran handle them as pointer. Beware that you're in charge of all the memory checking stuff!\n\n\nApart from that, we can now call our function with the proper capsule parameter.",
"sincos_capsule = PyCapsule_New(libm.sincos, \"unchecked anyway\".encode(), None)\n\npythran_sincos(sincos_capsule, 0.)",
"With Pythran\nIt is naturally also possible to use capsule generated by Pythran. In that case, no type shenanigans is required, we're in our small world.\nOne just need to use the capsule keyword to indicate we want to generate a capsule.",
"%%pythran\n\n## This is the capsule.\n#pythran export capsule corp((int, str), str set)\ndef corp(param, lookup):\n res, key = param\n return res if key in lookup else -1\n\n## This is some dummy callsite\n#pythran export brief(int, int((int, str), str set)):\ndef brief(val, capsule):\n return capsule((val, \"doctor\"), {\"some\"})\n",
"It's not possible to call the capsule directly, it's an opaque structure.",
"try:\n corp((1,\"some\"),set())\nexcept TypeError as e:\n print(e)",
"It's possible to pass it to the according pythran function though.",
"brief(1, corp)",
"With Cython\nThe capsule pythran uses may come from Cython-generated code. This uses a little-known feature from cython: api and __pyx_capi__. nogil is of importance here: Pythran releases the GIL, so better not call a cythonized function that uses it.",
"!find -name 'cube*' -delete\n\n%%file cube.pyx\n#cython: language_level=3\ncdef api double cube(double x) nogil:\n return x * x * x\n\nfrom setuptools import setup\nfrom Cython.Build import cythonize\n\n_ = setup(\n name='cube',\n ext_modules=cythonize(\"cube.pyx\"),\n zip_safe=False,\n # fake CLI call\n script_name='setup.py',\n script_args=['--quiet', 'build_ext', '--inplace']\n)",
"The cythonized module has a special dictionary that holds the capsule we're looking for.",
"import sys\nsys.path.insert(0, '.')\nimport cube\nprint(type(cube.__pyx_capi__['cube']))\n\ncython_cube = cube.__pyx_capi__['cube']\npythran_cbrt(cython_cube, 2.)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub | notebooks/hammoz-consortium/cmip6/models/sandbox-1/toplevel.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: HAMMOZ-CONSORTIUM\nSource ID: SANDBOX-1\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:03\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-1', 'toplevel')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Flux Correction\n3. Key Properties --> Genealogy\n4. Key Properties --> Software Properties\n5. Key Properties --> Coupling\n6. Key Properties --> Tuning Applied\n7. Key Properties --> Conservation --> Heat\n8. Key Properties --> Conservation --> Fresh Water\n9. Key Properties --> Conservation --> Salt\n10. Key Properties --> Conservation --> Momentum\n11. Radiative Forcings\n12. Radiative Forcings --> Greenhouse Gases --> CO2\n13. Radiative Forcings --> Greenhouse Gases --> CH4\n14. Radiative Forcings --> Greenhouse Gases --> N2O\n15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\n16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\n17. Radiative Forcings --> Greenhouse Gases --> CFC\n18. Radiative Forcings --> Aerosols --> SO4\n19. Radiative Forcings --> Aerosols --> Black Carbon\n20. Radiative Forcings --> Aerosols --> Organic Carbon\n21. Radiative Forcings --> Aerosols --> Nitrate\n22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\n23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\n24. Radiative Forcings --> Aerosols --> Dust\n25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\n26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\n27. Radiative Forcings --> Aerosols --> Sea Salt\n28. Radiative Forcings --> Other --> Land Use\n29. Radiative Forcings --> Other --> Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop level overview of coupled model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of coupled model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Genealogy\nGenealogy and history of the model\n3.1. Year Released\nIs Required: TRUE Type: STRING Cardinality: 1.1\nYear the model was released",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. CMIP3 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP3 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. CMIP5 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP5 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Previous Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPreviously known as",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.4. Components Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.5. Coupler\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nOverarching coupling framework for model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5. Key Properties --> Coupling\n**\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of coupling in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Atmosphere Double Flux\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nWhere are the air-sea fluxes calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.4. Atmosphere Relative Winds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Key Properties --> Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.5. Energy Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.6. Fresh Water Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Conservation --> Heat\nGlobal heat convervation properties of the model\n7.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.6. Land Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation --> Fresh Water\nGlobal fresh water convervation properties of the model\n8.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh_water is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh_water is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.6. Runoff\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how runoff is distributed and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.7. Iceberg Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.8. Endoreic Basins\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.9. Snow Accumulation\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Key Properties --> Conservation --> Salt\nGlobal salt convervation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Key Properties --> Conservation --> Momentum\nGlobal momentum convervation properties of the model\n10.1. Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how momentum is conserved in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Radiative Forcings --> Greenhouse Gases --> CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Radiative Forcings --> Greenhouse Gases --> CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14. Radiative Forcings --> Greenhouse Gases --> N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\nTroposheric ozone forcing\n15.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Radiative Forcings --> Greenhouse Gases --> CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Equivalence Concentration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDetails of any equivalence concentrations used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Radiative Forcings --> Aerosols --> SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Radiative Forcings --> Aerosols --> Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Radiative Forcings --> Aerosols --> Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Radiative Forcings --> Aerosols --> Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.3. RFaci From Sulfate Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"24. Radiative Forcings --> Aerosols --> Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Radiative Forcings --> Aerosols --> Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Radiative Forcings --> Other --> Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28.2. Crop Change Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nLand use change represented via crop change only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Radiative Forcings --> Other --> Solar\nSolar forcing\n29.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow solar forcing is provided",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
IanHawke/Southampton-PV-NumericalMethods-2016 | solutions/04-Boundary-Value-Problems.ipynb | mit | [
"Boundary Value Problems\nIn Giles Richardson's notes the simple one-dimensional inorganic solar cell model is presented. This model includes equations for the hole current density $j_n$ and the electron current density $j_p$ in terms of the hole and electron densities $n$ and $p$, the intrinsic carrier density $n_i$, the thermal generation rate $G$, and material dependent constants. After non-dimensionalization we end up with equations of the form\n\\begin{align}\n \\frac{\\text{d} j_p}{\\text{d} x} &= \\Theta \\left( n_i^2 - n p \\right) + G, \\\n \\frac{\\text{d} j_n}{\\text{d} x} &= -\\Theta \\left( n_i^2 - n p \\right) - G.\n\\end{align}\nThese two ODEs require two boundary conditions, which are (thanks to the symmetry of the problem)\n\\begin{align}\n \\left. \\left( j_p - j_n \\right) \\right|{x=0} &=0, \\\n \\left. j_n \\right|{x=1} &= 0.\n\\end{align}\nWe will assume that $n_i$ and $G$ are given constants, and that the charge densities $n, p$ have been found as functions of space already: in this case we'll use\n\\begin{align}\n n &= \\exp(-x-x^2/100) \\left(1 - x^2 \\right)^2, \\\n p & = \\exp(x-x^2/100) \\left(1 - x^2 \\right)^2.\n\\end{align}",
"from __future__ import division\nimport numpy\nfrom matplotlib import pyplot\n%matplotlib notebook\n\ndef n(x):\n return numpy.exp(-x) * numpy.exp(-x**2/100) * (1-x**2)**2\ndef p(x):\n return numpy.exp(x) * numpy.exp(-x**2/100) * (1-x**2)**2\n\n\nx = numpy.linspace(0, 1)\n\npyplot.figure(figsize=(10,6))\npyplot.plot(x, n(x), label=r\"$n$\")\npyplot.plot(x, p(x), label=r\"$p$\")\npyplot.legend()\npyplot.xlabel(r\"$x$\")\npyplot.show()",
"This is a Boundary Value Problem. It's an ordinary differential equation where the boundary conditions are given at different points - here at $x=0$ and $x=1$.\nBoundary value problems can be problematic: even when properly set up (same number of boundary conditions as equations, reasonable domain) they need not have any solutions, or they can have a unique solution, or they can have multiple - even infinitely many - solutions! Adding numerics just adds difficulty. However, it's still perfectly feasible to find solutions, when they exist.\nShooting\nWe can use a lot of the technology and methods we've seen already to solve boundary value problems. This relies on one key feature: if we have a solution to the initial value problem with the same differential equation, with boundary conditions at the start that match the BVP, and a solution that matches the BVP at the end, then it is a solution of the BVP.\nTo phrase that for the problem above: if we have a value $J$ for $j_p$ at $x=0$ then we know (from the boundary condition at $x=0$) that $j_n=J$. We then solve the initial value problem\n\\begin{equation}\n \\frac{\\text{d}}{\\text{d}x} \\begin{pmatrix} j_p \\ j_n \\end{pmatrix} = \\begin{pmatrix} \\Theta \\left( n_i^2 - n p \\right) + G \\ -\\Theta \\left( n_i^2 - n p \\right) - G \\end{pmatrix}, \\qquad \\begin{pmatrix} j_p \\ j_n \\end{pmatrix}(0) = \\begin{pmatrix} J \\ J \\end{pmatrix}.\n\\end{equation}\nThis gives us both $j_p$ and $j_n$ as functions of $x$. Our solutions clearly depend on the initial value $J$. If our value of $J$ is such that $j_n(1) = 0$ then we match the boundary condition at $x=1$. We've then built a solution that solves the differential equation, and matches all the boundary conditions: it is a solution of the BVP.\nWe can solve the initial value problem using any of the techniques used earlier: here we'll use odeint. The solution will be $j_n(x;J)$ and $j_p(x;J)$, showing how the solution depends on the initial data. We can then evaluate this solution at $x=1$: we want\n\\begin{equation}\n F(J) = j_n(1;J) = 0.\n\\end{equation}\nThis is a nonlinear root-finding problem, where evaluating the function whose root we are trying to find involves solving an initial value problem.\nLet's implement this, assuming $\\Theta = 0.9, G = 1, n_i = 0.6$. The critical value of $J$ is between $0$ and $5$.",
"Theta = 0.9\nG = 1\nni = 0.6\n\nfrom scipy.integrate import odeint\nfrom scipy.optimize import brentq\n\ndef f_ivp(y, x):\n jp, jn = y\n dydx = numpy.zeros_like(y)\n dydx[0] = -Theta*(ni**2-n(x)*p(x)) + G\n dydx[1] = -dydx[0]\n return dydx\n\ndef F_root(J):\n y0 = [J, J]\n y = odeint(f_ivp, y0, [0, 1])\n jn1 = y[-1,1]\n return jn1\n\nJ_critical = brentq(F_root, 0, 5)\n\nsolution = odeint(f_ivp, [J_critical, J_critical], x)\npyplot.figure(figsize=(10, 6))\npyplot.plot(x, solution[:,0], label=r\"$j_p$\")\npyplot.plot(x, solution[:,1], label=r\"$j_n$\")\npyplot.legend()\npyplot.xlabel(r\"$x$\")\npyplot.show()",
"Exercise\nRepeat this using Euler's method and bisection to see how much difference it makes."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mabevillar/rmtk | rmtk/vulnerability/derivation_fragility/equivalent_linearization/miranda_2000_firm_soils/miranda_2000_firm_soils.ipynb | agpl-3.0 | [
"Miranda (2000) for firm soils\nThis methodology, proposed in Miranda (2000), aims to estimate the maximum lateral inelastic displacement demands on a structure based on the maximum lateral elastic displacement demands for sites with average shear-wave velocities higher than 180 m/s. A reduction factor based on the displacement ductility ratio and the period of vibration of the system is used to estimate the inelastic displacements, which are then used as inputs to build the fragility model.\nNote: To run the code in a cell:\n\nClick on the cell to select it.\nPress SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.",
"import miranda_2000_firm_soils\nfrom rmtk.vulnerability.common import utils\n%matplotlib inline",
"Load capacity curves\nIn order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.\nPlease provide the location of the file containing the capacity curves using the parameter capacity_curves_file.",
"capacity_curves_file = \"../../../../../../rmtk_data/capacity_curves_Sa-Sd.csv\"\n\ncapacity_curves = utils.read_capacity_curves(capacity_curves_file)\nutils.plot_capacity_curves(capacity_curves)",
"Load ground motion records\nPlease indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.\nNote: Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.\nThe parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.",
"gmrs_folder = '../../../../../../rmtk_data/accelerograms'\nminT, maxT = 0.01, 2.00\n\ngmrs = utils.read_gmrs(gmrs_folder)\n#utils.plot_response_spectra(gmrs, minT, maxT)",
"Load damage state thresholds\nPlease provide the path to your damage model file using the parameter damage_model_file in the cell below.\nThe damage types currently supported are: capacity curve dependent, spectral displacement and interstorey drift. If the damage model type is interstorey drift the user can provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements, otherwise a linear relationship is assumed.",
"damage_model_file = \"../../../../../../rmtk_data/damage_model.csv\"\n\ndamage_model = utils.read_damage_model(damage_model_file)",
"Obtain the damage probability matrix\nThe parameter damping_ratio needs to be defined in the cell below in order to calculate the damage probability matrix.",
"damping_ratio = 0.05\n\nPDM, Sds = miranda_2000_firm_soils.calculate_fragility(capacity_curves, gmrs, damage_model, damping_ratio)",
"Fit lognormal CDF fragility curves\nThe following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:\n1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are \"PGA\", \"Sd\" and \"Sa\".\n2. period: This parameter defines the time period of the fundamental mode of vibration of the structure.\n3. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are \"least squares\" and \"max likelihood\".",
"IMT = \"Sa\"\nperiod = 2.0\nregression_method = \"least squares\"\n\nfragility_model = utils.calculate_mean_fragility(gmrs, PDM, period, damping_ratio, \n IMT, damage_model, regression_method)",
"Plot fragility functions\nThe following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:\n* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions",
"minIML, maxIML = 0.01, 2.00\n\nutils.plot_fragility_model(fragility_model, minIML, maxIML)",
"Save fragility functions\nThe derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:\n1. taxonomy: This parameter specifies a taxonomy string for the the fragility functions.\n2. minIML and maxIML: These parameters define the bounds of applicability of the functions.\n3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are \"csv\" and \"nrml\".",
"taxonomy = \"RC\"\noutput_type = \"csv\"\noutput_path = \"../../../../../../rmtk_data/output/\"\n\nutils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)",
"Obtain vulnerability function\nA vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level. \nThe following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:\n1. cons_model_file: This parameter specifies the path of the consequence model file.\n2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.\n3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are \"lognormal\", \"beta\", and \"PMF\".",
"cons_model_file = \"../../../../../../rmtk_data/cons_model.csv\"\nimls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, \n 0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00]\ndistribution_type = \"lognormal\"\n\ncons_model = utils.read_consequence_model(cons_model_file)\nvulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model, \n imls, distribution_type)",
"Plot vulnerability function",
"utils.plot_vulnerability_model(vulnerability_model)",
"Save vulnerability function\nThe derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:\n1. taxonomy: This parameter specifies a taxonomy string for the the fragility functions.\n3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are \"csv\" and \"nrml\".",
"taxonomy = \"RC\"\noutput_type = \"nrml\"\noutput_path = \"../../../../../../rmtk_data/output/\"\n\nutils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kevinjliang/Duke-Tsinghua-MLSS-2017 | 03B_Generative_Adversarial_Network.ipynb | apache-2.0 | [
"Generative Adversarial Network in Tensorflow\nGenerative Adversarial Networks, introduced by Ian Goodfellow in 2014, are neural nets we can train to produce new images (or other kinds of data) that look as though they came from our true data distribution. In this notebook, we'll implement a small GAN for generating images that look as though they come from the MNIST dataset.\nThe key insight behind the GAN is to pit two neural networks against each other. On the one hand is the Generator, a neural network that takes random noise as input and produces an image as output. On the other hand is the Discriminator, which takes in an image and classifies it as real (from MNIST) or fake (from our Generator). During training, we alternate between training the Generator to fool the Discriminator, and training the Discriminator to call the Generator's bluff.\nImplementing a GAN in Tensorflow will give you practice turning more involved models into working code, and is also a great showcase for Tensorflow's variable scope feature. (Variable scope has made cameos in previous tutorials, but we'll discuss it in a bit more depth here. If you want to see how variable scope is used in TensorFlow Slim, definitely go revisit Kevin Liang's VAE tutorial!)\nImports",
"%matplotlib inline\nimport tensorflow as tf\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport time\n\n# Use if running on a GPU\nconfig = tf.ConfigProto()\nconfig.gpu_options.allow_growth = True\nconfig.log_device_placement = True",
"Loading the data\nAs in previous examples, we'll use MNIST, because it's a small and easy-to-use dataset that comes bundled with Tensorflow.",
"from tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', one_hot=True)",
"Utility functions\nLet's define some utility functions that will help us quickly construct layers for use in our model. There are two things worth noting here:\n\nInstead of tf.Variable, we use tf.get_variable. \n\nThe reason for this is a bit subtle, and you may want to skip this and come back to it once you've seen the rest of the code. Here's the basic explanation. Later on in this notebook, we will call fully_connected_layer from a couple different places. Sometimes, we will want new variables to be added to the graph, because we are creating an entirely new layer of our network. Other times, however, we will want to use the same weights as an already-existing layer, but acting on different inputs. \nFor example, the Discriminator network will appear twice in our computational graph; in one case, the input neurons will be connected to the \"real data\" placeholder (which we will feed MNIST images), and in the other, they will be connected to the output of the Generator. Although these networks form two separate parts of our computational graph, we want them to share the same weights: conceptually, there is one Discriminator function that gets applied twice, not two different functions altogether. Since tf.Variable always creates a new variable when called, it would not be appropriate for use here.\nVariable scoping solves this problem. Whenever we are adding nodes to a graph, we are operating within a scope. Scopes can be named, and you can create a new scope using tf.variable_scope('name') (more on this later). When a scope is open, it can optionally be in reuse mode. The result of calling tf.get_variable depends on whether you are in reuse mode or not. If not (this is the default), tf.get_variable will create a new variable, or cause an error if a variable by the same name already exists in the current scope. If you are in reuse mode, the behavior is the opposite: tf.get_variable will look up and return an existing variable (with the specified name) within your scope, or throw an error if it doesn't exist. By carefully controlling our scopes later on, we can create exactly the graph we want, with variables shared across the graph where appropriate.\n\nThe variables_from_scope function lists all variables created within a given scope. This will be useful later, when we want to update all \"discriminator\" variables, but no \"generator\" variables, or vice versa.",
"def shape(tensor):\n \"\"\"\n Get the shape of a tensor. This is a compile-time operation,\n meaning that it runs when building the graph, not running it.\n This means that it cannot know the shape of any placeholders \n or variables with shape determined by feed_dict.\n \"\"\"\n return tuple([d.value for d in tensor.get_shape()])\n\ndef fully_connected_layer(in_tensor, out_units, activation_function=tf.nn.relu):\n \"\"\"\n Add a fully connected layer to the default graph, taking as input `in_tensor`, and\n creating a hidden layer of `out_units` neurons. This should be called within a unique variable\n scope. Creates variables W and b, and computes activation_function(in * W + b).\n \"\"\"\n _, num_features = shape(in_tensor)\n W = tf.get_variable(\"weights\", [num_features, out_units], initializer=tf.truncated_normal_initializer(stddev=0.1))\n b = tf.get_variable(\"biases\", [out_units], initializer=tf.constant_initializer(0.1))\n return activation_function(tf.matmul(in_tensor, W) + b)\n\ndef variables_from_scope(scope_name):\n \"\"\"\n Returns a list of all variables in a given scope. This is useful when\n you'd like to back-propagate only to weights in one part of the network\n (in our case, the generator or the discriminator).\n \"\"\"\n return tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=scope_name)",
"We'll also provide a simple function for displaying a few 28-pixel images. This will help us understand the progress of our GAN as it trains; we'll use it to visualize the generated 'fake digit' images.",
"def visualize_row(images, img_width=28, cmap='gray'):\n \"\"\"\n Takes in a tensor of images of given width, and displays them in a column\n in a plot, using `cmap` to map from numbers to colors.\n \"\"\"\n im = np.reshape(images, [-1, img_width])\n plt.figure()\n plt.imshow(im, cmap=cmap)\n plt.show()",
"Generator\nA GAN is made up of two smaller networks: a generator and a discriminator. The generator is responsible for sampling images from a distribution that we hope will get closer and closer, as we train, to the real data distribution.\nNeural networks are deterministic, so in order to sample a new image from the generator, we first create some random noise z (in our case, z will be a 100-dimensional uniform random variable) and then feed that noise to the network. You can think of z as being a latent, low-dimensional representation of some image G(z), though in a vanilla GAN, it is usually difficult to interpret z's components in a meaningful way.\nOur generator is a dead-simple multi-layer perceptron (feed-forward network), with 128 hidden units.",
"def generator(z):\n \"\"\"\n Given random noise `z`, use a simple MLP with 128 hidden units to generate a\n sample image (784 values between 0 and 1, enforced with the sigmoid function).\n \"\"\"\n with tf.variable_scope(\"fc1\"):\n fc1 = fully_connected_layer(z, 128)\n with tf.variable_scope(\"fc2\"):\n return fully_connected_layer(fc1, 784, activation_function=tf.sigmoid)",
"Discriminator\nAlthough it isn't necesssary, it makes some sense for our discriminator to mirror the generator's architecture, as we do here. The discriminator takes in an image (perhaps a real one from the MNIST dataset, perhaps a fake one from our generator), and attempts to classify it as real (1) or fake (0). Our architecture is again a simple MLP, taking 784 pixels down to 128 hidden units, and finally down to a probability.",
"def discriminator(x):\n \"\"\"\n This discriminator network takes in a tensor with shape [batch, 784], and classifies\n each example image as real or fake. The network it uses is quite simple: a fully connected\n layer with ReLU activation takes us down to 128 dimensions, then we collapse that to 1 number \n in [0, 1] using a fully-connected layer with sigmoid activation. The result can be interpreted\n as a probability, the discriminator's strength-of-belief that a sample is from the \n real data distribution.\n \"\"\"\n with tf.variable_scope(\"fc1\"):\n fc1 = fully_connected_layer(x, 128)\n with tf.variable_scope(\"fc2\"):\n return fully_connected_layer(fc1, 1, activation_function=tf.sigmoid)",
"GAN\nGiven a generator and discriminator, we can now set up the GAN's computational graph. \nWe use Tensorflow's variable scope feature for two purposes. \n\n\nFirst, it helps separate the variables used by the generator and by the discriminator; this is important, because when training, we want to alternate between updating each set of variables according to a different objective. \n\n\nSecond, scoping helps us reuse the same set of discriminator weights both for the operations we perform on real images and for those performed on fake images. To achieve this, after calling discriminator for the first time (and creating these weight variables), we tell our current scope to reuse_variables(), meaning that on our next call to discriminator, existing variables will be reused rather than creating new ones.",
"def gan(batch_size, z_dim):\n \"\"\"\n Given some details about the training procedure (batch size, dimension of z),\n this function sets up the rest of the computational graph for the GAN.\n It returns a dictionary containing six ops/tensors: `train_d` and `train_g`, the \n optimization steps for the discriminator and generator, `real_data` and `noise`, \n two placeholders that should be fed in during training, `d_loss`, the discriminator loss\n (useful for estimating progress toward convergence), and `fake_data`, which can be \n evaluated (with noise in the feed_dict) to sample from the generator's distribution.\n \"\"\"\n z = tf.placeholder(tf.float32, [batch_size, z_dim], name='z')\n x = tf.placeholder(tf.float32, [batch_size, 784], name='x')\n\n with tf.variable_scope('generator'):\n fake_x = generator(z)\n\n with tf.variable_scope('discriminator') as scope:\n d_on_real = discriminator(x)\n scope.reuse_variables()\n d_on_fake = discriminator(fake_x)\n\n g_loss = -tf.reduce_mean(tf.log(d_on_fake))\n d_loss = -tf.reduce_mean(tf.log(d_on_real) + tf.log(1. - d_on_fake))\n\n optimize_d = tf.train.AdamOptimizer().minimize(d_loss, var_list=variables_from_scope(\"discriminator\"))\n optimize_g = tf.train.AdamOptimizer().minimize(g_loss, var_list=variables_from_scope(\"generator\"))\n\n return {'train_d': optimize_d,\n 'train_g': optimize_g,\n 'd_loss': d_loss,\n 'fake_data': fake_x,\n 'real_data': x,\n 'noise': z}",
"Training a GAN\nOur training procedure is a bit more involved than in past demos. Here are the main differences:\n1. Each iteration, we first train the generator, then (separately) the discriminator.\n2. Each iteration, we need to feed in a batch of images, just as in previous notebooks. But we also need a batch of noise samples. For this, we use Numpy's np.random.uniform function.\n3. Every 1000 iterations, we log some data to the console and visualize a few samples from our generator.",
"def train_gan(iterations, batch_size=50, z_dim=100):\n \"\"\"\n Construct and train the GAN.\n \"\"\"\n model = gan(batch_size=batch_size, z_dim=z_dim)\n \n def make_noise():\n return np.random.uniform(-1.0, 1.0, [batch_size, z_dim])\n\n def next_feed_dict():\n return {model['real_data']: mnist.train.next_batch(batch_size)[0],\n model['noise']: make_noise()}\n \n initialize_all = tf.global_variables_initializer()\n with tf.Session(config=config) as sess:\n sess.run(initialize_all)\n start_time = time.time()\n for t in range(iterations):\n sess.run(model['train_g'], feed_dict=next_feed_dict())\n _, d_loss = sess.run([model['train_d'], model['d_loss']], feed_dict=next_feed_dict())\n\n if t % 1000 == 0 or t+1 == iterations:\n fake_data = sess.run(model['fake_data'], feed_dict={model['noise']: make_noise()})\n print('Iter [%8d] Time [%5.4f] d_loss [%.4f]' % (t, time.time() - start_time, d_loss))\n visualize_row(fake_data[:5])",
"Moment of truth\nIt's time to run our GAN! Watch as it learns to draw recognizable digits in about three minutes.",
"train_gan(25000)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
OSGeo-live/CesiumWidget | GSOC/notebooks/Projects/GRASS/python-grass-addons/01_scripting_library.ipynb | apache-2.0 | [
"Introduction to the GRASS GIS 7 Python Scripting Library\nThe GRASS GIS 7 Python Scripting Library provides functions to call GRASS modules within scripts as subprocesses.\nThe most often used functions include:\n* run_command: most often used with modules which output raster/vector data where text output is not expected\n* read_command: used when we are interested in text output\n* parse_command: used with modules producing text output as key=value pair\n* write_command: for modules expecting text input from either standard input or file\nBesides, this library provides several wrapper functions for often called modules.\nCalling GRASS GIS modules\nWe start by importing GRASS GIS Python Scripting Library:",
"import grass.script as gscript",
"Before running any GRASS raster modules, you need to set the computational region using g.region. In this example, we set the computational extent and resolution to the raster layer elevation.",
"gscript.run_command('g.region', raster='elevation')",
"The run_command() function is the most commonly used one. Here, we apply the focal operation average (r.neighbors) to smooth the elevation raster layer. Note that the syntax is similar to bash syntax, just the flags are specified in a parameter.",
"gscript.run_command('r.neighbors', input='elevation', output='elev_smoothed', method='average', flags='c')",
"Specifics of interactive interpreters\nWhen using the GUI, we can look at the map right away. In scripts, we rarely render a map. In IPython Notebook we are able to show a map as a result. For this we use function view() from a custom Python module render supplied with these notebooks. (This function might be part of GRASS GIS Python API for IPython in the future.)",
"from render import view\nview(rasters=['elev_smoothed'])",
"To simplify the re-running of examples, we set the environmental variable GRASS_OVERWRITE,\nwhich allows direct overwriting of results from previous runs, bypassing the overwrite checks.",
"import os\nos.environ['GRASS_OVERWRITE'] = '1'",
"When an unrecoverable error occurs (due to incorrect parameter use or other issues), the GRASS GIS functions usually print the error message and end the program execution (by calling the exit() function). However, when working in an interactive environment such as IPython, the behavior can be changed using set_raise_on_error() function. The following call will cause GRASS GIS Python functions to raise an exception instead of calling exit().",
"gscript.set_raise_on_error(True)",
"Calling GRASS GIS modules with textual input or output\nTextual output from modules can be captured using the read_command() function.",
"print(gscript.read_command('g.region', flags='p'))\n\nprint(gscript.read_command('r.univar', map='elev_smoothed', flags='g'))",
"Certain modules can produce output in key-value format which is enabled by the g flag. The parse_command() function automatically parses this output and returns a dictionary. In this example, we call g.proj to display the projection parameters of the actual location.",
"gscript.parse_command('g.proj', flags='g')",
"For comparison, below is the same example, but using the read_command() function.",
"print(gscript.read_command('g.proj', flags='g'))",
"Certain modules require the text input be in a file or provided as standard input. Using the write_command() function we can conveniently pass the string to the module. Here, we are creating a new vector with one point with v.in.ascii.\nNote that stdin parameter is not used as a module parameter, but its content is passed as standard input to the subprocess.",
"gscript.write_command('v.in.ascii', input='-', stdin='%s|%s' % (635818.8, 221342.4), output='view_point')",
"Convenient wrapper functions\nSome modules have wrapper functions to simplify frequent tasks.\nWe can obtain the information about the vector layer which we just created with the v.info wrapper.",
"gscript.vector_info('view_point')",
"It is also possible to retrieve the raster layer history (r.support) and layer information (r.info) or to query (r.what) raster layer pixel values.",
"gscript.raster_what('elevation', [[635818.8, 221342.4], [635710, 221330]])",
"As another example, the r.mapcalc wrapper for raster algebra allows using a long expressions.",
"gscript.mapcalc(\"elev_strip = if(elevation > 100 && elevation < 125, elevation, null())\")\nprint(gscript.read_command('r.univar', map='elev_strip', flags='g'))",
"The g.region wrapper is a convenient way to retrieve the current region settings (i.e., computational region). It returns a dictionary with values converted to appropriate types (floats and ints).",
"region = gscript.region()\nprint region\n# cell area in map units (in projected Locations)\nregion['nsres'] * region['ewres']",
"We can list data stored in a GRASS GIS location with g.list wrappers. With this function, the map layers are grouped by mapsets (in this example, raster layers):",
"gscript.list_grouped(['raster'])",
"Here is an example of a different g.list wrapper which structures the output as list of pairs (name, mapset).\nWe obtain current mapset with g.gisenv wrapper.",
"current_mapset = gscript.gisenv()['MAPSET']\ngscript.list_pairs('raster', mapset=current_mapset)",
"Exercise\nDerive a new slope layer from the raster layer elevation using r.slope.aspect.\nThen calculate and report the slope average and median values (hint: see r.univar).\nExport all raster layers from your mapset with a name prefix \"elev_*\" as GeoTiff (see r.out.gdal). Don't forget to set the current region for each map in order to match the individual exported raster layer extents and resolutions since they may differ from each other."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
azjps/bokeh | examples/howto/charts/scatter.ipynb | bsd-3-clause | [
"from bokeh.sampledata.autompg import autompg as df\nfrom bokeh.sampledata.olympics2014 import data\nfrom bokeh.sampledata.iris import flowers\n\nfrom bokeh.charts import Scatter, output_notebook, show\nfrom bokeh.charts.operations import blend\nfrom bokeh.charts.utils import df_from_json\nimport pandas as pd\n\noutput_notebook()\n\nscatter0 = Scatter(\n df, x='mpg', title=\"x='mpg'\", xlabel=\"Miles Per Gallon\")\nshow(scatter0)\n\nscatter1 = Scatter(\n df, x='mpg', y='hp', title=\"x='mpg', y='hp'\",\n xlabel=\"Miles Per Gallon\", ylabel=\"Horsepower\", legend='top_right')\nshow(scatter1)\n\nscatter2 = Scatter(\n df, x='mpg', y='hp', color='cyl', title=\"x='mpg', y='hp', color='cyl'\",\n xlabel=\"Miles Per Gallon\", ylabel=\"Horsepower\", legend='top_right')\nshow(scatter2)\n\nscatter3 = Scatter(\n df, x='mpg', y='hp', color='origin', title=\"x='mpg', y='hp', color='origin'\",\n xlabel=\"Miles Per Gallon\", ylabel=\"Horsepower\", legend='top_right')\nshow(scatter3)\n\nscatter4 = Scatter(\n df, x='mpg', y='hp', color='cyl', marker='origin', title=\"x='mpg', y='hp', color='cyl', marker='origin'\",\n xlabel=\"Miles Per Gallon\", ylabel=\"Horsepower\", legend='top_right')\nshow(scatter4)",
"Example with nested json/dict like data, which has been pre-aggregated and pivoted",
"df2 = df_from_json(data)\ndf2 = df2.sort('total', ascending=False)\ndf2 = df2.head(10)\ndf2 = pd.melt(df2, id_vars=['abbr', 'name'])\n\nscatter5 = Scatter(\n df2, x='value', y='name', color='variable', title=\"x='value', y='name', color='variable'\",\n xlabel=\"Medals\", ylabel=\"Top 10 Countries\", legend='bottom_right')\nshow(scatter5)",
"Use blend operator to \"stack\" variables",
"scatter6 = Scatter(flowers, x=blend('petal_length', 'sepal_length', name='length'),\n y=blend('petal_width', 'sepal_width', name='width'), color='species',\n title='x=petal_length+sepal_length, y=petal_width+sepal_width, color=species',\n legend='top_right')\nshow(scatter6)"
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/spectral-density | tf/mnist_spectral_density.ipynb | apache-2.0 | [
"MNIST Hessian Spectral Density Calculator\nThis notebook trains a simple MLP for MNIST, runs the Lanczos algorithm on its full-batch Hessian, and then plots the spectral density. This shows how to use the python TensorFlow LanczosExperiment class.",
"import os\nimport sys\n\nimport matplotlib.pyplot as plt\n\nimport numpy as np\nimport tensorflow as tf\n\nimport experiment_utils\nimport lanczos_experiment\nimport tensorflow_datasets as tfds\n\nsys.path.insert(0, os.path.abspath(\"./../jax\"))\nimport density\n\nCOLAB_PATH = '/tmp/spectral-density'\nTRAIN_PATH = os.path.join(COLAB_PATH, 'train')\nLANCZOS_PATH = os.path.join(COLAB_PATH, 'lanczos')\n\nos.makedirs(TRAIN_PATH)\nos.makedirs(LANCZOS_PATH)\n\nIMAGE_SIZE = 28\nNUM_CLASSES = 10\n\nBATCH_SIZE = 32\nLEARNING_RATE = 0.02\n\nNUM_TRAIN_STEPS = 10000\nNUM_SUMMARIZE_STEPS = 1000\nNUM_LANCZOS_STEPS = 90\n\ndef data_fn(num_epochs=None, shuffle=False, initializable=False):\n \"\"\"Returns tf.data dataset for MNIST.\"\"\"\n dataset = tfds.load(name=\"mnist\", split=tfds.Split.TRAIN)\n dataset = dataset.repeat(num_epochs)\n \n if shuffle:\n dataset = dataset.shuffle(buffer_size=1024)\n dataset = dataset.batch(BATCH_SIZE)\n\n if initializable: \n iterator = dataset.make_initializable_iterator()\n init_op = iterator.initializer\n else:\n iterator = dataset.make_one_shot_iterator()\n init_op = None\n \n output = iterator.get_next() \n images = (tf.to_float(output['image']) - 128) / 128.0\n one_hot_labels = tf.one_hot(output['label'], NUM_CLASSES) \n return images, one_hot_labels, init_op\n\ndef model_fn(features, one_hot_labels):\n \"\"\"Builds MLP for MNIST and computes loss.\n\n Args:\n features: a [batch_size, height, width, channels] float32 tensor.\n one_hot_labels: A [batch_size, NUM_CLASSES] int tensor.\n \n Returns:\n A scalar loss tensor, and a [batch_size, NUM_CLASSES] prediction tensor.\n \"\"\"\n net = tf.reshape(features, [BATCH_SIZE, IMAGE_SIZE * IMAGE_SIZE])\n net = tf.layers.dense(net, 256, activation=tf.nn.relu)\n net = tf.layers.dense(net, 256, activation=tf.nn.relu)\n net = tf.layers.dense(net, NUM_CLASSES)\n \n loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(\n logits=net, labels=one_hot_labels))\n \n return loss, tf.nn.softmax(net)",
"Train a MNIST model.",
"tf.reset_default_graph()\n\nimages, one_hot_labels, _ = data_fn(num_epochs=None, shuffle=True, initializable=False) \n\nloss, predictions = model_fn(images, one_hot_labels)\n\naccuracy = tf.reduce_mean(tf.to_float(tf.equal(tf.math.argmax(predictions, axis=1),\n tf.math.argmax(one_hot_labels, axis=1))))\n\ntrain_op = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(loss)\nsaver = tf.train.Saver(max_to_keep=None)\n\n# Simple training loop that saves the model checkpoint every 1000 steps.\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n for i in range(NUM_TRAIN_STEPS):\n if i % NUM_SUMMARIZE_STEPS == 0:\n saver.save(sess, os.path.join(TRAIN_PATH, 'model.ckpt'), global_step=i)\n \n outputs = sess.run([loss, train_op])\n \n if i % NUM_SUMMARIZE_STEPS == 0:\n print 'Step: ', i, 'Loss: ', outputs[0] \n \n # Save a final checkpoint.\n saver.save(sess, os.path.join(TRAIN_PATH, 'model.ckpt'), \n global_step=NUM_TRAIN_STEPS)\n\n# Check that the model fits the training data.\nwith tf.Session() as sess:\n saver.restore(sess, os.path.join(TRAIN_PATH, 'model.ckpt-10000'))\n \n minibatch_accuracy = 0.0\n for i in range(100):\n minibatch_accuracy += sess.run(accuracy) / 100\n \nprint 'Accuracy on training data:', minibatch_accuracy ",
"Run Lanczos on the MNIST model.",
"tf.reset_default_graph()\n\ncheckpoint_to_load = os.path.join(TRAIN_PATH, 'model.ckpt-10000')\n\n# For Lanczos, the tf.data pipeline should have some very specific characteristics:\n# 1. It should stop after a single epoch.\n# 2. It should be deterministic (i.e., no data augmentation).\n# 3. It should be initializable (we use it to restart the pipeline for each Lanczos iteration).\nimages, one_hot_labels, init = data_fn(num_epochs=1, shuffle=False, initializable=True)\n\nloss, _ = model_fn(images, one_hot_labels)\n\n# Setup for Lanczos mode.\nrestore_specs = [\n experiment_utils.RestoreSpec(tf.trainable_variables(),\n checkpoint_to_load)]\n\n# This callback is used to restart the tf.data pipeline for each Lanczos\n# iteration on each worker (the chief has a slightly different callback). You \n# can check the logs to see the status of the computation: new \n# phases of Lanczos iteration are indicated by \"New phase i\", and local steps \n# per worker are logged with \"Local step j\".\ndef end_of_input(sess, train_op):\n try:\n sess.run(train_op)\n except tf.errors.OutOfRangeError:\n sess.run(init)\n return True\n return False\n\n# This object stores the state for the phases of the Lanczos iteration.\nexperiment = lanczos_experiment.LanczosExperiment(\n loss, \n worker=0, # These two flags will change when the number of workers > 1.\n num_workers=1,\n save_path=LANCZOS_PATH, \n end_of_input=end_of_input,\n lanczos_steps=NUM_LANCZOS_STEPS,\n num_draws=1,\n output_address=LANCZOS_PATH)\n\n# For distributed training, there are a few options:\n# Multi-gpu single worker: Partition the tf.data per tower of the model, and pass the aggregate\n# loss to the LanczosExperiment class.\n# Multi-gpu multi worker: Set num_workers in LanczosExperiment to be equal to the number of workers.\n\n# These have to be ordered.\ntrain_op = experiment.get_train_op()\nsaver = experiment.get_saver(checkpoint_to_load, restore_specs)\ninit_fn = experiment.get_init_fn()\ntrain_fn = experiment.get_train_fn()\nlocal_init_op = tf.group(tf.local_variables_initializer(), init)\n\ntrain_step_kwargs = {}\n\n# The LanczosExperiment class is designed with slim in mind since it gives us\n# very specific control of the main training loop.\ntf.contrib.slim.learning.train(\n train_op,\n train_step_kwargs=train_step_kwargs,\n train_step_fn=train_fn,\n logdir=LANCZOS_PATH,\n is_chief=True,\n init_fn=init_fn,\n local_init_op=local_init_op,\n global_step=tf.zeros([], dtype=tf.int64), # Dummy global step.\n saver=saver,\n save_interval_secs=0, # The LanczosExperiment class controls saving.\n summary_op=None, # DANGER DANGER: Do not change this.\n summary_writer=None)\n\n# This cell takes a little time to run: maybe 7 mins.",
"Visualize the Hessian eigenvalue density.",
"# Outputs are saved as numpy saved files. The most interesting ones are \n# 'tridiag_1' and 'lanczos_vec_1'.\nwith open(os.path.join(LANCZOS_PATH, 'tridiag_1'), 'rb') as f:\n tridiagonal = np.load(f)\n\n # For legacy reasons, we need to squeeze tridiagonal.\n tridiagonal = np.squeeze(tridiagonal)\n # Note that the output shape is [NUM_LANCZOS_STEPS, NUM_LANCZOS_STEPS].\n print tridiagonal.shape\n\n# The function tridiag_to_density computes the density (i.e., trace estimator \n# the standard Gaussian c * exp(-(x - t)**2.0 / 2 sigma**2.0) where t is \n# from a uniform grid. Passing a reasonable sigma**2.0 to this function is \n# important -- somewhere between 1e-3 and 1e-5 seems to work best.\ndensity, grids = density.tridiag_to_density([tridiagonal])\n\n# We add a small epsilon to make the plot not ugly.\nplt.semilogy(grids, density + 1.0e-7)\nplt.xlabel('$\\lambda$')\nplt.ylabel('Density')\nplt.title('MNIST hessian eigenvalue density at step 10000')",
"Note that this is only one draw so not all the individual peaks are the exact same height, we can make this more accurate by taking more draws.\nExercise left to reader: run multiple draws and see what the density looks like!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jamesfolberth/jupyterhub_AWS_deployment | notebooks/20Q/setup_sportsDataset.ipynb | bsd-3-clause | [
"Loads the sports data\nRun this script to load the data. Your job after loading the data is to make a 20 questions style game (see www.20q.net )\nThis dataset is a list of 25 sports, each rated (by Stephen) with a yes/no answer to each of 13 questions\nKnowing the answers to all 13 questions uniquely identifies each sport. Can you do it in less than 13 questions? In fewer questions than the trained decision tree?\nRead in the list of sports\nThere should be 25 sports. We can print them out, so you know what the choices are",
"import csv\nsports = [] # This is a python \"list\" data structure (it is \"mutable\")\n# The file has a list of sports, one per line.\n# There are spaces in some names, but no commas or weird punctuation\nwith open('data/SportsDataset_ListOfSports.csv','r') as csvfile:\n myreader = csv.reader(csvfile)\n for index, row in enumerate( myreader ):\n sports.append(' '.join(row) ) # the join() call merges all fields\n# Make a look-up table: if you input the name of the sport, it tells you the index\n# Also, print out a list of all the sports, to make sure it looks OK\nSport2Index = {}\nfor ind, sprt in enumerate( sports ):\n Sport2Index[sprt] = ind\n print('Sport #', ind,'is',sprt)\n# And example usage of the index lookup:\nprint('The sport \"', sports[7],'\" has 0-based index', Sport2Index[sports[7]])",
"Read in the list of questions/attributes\nThere were 13 questions",
"# this csv file has only a single row\nquestions = []\nwith open('data/SportsDataset_ListOfAttributes.csv','r') as csvfile:\n myreader = csv.reader( csvfile )\n for row in myreader:\n questions = row\nQuestion2Index = {}\nfor ind, quest in enumerate( questions ):\n Question2Index[quest] = ind\n print('Question #', ind,': ',quest)\n# And example usage of the index lookup:\nprint('The question \"', questions[10],'\" has 0-based index', Question2Index[questions[10]])",
"Read in the training data\nThe columns of X correspond to questions, and rows correspond to more data. The rows of y are the movie indices. The values of X are 1, -1 or 0 (see YesNoDict for encoding)",
"YesNoDict = { \"Yes\": 1, \"No\": -1, \"Unsure\": 0, \"\": 0 }\n# Load from the csv file.\n# Note: the file only has \"1\"s, because blanks mean \"No\"\n\nX = []\nwith open('data/SportsDataset_DataAttributes.csv','r') as csvfile:\n myreader = csv.reader(csvfile)\n for row in myreader:\n data = [];\n for col in row:\n data.append( col or \"-1\")\n X.append( list(map(int,data)) ) # integers, not strings\n\n# This data file is listed in the same order as the sports\n# The variable \"y\" contains the index of the sport\ny = range(len(sports)) # this doesn't work\ny = list( map(int,y) ) # Instead, we need to ask python to really enumerate it!",
"Your turn: train a decision tree classifier",
"from sklearn import tree\n# the rest is up to you",
"Use the trained classifier to play a 20 questions game\nYou may want to use from sklearn.tree import _tree and 'tree.DecisionTreeClassifier' with commands like tree_.children_left[node], tree_.value[node], tree_.feature[node], and `tree_.threshold[node]'.",
"# up to you"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mercybenzaquen/foundations-homework | foundations_hw/05/Homework5_Graded.ipynb | mit | [
"What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?",
"#my IPA key b577eb5b46ad4bec8ee159c89208e220\n#base url http://api.nytimes.com/svc/books/{version}/lists",
"graded = 8/8",
"import requests\nresponse = requests.get(\"http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2009-05-10&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nbest_seller = response.json()\nprint(best_seller.keys())\n\n\nprint(type(best_seller))\n\nprint(type(best_seller['results']))\n\nprint(len(best_seller['results']))\n\nprint(best_seller['results'][0])\n\nmother_best_seller_results_2009 = best_seller['results']\n\nfor item in mother_best_seller_results_2009:\n print(\"This books ranks #\", item['rank'], \"on the list\") #just to make sure they are in order\n for book in item['book_details']:\n print(book['title'])\n \n \n\nprint(\"The top 3 books in the Hardcover fiction NYT best-sellers on Mother's day 2009 were:\")\nfor item in mother_best_seller_results_2009:\n if item['rank']< 4: #to get top 3 books on the list\n for book in item['book_details']:\n print(book['title'])\n \n\nimport requests\nresponse = requests.get(\"http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2010-05-09&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nbest_seller_2010 = response.json()\nprint(best_seller.keys())\n\nprint(best_seller_2010['results'][0])\n\nmother_best_seller_2010_results = best_seller_2010['results']\n\nprint(\"The top 3 books in the Hardcover fiction NYT best-sellers on Mother's day 2010 were:\")\nfor item in mother_best_seller_2010_results:\n if item['rank']< 4: #to get top 3 books on the list\n for book in item['book_details']:\n print(book['title'])\n \n\nimport requests\nresponse = requests.get(\"http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2009-06-21&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nbest_seller = response.json()\n\n\nfather_best_seller_results_2009 = best_seller['results']\nprint(\"The top 3 books in the Hardcover fiction NYT best-sellers on Father's day 2009 were:\")\nfor item in father_best_seller_results_2009:\n if item['rank']< 4: #to get top 3 books on the list\n for book in item['book_details']:\n print(book['title'])\n \n\nimport requests\nresponse = requests.get(\"http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2010-06-20&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nbest_seller = response.json()\n\n\nfather_best_seller_results_2010 = best_seller['results']\nprint(\"The top 3 books in the Hardcover fiction NYT best-sellers on Father's day 2010 were:\")\nfor item in father_best_seller_results_2010:\n if item['rank']< 4: #to get top 3 books on the list\n for book in item['book_details']:\n print(book['title'])\n ",
"2) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?",
"import requests\nresponse = requests.get(\"http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=2009-06-06&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nbest_seller = response.json()\n\n\n\nprint(best_seller.keys())\n\nprint(len(best_seller['results']))\n\nbook_categories_2009 = best_seller['results']\n\n\nfor item in book_categories_2009:\n print(item['display_name'])\n\nimport requests\nresponse = requests.get(\"http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=2015-06-06&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nbest_seller = response.json()\n\nprint(len(best_seller['results']))\n\n\nbook_categories_2015 = best_seller['results']\nfor item in book_categories_2015:\n print(item['display_name'])",
"3) Muammar Gaddafi's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names?\nTip: Add \"Libya\" to your search to make sure (-ish) you're talking about the right guy.",
"import requests\nresponse = requests.get(\"http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gadafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220\")\ngadafi = response.json()\n\nprint(gadafi.keys())\nprint(gadafi['response'])\nprint(gadafi['response'].keys())\nprint(gadafi['response']['docs']) #so no results for GADAFI. \n\nprint('The New York times has not used the name Gadafi to refer to Muammar Gaddafi')\n\n\nimport requests\nresponse = requests.get(\"http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gaddafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220\")\ngaddafi = response.json()\n\nprint(gaddafi.keys())\nprint(gaddafi['response'].keys())\nprint(type(gaddafi['response']['meta']))\nprint(gaddafi['response']['meta'])\n\nprint(\"'The New York times used the name Gaddafi to refer to Muammar Gaddafi\", gaddafi['response']['meta']['hits'], \"times\")\n\nimport requests\nresponse = requests.get(\"http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Kadafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nkadafi = response.json()\nprint(kadafi.keys())\nprint(kadafi['response'].keys())\nprint(type(kadafi['response']['meta']))\nprint(kadafi['response']['meta'])\n\n\nprint(\"'The New York times used the name Kadafi to refer to Muammar Gaddafi\", kadafi['response']['meta']['hits'], \"times\")\n\nimport requests\nresponse = requests.get(\"http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Qaddafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nqaddafi = response.json()\n\nprint(qaddafi.keys())\nprint(qaddafi['response'].keys())\nprint(type(qaddafi['response']['meta']))\nprint(qaddafi['response']['meta'])\n\nprint(\"'The New York times used the name Qaddafi to refer to Muammar Gaddafi\", qaddafi['response']['meta']['hits'], \"times\")",
"4) What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph?",
"import requests\nresponse = requests.get(\"https://api.nytimes.com/svc/search/v2/articlesearch.json?q=hipster&begin_date=19950101&end_date=19953112&sort=oldest&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nhipster = response.json()\n\n\nprint(hipster.keys())\nprint(hipster['response'].keys())\nprint(hipster['response']['docs'][0])\nhipster_info= hipster['response']['docs']\n\n\nprint('These articles all had the word hipster in them and were published in 1995') #ordered from oldest to newest\nfor item in hipster_info:\n print(item['headline']['main'], item['pub_date'])\n\nfor item in hipster_info:\n if item['headline']['main'] == \"SOUND\":\n \n print(\"This is the first article to mention the word hispter in 1995 and was titled:\", item['headline']['main'],\"and it was publised on:\", item['pub_date'])\n print(\"This is the lead paragraph of\", item['headline']['main'],item['lead_paragraph'])\n ",
"5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1978, 1980-1989, 1990-2099, 2000-2009, and 2010-present?",
"import requests\nresponse = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=\"gay marriage\"&begin_date=19500101&end_date=19593112&api-key=b577eb5b46ad4bec8ee159c89208e220')\nmarriage_1959 = response.json()\n\nprint(marriage_1959.keys())\nprint(marriage_1959['response'].keys())\nprint(marriage_1959['response']['meta'])\nprint(\"___________\")\nprint(\"Gay marriage was mentioned\", marriage_1959['response']['meta']['hits'], \"between 1950-1959\")\n\nimport requests\nresponse = requests.get(\"https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19600101&end_date=19693112&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nmarriage_1969 = response.json()\n\nprint(marriage_1969.keys())\nprint(marriage_1969['response'].keys())\nprint(marriage_1969['response']['meta'])\nprint(\"___________\")\nprint(\"Gay marriage was mentioned\", marriage_1969['response']['meta']['hits'], \"between 1960-1969\")\n\nimport requests\nresponse = requests.get(\"https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19700101&end_date=19783112&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nmarriage_1978 = response.json()\n\nprint(marriage_1978.keys())\nprint(marriage_1978['response'].keys())\nprint(marriage_1978['response']['meta'])\nprint(\"___________\")\nprint(\"Gay marriage was mentioned\", marriage_1978['response']['meta']['hits'], \"between 1970-1978\")\n\nimport requests\nresponse = requests.get(\"https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19800101&end_date=19893112&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nmarriage_1989 = response.json()\n\nprint(marriage_1989.keys())\nprint(marriage_1989['response'].keys())\nprint(marriage_1989['response']['meta'])\nprint(\"___________\")\nprint(\"Gay marriage was mentioned\", marriage_1989['response']['meta']['hits'], \"between 1980-1989\")\n\nimport requests\nresponse = requests.get(\"https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19900101&end_date=20003112&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nmarriage_2000 = response.json()\n\nprint(marriage_2000.keys())\nprint(marriage_2000['response'].keys())\nprint(marriage_2000['response']['meta'])\nprint(\"___________\")\nprint(\"Gay marriage was mentioned\", marriage_2000['response']['meta']['hits'], \"between 1990-2000\")\n\nimport requests\nresponse = requests.get(\"https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=20000101&end_date=20093112&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nmarriage_2009 = response.json()\n\nprint(marriage_2009.keys())\nprint(marriage_2009['response'].keys())\nprint(marriage_2009['response']['meta'])\nprint(\"___________\")\nprint(\"Gay marriage was mentioned\", marriage_2009['response']['meta']['hits'], \"between 2000-2009\")\n\n\nimport requests\nresponse = requests.get(\"https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=20100101&end_date=20160609&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nmarriage_2016 = response.json()\n\nprint(marriage_2016.keys())\nprint(marriage_2016['response'].keys())\nprint(marriage_2016['response']['meta'])\nprint(\"___________\")\nprint(\"Gay marriage was mentioned\", marriage_2016['response']['meta']['hits'], \"between 2010-present\")\n\n",
"6) What section talks about motorcycles the most?\nTip: You'll be using facets",
"import requests\nresponse = requests.get(\"http://api.nytimes.com/svc/search/v2/articlesearch.json?q=motorcycles&facet_field=section_name&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nmotorcycles = response.json()\n\n\n\nprint(motorcycles.keys())\n\nprint(motorcycles['response'].keys())\n\nprint(motorcycles['response']['facets']['section_name']['terms'])\n\nmotorcycles_info= motorcycles['response']['facets']['section_name']['terms']\nprint(motorcycles_info)\nprint(\"These are the sections that talk the most about motorcycles:\")\nprint(\"_________________\")\nfor item in motorcycles_info:\n print(\"The\",item['term'],\"section mentioned motorcycle\", item['count'], \"times\")\n\n\nmotorcycle_info= motorcycles['response']['facets']['section_name']['terms']\nmost_motorcycle_section = 0\nsection_name = \"\" \nfor item in motorcycle_info:\n if item['count']>most_motorcycle_section:\n most_motorcycle_section = item['count']\n section_name = item['term']\n\nprint(section_name, \"is the sections that talks the most about motorcycles, with\", most_motorcycle_section, \"mentions of the word\")",
"7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60?\nTip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them.",
"import requests\nresponse = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?api-key=b577eb5b46ad4bec8ee159c89208e220')\nmovies_reviews_20 = response.json()\n\nprint(movies_reviews_20.keys())\n\n\n\n\nprint(movies_reviews_20['results'][0])\n\ncritics_pick = 0\nnot_a_critics_pick = 0\nfor item in movies_reviews_20['results']:\n print(item['display_title'], item['critics_pick'])\n if item['critics_pick'] == 1:\n print(\"-------------CRITICS PICK!\")\n critics_pick = critics_pick + 1\n else:\n print(\"-------------NOT CRITICS PICK!\")\n not_a_critics_pick = not_a_critics_pick + 1\nprint(\"______________________\") \nprint(\"There were\", critics_pick, \"critics picks in the last 20 revies by the NYT\")\n\nimport requests\nresponse = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=20&api-key=b577eb5b46ad4bec8ee159c89208e220')\nmovies_reviews_40 = response.json()\n\nprint(movies_reviews_40.keys())\n\n\nimport requests\nresponse = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=40&api-key=b577eb5b46ad4bec8ee159c89208e220')\nmovies_reviews_60 = response.json()\n\nprint(movies_reviews_60.keys())\n\nnew_medium_list = movies_reviews_20['results'] + movies_reviews_40['results']\n\nprint(len(new_medium_list))\n\ncritics_pick = 0\nnot_a_critics_pick = 0\nfor item in new_medium_list:\n print(item['display_title'], item['critics_pick'])\n if item['critics_pick'] == 1:\n print(\"-------------CRITICS PICK!\")\n critics_pick = critics_pick + 1\n else:\n print(\"-------------NOT CRITICS PICK!\")\n not_a_critics_pick = not_a_critics_pick + 1\nprint(\"______________________\") \nprint(\"There were\", critics_pick, \"critics picks in the last 40 revies by the NYT\")\n\nnew_big_list = movies_reviews_20['results'] + movies_reviews_40['results'] + movies_reviews_60['results']\n\nprint(new_big_list[0])\n\nprint(len(new_big_list))\n\ncritics_pick = 0\nnot_a_critics_pick = 0\nfor item in new_big_list:\n print(item['display_title'], item['critics_pick'])\n if item['critics_pick'] == 1:\n print(\"-------------CRITICS PICK!\")\n critics_pick = critics_pick + 1\n else:\n print(\"-------------NOT CRITICS PICK!\")\n not_a_critics_pick = not_a_critics_pick + 1\nprint(\"______________________\") \nprint(\"There were\", critics_pick, \"critics picks in the last 60 revies by the NYT\")",
"8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews?",
"medium_list = movies_reviews_20['results'] + movies_reviews_40['results']\nprint(type(medium_list))\nprint(medium_list[0])\nfor item in medium_list:\n print(item['byline'])\n \n\n\nall_critics = []\nfor item in medium_list:\n all_critics.append(item['byline'])\nprint(all_critics)\n\nunique_medium_list = set(all_critics)\nprint(unique_medium_list)\n\nprint(\"___________________________________________________\")\n\nprint(\"This is a list of the authors who have written the NYT last 40 movie reviews, in descending order:\")\nfrom collections import Counter\ncount = Counter(all_critics)\nprint(count)\n\nprint(\"___________________________________________________\")\n\n\nprint(\"This is a list of the top 3 authors who have written the NYT last 40 movie reviews:\")\ncount.most_common(1)\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
eds-uga/csci1360e-su16 | lectures/L6.ipynb | mit | [
"Lecture 6: Conditionals and Error Handling\nCSCI 1360E: Foundations for Informatics and Analytics\nOverview and Objectives\nIn this lecture, we'll go over how to make \"decisions\" over the course of your code depending on the values certain variables take. We'll also introduce exceptions and how to handle them gracefully. By the end of the lecture, you should be able to\n\nBuild arbitrary conditional hierarchies to test a variety of possible circumstances\nCatch basic errors and present meaningful error messages in lieu of a Python crash\n\nPart 1: Conditionals\nUp until now (with the exception of that one review question about finding max and min elements in a list from L4), we've been somewhat hobbled in our coding prowess; we've lacked the tools to make different decisions depending on the values our variables take.\nIn fact, let's go ahead and look at the problem of finding the maximum value in a list.",
"x = [51, 65, 56, 19, 11, 49, 81, 59, 45, 73]",
"If we want to figure out the maximum value, we'll obviously need a loop to check each element of the list (which we know how to do), and a variable to store the maximum.",
"max_val = 0\nfor element in x:\n \n # ... now what?\n\n pass",
"We also know we can check relative values, like max_val < element. If this evaluates to True, we know we've found a number in the list that's bigger than our current candidate for maximum value. But how do we execute code until this condition, and this condition alone?\nEnter if / elif / else statements! (otherwise known as \"conditionals\")\nWe can use the keyword if, followed by a statement that evaluates to either True or False, to determine whether or not to execute the code. For a straightforward example:",
"x = 5\nif x < 5:\n print(\"How did this happen?!\") # Spoiler alert: this won't happen.\n\nif x == 5:\n print(\"Working as intended.\")",
"In conjunction with if, we also have an else clause that we can use to execute whenever the if statement doesn't:",
"x = 5\nif x < 5:\n print(\"How did this happen?!\") # Spoiler alert: this won't happen.\nelse:\n print(\"Correct.\")",
"This is great! We can finally finish computing the maximum element of a list!",
"x = [51, 65, 56, 19, 11, 49, 81, 59, 45, 73]\nmax_val = 0\nfor element in x:\n if max_val < element:\n max_val = element\n\nprint(\"The maximum element is: {}\".format(max_val))",
"We can test conditions! But what if we have multifaceted decisions to make? Let's look at a classic example: assigning letter grades from numerical grades.",
"student_grades = {\n 'Jen': 82,\n 'Shannon': 75,\n 'Natasha': 94,\n 'Benjamin': 48,\n}",
"We know the 90-100 range is an \"A\", 80-89 is a \"B\", and so on. We can't do just a standard \"if / else\", since we have more than two possibilities here.\nThe third and final component of conditionals is the elif statement (short for \"else if\").\nelif allows us to evaluate as many options as we'd like, all within the same conditional context (this is important). So for our grading example, it might look like this:",
"for student, grade in student_grades.items():\n letter = ''\n if grade >= 90:\n letter = \"A\"\n elif grade >= 80:\n letter = \"B\"\n elif grade >= 70:\n letter = \"C\"\n elif grade >= 60:\n letter = \"D\"\n else:\n letter = \"F\"\n \n print(\"{}'s letter grade: {}\".format(student, letter))\n",
"Ok, that's neat. But there's still one more edge case: what happens if we want to enforce multiple conditions simultaneously?\nTo illustrate, let's go back to our example of finding the maximum value in a list, and this time, let's try to find the second-largest value in the list. For simplicity, let's say we've already found the largest value.",
"x = [51, 65, 56, 19, 11, 49, 81, 59, 45, 73]\nmax_val = 81 # We've already found it!\nsecond_largest = 0",
"Here's the rub: we now have two constraints to enforce--the second largest value needs to be larger than pretty much everything in the list, but also needs to be smaller than the maximum value. Not something we can encode using if / elif / else.\nInstead, we'll use two more keywords integral to conditionals: and and or.",
"for element in x:\n if second_largest < element and element < max_val:\n second_largest = element\n\nprint(\"The second-largest element is: {}\".format(second_largest))",
"The first condition, second_largest < element, is the same as before: if our current estimate of the second largest element is smaller than the latest element we're looking at, it's definitely a candidate for second-largest.\n\n\nThe second condition, element < max_val, is what ensures we don't just pick the largest value again. This enforces the constraint that the current element we're looking at is also less than our known maximum value.\n\n\nThe and keyword glues these two conditions together: it requires that they BOTH be True before the code inside the statement is allowed to execute.\n\n\nIt would be easy to replicate this with \"nested\" conditionals:",
"second_largest = 0\nfor element in x:\n if second_largest < element:\n if element < max_val:\n second_largest = element\n\nprint(\"The second-largest element is: {}\".format(second_largest))",
"...but your code starts getting a little unwieldy with so many indentations.\nYou can glue as many comparisons as you want together with and; the whole statement will only be True if every single condition evaluates to True. This is what and means: everything must be True.\nThe other side of this coin is or. Like and, you can use it to glue together multiple constraints. Unlike and, the whole statement will evaluate to True as long as at least ONE condition is True. This is far less stringent than and, where ALL conditions had to be True.",
"numbers = [1, 2, 5, 6, 7, 9, 10]\nfor num in numbers:\n if num == 2 or num == 4 or num == 6 or num == 8 or num == 10:\n print(\"{} is an even number.\".format(num))",
"In this contrived example, I've glued together a bunch of constraints. Obviously, these constraints are mutually exclusive; a number can't be equal to both 2 and 4 at the same time, so num == 2 and num == 4 would never evaluate to True. However, using or, only one of them needs to be True for the statement underneath to execute.\nThere's a little bit of intuition to it.\n\n\n\"I want this AND this\" has the implication of both at once.\n\n\n\"I want this OR this\" sounds more like either one would be adequate.\n\n\nOne other important tidbit, concerning not only conditionals, but also lists and booleans: the not keyword.\nAn often-important task in data science, when you have a list of things, is querying whether or not some new piece of information you just received is already in your list. You could certainly loop through the list, asking \"is my new_item == list[item i]?\". But, thankfully, there's a better way:",
"import random\nlist_of_numbers = [random.randint(1, 100) for i in range(10)] # Generaets 10 random numbers, between 1 and 100.\nif 13 not in list_of_numbers:\n print(\"Aw man, my lucky number isn't here!\")",
"Notice a couple things here--\n\nList comprehensions make an appearance! Can you parse it out?\nThe if statement asks if the number 13 is NOT found in list_of_numbers\nWhen that statement evaluates to True--meaning the number is NOT found--it prints the message.\n\nIf you omit the not keyword, then the question becomes: \"is this number in the list?\"",
"import random\nlist_of_numbers = [random.randint(1, 2) for i in range(10)] # Generaets 10 random numbers, between 1 and 2. Yep.\nif 1 in list_of_numbers:\n print(\"Sweet, found a 1!\")",
"This works for strings as well: 'some_string' in some_list will return True if that string is indeed found in the list.\nBe careful with this. Typing issues can hit you full force here: if you ask if 0 in some_list, and it's a list of floats, then this operation will always evaluate to False.\nSimilarly, if you ask if \"shannon\" in name_list, it will look for the precise string \"shannon\" and return False even if the string \"Shannon\" is in the list. With great power, etc etc.\nPart 2: Error Handling\nYes, errors: plaguing us since Windows 95 (but really, since well before then).\n\nBy now, I suspect you've likely seen your fair share of Python crashes.\n\n\nNotImplementedError from the homework assignments\n\n\nTypeError from trying to multiply an integer by a string\n\n\nKeyError from attempting to access a dictionary key that didn't exist\n\n\nIndexError from referencing a list beyond its actual length\n\n\nor any number of other error messages. These are the standard way in which Python (and most other programming languages) handles error messages.\nThe error is known as an Exception. Some other terminology here includes:\n\n\nAn exception is raised when such an error occurs. This is why you see the code snippet raise NotImplementedError in your homeworks. In other languages such as Java, an exception is \"thrown\" instead of \"raised\", but the meanings are equivalent.\n\n\nWhen you are writing code that could potentially raise an exception, you can also write code to catch the exception and handle it yourself. When an exception is caught, that means it is handled without crashing the program.\n\n\nHere's a fairly classic example: divide by zero!\n\nLet's say we're designing a simple calculator application that divides two numbers. We'll ask the user for two numbers, divide them, and return the quotient. Seems simple enough, right?",
"def divide(x, y):\n return x / y\n\ndivide(11, 0)",
"D'oh! The user fed us a 0 for the denominator and broke our calculator. Meanie-face.\nSo we know there's a possibility of the user entering a 0. This could be malicious or simply by accident. Since it's only one value that could crash our app, we could in principle have an if statement that checks if the denominator is 0. That would be simple and perfectly valid.\nBut for the sake of this lecture, let's assume we want to try and catch the ZeroDivisionError ourselves and handle it gracefully.\nTo do this, we use something called a try / except block, which is very similar in its structure to if / elif / else blocks.\nFirst, put the code that could potentially crash your program inside a try statement. Under that, have a except statement that defines\n\nA variable for the error you're catching, and\nAny code that dictates how you want to handle the error",
"def divide_safe(x, y):\n quotient = 0\n try:\n quotient = x / y\n except ZeroDivisionError:\n print(\"You tried to divide by zero. Why would you do that?!\")\n return quotient",
"Now if our user tries to be snarky again--",
"divide_safe(11, 0)",
"No error, no crash! Just a \"helpful\" error message.\nLike conditionals, you can also create multiple except statements to handle multiple different possible exceptions:",
"import random # For generating random exceptions.\nnum = random.randint(0, 1)\ntry:\n if num == 1:\n raise NameError(\"This happens when you use a variable you haven't defined\")\n else:\n raise ValueError(\"This happens when you try to multiply a string\")\nexcept NameError:\n print(\"Caught a NameError!\")\nexcept ValueError:\n print(\"Nope, it was actually a ValueError.\")",
"If you download this notebook or run it with mybinder and re-run the above cell, the exception should flip randomly between the two.\nAlso like conditionals, you can handle multiple errors simultaneously. If, like in the previous example, your code can raise multiple exceptions, but you want to handle them all the same way, you can stack them all in one except statement:",
"import random # For generating random exceptions.\nnum = random.randint(0, 1)\ntry:\n if num == 1:\n raise NameError(\"This happens when you use a variable you haven't defined\")\n else:\n raise ValueError(\"This happens when you try to multiply a string\")\nexcept (NameError, ValueError): # MUST have the parentheses!\n print(\"Caught...well, some kinda error, not sure which.\")",
"If you're like me, and you're writing code that you know could raise one of several errors, but are too lazy to look up specifically what errors are possible, you can create a \"catch-all\" by just not specifying anything:",
"import random # For generating random exceptions.\nnum = random.randint(0, 1)\ntry:\n if num == 1:\n raise NameError(\"This happens when you use a variable you haven't defined\")\n else:\n raise ValueError(\"This happens when you try to multiply a string\")\nexcept:\n print(\"I caught something!\")",
"Finally--and this is really getting into what's known as control flow (quite literally: \"controlling the flow\" of your program)--you can tack an else statement onto the very end of your exception-handling block to add some final code to the handler.\nWhy? This is code that is only executed if NO exception occurs. Let's go back to our random number example: instead of raising one of two possible exceptions, we'll raise an exception only if we flip a 1.",
"import random # For generating random exceptions.\nnum = random.randint(0, 1)\ntry:\n if num == 1:\n raise NameError(\"This happens when you use a variable you haven't defined\")\nexcept:\n print(\"I caught something!\")\nelse:\n print(\"HOORAY! Lucky coin flip!\")",
"Again, if you run this notebook yourself and execute the above cell multiple times, you should see it flip between \"I caught something!\" and \"HOORAY!\", signifying that the else clause only executes if no exceptions are raised.\nReview Questions\nSome questions to discuss and consider:\n1: Go back to the if / elif / else example about student grades. Let's assume, instead of elif for the different conditions, you used a bunch of if statements, e.g. if grade >= 90, if grade >= 80, if grade >= 70, and so on; effectively, you didn't use elif at all, but just used if. What would the final output be in this case?\n2: We saw that you can add an else statement to the end of an exception handling block, which will run code in the event that no exception is raised. Why is this useful? Why not add the code you want to run in the try block itself?\n3: With respect to error handling, we discussed try, except, and else statements. There is actually one more: finally, which executes no matter what, regardless of whether an exception occurs or not. Why would this be useful?\nCourse Administrivia\nHow was A1?\nHow is A2 going?\nAdditional Resources\n\nMatthes, Eric. Python Crash Course. 2016. ISBN-13: 978-1593276034\nGrus, Joel. Data Science from Scratch. 2015. ISBN-13: 978-1491901427"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
synthicity/activitysim | activitysim/examples/example_estimation/notebooks/02_school_location.ipynb | agpl-3.0 | [
"Estimating School Location Choice\nThis notebook illustrates how to re-estimate a single model component for ActivitySim. This process \nincludes running ActivitySim in estimation mode to read household travel survey files and write out\nthe estimation data bundles used in this notebook. To review how to do so, please visit the other\nnotebooks in this directory.\nLoad libraries",
"import larch # !conda install larch #for estimation\nimport pandas as pd\nimport numpy as np\nimport yaml \nimport larch.util.excel\nimport os",
"We'll work in our test directory, where ActivitySim has saved the estimation data bundles.",
"os.chdir('test')",
"Load data and prep model for estimation",
"modelname=\"school_location\"\n\nfrom activitysim.estimation.larch import component_model\nmodel, data = component_model(modelname, return_data=True)",
"Review data loaded from EDB\nNext we can review what was read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.\ncoefficients",
"data.coefficients",
"alt_values",
"data.alt_values",
"chooser_data",
"data.chooser_data",
"landuse",
"data.landuse",
"spec",
"data.spec",
"size_spec",
"data.size_spec",
"Estimate\nWith the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large enough household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has a built-in estimation methods including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.",
"model.estimate(method='BHHH', options={'maxiter':1000})",
"Estimated coefficients",
"model.parameter_summary()",
"Output Estimation Results",
"from activitysim.estimation.larch import update_coefficients, update_size_spec\nresult_dir = data.edb_directory/\"estimated\"",
"Write updated utility coefficients",
"update_coefficients(\n model, data, result_dir,\n output_file=f\"{modelname}_coefficients_revised.csv\",\n);",
"Write updated size coefficients",
"update_size_spec(\n model, data, result_dir, \n output_file=f\"{modelname}_size_terms.csv\",\n)",
"Write the model estimation report, including coefficient t-statistic and log likelihood",
"model.to_xlsx(\n result_dir/f\"{modelname}_model_estimation.xlsx\", \n data_statistics=False,\n);",
"Next Steps\nThe final step is to either manually or automatically copy the *_coefficients_revised.csv file and *_size_terms.csv file to the configs folder, rename them to *_coefficients.csv and destination_choice_size_terms.csv, and run ActivitySim in simulation mode. Note that all the location\nand desintation choice models share the same destination_choice_size_terms.csv input file, so if you\nare updating all these models, you'll need to ensure that updated sections of this file for each model\nare joined together correctly.",
"pd.read_csv(result_dir/f\"{modelname}_coefficients_revised.csv\")\n\npd.read_csv(result_dir/f\"{modelname}_size_terms.csv\")"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
daniel-koehn/Theory-of-seismic-waves-II | 05_2D_acoustic_FD_modelling/lecture_notebooks/4_fdac2d_absorbing_boundary.ipynb | gpl-3.0 | [
"Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, heterogeneous models are from this Jupyter notebook by Heiner Igel (@heinerigel), Florian Wölfl and Lion Krischer (@krischer) which is a supplemenatry material to the book Computational Seismology: A Practical Introduction, notebook style sheet by L.A. Barba, N.C. Clementi",
"# Execute this cell to load the notebook's style sheet, then ignore it\nfrom IPython.core.display import HTML\ncss_file = '../../style/custom.css'\nHTML(open(css_file, \"r\").read())",
"Simple absorbing boundary for 2D acoustic FD modelling\nRealistic FD modelling results for surface seismic acquisition geometries require a further modification of the 2D acoustic FD code. Except for the free surface boundary condition on top of the model, we want to suppress the artifical reflections from the other boundaries.\nSuch absorbing boundaries can be implemented by different approaches. A comprehensive overview is compiled in \nGao et al. 2015, Comparison of artificial absorbing boundaries for acoustic wave equation modelling\nBefore implementing the absorbing boundary frame, we modify some parts of the optimized 2D acoustic FD code:",
"# Import Libraries \n# ----------------\nimport numpy as np\nfrom numba import jit\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom pylab import rcParams\n\n# Ignore Warning Messages\n# -----------------------\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\n# Definition of initial modelling parameters\n# ------------------------------------------\nxmax = 2000.0 # maximum spatial extension of the 1D model in x-direction (m)\nzmax = xmax # maximum spatial extension of the 1D model in z-direction (m)\ndx = 10.0 # grid point distance in x-direction (m)\ndz = dx # grid point distance in z-direction (m)\n\ntmax = 0.75 # maximum recording time of the seismogram (s)\ndt = 0.0010 # time step\n\nvp0 = 3000. # P-wave speed in medium (m/s)\n\n# acquisition geometry\nxsrc = 1000.0 # x-source position (m)\nzsrc = xsrc # z-source position (m)\n\nf0 = 100.0 # dominant frequency of the source (Hz)\nt0 = 0.1 # source time shift (s)\n\nisnap = 2 # snapshot interval (timesteps)",
"In order to modularize the code, we move the 2nd partial derivatives of the wave equation into a function update_d2px_d2pz, so the application of the JIT decorator can be restriced to this function:",
"@jit(nopython=True) # use JIT for C-performance\ndef update_d2px_d2pz(p, dx, dz, nx, nz, d2px, d2pz):\n \n for i in range(1, nx - 1):\n for j in range(1, nz - 1):\n \n d2px[i,j] = (p[i + 1,j] - 2 * p[i,j] + p[i - 1,j]) / dx**2 \n d2pz[i,j] = (p[i,j + 1] - 2 * p[i,j] + p[i,j - 1]) / dz**2\n \n return d2px, d2pz ",
"In the FD modelling code FD_2D_acoustic_JIT, a more flexible model definition is introduced by the function model. The block Initalize animation of pressure wavefield before the time loop displays the velocity model and initial pressure wavefield. During the time-loop, the pressure wavefield is updated with \nimage.set_data(p.T)\nfig.canvas.draw()\nat the every isnap timestep:",
"# FD_2D_acoustic code with JIT optimization\n# -----------------------------------------\ndef FD_2D_acoustic_JIT(dt,dx,dz,f0): \n \n # define model discretization\n # ---------------------------\n\n nx = (int)(xmax/dx) # number of grid points in x-direction\n print('nx = ',nx)\n\n nz = (int)(zmax/dz) # number of grid points in x-direction\n print('nz = ',nz)\n\n nt = (int)(tmax/dt) # maximum number of time steps \n print('nt = ',nt)\n\n isrc = (int)(xsrc/dx) # source location in grid in x-direction\n jsrc = (int)(zsrc/dz) # source location in grid in x-direction\n\n # Source time function (Gaussian)\n # -------------------------------\n src = np.zeros(nt + 1)\n time = np.linspace(0 * dt, nt * dt, nt)\n\n # 1st derivative of Gaussian\n src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))\n \n # define clip value: 0.1 * absolute maximum value of source wavelet\n clip = 0.1 * max([np.abs(src.min()), np.abs(src.max())]) / (dx*dz) * dt**2 \n \n # Define model\n # ------------ \n vp = np.zeros((nx,nz))\n vp = model(nx,nz,vp,dx,dz)\n vp2 = vp**2 \n \n # Initialize empty pressure arrays\n # --------------------------------\n p = np.zeros((nx,nz)) # p at time n (now)\n pold = np.zeros((nx,nz)) # p at time n-1 (past)\n pnew = np.zeros((nx,nz)) # p at time n+1 (present)\n d2px = np.zeros((nx,nz)) # 2nd spatial x-derivative of p\n d2pz = np.zeros((nx,nz)) # 2nd spatial z-derivative of p \n \n # Initalize animation of pressure wavefield \n # ----------------------------------------- \n fig = plt.figure(figsize=(7,3.5)) # define figure size\n plt.tight_layout()\n extent = [0.0,xmax,zmax,0.0] # define model extension\n \n # Plot pressure wavefield movie\n ax1 = plt.subplot(121)\n image = plt.imshow(p.T, animated=True, cmap=\"RdBu\", extent=extent, \n interpolation='nearest', vmin=-clip, vmax=clip) \n plt.title('Pressure wavefield')\n plt.xlabel('x [m]')\n plt.ylabel('z [m]')\n \n # Plot Vp-model\n ax2 = plt.subplot(122)\n image1 = plt.imshow(vp.T/1000, cmap=plt.cm.viridis, interpolation='nearest', \n extent=extent)\n \n plt.title('Vp-model')\n plt.xlabel('x [m]')\n plt.setp(ax2.get_yticklabels(), visible=False) \n \n divider = make_axes_locatable(ax2)\n cax2 = divider.append_axes(\"right\", size=\"2%\", pad=0.1)\n fig.colorbar(image1, cax=cax2) \n plt.ion() \n plt.show(block=False)\n \n # Calculate Partial Derivatives\n # -----------------------------\n for it in range(nt):\n \n # FD approximation of spatial derivative by 3 point operator\n d2px, d2pz = update_d2px_d2pz(p, dx, dz, nx, nz, d2px, d2pz)\n\n # Time Extrapolation\n # ------------------\n pnew = 2 * p - pold + vp2 * dt**2 * (d2px + d2pz)\n\n # Add Source Term at isrc\n # -----------------------\n # Absolute pressure w.r.t analytical solution\n pnew[isrc,jsrc] = pnew[isrc,jsrc] + src[it] / (dx * dz) * dt ** 2 \n \n # Remap Time Levels\n # -----------------\n pold, p = p, pnew\n \n # display pressure snapshots \n if (it % isnap) == 0:\n image.set_data(p.T)\n fig.canvas.draw() ",
"Homogeneous block model without absorbing boundary frame\nAs a reference, we first model the homogeneous block model, defined in the function model, without an absorbing boundary frame:",
"# Homogeneous model\ndef model(nx,nz,vp,dx,dz):\n \n vp += vp0 \n \n return vp",
"After defining the modelling parameters, we can run the modified FD code ...",
"%matplotlib notebook\ndx = 5.0 # grid point distance in x-direction (m)\ndz = dx # grid point distance in z-direction (m)\nf0 = 100.0 # centre frequency of the source wavelet (Hz)\n\n# calculate dt according to the CFL-criterion\ndt = dx / (np.sqrt(2.0) * vp0)\n\nFD_2D_acoustic_JIT(dt,dx,dz,f0)",
"Notice the strong, artifical boundary reflections in the wavefield movie\nSimple absorbing Sponge boundary\nThe simplest, and unfortunately least efficient, absorbing boundary was developed by Cerjan et al. (1985). It is based on the simple idea to damp the pressure wavefields $p^n_{i,j}$ and $p^{n+1}_{i,j}$ in an absorbing boundary frame by an exponential function:\n\\begin{equation}\nf_{abs} = exp(-a^2(FW-i)^2), \\nonumber\n\\end{equation}\nwhere $FW$ denotes the thickness of the boundary frame in gridpoints, while the factor $a$ defines the damping variation within the frame. It is import to avoid overlaps of the damping profile in the model corners, when defining the absorbing function:",
"# Define simple absorbing boundary frame based on wavefield damping \n# according to Cerjan et al., 1985, Geophysics, 50, 705-708\ndef absorb(nx,nz):\n \n FW = # thickness of absorbing frame (gridpoints) \n a = # damping variation within the frame\n \n coeff = np.zeros(FW)\n \n # define coefficients\n\n\n # initialize array of absorbing coefficients\n absorb_coeff = np.ones((nx,nz))\n\n # compute coefficients for left grid boundaries (x-direction)\n\n\n # compute coefficients for right grid boundaries (x-direction) \n\n\n # compute coefficients for bottom grid boundaries (z-direction) \n\n\n return absorb_coeff",
"This implementation of the Sponge boundary sets a free-surface boundary condition on top of the model, while inciding waves at the other boundaries are absorbed:",
"# Plot absorbing damping profile\n# ------------------------------\nfig = plt.figure(figsize=(6,4)) # define figure size\nextent = [0.0,xmax,0.0,zmax] # define model extension\n\n# calculate absorbing boundary weighting coefficients\nnx = 400\nnz = 400\nabsorb_coeff = absorb(nx,nz)\n\nplt.imshow(absorb_coeff.T)\nplt.colorbar()\nplt.title('Sponge boundary condition')\nplt.xlabel('x [m]')\nplt.ylabel('z [m]')\nplt.show()",
"The FD code itself requires only some small modifications, we have to add the absorb function to define the amount of damping in the boundary frame and apply the damping function to the pressure wavefields pnew and p",
"# FD_2D_acoustic code with JIT optimization\n# -----------------------------------------\ndef FD_2D_acoustic_JIT_absorb(dt,dx,dz,f0): \n \n # define model discretization\n # ---------------------------\n\n nx = (int)(xmax/dx) # number of grid points in x-direction\n print('nx = ',nx)\n\n nz = (int)(zmax/dz) # number of grid points in x-direction\n print('nz = ',nz)\n\n nt = (int)(tmax/dt) # maximum number of time steps \n print('nt = ',nt)\n\n isrc = (int)(xsrc/dx) # source location in grid in x-direction\n jsrc = (int)(zsrc/dz) # source location in grid in x-direction\n\n # Source time function (Gaussian)\n # -------------------------------\n src = np.zeros(nt + 1)\n time = np.linspace(0 * dt, nt * dt, nt)\n\n # 1st derivative of Gaussian\n src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))\n \n # define clip value: 0.1 * absolute maximum value of source wavelet\n clip = 0.1 * max([np.abs(src.min()), np.abs(src.max())]) / (dx*dz) * dt**2\n \n # Define absorbing boundary frame\n # ------------------------------- \n\n \n # Define model\n # ------------ \n vp = np.zeros((nx,nz))\n vp = model(nx,nz,vp,dx,dz)\n vp2 = vp**2 \n \n # Initialize empty pressure arrays\n # --------------------------------\n p = np.zeros((nx,nz)) # p at time n (now)\n pold = np.zeros((nx,nz)) # p at time n-1 (past)\n pnew = np.zeros((nx,nz)) # p at time n+1 (present)\n d2px = np.zeros((nx,nz)) # 2nd spatial x-derivative of p\n d2pz = np.zeros((nx,nz)) # 2nd spatial z-derivative of p \n \n # Initalize animation of pressure wavefield \n # ----------------------------------------- \n fig = plt.figure(figsize=(7,3.5)) # define figure size\n plt.tight_layout()\n extent = [0.0,xmax,zmax,0.0] # define model extension\n \n # Plot pressure wavefield movie\n ax1 = plt.subplot(121)\n image = plt.imshow(p.T, animated=True, cmap=\"RdBu\", extent=extent, \n interpolation='nearest', vmin=-clip, vmax=clip) \n plt.title('Pressure wavefield')\n plt.xlabel('x [m]')\n plt.ylabel('z [m]')\n \n # Plot Vp-model\n ax2 = plt.subplot(122)\n image1 = plt.imshow(vp.T/1000, cmap=plt.cm.viridis, interpolation='nearest', \n extent=extent)\n \n plt.title('Vp-model')\n plt.xlabel('x [m]') \n plt.setp(ax2.get_yticklabels(), visible=False) \n \n divider = make_axes_locatable(ax2)\n cax2 = divider.append_axes(\"right\", size=\"2%\", pad=0.1)\n fig.colorbar(image1, cax=cax2) \n plt.ion() \n plt.show(block=False)\n \n # Calculate Partial Derivatives\n # -----------------------------\n for it in range(nt):\n \n # FD approximation of spatial derivative by 3 point operator\n d2px, d2pz = update_d2px_d2pz(p, dx, dz, nx, nz, d2px, d2pz)\n\n # Time Extrapolation\n # ------------------\n pnew = 2 * p - pold + vp2 * dt**2 * (d2px + d2pz)\n\n # Add Source Term at isrc\n # -----------------------\n # Absolute pressure w.r.t analytical solution\n pnew[isrc,jsrc] = pnew[isrc,jsrc] + src[it] / (dx * dz) * dt ** 2\n \n # Apply absorbing boundary frame to p and pnew\n\n \n # Remap Time Levels\n # -----------------\n pold, p = p, pnew\n \n # display pressure snapshots \n if (it % isnap) == 0:\n image.set_data(p.T)\n fig.canvas.draw() ",
"Let's evaluate the influence of the Sponge boundaries on the artifical boundary reflections:",
"%matplotlib notebook\ndx = 5.0 # grid point distance in x-direction (m)\ndz = dx # grid point distance in z-direction (m)\nf0 = 100.0 # centre frequency of the source wavelet (Hz)\n\n# calculate dt according to the CFL-criterion\ndt = dx / (np.sqrt(2.0) * vp0)\n\nFD_2D_acoustic_JIT_absorb(dt,dx,dz,f0)",
"As you can see, the boundary reflections are significantly damped. However, some spurious reflections are still visible. The suppression of these reflections requires more sophisticated absorbing boundaries like Perfectly Matched Layers (PMLs).\nWhat we learned:\n\nHow to suppress boundary reflections in order to implement realistic half-space models by the simple absorbing Sponge boundary"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub | notebooks/nerc/cmip6/models/ukesm1-0-mmh/atmos.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: NERC\nSource ID: UKESM1-0-MMH\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:27\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nerc', 'ukesm1-0-mmh', 'atmos')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Overview\n2. Key Properties --> Resolution\n3. Key Properties --> Timestepping\n4. Key Properties --> Orography\n5. Grid --> Discretisation\n6. Grid --> Discretisation --> Horizontal\n7. Grid --> Discretisation --> Vertical\n8. Dynamical Core\n9. Dynamical Core --> Top Boundary\n10. Dynamical Core --> Lateral Boundary\n11. Dynamical Core --> Diffusion Horizontal\n12. Dynamical Core --> Advection Tracers\n13. Dynamical Core --> Advection Momentum\n14. Radiation\n15. Radiation --> Shortwave Radiation\n16. Radiation --> Shortwave GHG\n17. Radiation --> Shortwave Cloud Ice\n18. Radiation --> Shortwave Cloud Liquid\n19. Radiation --> Shortwave Cloud Inhomogeneity\n20. Radiation --> Shortwave Aerosols\n21. Radiation --> Shortwave Gases\n22. Radiation --> Longwave Radiation\n23. Radiation --> Longwave GHG\n24. Radiation --> Longwave Cloud Ice\n25. Radiation --> Longwave Cloud Liquid\n26. Radiation --> Longwave Cloud Inhomogeneity\n27. Radiation --> Longwave Aerosols\n28. Radiation --> Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --> Boundary Layer Turbulence\n31. Turbulence Convection --> Deep Convection\n32. Turbulence Convection --> Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --> Large Scale Precipitation\n35. Microphysics Precipitation --> Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --> Optical Cloud Properties\n38. Cloud Scheme --> Sub Grid Scale Water Distribution\n39. Cloud Scheme --> Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --> Isscp Attributes\n42. Observation Simulation --> Cosp Attributes\n43. Observation Simulation --> Radar Inputs\n44. Observation Simulation --> Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --> Orographic Gravity Waves\n47. Gravity Waves --> Non Orographic Gravity Waves\n48. Solar\n49. Solar --> Solar Pathways\n50. Solar --> Solar Constant\n51. Solar --> Orbital Parameters\n52. Solar --> Insolation Ozone\n53. Volcanos\n54. Volcanos --> Volcanoes Treatment \n1. Key Properties --> Overview\nTop level key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of atmospheric model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.4. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.5. High Top\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the orography.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n",
"4.2. Changes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n",
"5. Grid --> Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Discretisation --> Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n",
"6.3. Scheme Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation function order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.4. Horizontal Pole\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal discretisation pole singularity treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7. Grid --> Discretisation --> Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType of vertical coordinate system",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere dynamical core",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the dynamical core of the model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Timestepping Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTimestepping framework type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of the model prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Dynamical Core --> Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTop boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Top Heat\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary heat treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Top Wind\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary wind treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Dynamical Core --> Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nType of lateral boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Dynamical Core --> Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nHorizontal diffusion scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal diffusion scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Dynamical Core --> Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nTracer advection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.3. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.4. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracer advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Dynamical Core --> Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMomentum advection schemes name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Scheme Staggering Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Radiation --> Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nShortwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nShortwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Radiation --> Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.3. Other Flourinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Radiation --> Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18. Radiation --> Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Radiation --> Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Radiation --> Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21. Radiation --> Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Radiation --> Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLongwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLongwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23. Radiation --> Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Other Flourinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Radiation --> Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Physical Reprenstation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25. Radiation --> Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Radiation --> Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27. Radiation --> Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28. Radiation --> Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere convection and turbulence",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. Turbulence Convection --> Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nBoundary layer turbulence scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBoundary layer turbulence scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.3. Closure Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nBoundary layer turbulence scheme closure order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Counter Gradient\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"31. Turbulence Convection --> Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDeep convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Turbulence Convection --> Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nShallow convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nshallow convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nshallow convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n",
"32.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Microphysics Precipitation --> Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.2. Hydrometeors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35. Microphysics Precipitation --> Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLarge scale cloud microphysics processes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the atmosphere cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.3. Atmos Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n",
"36.4. Uses Separate Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.6. Prognostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.7. Diagnostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.8. Prognostic Variables\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37. Cloud Scheme --> Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37.2. Cloud Inhomogeneity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Cloud Scheme --> Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale water distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"38.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale water distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale water distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"38.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale water distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"39. Cloud Scheme --> Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale ice distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"39.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale ice distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"39.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale ice distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"39.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of observation simulator characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Observation Simulation --> Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator ISSCP top height estimation methodUo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. Top Height Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator ISSCP top height direction",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42. Observation Simulation --> Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator COSP run configuration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42.2. Number Of Grid Points\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of grid points",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.3. Number Of Sub Columns\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of sub-cloumns used to simulate sub-grid variability",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.4. Number Of Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of levels",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43. Observation Simulation --> Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nCloud simulator radar frequency (Hz)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43.2. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator radar type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"43.3. Gas Absorption\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses gas absorption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"43.4. Effective Radius\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses effective radius",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"44. Observation Simulation --> Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator lidar ice type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"44.2. Overlap\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator lidar overlap",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"45.2. Sponge Layer\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.3. Background\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBackground wave distribution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.4. Subgrid Scale Orography\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSubgrid scale orography effects taken into account.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46. Gravity Waves --> Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"46.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47. Gravity Waves --> Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"47.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n",
"47.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of solar insolation of the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"49. Solar --> Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"50. Solar --> Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the solar constant.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"50.2. Fixed Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"50.3. Transient Characteristics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nsolar constant transient characteristics (W m-2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51. Solar --> Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"51.2. Fixed Reference Date\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"51.3. Transient Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescription of transient orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51.4. Computation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used for computing orbital parameters.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"52. Solar --> Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"54. Volcanos --> Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dsacademybr/PythonFundamentos | Cap08/Notebooks/DSA-Python-Cap08-01-NumPy.ipynb | gpl-3.0 | [
"<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 8</font>\nDownload: http://github.com/dsacademybr",
"# Versão da Linguagem Python\nfrom platform import python_version\nprint('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())",
"NumPy\nPara importar numpy, utilize: \nimport numpy as np\nVocê também pode utilizar: \nfrom numpy import * . Isso evitará a utilização de np., mas este comando importará todos os módulos do NumPy.\nPara atualizar o NumPy, abra o prompt de comando e digite: pip install numpy -U",
"# Importando o NumPy\nimport numpy as np\n\nnp.__version__",
"Criando Arrays",
"# Help\nhelp(np.array)\n\n# Array criado a partir de uma lista:\nvetor1 = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8])\n\nprint(vetor1)\n\n# Um objeto do tipo ndarray é um recipiente multidimensional de itens do mesmo tipo e tamanho.\ntype(vetor1)\n\n# Usando métodos do array NumPy\nvetor1.cumsum()\n\n# Criando uma lista. Perceba como listas e arrays são objetos diferentes, com diferentes propriedades\nlst = [0, 1, 2, 3, 4, 5, 6, 7, 8]\n\nlst\n\ntype(lst)\n\n# Imprimindo na tela um elemento específico no array\nvetor1[0] \n\n# Alterando um elemento do array\nvetor1[0] = 100\n\nprint(vetor1)\n\n# Não é possível incluir elemento de outro tipo\nvetor1[0] = 'Novo elemento'\n\n# Verificando o formato do array\nprint(vetor1.shape)",
"Funções NumPy",
"# A função arange cria um vetor contendo uma progressão aritmética a partir de um intervalo - start, stop, step\nvetor2 = np.arange(0., 4.5, .5)\n\nprint(vetor2)\n\n# Verificando o tipo do objeto\ntype(vetor2)\n\n# Formato do array\nnp.shape(vetor2)\n\nprint (vetor2.dtype)\n\nx = np.arange(1, 10, 0.25)\nprint(x)\n\nprint(np.zeros(10))\n\n# Retorna 1 nas posições em diagonal e 0 no restante\nz = np.eye(3)\n\nz\n\n# Os valores passados como parâmetro, formam uma diagonal\nd = np.diag(np.array([1, 2, 3, 4]))\n\nd\n\n# Array de números complexos\nc = np.array([1+2j, 3+4j, 5+6*1j])\n\nc\n\n# Array de valores booleanos\nb = np.array([True, False, False, True])\n\nb\n\n# Array de strings\ns = np.array(['Python', 'R', 'Julia'])\n\ns\n\n# O método linspace (linearly spaced vector) retorna um número de \n# valores igualmente distribuídos no intervalo especificado \nnp.linspace(0, 10)\n\nprint(np.linspace(0, 10, 15))\n\nprint(np.logspace(0, 5, 10))",
"Criando Matrizes",
"# Criando uma matriz\nmatriz = np.array([[1,2,3],[4,5,6]]) \n\nprint(matriz)\n\nprint(matriz.shape)\n\n# Criando uma matriz 2x3 apenas com números \"1\"\nmatriz1 = np.ones((2,3))\n\nprint(matriz1)\n\n# Criando uma matriz a partir de uma lista de listas\nlista = [[13,81,22], [0, 34, 59], [21, 48, 94]]\n\n# A função matrix cria uma matria a partir de uma sequência\nmatriz2 = np.matrix(lista)\n\nmatriz2\n\ntype(matriz2)\n\n# Formato da matriz\nnp.shape(matriz2)\n\nmatriz2.size\n\nprint(matriz2.dtype)\n\nmatriz2.itemsize\n\nmatriz2.nbytes\n\nprint(matriz2[2,1])\n\n# Alterando um elemento da matriz\nmatriz2[1,0] = 100\n\nmatriz2\n\nx = np.array([1, 2]) # NumPy decide o tipo dos dados\ny = np.array([1.0, 2.0]) # NumPy decide o tipo dos dados\nz = np.array([1, 2], dtype=np.float64) # Forçamos um tipo de dado em particular\n\nprint (x.dtype, y.dtype, z.dtype)\n\nmatriz3 = np.array([[24, 76], [35, 89]], dtype=float)\n\nmatriz3\n\nmatriz3.itemsize\n\nmatriz3.nbytes\n\nmatriz3.ndim\n\nmatriz3[1,1]\n\nmatriz3[1,1] = 100\n\nmatriz3",
"Usando o Método random() do NumPy",
"print(np.random.rand(10))\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport matplotlib as mat\nmat.__version__\n\nprint(np.random.rand(10))\n\nplt.show((plt.hist(np.random.rand(1000))))\n\nprint(np.random.randn(5,5))\n\nplt.show(plt.hist(np.random.randn(1000)))\n\nimagem = np.random.rand(30, 30)\nplt.imshow(imagem, cmap = plt.cm.hot) \nplt.colorbar() ",
"Operações com datasets",
"import os\nfilename = os.path.join('iris.csv')\n\n# No Windows use !more iris.csv. Mac ou Linux use !head iris.csv\n!head iris.csv\n#!more iris.csv\n\n# Carregando um dataset para dentro de um array\narquivo = np.loadtxt(filename, delimiter=',', usecols=(0,1,2,3), skiprows=1)\nprint (arquivo)\n\ntype(arquivo)\n\n# Gerando um plot a partir de um arquivo usando o NumPy\nvar1, var2 = np.loadtxt(filename, delimiter=',', usecols=(0,1), skiprows=1, unpack=True)\nplt.show(plt.plot(var1, var2, 'o', markersize=8, alpha=0.75))",
"Estatística",
"# Criando um array\nA = np.array([15, 23, 63, 94, 75])\n\n# Em estatística a média é o valor que aponta para onde mais se concentram os dados de uma distribuição.\nnp.mean(A)\n\n# O desvio padrão mostra o quanto de variação ou \"dispersão\" existe em \n# relação à média (ou valor esperado). \n# Um baixo desvio padrão indica que os dados tendem a estar próximos da média.\n# Um desvio padrão alto indica que os dados estão espalhados por uma gama de valores.\nnp.std(A)\n\n# Variância de uma variável aleatória é uma medida da sua dispersão \n# estatística, indicando \"o quão longe\" em geral os seus valores se \n# encontram do valor esperado\nnp.var(A)\n\nd = np.arange(1, 10)\n\nd\n\nnp.sum(d)\n\n# Retorna o produto dos elementos\nnp.prod(d)\n\n# Soma acumulada dos elementos\nnp.cumsum(d)\n\na = np.random.randn(400,2)\nm = a.mean(0)\nprint (m, m.shape)\n\nplt.plot(a[:,0], a[:,1], 'o', markersize=5, alpha=0.50)\nplt.plot(m[0], m[1], 'ro', markersize=10)\nplt.show()",
"Outras Operações com Arrays",
"# Slicing\na = np.diag(np.arange(3))\n\na\n\na[1, 1]\n\na[1]\n\nb = np.arange(10)\n\nb\n\n# [start:end:step]\nb[2:9:3] \n\n# Comparação\na = np.array([1, 2, 3, 4])\nb = np.array([4, 2, 2, 4])\na == b\n\nnp.array_equal(a, b)\n\na.min()\n\na.max()\n\n# Somando um elemento ao array\nnp.array([1, 2, 3]) + 1.5\n\n# Usando o método around\na = np.array([1.2, 1.5, 1.6, 2.5, 3.5, 4.5])\n\nb = np.around(a)\n\nb\n\n# Criando um array\nB = np.array([1, 2, 3, 4])\n\nB\n\n# Copiando um array\nC = B.flatten()\n\nC\n\n# Criando um array\nv = np.array([1, 2, 3])\n\n# Adcionando uma dimensão ao array\nv[:, np.newaxis], v[:,np.newaxis].shape, v[np.newaxis,:].shape\n\n# Repetindo os elementos de um array\nnp.repeat(v, 3)\n\n# Repetindo os elementos de um array\nnp.tile(v, 3)\n\n# Criando um array\nw = np.array([5, 6])\n\n# Concatenando\nnp.concatenate((v, w), axis=0)\n\n# Copiando arrays\nr = np.copy(v)\n\nr",
"Conheça a Formação Cientista de Dados, um programa completo, 100% online e 100% em português, com mais de 400 horas de carga horária, mais de 1.200 aulas em vídeos e dezenas de projetos, que vão ajudá-lo a se tornar um dos profissionais mais cobiçados do mercado de análise de dados. Clique no link abaixo, faça sua inscrição, comece hoje mesmo e aumente sua empregabilidade:\nhttps://www.datascienceacademy.com.br/bundle/formacao-cientista-de-dados\nFim\nObrigado\nVisite o Blog da Data Science Academy - <a href=\"http://blog.dsacademy.com.br\">Blog DSA</a>"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
harmsm/pythonic-science | chapters/01_simulation/00_random-sampling.ipynb | unlicense | [
"You are trying to measure a difference in the $K_{D}$ of two proteins binding to a ligand. From previous experiments, you know that the values of replicate measurements of $K_{D}$ follow a normal distribution with $\\sigma = 2\\ \\mu M$. How many measurements would you need to make to confidently tell the difference between two proteins with $K_{D} = 10 \\mu M$ and $K_{D} = 12 \\mu M$?\nGoals\n\nKnow how to use basic numpy.random functions to sample from distributions\nBegin to understand how to write a simulation to probe possible experimental outcomes\n\nCreate a new notebook with this cell at the top",
"%matplotlib inline\nimport numpy as np\nfrom matplotlib import pyplot as plt",
"Figure out how to use np.random.choice to simulate 1,000 tosses of a fair coin\nnp.random uses a \"pseudorandom\" number generator to simulate choices\n\nString of numbers that has the same statistical properties as random numbers\nNumbers are actually generated deterministically\n\nNumbers look random...",
"numbers = np.random.random(100000)\nplt.hist(numbers)",
"But numbers are actually deterministic...",
"def simple_psuedo_random(current_value,\n multiplier=13110243,\n divisor=13132):\n\n return current_value*multiplier % divisor \n\nseed = 10218888\n\nout = []\ncurrent = seed\nfor i in range(1000):\n current = simple_psuedo_random(current)\n out.append(current)\n\nplt.hist(out)",
"python uses the Mersenne Twister to generate pseudorandom numbers\n\nWhat does the seed do?",
"seed = 1021888\n\nout = []\ncurrent = seed\nfor i in range(1000):\n current = simple_psuedo_random(current)\n out.append(current)\n",
"What will we see if I run this cell twice in a row?",
"s1 = np.random.random(10)\nprint(s1)\n",
"What will we see if I run this cell twice in a row?",
"np.random.seed(5235412)\ns1 = np.random.random(10)\nprint(s1)\n",
"A seed lets you specify which pseudo-random numbers you will use.\n\nIf you use the same seed, you will get identical samples. \nIf you use a different seed, you will get wildly different samples.\n\nmatplotlib.pyplot.hist",
"numbers = np.random.normal(size=10000)\ncounts, bins, junk = plt.hist(numbers,\n range(-10,10))\n",
"Basic histogram plotting syntax\npython\nCOUNTS, BIN_EDGES, GRAPHICS_BIT = plt.hist(ARRAY_TO_BIN,BINS_TO_USE)\nFigure out how the function works and report back to the class\n\nWhat the function does\nArguments normal people would care about\nWhat it returns",
"np.random.normal\nnp.random.binomial\nnp.random.uniform\nnp.random.poisson\nnp.random.choice\nnp.random.shuffle",
"Calculate:\n\n1000 random samples from a normal distribution with a mean of 5 and a standard deviation of 2. \nCreate a histogram with a bin size of 1."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
arcyfelix/Courses | 18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb | apache-2.0 | [
"Time Series Exercise - Solutions\nFollow along with the instructions in bold. Watch the solutions video if you get stuck!\nThe Data\n Source: https://datamarket.com/data/set/22ox/monthly-milk-production-pounds-per-cow-jan-62-dec-75#!ds=22ox&display=line \nMonthly milk production: pounds per cow. Jan 62 - Dec 75\n Import numpy pandas and matplotlib",
"import numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Use pandas to read the csv of the monthly-milk-production.csv file and set index_col='Month'",
"milk = pd.read_csv('monthly-milk-production.csv',index_col='Month')",
"Check out the head of the dataframe",
"milk.head()",
"Make the index a time series by using: \nmilk.index = pd.to_datetime(milk.index)",
"milk.index = pd.to_datetime(milk.index)",
"Plot out the time series data.",
"milk.plot()",
"Train Test Split\n Let's attempt to predict a year's worth of data. (12 months or 12 steps into the future) \n Create a test train split using indexing (hint: use .head() or tail() or .iloc[]). We don't want a random train test split, we want to specify that the test set is the last 3 months of data is the test set, with everything before it is the training.",
"milk.info()\n\ntrain_set = milk.head(156)\n\ntest_set = milk.tail(12)",
"Scale the Data\n Use sklearn.preprocessing to scale the data using the MinMaxScaler. Remember to only fit_transform on the training data, then transform the test data. You shouldn't fit on the test data as well, otherwise you are assuming you would know about future behavior!",
"from sklearn.preprocessing import MinMaxScaler\n\nscaler = MinMaxScaler()\n\ntrain_scaled = scaler.fit_transform(train_set)\n\ntest_scaled = scaler.transform(test_set)",
"Batch Function\n We'll need a function that can feed batches of the training data. We'll need to do several things that are listed out as steps in the comments of the function. Remember to reference the previous batch method from the lecture for hints. Try to fill out the function template below, this is a pretty hard step, so feel free to reference the solutions!",
"def next_batch(training_data,batch_size,steps):\n \"\"\"\n INPUT: Data, Batch Size, Time Steps per batch\n OUTPUT: A tuple of y time series results. y[:,:-1] and y[:,1:]\n \"\"\"\n \n # STEP 1: Use np.random.randint to set a random starting point index for the batch.\n # Remember that each batch needs have the same number of steps in it.\n # This means you should limit the starting point to len(data)-steps\n \n # STEP 2: Now that you have a starting index you'll need to index the data from\n # the random start to random start + steps. Then reshape this data to be (1,steps)\n \n # STEP 3: Return the batches. You'll have two batches to return y[:,:-1] and y[:,1:]\n # You'll need to reshape these into tensors for the RNN. Depending on your indexing it\n # will be either .reshape(-1,steps-1,1) or .reshape(-1,steps,1)\n\ndef next_batch(training_data,batch_size,steps):\n \n \n # Grab a random starting point for each batch\n rand_start = np.random.randint(0,len(training_data)-steps) \n\n # Create Y data for time series in the batches\n y_batch = np.array(training_data[rand_start:rand_start+steps+1]).reshape(1,steps+1)\n\n return y_batch[:, :-1].reshape(-1, steps, 1), y_batch[:, 1:].reshape(-1, steps, 1) ",
"Setting Up The RNN Model\n Import TensorFlow",
"import tensorflow as tf",
"The Constants\n Define the constants in a single cell. You'll need the following (in parenthesis are the values I used in my solution, but you can play with some of these): \n* Number of Inputs (1)\n* Number of Time Steps (12)\n* Number of Neurons per Layer (100)\n* Number of Outputs (1)\n* Learning Rate (0.003)\n* Number of Iterations for Training (4000)\n* Batch Size (1)",
"# Just one feature, the time series\nnum_inputs = 1\n# Num of steps in each batch\nnum_time_steps = 12\n# 100 neuron layer, play with this\nnum_neurons = 100\n# Just one output, predicted time series\nnum_outputs = 1\n\n## You can also try increasing iterations, but decreasing learning rate\n# learning rate you can play with this\nlearning_rate = 0.03 \n# how many iterations to go through (training steps), you can play with this\nnum_train_iterations = 4000\n# Size of the batch of data\nbatch_size = 1",
"Create Placeholders for X and y. (You can change the variable names if you want). The shape for these placeholders should be [None,num_time_steps-1,num_inputs] and [None, num_time_steps-1, num_outputs] The reason we use num_time_steps-1 is because each of these will be one step shorter than the original time steps size, because we are training the RNN network to predict one point into the future based on the input sequence.",
"X = tf.placeholder(tf.float32, [None, num_time_steps, num_inputs])\ny = tf.placeholder(tf.float32, [None, num_time_steps, num_outputs])",
"Now create the RNN Layer, you have complete freedom over this, use tf.contrib.rnn and choose anything you want, OutputProjectionWrappers, BasicRNNCells, BasicLSTMCells, MultiRNNCell, GRUCell etc... Keep in mind not every combination will work well! (If in doubt, the solutions used an Outputprojection Wrapper around a basic LSTM cell with relu activation.",
"# Also play around with GRUCell\ncell = tf.contrib.rnn.OutputProjectionWrapper(\n tf.contrib.rnn.BasicLSTMCell(num_units=num_neurons, activation=tf.nn.relu),\n output_size=num_outputs) ",
"Now pass in the cells variable into tf.nn.dynamic_rnn, along with your first placeholder (X)",
"outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)",
"Loss Function and Optimizer\n Create a Mean Squared Error Loss Function and use it to minimize an AdamOptimizer, remember to pass in your learning rate.",
"loss = tf.reduce_mean(tf.square(outputs - y)) # MSE\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)\ntrain = optimizer.minimize(loss)",
"Initialize the global variables",
"init = tf.global_variables_initializer()",
"Create an instance of tf.train.Saver()",
"saver = tf.train.Saver()",
"Session\n Run a tf.Session that trains on the batches created by your next_batch function. Also add an a loss evaluation for every 100 training iterations. Remember to save your model after you are done training.",
"gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.9)\n\nwith tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:\n sess.run(init)\n \n for iteration in range(num_train_iterations):\n \n X_batch, y_batch = next_batch(train_scaled,batch_size,num_time_steps)\n sess.run(train, feed_dict={X: X_batch, y: y_batch})\n \n if iteration % 100 == 0:\n \n mse = loss.eval(feed_dict={X: X_batch, y: y_batch})\n print(iteration, \"\\tMSE:\", mse)\n \n # Save Model for Later\n saver.save(sess, \"./ex_time_series_model\")",
"Predicting Future (Test Data)\n Show the test_set (the last 12 months of your original complete data set)",
"test_set",
"Now we want to attempt to predict these 12 months of data, using only the training data we had. To do this we will feed in a seed training_instance of the last 12 months of the training_set of data to predict 12 months into the future. Then we will be able to compare our generated 12 months to our actual true historical values from the test set! \nGenerative Session\nNOTE: Recall that our model is really only trained to predict 1 time step ahead, asking it to generate 12 steps is a big ask, and technically not what it was trained to do! Think of this more as generating new values based off some previous pattern, rather than trying to directly predict the future. You would need to go back to the original model and train the model to predict 12 time steps ahead to really get a higher accuracy on the test data. (Which has its limits due to the smaller size of our data set)\n Fill out the session code below to generate 12 months of data based off the last 12 months of data from the training set. The hardest part about this is adjusting the arrays with their shapes and sizes. Reference the lecture for hints.",
"with tf.Session() as sess:\n \n # Use your Saver instance to restore your saved rnn time series model\n saver.restore(sess, \"./ex_time_series_model\")\n\n # Create a numpy array for your genreative seed from the last 12 months of the \n # training set data. Hint: Just use tail(12) and then pass it to an np.array\n train_seed = list(train_scaled[-12:])\n \n ## Now create a for loop that \n for iteration in range(12):\n X_batch = np.array(train_seed[-num_time_steps:]).reshape(1, num_time_steps, 1)\n y_pred = sess.run(outputs, feed_dict={X: X_batch})\n train_seed.append(y_pred[0, -1, 0])",
"Show the result of the predictions.",
"train_seed",
"Grab the portion of the results that are the generated values and apply inverse_transform on them to turn them back into milk production value units (lbs per cow). Also reshape the results to be (12,1) so we can easily add them to the test_set dataframe.",
"results = scaler.inverse_transform(np.array(train_seed[12:]).reshape(12,1))",
"Create a new column on the test_set called \"Generated\" and set it equal to the generated results. You may get a warning about this, feel free to ignore it.",
"test_set['Generated'] = results",
"View the test_set dataframe.",
"test_set",
"Plot out the two columns for comparison.",
"test_set.plot()",
"Great Job!\nPlay around with the parameters and RNN layers, does a faster learning rate with more steps improve the model? What about GRU or BasicRNN units? What if you train the original model to not just predict one timestep ahead into the future, but 3 instead? Lots of stuff to add on here!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
yl565/statsmodels | examples/notebooks/robust_models_0.ipynb | bsd-3-clause | [
"Robust Linear Models",
"%matplotlib inline\n\nfrom __future__ import print_function\nimport numpy as np\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nfrom statsmodels.sandbox.regression.predstd import wls_prediction_std",
"Estimation\nLoad data:",
"data = sm.datasets.stackloss.load()\ndata.exog = sm.add_constant(data.exog)",
"Huber's T norm with the (default) median absolute deviation scaling",
"huber_t = sm.RLM(data.endog, data.exog, M=sm.robust.norms.HuberT())\nhub_results = huber_t.fit()\nprint(hub_results.params)\nprint(hub_results.bse)\nprint(hub_results.summary(yname='y',\n xname=['var_%d' % i for i in range(len(hub_results.params))]))",
"Huber's T norm with 'H2' covariance matrix",
"hub_results2 = huber_t.fit(cov=\"H2\")\nprint(hub_results2.params)\nprint(hub_results2.bse)",
"Andrew's Wave norm with Huber's Proposal 2 scaling and 'H3' covariance matrix",
"andrew_mod = sm.RLM(data.endog, data.exog, M=sm.robust.norms.AndrewWave())\nandrew_results = andrew_mod.fit(scale_est=sm.robust.scale.HuberScale(), cov=\"H3\")\nprint('Parameters: ', andrew_results.params)",
"See help(sm.RLM.fit) for more options and module sm.robust.scale for scale options\nComparing OLS and RLM\nArtificial data with outliers:",
"nsample = 50\nx1 = np.linspace(0, 20, nsample)\nX = np.column_stack((x1, (x1-5)**2))\nX = sm.add_constant(X)\nsig = 0.3 # smaller error variance makes OLS<->RLM contrast bigger\nbeta = [5, 0.5, -0.0]\ny_true2 = np.dot(X, beta)\ny2 = y_true2 + sig*1. * np.random.normal(size=nsample)\ny2[[39,41,43,45,48]] -= 5 # add some outliers (10% of nsample)",
"Example 1: quadratic function with linear truth\nNote that the quadratic term in OLS regression will capture outlier effects.",
"res = sm.OLS(y2, X).fit()\nprint(res.params)\nprint(res.bse)\nprint(res.predict())",
"Estimate RLM:",
"resrlm = sm.RLM(y2, X).fit()\nprint(resrlm.params)\nprint(resrlm.bse)",
"Draw a plot to compare OLS estimates to the robust estimates:",
"fig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111)\nax.plot(x1, y2, 'o',label=\"data\")\nax.plot(x1, y_true2, 'b-', label=\"True\")\nprstd, iv_l, iv_u = wls_prediction_std(res)\nax.plot(x1, res.fittedvalues, 'r-', label=\"OLS\")\nax.plot(x1, iv_u, 'r--')\nax.plot(x1, iv_l, 'r--')\nax.plot(x1, resrlm.fittedvalues, 'g.-', label=\"RLM\")\nax.legend(loc=\"best\")",
"Example 2: linear function with linear truth\nFit a new OLS model using only the linear term and the constant:",
"X2 = X[:,[0,1]] \nres2 = sm.OLS(y2, X2).fit()\nprint(res2.params)\nprint(res2.bse)",
"Estimate RLM:",
"resrlm2 = sm.RLM(y2, X2).fit()\nprint(resrlm2.params)\nprint(resrlm2.bse)",
"Draw a plot to compare OLS estimates to the robust estimates:",
"prstd, iv_l, iv_u = wls_prediction_std(res2)\n\nfig, ax = plt.subplots(figsize=(8,6))\nax.plot(x1, y2, 'o', label=\"data\")\nax.plot(x1, y_true2, 'b-', label=\"True\")\nax.plot(x1, res2.fittedvalues, 'r-', label=\"OLS\")\nax.plot(x1, iv_u, 'r--')\nax.plot(x1, iv_l, 'r--')\nax.plot(x1, resrlm2.fittedvalues, 'g.-', label=\"RLM\")\nlegend = ax.legend(loc=\"best\")"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fonnesbeck/scientific-python-workshop | notebooks/Scikit Learn.ipynb | cc0-1.0 | [
"Introduction to Scikit-learn\nThe scikit-learn package is an open-source library that provides a robust set of machine learning algorithms for Python. It is built upon the core Python scientific stack (i.e. NumPy, SciPy, Cython), and has a simple, consistent API, making it useful for a wide range of statistical learning applications.\n<img src=\"http://1.bp.blogspot.com/-ME24ePzpzIM/UQLWTwurfXI/AAAAAAAAANw/W3EETIroA80/s1600/drop_shadows_background.png\" width=\"800px\"/>\nWhat is Machine Learning?\nMachine Learning (ML) is about coding programs that automatically adjust their performance from exposure to information encoded in data. This learning is achieved via tunable parameters that are automatically adjusted according to performance criteria.\nMachine Learning can be considered a subfield of Artificial Intelligence (AI).\nThere are three major classes of ML:\nSupervised learning\n: Algorithms which learn from a training set of labeled examples (exemplars) to generalize to the set of all possible inputs. Examples of supervised learning include regression and support vector machines.\nUnsupervised learning\n: Algorithms which learn from a training set of unlableled examples, using the features of the inputs to categorize inputs together according to some statistical criteria. Examples of unsupervised learning include k-means clustering and kernel density estimation.\nReinforcement learning\n: Algorithms that learn via reinforcement from a critic that provides information on the quality of a solution, but not on how to improve it. Improved solutions are achieved by iteratively exploring the solution space. We will not cover RL in this workshop.\nRepresenting Data in scikit-learn\nMost machine learning algorithms implemented in scikit-learn expect data to be stored in a\ntwo-dimensional array or matrix. The arrays can be\neither numpy arrays, or in some cases scipy.sparse matrices.\nThe size of the array is expected to be [n_samples, n_features]\n\nn_samples: The number of samples: each sample is an item to process (e.g. classify).\n A sample can be a document, a picture, a sound, a video, an astronomical object,\n a row in database or CSV file,\n or whatever you can describe with a fixed set of quantitative traits.\nn_features: The number of features or distinct traits that can be used to describe each\n item in a quantitative manner. Features are generally real-valued, but may be boolean or\n discrete-valued in some cases.\n\nThe number of features must be fixed in advance. However it can be very high dimensional\n(e.g. millions of features) with most of them being zeros for a given sample. This is a case\nwhere scipy.sparse matrices can be useful, in that they are\nmuch more memory-efficient than numpy arrays.\nExample: Iris morphometrics\nOne of the datasets included with scikit-learn is a set of measurements for flowers, each being a member of one of three species: Iris Setosa, Iris Versicolor or Iris Virginica. \n<img src=\"images/blueflagiris_flower_lg.jpg\" width=\"400px\"/>",
"from sklearn.datasets import load_iris\niris = load_iris()\n\niris.keys()\n\nn_samples, n_features = iris.data.shape\nn_samples, n_features\n\niris.data[0]",
"The information about the class of each sample is stored in the target attribute of the dataset:",
"iris.target\n\niris.target_names\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('fivethirtyeight')\n\nx_index = 3\ny_index = 2\n\n# this formatter will label the colorbar with the correct target names\nformatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])\n\nplt.scatter(iris.data[:, x_index], iris.data[:, y_index], c=iris.target)\nplt.colorbar(ticks=[0, 1, 2], format=formatter)\nplt.xlabel(iris.feature_names[x_index])\nplt.ylabel(iris.feature_names[y_index])\n\nfrom sklearn.decomposition import PCA\n\npca = PCA(n_components=2, whiten=True).fit(iris.data)\nX_pca = pca.transform(iris.data)\n\nplt.scatter(X_pca[:, 0], X_pca[:, 1], c=iris.target)\nplt.colorbar(ticks=[0, 1, 2], format=formatter)\nvar_explained = pca.explained_variance_ratio_ * 100\nplt.xlabel('First Component: {0:.1f}%'.format(var_explained[0]))\nplt.ylabel('Second Component: {0:.1f}%'.format(var_explained[1]))",
"scikit-learn interface\nAll objects within scikit-learn share a uniform common basic API consisting of three complementary interfaces: \n\nestimator interface for building and fitting models\npredictor interface for making predictions\ntransformer interface for converting data.\n\nThe estimator interface is at the core of the library. It defines instantiation mechanisms of objects and exposes a fit method for learning a model from training data. All supervised and unsupervised learning algorithms (e.g., for classification, regression or clustering) are offered as objects implementing this interface. Machine learning tasks like feature extraction, feature selection or dimensionality reduction are also provided as estimators.\nScikit-learn strives to have a uniform interface across all methods. For example, a typical estimator follows this template:",
"class Estimator(object):\n \n def fit(self, X, y=None):\n \"\"\"Fit model to data X (and y)\"\"\"\n self.some_attribute = self.some_fitting_method(X, y)\n return self\n \n def predict(self, X_test):\n \"\"\"Make prediction based on passed features\"\"\"\n pred = self.make_prediction(X_test)\n return pred",
"For a given scikit-learn estimator object named model, several methods are available. Irrespective of the type of estimator, there will be a fit method:\n\nmodel.fit : fit training data. For supervised learning applications, this accepts two arguments: the data X and the labels y (e.g. model.fit(X, y)). For unsupervised learning applications, this accepts only a single argument, the data X (e.g. model.fit(X)).\n\n\nDuring the fitting process, the state of the estimator is stored in attributes of the estimator instance named with a trailing underscore character (_). For example, the sequence of regression trees sklearn.tree.DecisionTreeRegressor is stored in estimators_ attribute.\n\nThe predictor interface extends the notion of an estimator by adding a predict method that takes an array X_test and produces predictions based on the learned parameters of the estimator. In the case of supervised learning estimators, this method typically returns the predicted labels or values computed by the model. Some unsupervised learning estimators may also implement the predict interface, such as k-means, where the predicted values are the cluster labels.\nsupervised estimators are expected to have the following methods:\n\nmodel.predict : given a trained model, predict the label of a new set of data. This method accepts one argument, the new data X_new (e.g. model.predict(X_new)), and returns the learned label for each object in the array.\nmodel.predict_proba : For classification problems, some estimators also provide this method, which returns the probability that a new observation has each categorical label. In this case, the label with the highest probability is returned by model.predict().\nmodel.score : for classification or regression problems, most (all?) estimators implement a score method. Scores are between 0 and 1, with a larger score indicating a better fit.\n\nSince it is common to modify or filter data before feeding it to a learning algorithm, some estimators in the library implement a transformer interface which defines a transform method. It takes as input some new data X_test and yields as output a transformed version. Preprocessing, feature selection, feature extraction and dimensionality reduction algorithms are all provided as transformers within the library.\nunsupervised estimators will always have these methods:\n\nmodel.transform : given an unsupervised model, transform new data into the new basis. This also accepts one argument X_new, and returns the new representation of the data based on the unsupervised model.\nmodel.fit_transform : some estimators implement this method, which more efficiently performs a fit and a transform on the same input data.\n\nRegression Analysis\nTo demonstrate how scikit-learn is used, let's conduct a logistic regression analysis on a dataset for very low birth weight (VLBW) infants.\nData on 671 infants with very low (less than 1600 grams) birth weight from 1981-87 were collected at Duke University Medical Center by OShea et al. (1992). Of interest is the relationship between the outcome intra-ventricular hemorrhage and the predictors birth weight, gestational age, presence of pneumothorax, mode of delivery, single vs. multiple birth, and whether the birth occurred at Duke or at another hospital with later transfer to Duke. A secular trend in the outcome is also of interest.\nThe metadata for this dataset can be found here.",
"import pandas as pd\n\nvlbw = pd.read_csv(\"../data/vlbw.csv\", index_col=0)\n\nsubset = vlbw[['ivh', 'gest', 'bwt', 'delivery', 'inout', \n 'pltct', 'lowph', 'pneumo', 'twn', 'apg1']].dropna()\n\n# Extract response variable\ny = subset.ivh.replace({'absent':0, 'possible':1, 'definite':1})\n\n# Standardize some variables\nX = subset[['gest', 'bwt', 'pltct', 'lowph']]\nX0 = (X - X.mean(axis=0)) / X.std(axis=0)\n\n# Recode some variables\nX0['csection'] = subset.delivery.replace({'vaginal':0, 'abdominal':1})\nX0['transported'] = subset.inout.replace({'born at Duke':0, 'transported':1})\nX0[['pneumo', 'twn', 'apg1']] = subset[['pneumo', 'twn','apg1']]\nX0.head()",
"We split the data into a training set and a testing set. By default, 25% of the data is reserved for testing. This is the first of multiple ways that we will see to do this.",
"from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X0, y)",
"The LogisticRegression model in scikit-learn employs a regularization coefficient C, which defaults to 1. The amount of regularization is lower with larger values of C.\nRegularization penalizes the values of regression coefficients, while smaller ones let the coefficients range widely. Scikit-learn includes two penalties: a l2 penalty which penalizes the sum of the squares of the coefficients (the default), and a l1 penalty which penalizes the sum of the absolute values.\nThe reason for doing regularization is to let us to include more covariates than our data might otherwise allow. We only have a few coefficients, so we will set C to a large value.",
"from sklearn.linear_model import LogisticRegression\n\nlrmod = LogisticRegression(C=1000)\nlrmod.fit(X_train, y_train)\n\npred_train = lrmod.predict(X_train)\npred_test = lrmod.predict(X_test)\n\npd.crosstab(y_train, pred_train, \n rownames=[\"Actual\"], colnames=[\"Predicted\"])\n\npd.crosstab(y_test, pred_test, \n rownames=[\"Actual\"], colnames=[\"Predicted\"])\n\nfor name, value in zip(X0.columns, lrmod.coef_[0]):\n print('{0}:\\t{1:.2f}'.format(name, value))",
"We can bootstrap some confidence intervals:",
"import numpy as np\n\nn = 1000\nboot_samples = np.empty((n, len(lrmod.coef_[0])))\n\nfor i in np.arange(n):\n boot_ind = np.random.randint(0, len(X0), len(X0))\n y_i, X_i = y.values[boot_ind], X0.values[boot_ind]\n \n lrmod_i = LogisticRegression(C=1000)\n lrmod_i.fit(X_i, y_i)\n\n boot_samples[i] = lrmod_i.coef_[0]\n\nboot_samples.sort(axis=0)\n\nboot_se = boot_samples[[25, 975], :].T\n\ncoefs = lrmod.coef_[0]\nplt.plot(coefs, 'r.')\nfor i in range(len(coefs)):\n plt.errorbar(x=[i,i], y=boot_se[i], color='red')\nplt.xlim(-0.5, 8.5)\nplt.xticks(range(len(coefs)), X0.columns.values, rotation=45)\nplt.axhline(0, color='k', linestyle='--')",
"References\n\nscikit-learn user's guide"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
gtrichards/PHYS_T480 | NeuralNetworks.ipynb | mit | [
"Neural Networks\nG. Richards (2016), where I found this video series particularly helpful in trying to simplify the explanation https://www.youtube.com/watch?v=bxe2T-V8XRs.\nUpdates from V. Baker, leaning heavily on Thomas Trappenberg's Fundamentals of Computational Neuroscience(https://web.cs.dal.ca/~tt/fundamentals/)\nThe field of computational neuroscience studies the functions of the brain in the context of information processing capabilities. Efforts can be roughly dividied into either the (fascinating, but not (yet!) all that useful) exploration of actual brain functions and the (useful, but less interesting) simplified models that can be used for practical data analysis. \nNeural Networks are designed with the idea of mimicking the function of real neural networks in the brain.\nIn the image below the circles on the left represent the attributes of our input data, $X$, which here is 3 dimensional. The circles in the middle represent the neurons. They take in the information from the input and, based on some criterion decide whether or not to \"fire\". The collective results of the neurons in the hidden layer produce the output, $y$, which is represented by the circles on the right, which here is 2 dimensional result. The lines connecting the circles represent the synapses. This is a simple example with just one layer of neurons; however, there can be many layers of neurons.\n\nIn more detail:\nThe job of a synapses is to take input values and multiply them by some weight before passing them to the neuron (hidden layer):\n$$z = \\sum_i w x_i$$\nThe neuron then sums up the inputs from all of the synapses connected to it and applies an \"activation function\". For example a sigmoid activation function.\n$$a = \\frac{1}{1+e^{-z}}.$$\n\nWhat the neural network does is to learn the weights of the synapses that are needed to produce an accurate model of $y_{\\rm train}$.\nRather than think about the inputs individually, we can write this process in matrix form as\n$$X W^{(1)} = Z^{(2)}.$$\nIf $D$ is the number of attributes (here 3) and $H$ is the number of neurons in the hidden layer (here 4), then $X$ is an $N\\times D$ matrix, while $W^{(1)}$ is a $D\\times H$ matrix. The result, $Z^{(2)}$, is then an $N\\times H$ matrix.\nWe then apply the activation function to each entry of $Z^{(2)}$ independently: \n$$A^{(2)} = f(Z^{(2)}),$$\nwhere $A^{(2)}$ is the output of the neurons in the hidden layer and is also $N\\times H$.\nThese values are then the inputs for the next set of synapses, where we multiply the inputs by another set of weights, $W^{(2)}:$\n$$A^{(2)} W^{(2)} = Z^{(3)},$$\nwhere $W^{(2)}$ is an $H\\times O$ matrix and $Z^{(3)}$ is an $N\\times O$ matrix with $O$-dimensional output.\nAnother activation function is then applied to $Z^{(3)}$ to give\n$$\\hat{y} = f(Z^{(3)}),$$\nwhich is our estimator of $y$.\nFor example we might have $N=100$ people for which we have measured \n* shoe size\n* belt size\n* hat size\nfor whom we know their height and weight. \nThen we are going to use this to predict the height and weight for people where we only know shoe size, belt size, and hat size.\nThe neural network then essentially boils down to determining the weights, which are usually initialized randomly.\nWe do that by minimizing the cost function (which compares the true values of $y$ to our predicted values). Typically:\n$$ {\\rm Cost} = J = \\sum\\frac{1}{2}(y - \\hat{y})^2.$$\nIf we just had 1 weight and we wanted to check 1000 possible values, that wouldn't be so bad. 
But we have 20 weights, which means checking $20^{1000}$ possible combinations. Remember the curse of dimensionality? That might take a while. Indeed, far, far longer than the age of the Universe.\nHow about just checking 3 points for each weight and see if we can at least figure out which way is \"down hill\"? That's a start.\nBut we could just as easily rewrite $J$ as\n$$ J = \\sum\\frac{1}{2}\\left(y - f\\left( f(X W^{(1)}) W^{(2)} \\right) \\right)^2$$\nand then compute\n$$\\frac{\\partial J}{\\partial W}$$\nin order to determine the slope of the cost function for each weight. This is the gradient descent method.\nWe'll want $\\partial J/\\partial W^{(1)}$ and $\\partial J/\\partial W^{(2)}$ separately. This allows us to backpropagate the error contributions along each neuron and to change the weights where they most need to be changed. It is like each observations gets a vote on which way is \"down hill\". We compute the vector sum to decide the ultimate down hill direction.\nOnce we know the down hill direction from the derivative, we update the weights by subtracting a scalar times that derivative from the original weights. That's obviously much faster than randomly sampling all the possible combinations of weights. Once the weights are set, then you have your Neural Network classifier/regressor.\n\nScikit-Learn has both unsupervised Neural Network and supervised Neural Network examples. Apparently these are new as Jake VanderPlas didn't know about them.\nLet's try to use the supervised regression algorithm on the Boston House Price dataset.",
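"To make the matrix notation above concrete, here is a minimal NumPy sketch of the forward pass, the cost $J$, and one backpropagation / gradient-descent update for a small single-hidden-layer network. These two cells are added purely for illustration (they are not part of the original notebook); the layer sizes and the random \"data\" are made-up placeholders.",
"# Illustrative sketch only (not part of the original notebook): a tiny\n# single-hidden-layer network in NumPy. N, D, H, O and the random data are placeholders.\nimport numpy as np\n\nnp.random.seed(0)\nN, D, H, O = 100, 3, 4, 2                  # samples, inputs, hidden neurons, outputs\nX = np.random.rand(N, D)                   # stand-in for shoe/belt/hat sizes\ny = np.random.rand(N, O)                   # stand-in for height/weight\n\ndef sigmoid(z):\n    return 1.0 / (1.0 + np.exp(-z))\n\nW1 = np.random.randn(D, H)                 # weights of the first set of synapses\nW2 = np.random.randn(H, O)                 # weights of the second set of synapses\n\n# Forward propagation, following the matrix notation above\nZ2 = X.dot(W1)\nA2 = sigmoid(Z2)\nZ3 = A2.dot(W2)\nyhat = sigmoid(Z3)\nprint(\"Initial cost:\", 0.5 * np.sum((y - yhat)**2))\n\n# Backpropagation: the chain rule gives dJ/dW2 and dJ/dW1\ndelta3 = -(y - yhat) * yhat * (1 - yhat)   # dJ/dZ3 for a sigmoid output layer\ndJdW2 = A2.T.dot(delta3)\ndelta2 = delta3.dot(W2.T) * A2 * (1 - A2)  # dJ/dZ2\ndJdW1 = X.T.dot(delta2)\n\n# One small gradient-descent step \"down hill\"\nlr = 0.001\nW1 -= lr * dJdW1\nW2 -= lr * dJdW2\n\nyhat = sigmoid(sigmoid(X.dot(W1)).dot(W2))\nprint(\"Cost after one step:\", 0.5 * np.sum((y - yhat)**2))",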
"%matplotlib inline\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom sklearn.datasets import load_boston\nfrom sklearn.model_selection import train_test_split\n\nboston = load_boston()\n#print boston.DESCR\n\nX = boston.data\ny = boston.target\n\nXtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.25, random_state=42) # Complete\n\nfrom sklearn.neural_network import MLPRegressor\nclf = MLPRegressor(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)\nclf.fit(Xtrain, ytrain)\n\n# Look at the weights\nprint [coef.shape for coef in clf.coefs_]\n\nypred = clf.predict(Xtest)\n#print ypred, ytest\n\nfig = plt.figure(figsize=(6, 6))\nplt.scatter(ytest,ypred)\nplt.xlabel(\"Actual Value [x$1000]\")\nplt.ylabel(\"Predicted Value [x$1000]\")\nplt.show()",
"Of course, that only predicts the value for a fraction of the data set. I don't think that I have made it entirely clear how to use cross-validation to get a prediction for the full training set, so let's do that now. We'll use Scikit-Learn's cross_val_predict.",
"from sklearn.model_selection import cross_val_predict\n\nyCVpred = cross_val_predict(clf, X, y, cv=5) # Complete\n\nfig = plt.figure(figsize=(6, 6))\nplt.scatter(y,yCVpred)\nplt.xlabel(\"Actual Value [x$1000]\")\nplt.ylabel(\"Predicted Value [x$1000]\")\nplt.show()",
"Let's try to use the multi-layer perceptron classifier on the digits data set. We will use a single hidden layer to keep the training time reasonable.",
"%matplotlib inline\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom sklearn import datasets, cross_validation, neural_network, svm, metrics\nfrom sklearn.neural_network import MLPClassifier \n\ndigits = datasets.load_digits()\nimages_and_labels = list(zip(digits.images, digits.target))\n \nfor index, (image, label) in enumerate(images_and_labels[:4]):\n plt.subplot(2, 4, index + 1)\n plt.axis('off')\n plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')\n plt.title('Training: %i' % label)\nplt.show()\n \n# To apply a classifier on this data, we need to flatten the image, to\n# turn the data in a (samples, feature) matrix:\nn_samples = len(digits.images)\ndata = digits.images.reshape((n_samples, -1))\n\n# Create a classifier: a support vector classifier\nclassifier = MLPClassifier(solver='lbfgs', alpha=1e-5, random_state=0, hidden_layer_sizes=(15,) )\n\n# We learn the digits on the first half of the digits\nclassifier.fit(data[:n_samples / 2], digits.target[:n_samples / 2])\nprint(\"Training set score: %f\" % classifier.score(data[n_samples / 2:], digits.target[n_samples / 2:]))\n\n# Now predict the value of the digit on the second half:\nexpected = digits.target[n_samples / 2:]\npredicted = classifier.predict(data[n_samples / 2:])\n\nprint(\"Classification report for classifier %s:\\n%s\\n\"\n % (classifier, metrics.classification_report(expected, predicted)))\nprint(\"Confusion matrix:\\n%s\" % metrics.confusion_matrix(expected, predicted))",
"This looks pretty good! In general increasing the size of the hidden layer will improve performance at the cost of longer training time. Now try training networks with a hidden layer size of 5 to 20. At what point does performance stop improving?",
"from sklearn.model_selection import cross_val_score\n\nhidden_size = np.arange(5,20)\nscores = np.array([])\nfor sz in hidden_size:\n classifier = MLPClassifier(solver='lbfgs', alpha=1e-5, random_state=0, hidden_layer_sizes=(sz,) )\n #classifier.fit(data[:n_samples / 2], digits.target[:n_samples / 2])\n scores = np.append(scores, np.mean(cross_val_score(classifier, data, digits.target, cv=5)))\n \n#plt.plot(hidden_size,scores)\nfig = plt.figure()\nax = plt.gca()\nax.plot(hidden_size,scores,'x-')\nplt.show()\n",
"Our basic perceptron can do a pretty good job recognizing handwritten digits, assuming the digits are all centered in an 8x8 image. What happens if we embed the digit images at random locations within a 32x32 image? Try increasing the size of the hidden layer and see if we can improve the performance.",
"%matplotlib inline\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom sklearn import datasets, cross_validation, neural_network, svm, metrics\nfrom sklearn.neural_network import MLPClassifier \n\ndigits = datasets.load_digits()\nresize = 32 #Size of larger image to embed the digits\nimages_ex = np.zeros((digits.target.size,resize,resize))\nfor index, image in enumerate(digits.images):\n offrow = np.random.randint(low=0,high=resize-8,size=1)\n offcol = np.random.randint(low=0,high=resize-8,size=1)\n images_ex[index,offrow:offrow+8,offcol:offcol+8] = digits.images[index,:,:]\n \nfor jj in range(1,4):\n fig = plt.figure()\n ax1 = fig.add_subplot(1,2,2)\n ax1.imshow(images_ex[jj,:,:],aspect='auto',origin='lower',cmap=plt.cm.gray_r, interpolation='nearest')\n ax2 = fig.add_subplot(1,2,1)\n ax2.imshow(digits.images[jj,:,:],aspect='auto',origin='lower',cmap=plt.cm.gray_r, interpolation='nearest')\n plt.title(digits.target[jj])\nplt.show()\n \n# To apply a classifier on this data, we need to flatten the image, to\n# turn the data in a (samples, feature) matrix:\nn_samples = len(digits.images)\ndata_ex = images_ex.reshape((n_samples,-1))\n\n# Create a classifier: Multi-layer perceptron\nclassifier = MLPClassifier(solver='lbfgs', alpha=1e-5, random_state=0, hidden_layer_sizes=(64,) )\n\nclassifier.fit(data_ex[:n_samples / 2], digits.target[:n_samples / 2])\n\n# Now predict the value of the digit on the second half:\nexpected = digits.target[n_samples / 2:]\npredicted = classifier.predict(data_ex[n_samples / 2:])\n\nprint(\"Classification report for classifier %s:\\n%s\\n\"\n % (classifier, metrics.classification_report(expected, predicted)))\nprint(\"Confusion matrix:\\n%s\" % metrics.confusion_matrix(expected, predicted))",
"Well that fell apart quickly! We're at roughly the point where neural networks faded from popularity in the 90s. Perceptrons generated intense interest because they were biologically inspired and could be applied generically to any supervised learning problem. However they weren't extensible to more realistic problems, and for supervised learning there were techniques such as support vector machines that provided better performance and avoided the explosion in training time seen for large perceptrons.\nRecent interest in neural networks surged in 2012 when a team using a deep convolutional neural network aceived record results classifying objects in the ImageNet data set. Some examples of the types of classification performed on the dataset are shown below.\nThis is clearly much more sophisticated than our basic perceptron. \"Deep\" networks consist of tens of layers with thousands of neurons. These large networks have become usabel thanks to two breakthroughs: the use of sparse layers and the power of graphics processing units (GPUs).\nMany image processing tasks involve convolving an image with a 2-dimensional kernel as shown below.\n\nThe sparse layers or convolutional layers in a deep network contain a large number of hidden nodes but very few synapses. The sparseness arises from the relatively small size of a typical convolution kernel (15x15 is a large kernel), so a hidden node representing one output of the convolution is connected to only a few input nodes. Compare this the our previous perceptron, in which every hidden node was connected to every input node.\nEven though the total number of connections is greatly reduced in the sparse layers, the total number of nodes and connections in a modern deep network is still enormous. Luckily, training these networks turns out to be a great task for GPU acceleration! Serious work using neural networks is almost always done usign specialized GPU-accelerated platforms.\nThe Keras framework provides a Python environment for CNN development. Keras uses the TensorFlow module for backend processing.\nInstalling Keras is simple with pip: pip install tensorflow pip install keras",
"from keras.models import Sequential\nfrom keras.layers import Dense, Activation, Dropout, Flatten\nfrom keras.layers import Convolution2D, MaxPooling2D\nfrom keras.utils import np_utils\n\n#Create a model\nmodel = Sequential()\n\n#Use two sparse layers to learn useful, translation-invariant features\nmodel.add(Convolution2D(32,7,7,border_mode='valid',input_shape=(32,32,1)))\nmodel.add(Activation('relu'))\nmodel.add(Convolution2D(32,5,5))\nmodel.add(Activation('relu'))\nmodel.add(MaxPooling2D(pool_size=(2,2)))\nmodel.add(Dropout(0.25))\nmodel.add(Flatten())\n\n#Add dense layers to do the actual classification\nmodel.add(Dense(128))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(10))\nmodel.add(Activation('softmax'))\n\n\nmodel.compile(loss='categorical_crossentropy', optimizer='rmsprop',metrics=['accuracy'])\nmodel.summary()\n\n#Keras has some particular requirements for data formats...\ndataX = images_ex.reshape(images_ex.shape[0],images_ex.shape[1],images_ex.shape[2],1)\ndataY = np_utils.to_categorical(digits.target)\n\n#Train the model. We get a summary of performance after each training epoch\nmodel.fit(dataX, dataY, validation_split=0.1, batch_size=128, nb_epoch=10)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
diegocavalca/Studies | deep-learnining-specialization/1. neural nets and deep learning/resources/Deep Neural Network - Application v3.ipynb | cc0-1.0 | [
"Deep Neural Network for Image Classification: Application\nWhen you finish this, you will have finished the last programming assignment of Week 4, and also the last programming assignment of this course! \nYou will use use the functions you'd implemented in the previous assignment to build a deep network, and apply it to cat vs non-cat classification. Hopefully, you will see an improvement in accuracy relative to your previous logistic regression implementation. \nAfter this assignment you will be able to:\n- Build and apply a deep neural network to supervised learning. \nLet's get started!\n1 - Packages\nLet's first import all the packages that you will need during this assignment. \n- numpy is the fundamental package for scientific computing with Python.\n- matplotlib is a library to plot graphs in Python.\n- h5py is a common package to interact with a dataset that is stored on an H5 file.\n- PIL and scipy are used here to test your model with your own picture at the end.\n- dnn_app_utils provides the functions implemented in the \"Building your Deep Neural Network: Step by Step\" assignment to this notebook.\n- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.",
"import time\nimport numpy as np\nimport h5py\nimport matplotlib.pyplot as plt\nimport scipy\nfrom PIL import Image\nfrom scipy import ndimage\nfrom dnn_app_utils_v2 import *\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n%load_ext autoreload\n%autoreload 2\n\nnp.random.seed(1)",
"2 - Dataset\nYou will use the same \"Cat vs non-Cat\" dataset as in \"Logistic Regression as a Neural Network\" (Assignment 2). The model you had built had 70% test accuracy on classifying cats vs non-cats images. Hopefully, your new model will perform a better!\nProblem Statement: You are given a dataset (\"data.h5\") containing:\n - a training set of m_train images labelled as cat (1) or non-cat (0)\n - a test set of m_test images labelled as cat and non-cat\n - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB).\nLet's get more familiar with the dataset. Load the data by running the cell below.",
"train_x_orig, train_y, test_x_orig, test_y, classes = load_data()",
"The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.",
"# Example of a picture\nindex = 73\nplt.imshow(train_x_orig[index])\nprint (\"y = \" + str(train_y[0,index]) + \". It's a \" + classes[train_y[0,index]].decode(\"utf-8\") + \" picture.\")\n\n# Explore your dataset \nm_train = train_x_orig.shape[0]\nnum_px = train_x_orig.shape[1]\nm_test = test_x_orig.shape[0]\n\nprint (\"Number of training examples: \" + str(m_train))\nprint (\"Number of testing examples: \" + str(m_test))\nprint (\"Each image is of size: (\" + str(num_px) + \", \" + str(num_px) + \", 3)\")\nprint (\"train_x_orig shape: \" + str(train_x_orig.shape))\nprint (\"train_y shape: \" + str(train_y.shape))\nprint (\"test_x_orig shape: \" + str(test_x_orig.shape))\nprint (\"test_y shape: \" + str(test_y.shape))",
"As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.\n<img src=\"images/imvectorkiank.png\" style=\"width:450px;height:300px;\">\n<caption><center> <u>Figure 1</u>: Image to vector conversion. <br> </center></caption>",
"# Reshape the training and test examples \ntrain_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The \"-1\" makes reshape flatten the remaining dimensions\ntest_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T\n\n# Standardize data to have feature values between 0 and 1.\ntrain_x = train_x_flatten/255.\ntest_x = test_x_flatten/255.\n\nprint (\"train_x's shape: \" + str(train_x.shape))\nprint (\"test_x's shape: \" + str(test_x.shape))\n",
"$12,288$ equals $64 \\times 64 \\times 3$ which is the size of one reshaped image vector.\n3 - Architecture of your model\nNow that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.\nYou will build two different models:\n- A 2-layer neural network\n- An L-layer deep neural network\nYou will then compare the performance of these models, and also try out different values for $L$. \nLet's look at the two architectures.\n3.1 - 2-layer neural network\n<img src=\"images/2layerNN_kiank.png\" style=\"width:650px;height:400px;\">\n<caption><center> <u>Figure 2</u>: 2-layer neural network. <br> The model can be summarized as: INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT. </center></caption>\n<u>Detailed Architecture of figure 2</u>:\n- The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$. \n- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.\n- You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]},..., a_{n^{[1]}-1}^{[1]}]^T$.\n- You then repeat the same process.\n- You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias). \n- Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it to be a cat.\n3.2 - L-layer deep neural network\nIt is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation:\n<img src=\"images/LlayerNN_kiank.png\" style=\"width:650px;height:400px;\">\n<caption><center> <u>Figure 3</u>: L-layer neural network. <br> The model can be summarized as: [LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID</center></caption>\n<u>Detailed Architecture of figure 3</u>:\n- The input is a (64,64,3) image which is flattened to a vector of size (12288,1).\n- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit.\n- Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture.\n- Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it to be a cat.\n3.3 - General methodology\nAs usual you will follow the Deep Learning methodology to build the model:\n 1. Initialize parameters / Define hyperparameters\n 2. Loop for num_iterations:\n a. Forward propagation\n b. Compute cost function\n c. Backward propagation\n d. Update parameters (using parameters, and grads from backprop) \n 4. Use trained parameters to predict labels\nLet's now implement those two models!\n4 - Two-layer neural network\nQuestion: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: LINEAR -> RELU -> LINEAR -> SIGMOID. The functions you may need and their inputs are:\npython\ndef initialize_parameters(n_x, n_h, n_y):\n ...\n return parameters \ndef linear_activation_forward(A_prev, W, b, activation):\n ...\n return A, cache\ndef compute_cost(AL, Y):\n ...\n return cost\ndef linear_activation_backward(dA, cache, activation):\n ...\n return dA_prev, dW, db\ndef update_parameters(parameters, grads, learning_rate):\n ...\n return parameters",
"### CONSTANTS DEFINING THE MODEL ####\nn_x = 12288 # num_px * num_px * 3\nn_h = 7\nn_y = 1\nlayers_dims = (n_x, n_h, n_y)\n\n# GRADED FUNCTION: two_layer_model\n\ndef two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):\n \"\"\"\n Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.\n \n Arguments:\n X -- input data, of shape (n_x, number of examples)\n Y -- true \"label\" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)\n layers_dims -- dimensions of the layers (n_x, n_h, n_y)\n num_iterations -- number of iterations of the optimization loop\n learning_rate -- learning rate of the gradient descent update rule\n print_cost -- If set to True, this will print the cost every 100 iterations \n \n Returns:\n parameters -- a dictionary containing W1, W2, b1, and b2\n \"\"\"\n \n np.random.seed(1)\n grads = {}\n costs = [] # to keep track of the cost\n m = X.shape[1] # number of examples\n (n_x, n_h, n_y) = layers_dims\n \n # Initialize parameters dictionary, by calling one of the functions you'd previously implemented\n ### START CODE HERE ### (≈ 1 line of code)\n parameters = initialize_parameters(n_x, n_h, n_y)\n ### END CODE HERE ###\n \n # Get W1, b1, W2 and b2 from the dictionary parameters.\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n \n # Loop (gradient descent)\n\n for i in range(0, num_iterations):\n\n # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: \"X, W1, b1\". Output: \"A1, cache1, A2, cache2\".\n ### START CODE HERE ### (≈ 2 lines of code)\n A1, cache1 = linear_activation_forward(X, W1, b1, 'relu')\n A2, cache2 = linear_activation_forward(A1, W2, b2, 'sigmoid')\n ### END CODE HERE ###\n \n # Compute cost\n ### START CODE HERE ### (≈ 1 line of code)\n cost = compute_cost(A2, Y)\n ### END CODE HERE ###\n \n # Initializing backward propagation\n dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))\n \n # Backward propagation. Inputs: \"dA2, cache2, cache1\". Outputs: \"dA1, dW2, db2; also dA0 (not used), dW1, db1\".\n ### START CODE HERE ### (≈ 2 lines of code)\n dA1, dW2, db2 = linear_activation_backward(dA2, cache2, 'sigmoid')\n dA0, dW1, db1 = linear_activation_backward(dA1, cache1, 'relu')\n ### END CODE HERE ###\n \n # Set grads['dWl'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2\n grads['dW1'] = dW1\n grads['db1'] = db1\n grads['dW2'] = dW2\n grads['db2'] = db2\n \n # Update parameters.\n ### START CODE HERE ### (approx. 1 line of code)\n parameters = update_parameters(parameters, grads, learning_rate)\n ### END CODE HERE ###\n\n # Retrieve W1, b1, W2, b2 from parameters\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n \n # Print the cost every 100 training example\n if print_cost and i % 100 == 0:\n print(\"Cost after iteration {}: {}\".format(i, np.squeeze(cost)))\n if print_cost and i % 100 == 0:\n costs.append(cost)\n \n # plot the cost\n\n plt.plot(np.squeeze(costs))\n plt.ylabel('cost')\n plt.xlabel('iterations (per tens)')\n plt.title(\"Learning rate =\" + str(learning_rate))\n plt.show()\n \n return parameters",
"Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the \"Cost after iteration 0\" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.",
"parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)",
"Expected Output:\n<table> \n <tr>\n <td> **Cost after iteration 0**</td>\n <td> 0.6930497356599888 </td>\n </tr>\n <tr>\n <td> **Cost after iteration 100**</td>\n <td> 0.6464320953428849 </td>\n </tr>\n <tr>\n <td> **...**</td>\n <td> ... </td>\n </tr>\n <tr>\n <td> **Cost after iteration 2400**</td>\n <td> 0.048554785628770206 </td>\n </tr>\n</table>\n\nGood thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this.\nNow, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below.",
"predictions_train = predict(train_x, train_y, parameters)",
"Expected Output:\n<table> \n <tr>\n <td> **Accuracy**</td>\n <td> 1.0 </td>\n </tr>\n</table>",
"predictions_test = predict(test_x, test_y, parameters)",
"Expected Output:\n<table> \n <tr>\n <td> **Accuracy**</td>\n <td> 0.72 </td>\n </tr>\n</table>\n\nNote: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called \"early stopping\" and we will talk about it in the next course. Early stopping is a way to prevent overfitting. \nCongratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model.\n5 - L-layer Neural Network\nQuestion: Use the helper functions you have implemented previously to build an $L$-layer neural network with the following structure: [LINEAR -> RELU]$\\times$(L-1) -> LINEAR -> SIGMOID. The functions you may need and their inputs are:\npython\ndef initialize_parameters_deep(layer_dims):\n ...\n return parameters \ndef L_model_forward(X, parameters):\n ...\n return AL, caches\ndef compute_cost(AL, Y):\n ...\n return cost\ndef L_model_backward(AL, Y, caches):\n ...\n return grads\ndef update_parameters(parameters, grads, learning_rate):\n ...\n return parameters",
"### CONSTANTS ###\nlayers_dims = [12288, 20, 7, 5, 1] # 5-layer model\n\n# GRADED FUNCTION: L_layer_model\n\ndef L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009\n \"\"\"\n Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.\n \n Arguments:\n X -- data, numpy array of shape (number of examples, num_px * num_px * 3)\n Y -- true \"label\" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)\n layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).\n learning_rate -- learning rate of the gradient descent update rule\n num_iterations -- number of iterations of the optimization loop\n print_cost -- if True, it prints the cost every 100 steps\n \n Returns:\n parameters -- parameters learnt by the model. They can then be used to predict.\n \"\"\"\n\n np.random.seed(1)\n costs = [] # keep track of cost\n \n # Parameters initialization.\n ### START CODE HERE ###\n parameters = initialize_parameters_deep(layers_dims)\n ### END CODE HERE ###\n \n # Loop (gradient descent)\n for i in range(0, num_iterations):\n\n # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.\n ### START CODE HERE ### (≈ 1 line of code)\n AL, caches = L_model_forward(X, parameters)\n ### END CODE HERE ###\n \n # Compute cost.\n ### START CODE HERE ### (≈ 1 line of code)\n cost = compute_cost(AL, Y)\n ### END CODE HERE ###\n \n # Backward propagation.\n ### START CODE HERE ### (≈ 1 line of code)\n grads = L_model_backward(AL, Y, caches)\n ### END CODE HERE ###\n \n # Update parameters.\n ### START CODE HERE ### (≈ 1 line of code)\n parameters = update_parameters(parameters, grads, learning_rate)\n ### END CODE HERE ###\n \n # Print the cost every 100 training example\n if print_cost and i % 100 == 0:\n print (\"Cost after iteration %i: %f\" %(i, cost))\n if print_cost and i % 100 == 0:\n costs.append(cost)\n \n # plot the cost\n plt.plot(np.squeeze(costs))\n plt.ylabel('cost')\n plt.xlabel('iterations (per tens)')\n plt.title(\"Learning rate =\" + str(learning_rate))\n plt.show()\n \n return parameters",
"You will now train the model as a 5-layer neural network. \nRun the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the \"Cost after iteration 0\" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.",
"parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)",
"Expected Output:\n<table> \n <tr>\n <td> **Cost after iteration 0**</td>\n <td> 0.771749 </td>\n </tr>\n <tr>\n <td> **Cost after iteration 100**</td>\n <td> 0.672053 </td>\n </tr>\n <tr>\n <td> **...**</td>\n <td> ... </td>\n </tr>\n <tr>\n <td> **Cost after iteration 2400**</td>\n <td> 0.092878 </td>\n </tr>\n</table>",
"pred_train = predict(train_x, train_y, parameters)",
"<table>\n <tr>\n <td>\n **Train Accuracy**\n </td>\n <td>\n 0.985645933014\n </td>\n </tr>\n</table>",
"pred_test = predict(test_x, test_y, parameters)",
"Expected Output:\n<table> \n <tr>\n <td> **Test Accuracy**</td>\n <td> 0.8 </td>\n </tr>\n</table>\n\nCongrats! It seems that your 5-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set. \nThis is good performance for this task. Nice job! \nThough in the next course on \"Improving deep neural networks\" you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters (learning_rate, layers_dims, num_iterations, and others you'll also learn in the next course). \n6) Results Analysis\nFirst, let's take a look at some images the L-layer model labeled incorrectly. This will show a few mislabeled images.",
"print_mislabeled_images(classes, test_x, test_y, pred_test)",
"A few type of images the model tends to do poorly on include: \n- Cat body in an unusual position\n- Cat appears against a background of a similar color\n- Unusual cat color and species\n- Camera Angle\n- Brightness of the picture\n- Scale variation (cat is very large or small in image) \n7) Test with your own image (optional/ungraded exercise)\nCongratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:\n 1. Click on \"File\" in the upper bar of this notebook, then click \"Open\" to go on your Coursera Hub.\n 2. Add your image to this Jupyter Notebook's directory, in the \"images\" folder\n 3. Change your image's name in the following code\n 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!",
"## START CODE HERE ##\nmy_image = \"my_image.jpg\" # change this to the name of your image file \nmy_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat)\n## END CODE HERE ##\n\nfname = \"images/\" + my_image\nimage = np.array(ndimage.imread(fname, flatten=False))\nmy_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1))\nmy_predicted_image = predict(my_image, my_label_y, parameters)\n\nplt.imshow(image)\nprint (\"y = \" + str(np.squeeze(my_predicted_image)) + \", your L-layer model predicts a \\\"\" + classes[int(np.squeeze(my_predicted_image)),].decode(\"utf-8\") + \"\\\" picture.\")",
"References:\n\nfor auto-reloading external module: http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |