[Back to the Table of Contents](TOC)

Error in the application of the trapezoidal rule

Recalling that these schemes come from the truncated Taylor series, the error can be obtained by determining the first truncated term of the scheme, which for the single-application trapezoidal rule is

\begin{equation*}\begin{split}E_t=-\frac{1}{12}f''(\xi)(b-a)^3\end{split}\label{eq:Ec5_21} \tag{5.21}\end{equation*}

where $f''(\xi)$ is the second derivative at the point $\xi$ in the interval $[a,b]$, and $\xi$ is a value that maximizes the evaluation of this second derivative.

Generalizing this concept to the multiple application of the trapezoidal rule, the errors of the individual segments can be added to give

\begin{equation*}\begin{split}E_t=-\frac{(b-a)^3}{12n^3}\sum\limits_{i=1}^n f''(\xi_i)\end{split}\label{eq:Ec5_22} \tag{5.22}\end{equation*}

This result can be simplified by estimating the mean, or average value, of the second derivative over the whole interval,

\begin{equation*}\begin{split}\bar{f''} \approx \frac{\sum \limits_{i=1}^n f''(\xi_i)}{n}\end{split}\label{eq:Ec5_23} \tag{5.23}\end{equation*}

From this equation, $\sum f''(\xi_i)\approx n\bar{f''}$, and substituting into equation [(5.22)](Ec5_22),

\begin{equation*}\begin{split}E_t \approx -\frac{(b-a)^3}{12n^2}\bar{f''}\end{split}\label{eq:Ec5_24} \tag{5.24}\end{equation*}

From this result it follows that if the number of segments is doubled, the truncation error is reduced to roughly one quarter (a quick numerical check of this statement is given further below, just before the interpolation example).

[Back to the Table of Contents](TOC)

Simpson's rules

[Simpson's rules](https://en.wikipedia.org/wiki/Simpson%27s_rule) are numerical integration schemes named after the mathematician [*Thomas Simpson*](https://en.wikipedia.org/wiki/Thomas_Simpson); they approximate the integral by substituting a polynomial interpolant for $f(x)$.

[Back to the Table of Contents](TOC)

Simpson's 1/3 rule, single application

The first rule corresponds to a second-order polynomial interpolation substituted into equation [(5.8)](Ec5_8)

Source: wikipedia.com

\begin{equation*}\begin{split}I=\int_a^b f(x)dx \approx \int_a^b p_2(x)dx\end{split}\label{eq:Ec5_25} \tag{5.25}\end{equation*}

Using the Lagrange interpolation scheme for a second-degree polynomial, seen in the previous chapter, and substituting it into the integral above, one arrives at

\begin{equation*}\begin{split}I\approx\int_{x_0}^{x_2} \left[\frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)}f(x_0)+\frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)}f(x_1)+\frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}f(x_2)\right]dx\end{split}\label{eq:Ec5_26} \tag{5.26}\end{equation*}

Carrying out the integration analytically and doing some algebra yields

\begin{equation*}\begin{split}I\approx\frac{h}{3} \left[ f(x_0)+4f(x_1)+f(x_2)\right]\end{split}\label{eq:Ec5_27} \tag{5.27}\end{equation*}

where $h=(b-a)/2$ and $x_{i+1} = x_i + h$.

Next, we will graphically compare the "exact" function (drawn with many points) against an approximation obtained by interpolation through $n=3$ points (an interpolating polynomial of order $2$).
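As noted above, here is a quick numerical check (not part of the original notebook) of the behaviour predicted by equation (5.24): doubling the number of segments of the composite trapezoidal rule should cut the truncation error to roughly one quarter. The sketch assumes the integrand $f(x)=4/(1+x^2)$ on $[0,1]$ (exact value $\pi$), the same function used in the examples later in this chapter; the names `f` and `trapecio_compuesto` are local to this sketch.

import numpy as np

def f(x):
    # integrand assumed from the rest of this chapter; its exact integral on [0, 1] is pi
    return 4.0 / (1.0 + x**2)

def trapecio_compuesto(f, a, b, n):
    """Composite trapezoidal rule with n segments."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] + 2.0 * y[1:-1].sum() + y[-1]) / 2.0

a, b = 0.0, 1.0
for n in [4, 8, 16, 32]:
    error = np.pi - trapecio_compuesto(f, a, b, n)
    print('n = {0:3d}   error = {1:.3e}'.format(n, error))
# the printed errors shrink by roughly a factor of 4 each time n is doubled

The next cell builds the graphical interpolation comparison announced at the end of the previous cell.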
from scipy.interpolate import barycentric_interpolate  # one of the many interpolation methods available in Python's libraries

n = 3                          # points to interpolate for a degree-2 polynomial
xp = np.linspace(a, b, n)      # n equally spaced points for the interpolation
fp = funcion(xp)               # evaluate the function at the n generated points

x = np.linspace(a, b, 100)     # 100 equally spaced points
y = barycentric_interpolate(xp, fp, x)  # numerical interpolation using the barycentric method

fig = plt.figure(figsize=(9, 6), dpi=80, facecolor='w', edgecolor='k')
ax = fig.add_subplot(111)
l, = plt.plot(x, y)
plt.plot(x, funcion(x), '-', c='red')
plt.plot(xp, fp, 'o', c=l.get_color())
plt.annotate('"Real" function', xy=(.63, 1.5), xytext=(0.8, 1.25),
             arrowprops=dict(facecolor='black', shrink=0.05),)
plt.annotate('Interpolated function', xy=(.72, 1.75), xytext=(0.4, 2),
             arrowprops=dict(facecolor='black', shrink=0.05),)
plt.grid(True)   # show the background grid
plt.show()       # display the plot
_____no_output_____
MIT
Cap05_IntegracionNumerica.ipynb
Youngermaster/Numerical-Analysis
Notice that there is a large difference between the areas covered by the so-called "*real*" function (generated with $100$ points) and the *interpolated* function (generated with only $3$ points), which is the one used in the (approximate) numerical integration with the *Simpson $1/3$* rule. Aware of this, we now compute the area under the curve of $p_2(x)$ using the *Simpson $1/3$* method.

Let us write a *Python* program that works for any function $f(x)$ we want to integrate over any interval $[a,b]$ using the *Simpson $1/3$* integration rule:
# read the values of the interval [a, b]
a = float(input('Enter the value of the lower limit: '))
b = float(input('Enter the value of the upper limit: '))

# body of the program for the Simpson 1/3 rule
h = (b - a) / 2     # value of h
x0 = a              # first point in the S1/3 formula
x1 = x0 + h         # middle point in the S1/3 formula
x2 = b              # third point in the S1/3 formula

fx0 = funcion(x0)   # evaluate the function at x0
fx1 = funcion(x1)   # evaluate the function at x1
fx2 = funcion(x2)   # evaluate the function at x2

int_S13 = h / 3 * (fx0 + 4*fx1 + fx2)

#erel = np.abs(exacta - int_S13) / exacta * 100

print('The approximate value of the integral by the Simpson 1/3 rule is: ', int_S13, '\n')
#print('The relative error between the true and computed values is: ', erel, '%')
_____no_output_____
MIT
Cap05_IntegracionNumerica.ipynb
Youngermaster/Numerical-Analysis
[Back to the Table of Contents](TOC)

Error in the single-application Simpson 1/3 rule

The problem with estimating the error this way is that we do not actually know the exact value. To compute the error of the *Simpson 1/3* rule,

\begin{equation*}\begin{split}-\frac{h^5}{90}f^{(4)}(\xi)\end{split}\label{eq:Ec5_28} \tag{5.28}\end{equation*}

we need to differentiate the original function, $f(x)=\frac{4}{1+x^2}$, four times. For this, we will again use symbolic computation (always verify that the answer obtained is correct!):
from sympy import *

x = symbols('x')
_____no_output_____
MIT
Cap05_IntegracionNumerica.ipynb
Youngermaster/Numerical-Analysis
We differentiate the function $f(x)$ four times with respect to $x$:
deriv4 = diff(4 / (1 + x**2), x, 4)
deriv4
_____no_output_____
MIT
Cap05_IntegracionNumerica.ipynb
Youngermaster/Numerical-Analysis
and we evaluate this fourth-derivative function at a point $0 \leq \xi \leq 1$. Here we take $\xi = 1$, the right end of the interval (check the behaviour of $f^{(4)}(x)$ on $[0,1]$ graphically and/or with the techniques seen in differential calculus):
x0 = 1.0
evald4 = deriv4.evalf(subs={x: x0})
print('The value of the fourth derivative of f at x0={0:6.2f} is {1:6.4f}'.format(x0, evald4))
_____no_output_____
MIT
Cap05_IntegracionNumerica.ipynb
Youngermaster/Numerical-Analysis
We compute the error of the *Simpson $1/3$* rule:
errorS13 = abs(h**5 * evald4 / 90)
print('The error when using the Simpson 1/3 rule is: {0:6.6f}'.format(errorS13))
_____no_output_____
MIT
Cap05_IntegracionNumerica.ipynb
Youngermaster/Numerical-Analysis
So we can express the value of the integral of the function $f(x)=\frac{4}{1+x^2}$ over the interval $[0,1]$, using the *Simpson $1/3$ rule*, as

$$\color{blue}{\int_0^1 \frac{4}{1 + x^2}dx} = \color{green}{3.133333} \color{red}{+ 0.004167}$$

If we were to do it "by hand" $\ldots$ applying the formula directly, with the following data:

$h = \frac{(1.0 - 0.0)}{2.0} = 0.5$

$x_0 = 0.0$, $x_1 = 0.5$, $x_2 = 1.0$

$f(x) = \frac{4}{1 + x^2}$

substituting these values into the given formula:

$\int_0^1\frac{4}{1 + x^2}dx \approx \frac{0.5}{3} \left[f(0)+4f(0.5)+f(1)\right]$

$\int_0^1\frac{4}{1 + x^2}dx \approx \frac{0.5}{3} \left[ \frac{4}{1 + 0^2} + 4\frac{4}{1 + 0.5^2} + \frac{4}{1 + 1^2} \right] \approx 3.133333$

[Back to the Table of Contents](TOC)

Simpson's 1/3 rule, multiple application

As with the trapezoidal rule, Simpson's rules also have a multiple-application (also called composite) scheme. Suppose the interval $[a,b]$ is divided into $n$ subintervals, with $n$ even, so that the integral becomes

\begin{equation*}\begin{split}I=\int_{x_0}^{x_2}f(x)dx+\int_{x_2}^{x_4}f(x)dx+\ldots+\int_{x_{n-2}}^{x_n}f(x)dx\end{split}\label{eq:Ec5_29} \tag{5.29}\end{equation*}

Substituting the Simpson 1/3 rule into each of them gives

\begin{equation*}\begin{split}I \approx 2h\frac{f(x_0)+4f(x_1)+f(x_2)}{6}+2h\frac{f(x_2)+4f(x_3)+f(x_4)}{6}+\ldots+2h\frac{f(x_{n-2})+4f(x_{n-1})+f(x_n)}{6}\end{split}\label{eq:Ec5_30} \tag{5.30}\end{equation*}

so the composite (multiple-application) Simpson rule is written as

\begin{equation*}\begin{split}I=\int_a^bf(x)dx\approx \frac{h}{3}\left[f(x_0) + 2 \sum \limits_{j=1}^{n/2-1} f(x_{2j}) + 4 \sum \limits_{j=1}^{n/2} f(x_{2j-1})+f(x_n)\right]\end{split}\label{eq:Ec5_31} \tag{5.31}\end{equation*}

where $x_j=a+jh$ for $j=0,1,2, \ldots, n-1, n$, with $h=(b-a)/n$, $x_0=a$ and $x_n=b$.

[Back to the Table of Contents](TOC)

Computational implementation of the composite Simpson 1/3 rule (a sketch is given below, just before the Simpson 3/8 example)

[Back to the Table of Contents](TOC)

Simpson's 3/8 rule, single application

This rule results from replacing the function $f(x)$ with a third-order interpolation:

\begin{equation*}\begin{split}I=\int_{a}^{b}f(x)dx \approx \frac{3h}{8}\left[ f(x_0)+3f(x_1)+3f(x_2)+f(x_3) \right]\end{split}\label{eq:Ec5_32} \tag{5.32}\end{equation*}

Following a procedure similar to the one used for the *Simpson $1/3$* rule, but this time with $n=4$ points:
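As announced under the heading above, here is a minimal sketch (not part of the original notebook) of the composite Simpson 1/3 rule of equation (5.31). It assumes the integrand $f(x)=4/(1+x^2)$ on $[0,1]$, consistent with the rest of the chapter; the names `f` and `simpson13_compuesto` are local to this sketch.

import numpy as np

def f(x):
    # integrand assumed from the rest of this chapter; its exact integral on [0, 1] is pi
    return 4.0 / (1.0 + x**2)

def simpson13_compuesto(f, a, b, n):
    """Composite Simpson 1/3 rule, Eq. (5.31); n must be even."""
    if n % 2 != 0:
        raise ValueError('n must be even for the composite Simpson 1/3 rule')
    h = (b - a) / n
    y = f(np.linspace(a, b, n + 1))
    # odd-indexed interior points get weight 4, even-indexed interior points get weight 2
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

aprox = simpson13_compuesto(f, 0.0, 1.0, 10)
print('Composite Simpson 1/3 with n = 10: ', aprox)
print('Relative error vs. pi: ', abs(np.pi - aprox) / np.pi * 100, '%')

The next cell returns to the single-application Simpson 3/8 rule, using $n=4$ interpolation points as announced above.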
# one of the many interpolation methods available in Python's libraries
n = 4                        # points to interpolate for a degree-3 polynomial
xp = np.linspace(0, 1, n)    # n equally spaced points for the interpolation
fp = funcion(xp)             # evaluate the function at the n generated points

x = np.linspace(0, 1, 100)   # 100 equally spaced points
y = barycentric_interpolate(xp, fp, x)  # numerical interpolation using the barycentric method

fig = plt.figure(figsize=(9, 6), dpi=80, facecolor='w', edgecolor='k')
ax = fig.add_subplot(111)
l, = plt.plot(x, y)
plt.plot(x, funcion(x), '-', c='red')
plt.plot(xp, fp, 'o', c=l.get_color())
plt.annotate('"Real"', xy=(.63, 1.5), xytext=(0.8, 1.25),
             arrowprops=dict(facecolor='black', shrink=0.05),)
plt.annotate('Interpolation', xy=(.72, 1.75), xytext=(0.4, 2),
             arrowprops=dict(facecolor='black', shrink=0.05),)
plt.grid(True)   # show the background grid
plt.show()       # display the plot

# body of the program for the Simpson 3/8 rule
h = (b - a) / 3  # value of h
int_S38 = 3 * h / 8 * (funcion(a) + 3*funcion(a + h) + 3*funcion(a + 2*h) + funcion(a + 3*h))
erel = np.abs(np.pi - int_S38) / np.pi * 100

print('The approximate value of the integral using the Simpson 3/8 rule is: ', int_S38, '\n')
print('The relative error between the true and computed values is: ', erel, '%')
_____no_output_____
MIT
Cap05_IntegracionNumerica.ipynb
Youngermaster/Numerical-Analysis
To compute the error of the *Simpson 3/8* rule,

$$\color{red}{-\frac{3h^5}{80}f^{(4)}(\xi)}$$

we again need the fourth derivative of the original function. For this, we reuse the symbolic computation from above (always verify that the answer obtained is correct!):
errorS38 = 3 * h**5 * evald4 / 80
print('The error when using the Simpson 3/8 rule is: ', errorS38)
_____no_output_____
MIT
Cap05_IntegracionNumerica.ipynb
Youngermaster/Numerical-Analysis
So we can express the value of the integral of the function $f(x)=\frac{4}{1+x^2}$ over the interval $[0,1]$, using the *Simpson $3/8$ rule*, as

$$\color{blue}{\int_0^1\frac{4}{1 + x^2}dx} = \color{green}{3.138462} \color{red}{- 0.001852}$$

Applying the formula directly, with the following data:

$h = \frac{(1.0 - 0.0)}{3.0} = 0.3333$

$x_0 = 0.0$, $x_1 = 0.3333$, $x_2 = 0.6666$, $x_3 = 1.00$

$f(x) = \frac{4}{1 + x^2}$

substituting these values into the given formula:

$\int_0^1\frac{4}{1 + x^2}dx \approx \frac{3\times0.3333}{8} \left[ \frac{4}{1 + 0^2} + 3\frac{4}{1 + 0.3333^2} +3\frac{4}{1 + 0.6666^2} + \frac{4}{1 + 1^2} \right] \approx 3.138462$

This would be the answer if we settled for what we can do by hand (or in Word)...

[Back to the Table of Contents](TOC)

Simpson's 3/8 rule, multiple application

Dividing the interval $[a,b]$ into $n$ subintervals of length $h=(b-a)/n$, with $n$ a multiple of 3, the integral becomes

\begin{equation*}\begin{split}I=\int_{x_0}^{x_3}f(x)dx+\int_{x_3}^{x_6}f(x)dx+\ldots+\int_{x_{n-3}}^{x_n}f(x)dx\end{split}\label{eq:Ec5_33} \tag{5.33}\end{equation*}

Substituting the Simpson 3/8 rule into each of them gives

\begin{equation*}\begin{split}I=\int_a^bf(x)dx\approx \frac{3h}{8}\left[f(x_0) + 3 \sum \limits_{i=0}^{n/3-1} f(x_{3i+1}) + 3 \sum \limits_{i=0}^{n/3-1}f(x_{3i+2})+2 \sum \limits_{i=0}^{n/3-2} f(x_{3i+3})+f(x_n)\right]\end{split}\label{eq:Ec5_34} \tag{5.34}\end{equation*}

where in each summation the sample points advance in steps of three.

[Back to the Table of Contents](TOC)

Computational implementation of the composite Simpson 3/8 rule (a sketch follows below)
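The notebook leaves the implementation cell below empty; here is a minimal sketch (not part of the original notebook) of the composite Simpson 3/8 rule of equation (5.34), again assuming the integrand $f(x)=4/(1+x^2)$ on $[0,1]$; the names `f` and `simpson38_compuesto` are local to this sketch.

import numpy as np

def f(x):
    # integrand assumed from the rest of this chapter; its exact integral on [0, 1] is pi
    return 4.0 / (1.0 + x**2)

def simpson38_compuesto(f, a, b, n):
    """Composite Simpson 3/8 rule, Eq. (5.34); n must be a multiple of 3."""
    if n % 3 != 0:
        raise ValueError('n must be a multiple of 3 for the composite Simpson 3/8 rule')
    h = (b - a) / n
    y = f(np.linspace(a, b, n + 1))
    s = y[0] + y[-1]
    s += 3 * (y[1:-1:3].sum() + y[2:-1:3].sum())   # points x_{3i+1} and x_{3i+2}
    s += 2 * y[3:-1:3].sum()                       # interior points x_{3i+3}
    return 3 * h / 8 * s

aprox = simpson38_compuesto(f, 0.0, 1.0, 9)
print('Composite Simpson 3/8 with n = 9: ', aprox)
print('Relative error vs. pi: ', abs(np.pi - aprox) / np.pi * 100, '%')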
#
_____no_output_____
MIT
Cap05_IntegracionNumerica.ipynb
Youngermaster/Numerical-Analysis
[Back to the Table of Contents](TOC)

Gauss quadrature

Introduction

Going back to the original idea behind the [quadrature](Quadrature) schemes, the value of the definite integral is estimated as

\begin{equation*}\begin{split}I=\int_a^b f(x)dx \approx \sum \limits_{i=0}^n c_if(x_i)\end{split}\label{eq:Ec5_35} \tag{5.35}\end{equation*}

So far we have seen the trapezoidal rule and the most commonly used Simpson rules. In those schemes, the central idea is the uniform distribution of the points, which follow the rule $x_i=x_0+ih$ with $i=0,1,2, \ldots, n$, and the evaluation of the function at those points.

Suppose now that the restriction of uniform spacing of those fixed points is dropped, and we are free to evaluate the area under a straight line joining any two points on the curve. By placing these points "intelligently", we can define a straight line that balances the positive and negative errors.

Source: Chapra, S., Canale, R. Métodos Numéricos para ingenieros, 5a Ed. Mc. Graw Hill. 2007

From the figure on the right, the points $x_0$ and $x_1$ are available for evaluating the function $f(x)$. Writing the approximate integral under the curve as in equation ([5.35](Ec5_35)), and using the integration limits $[-1,1]$ for simplicity (the concept will later be generalized to an interval $[a,b]$), we have

\begin{equation*}\begin{split}I=\int_{-1}^1 f(x)dx \approx c_0f(x_0)+c_1f(x_1)\end{split}\label{eq:Ec5_36} \tag{5.36}\end{equation*}

[Back to the Table of Contents](TOC)

Determination of the coefficients

We have one equation with four unknowns ($c_0, c_1, x_0$ and $x_1$) to determine. To do so, suppose we have a polynomial of degree up to 3, $f_3(x)$, from which we can build four equations in four unknowns as follows:

- $f_3(x)=1$:
\begin{equation*}\begin{split}\int_{-1}^1 1dx = c_0 \times 1 + c_1 \times 1 = c_0 + c_1 = 2\end{split}\label{eq:Ec5_37} \tag{5.37}\end{equation*}

- $f_3(x)=x$:
\begin{equation*}\begin{split}\int_{-1}^1 xdx = c_0x_0 + c_1x_1 = 0\end{split}\label{eq:Ec5_38} \tag{5.38}\end{equation*}

- $f_3(x)=x^2$:
\begin{equation*}\begin{split}\int_{-1}^1 x^2dx = c_0x^2_0 + c_1x^2_1 = \frac{2}{3}\end{split}\label{eq:Ec5_39} \tag{5.39}\end{equation*}

- and finally, $f_3(x)=x^3$:
\begin{equation*}\begin{split}\int_{-1}^1 x^3dx = c_0x^3_0 + c_1x^3_1 = 0\end{split}\label{eq:Ec5_40} \tag{5.40}\end{equation*}

Solving the first two equations simultaneously for $c_0$ and $c_1$ in terms of $x_0$ and $x_1$ gives

\begin{equation*}\begin{split}c_0=\frac{2x_1}{x_1-x_0}, \quad c_1=-\frac{2x_0}{x_1-x_0}\end{split}\label{eq:Ec5_41} \tag{5.41}\end{equation*}

Substituting these two values into the remaining two equations,

\begin{equation*}\begin{split}\frac{2}{3}=\frac{2x_0^2x_1}{x_1-x_0}-\frac{2x_0x_1^2}{x_1-x_0}\end{split}\label{eq:Ec5_42} \tag{5.42}\end{equation*}

\begin{equation*}\begin{split}0=\frac{2x_0^3x_1}{x_1-x_0}-\frac{2x_0x_1^3}{x_1-x_0}\end{split}\label{eq:Ec5_43} \tag{5.43}\end{equation*}

From equation ([5.43](Ec5_43)) we get

\begin{equation*}\begin{split}x_0^3x_1&=x_0x_1^3 \\x_0^2 &= x_1^2\end{split}\label{eq:Ec5_44} \tag{5.44}\end{equation*}

hence $|x_0|=|x_1|$ (to account for the negative roots, recall that $\sqrt{a^2}= \pm a = |a|$), and since we assumed $x_0 < x_1$ (working on the interval $[-1,1]$), we finally arrive at $x_0=-x_1$.
Substituting this result into equation ([5.42](Ec5_42)),

\begin{equation*}\begin{split}\frac{2}{3}=2\frac{x_1^3+x_1^3}{2x_1}\end{split}\label{eq:Ec5_45} \tag{5.45}\end{equation*}

Solving gives $x_1^2=1/3$, and finally

\begin{equation*}\begin{split}x_0=-\frac{\sqrt{3}}{3}, \quad x_1=\frac{\sqrt{3}}{3}\end{split}\label{eq:Ec5_46} \tag{5.46}\end{equation*}

Substituting these results into equation ([5.41](Ec5_41)), and using equation ([5.37](Ec5_37)), we get $c_0=c_1=1$. Rewriting equation ([5.36](Ec5_36)) with the values found, we finally arrive at

\begin{equation*}\begin{split}I=\int_{-1}^1 f(x)dx &\approx c_0f(x_0)+c_1f(x_1) \\&= f \left( \frac{-\sqrt{3}}{3}\right)+f \left( \frac{\sqrt{3}}{3}\right)\end{split}\label{eq:Ec5_47} \tag{5.47}\end{equation*}

This approximation is "exact" for polynomials of degree less than or equal to three ($3$). The trapezoidal approximation, by contrast, is exact only for polynomials of degree one ($1$).

***Example:*** Compute the integral of the function $f(x)=x^3+2x^2+1$ over the interval $[-1,1]$ both analytically and with the Gauss quadrature just derived.

- ***Analytical (exact) solution***
$$\int_{-1}^1 (x^3+2x^2+1)dx=\left.\frac{x^4}{4}+\frac{2x^3}{3}+x \right |_{-1}^1=\frac{10}{3}$$

- ***Numerical approximation by Gauss quadrature***
\begin{equation*}\begin{split}\int_{-1}^1 (x^3+2x^2+1)dx &\approx1f\left(-\frac{\sqrt{3}}{3} \right)+1f\left(\frac{\sqrt{3}}{3} \right) \\&=-\frac{3\sqrt{3}}{27}+\frac{2\times 3}{9}+1+\frac{3\sqrt{3}}{27}+\frac{2\times 3}{9}+1 \\&=2+\frac{4}{3} \\&= \frac{10}{3}\end{split}\end{equation*}

[Back to the Table of Contents](TOC)

Changing the limits of integration

Note that the integration limits in equation ([5.47](Ec5_47)) are $-1$ and $1$. This was done to simplify the mathematics and to keep the formulation as general as possible. Assume now that we want the value of the integral between two arbitrary limits $a$ and $b$. Suppose also that a new variable $x_d$ is related to the original variable $x$ linearly,

\begin{equation*}\begin{split}x=a_0+a_1x_d\end{split}\label{eq:Ec5_48} \tag{5.48}\end{equation*}

If the lower limit, $x=a$, corresponds to $x_d=-1$, these values can be substituted into equation ([5.48](Ec5_48)) to obtain

\begin{equation*}\begin{split}a=a_0+a_1(-1)\end{split}\label{eq:Ec5_49} \tag{5.49}\end{equation*}

Similarly, the upper limit, $x=b$, corresponds to $x_d=1$, giving

\begin{equation*}\begin{split}b=a_0+a_1(1)\end{split}\label{eq:Ec5_50} \tag{5.50}\end{equation*}

Solving these equations simultaneously,

\begin{equation*}\begin{split}a_0=(b+a)/2, \quad a_1=(b-a)/2\end{split}\label{eq:Ec5_51} \tag{5.51}\end{equation*}

Substituting into equation ([5.48](Ec5_48)),

\begin{equation*}\begin{split}x=\frac{(b+a)+(b-a)x_d}{2}\end{split}\label{eq:Ec5_52} \tag{5.52}\end{equation*}

Differentiating equation ([5.52](Ec5_52)),

\begin{equation*}\begin{split}dx=\frac{b-a}{2}dx_d\end{split}\label{eq:Ec5_53} \tag{5.53}\end{equation*}

Equations ([5.52](Ec5_52)) and ([5.53](Ec5_53)) can be substituted for $x$ and $dx$, respectively, in the evaluation of the integral. These substitutions transform the integration interval without changing the value of the integral.
In this case,

\begin{equation*}\begin{split}\int_a^b f(x)dx = \frac{b-a}{2} \int_{-1}^1 f \left( \frac{(b+a)+(b-a)x_d}{2}\right)dx_d\end{split}\label{eq:Ec5_54} \tag{5.54}\end{equation*}

This integral can be approximated as

\begin{equation*}\begin{split}\int_a^b f(x)dx \approx \frac{b-a}{2} \left[f\left( \frac{(b+a)+(b-a)x_0}{2}\right)+f\left( \frac{(b+a)+(b-a)x_1}{2}\right) \right]\end{split}\label{eq:Ec5_55} \tag{5.55}\end{equation*}

[Back to the Table of Contents](TOC)

Higher-point formulas

The Gauss quadrature formula above uses two points. Higher-point versions can be developed in the general form

\begin{equation*}\begin{split}I \approx c_0f(x_0) + c_1f(x_1) + c_2f(x_2) +\ldots+ c_{n-1}f(x_{n-1})\end{split}\label{eq:Ec5_56} \tag{5.56}\end{equation*}

with $n$ the number of points.

Because Gauss quadrature requires function evaluations at non-uniformly spaced points within the integration interval, it is not appropriate for cases where the function is unknown. When the function is known, however, its advantage is decisive.

The following table gives the parameter values for $1, 2, 3, 4$ and $5$ points.

|$$n$$ | $$c_i$$ | $$x_i$$ |
|:----:|:----------:|:-------------:|
|$$1$$ |$$2.000000$$| $$0.000000$$ |
|$$2$$ |$$1.000000$$|$$\pm0.577350$$|
|$$3$$ |$$0.555556$$|$$\pm0.774597$$|
| |$$0.888889$$| $$0.000000$$ |
|$$4$$ |$$0.347855$$|$$\pm0.861136$$|
| |$$0.652145$$|$$\pm0.339981$$|
|$$5$$ |$$0.236927$$|$$\pm0.906180$$|
| |$$0.478629$$|$$\pm0.538469$$|
| |$$0.568889$$| $$0.000000$$ |
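As a cross-check of the table above (not part of the original notebook), the Gauss–Legendre points and weights for any $n$ can also be obtained from NumPy: `numpy.polynomial.legendre.leggauss(n)` returns the nodes and weights on $[-1,1]$, which should match the tabulated values.

import numpy as np

# Gauss-Legendre nodes and weights on [-1, 1] for n = 1..5, to compare with the table above
for n in range(1, 6):
    x, w = np.polynomial.legendre.leggauss(n)
    print('n =', n)
    print('  nodes  :', np.round(x, 6))
    print('  weights:', np.round(w, 6))
# for n = 2 this reproduces x = +/-0.577350 with weights 1.000000, i.e. Eq. (5.46) and c0 = c1 = 1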
import numpy as np
import pandas as pd

GaussTable = [[[0], [2]],
              [[-1/np.sqrt(3), 1/np.sqrt(3)], [1, 1]],
              [[-np.sqrt(3/5), 0, np.sqrt(3/5)], [5/9, 8/9, 5/9]],
              [[-0.861136, -0.339981, 0.339981, 0.861136], [0.347855, 0.652145, 0.652145, 0.347855]],
              [[-0.90618, -0.538469, 0, 0.538469, 0.90618], [0.236927, 0.478629, 0.568889, 0.478629, 0.236927]],
              [[-0.93247, -0.661209, -0.238619, 0.238619, 0.661209, 0.93247], [0.171324, 0.360762, 0.467914, 0.467914, 0.360762, 0.171324]]]

display(pd.DataFrame(GaussTable, columns=["Integration Points", "Corresponding Weights"]))

def IG(f, n):
    n = int(n)
    return sum([GaussTable[n - 1][1][i]*f(GaussTable[n - 1][0][i]) for i in range(n)])

def f(x):
    return x**9 + x**8

IG(f, 5.0)
_____no_output_____
MIT
Cap05_IntegracionNumerica.ipynb
Youngermaster/Numerical-Analysis
[Back to the Table of Contents](TOC)

Gauss quadrature example

Determine the approximate value of

$$\int_0^1 \frac{4}{1+x^2}dx$$

using two-point Gauss quadrature.

Substituting the required parameters into equation ([5.55](Ec5_55)), with $a=0$, $b=1$, $x_0=-\sqrt{3}/3$ and $x_1=\sqrt{3}/3$,

\begin{equation*}\begin{split}\int_0^1 f(x)dx &\approx \frac{1-0}{2} \left[f\left( \frac{(1+0)+(1-0)\left(-\frac{\sqrt{3}}{3}\right)}{2}\right)+f\left( \frac{(1+0)+(1-0)\left(\frac{\sqrt{3}}{3}\right)}{2}\right) \right]\\&= \frac{1}{2} \left[f\left( \frac{1-\frac{\sqrt{3}}{3}}{2}\right)+f\left( \frac{1+\frac{\sqrt{3}}{3}}{2}\right) \right]\\&= \frac{1}{2} \left[ \frac{4}{1 + \left( \frac{1-\frac{\sqrt{3}}{3}}{2} \right)^2}+\frac{4}{1 + \left( \frac{1+\frac{\sqrt{3}}{3}}{2} \right)^2} \right]\\&=3.147541\end{split}\end{equation*}

Now let us look at a brief computational implementation.
import numpy as np

def fxG(a, b, x):
    xG = ((b + a) + (b - a) * x) / 2
    return funcion(xG)

def GQ2(a, b):
    c0 = 1.0
    c1 = 1.0
    x0 = -1.0 / np.sqrt(3)
    x1 = 1.0 / np.sqrt(3)
    return (b - a) / 2 * (c0 * fxG(a, b, x0) + c1 * fxG(a, b, x1))

print(GQ2(a, b))
_____no_output_____
MIT
Cap05_IntegracionNumerica.ipynb
Youngermaster/Numerical-Analysis
[Back to the Table of Contents](TOC)
from IPython.core.display import HTML

def css_styling():
    styles = open('./nb_style.css', 'r').read()
    return HTML(styles)

css_styling()
_____no_output_____
MIT
Cap05_IntegracionNumerica.ipynb
Youngermaster/Numerical-Analysis
Deep Learning Assignment 5

The goal of this assignment is to train a Word2Vec skip-gram model over Text8 data. Word2vec is a group of related models that are used to produce word embeddings. These models are shallow, two-layer neural networks that are trained to reconstruct linguistic contexts of words. Word2vec takes as its input a large corpus of text and produces a vector space, typically of several hundred dimensions, with each unique word in the corpus being assigned a corresponding vector in the space. Word vectors are positioned in the vector space such that words that share common contexts in the corpus are located in close proximity to one another in the space.

Word2vec was created by a team of researchers led by Tomas Mikolov at Google. Word2vec can utilize either of two model architectures to produce a distributed representation of words: continuous bag-of-words (CBOW) or continuous skip-gram. In the continuous bag-of-words architecture, the model predicts the current word from a window of surrounding context words. The order of context words does not influence prediction (bag-of-words assumption). In the continuous skip-gram architecture, the model uses the current word to predict the surrounding window of context words. The skip-gram architecture weighs nearby context words more heavily than more distant context words.[1][4] According to the authors' note,[5] CBOW is faster, while skip-gram is slower but does a better job for infrequent words.

References: i. Wikipedia ii. http://mccormickml.com/2016/04/27/word2vec-resources/
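To make the skip-gram setup concrete, the toy snippet below (not part of the original assignment) enumerates the (center word, context word) training pairs produced from a short sentence with a window of 1; the notebook's `generate_batch` function later does the same thing over the Text8 corpus, in batches and with sampling.

# Toy illustration of skip-gram training pairs: each center word predicts the words
# inside a symmetric context window (here, window = 1).
sentence = "the quick brown fox jumps".split()
window = 1

pairs = []
for i, center in enumerate(sentence):
    for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
        if j != i:
            pairs.append((center, sentence[j]))

print(pairs)
# [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'), ('brown', 'quick'),
#  ('brown', 'fox'), ('fox', 'brown'), ('fox', 'jumps'), ('jumps', 'fox')]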
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import zipfile
from matplotlib import pylab
from six.moves import range
from six.moves.urllib.request import urlretrieve
from sklearn.manifold import TSNE
_____no_output_____
MIT
5_word2vec_skip-gram.ipynb
ramborra/Udacity-Deep-Learning
Download the data from the source website if necessary.
url = 'http://mattmahoney.net/dc/'

def maybe_download(filename, expected_bytes):
    """Download a file if not present, and make sure it's the right size."""
    if not os.path.exists(filename):
        filename, _ = urlretrieve(url + filename, filename)
    statinfo = os.stat(filename)
    if statinfo.st_size == expected_bytes:
        print('Found and verified %s' % filename)
    else:
        print(statinfo.st_size)
        raise Exception(
            'Failed to verify ' + filename + '. Can you get to it with a browser?')
    return filename

filename = maybe_download('text8.zip', 31344016)
Found and verified text8.zip
MIT
5_word2vec_skip-gram.ipynb
ramborra/Udacity-Deep-Learning
Read the data into a string.
def read_data(filename):
    """Extract the first file enclosed in a zip file as a list of words"""
    with zipfile.ZipFile(filename) as f:
        data = tf.compat.as_str(f.read(f.namelist()[0])).split()
    return data

words = read_data(filename)
print('Data size %d' % len(words))
Data size 17005207
MIT
5_word2vec_skip-gram.ipynb
ramborra/Udacity-Deep-Learning
Build the dictionary and replace rare words with UNK token. (UNK - Unknown Words)
vocabulary_size = 50000 def build_dataset(words): count = [['UNK', -1]] count.extend(collections.Counter(words).most_common(vocabulary_size - 1)) dictionary = dict() for word, _ in count: dictionary[word] = len(dictionary) data = list() unk_count = 0 for word in words: if word in dictionary: index = dictionary[word] else: index = 0 # dictionary['UNK'] unk_count = unk_count + 1 data.append(index) count[0][1] = unk_count reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys())) return data, count, dictionary, reverse_dictionary data, count, dictionary, reverse_dictionary = build_dataset(words) print('Most common words (+UNK)', count[:5]) print('Sample data', data[:10]) del words # Hint to reduce memory. # Printing some sample data print(data[:20]) print(count[:20]) print(dictionary.items()[:20]) print(reverse_dictionary.items()[:20])
_____no_output_____
MIT
5_word2vec_skip-gram.ipynb
ramborra/Udacity-Deep-Learning
Function to generate a training batch for the skip-gram model.
data_index = 0 def generate_batch(batch_size, num_skips, skip_window): global data_index assert batch_size % num_skips == 0 assert num_skips <= 2 * skip_window batch = np.ndarray(shape=(batch_size), dtype=np.int32) labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32) span = 2 * skip_window + 1 # [ skip_window target skip_window ] buffer = collections.deque(maxlen=span) for _ in range(span): buffer.append(data[data_index]) data_index = (data_index + 1) % len(data) for i in range(batch_size // num_skips): target = skip_window # target label at the center of the buffer targets_to_avoid = [ skip_window ] for j in range(num_skips): while target in targets_to_avoid: target = random.randint(0, span - 1) targets_to_avoid.append(target) batch[i * num_skips + j] = buffer[skip_window] labels[i * num_skips + j, 0] = buffer[target] buffer.append(data[data_index]) data_index = (data_index + 1) % len(data) return batch, labels print('data:', [reverse_dictionary[di] for di in data[:8]]) for num_skips, skip_window in [(2, 1), (4, 2)]: data_index = 0 batch, labels = generate_batch(batch_size=8, num_skips=num_skips, skip_window=skip_window) print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window)) print(' batch:', [reverse_dictionary[bi] for bi in batch]) print(' labels:', [reverse_dictionary[li] for li in labels.reshape(8)])
data: ['anarchism', 'originated', 'as', 'a', 'term', 'of', 'abuse', 'first'] with num_skips = 2 and skip_window = 1: batch: ['originated', 'originated', 'as', 'as', 'a', 'a', 'term', 'term'] labels: ['anarchism', 'as', 'originated', 'a', 'term', 'as', 'a', 'of'] with num_skips = 4 and skip_window = 2: batch: ['as', 'as', 'as', 'as', 'a', 'a', 'a', 'a'] labels: ['originated', 'term', 'a', 'anarchism', 'as', 'originated', 'term', 'of']
MIT
5_word2vec_skip-gram.ipynb
ramborra/Udacity-Deep-Learning
Train a skip-gram model.
batch_size = 128 embedding_size = 128 # Dimension of the embedding vector. skip_window = 1 # How many words to consider left and right. num_skips = 2 # How many times to reuse an input to generate a label. # We pick a random validation set to sample nearest neighbors. here we limit the # validation samples to the words that have a low numeric ID, which by # construction are also the most frequent. valid_size = 16 # Random set of words to evaluate similarity on. valid_window = 100 # Only pick dev samples in the head of the distribution. valid_examples = np.array(random.sample(range(valid_window), valid_size)) num_sampled = 64 # Number of negative examples to sample. graph = tf.Graph() with graph.as_default(), tf.device('/cpu:0'): # Input data. train_dataset = tf.placeholder(tf.int32, shape=[batch_size]) train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1]) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # Variables. embeddings = tf.Variable( tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0)) softmax_weights = tf.Variable( tf.truncated_normal([vocabulary_size, embedding_size], stddev=1.0 / math.sqrt(embedding_size))) softmax_biases = tf.Variable(tf.zeros([vocabulary_size])) # Model. # Look up embeddings for inputs. embed = tf.nn.embedding_lookup(embeddings, train_dataset) # Compute the softmax loss, using a sample of the negative labels each time. loss = tf.reduce_mean( tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=embed, labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size)) # Optimizer. # Note: The optimizer will optimize the softmax_weights AND the embeddings. # This is because the embeddings are defined as a variable quantity and the # optimizer's `minimize` method will by default modify all variable quantities # that contribute to the tensor it is passed. # See docs on `tf.train.Optimizer.minimize()` for more details. optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss) # Compute the similarity between minibatch examples and all embeddings. # We use the cosine distance: norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True)) normalized_embeddings = embeddings / norm valid_embeddings = tf.nn.embedding_lookup( normalized_embeddings, valid_dataset) similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings)) num_steps = 100001 with tf.Session(graph=graph) as session: tf.global_variables_initializer().run() print('Initialized') average_loss = 0 for step in range(num_steps): batch_data, batch_labels = generate_batch( batch_size, num_skips, skip_window) feed_dict = {train_dataset : batch_data, train_labels : batch_labels} _, l = session.run([optimizer, loss], feed_dict=feed_dict) average_loss += l if step % 2000 == 0: if step > 0: average_loss = average_loss / 2000 # The average loss is an estimate of the loss over the last 2000 batches. 
print('Average loss at step %d: %f' % (step, average_loss)) average_loss = 0 # note that this is expensive (~20% slowdown if computed every 500 steps) if step % 10000 == 0: sim = similarity.eval() for i in range(valid_size): valid_word = reverse_dictionary[valid_examples[i]] top_k = 8 # number of nearest neighbors nearest = (-sim[i, :]).argsort()[1:top_k+1] log = 'Nearest to %s:' % valid_word for k in range(top_k): close_word = reverse_dictionary[nearest[k]] log = '%s %s,' % (log, close_word) print(log) final_embeddings = normalized_embeddings.eval() # Printing Embeddings (They are all Normalized) print(final_embeddings[0]) print(np.sum(np.square(final_embeddings[0]))) num_points = 400 tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000, method='exact') two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :]) def plot(embeddings, labels): assert embeddings.shape[0] >= len(labels), 'More labels than embeddings' pylab.figure(figsize=(15,15)) # in inches for i, label in enumerate(labels): x, y = embeddings[i,:] pylab.scatter(x, y) pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points', ha='right', va='bottom') pylab.show() words = [reverse_dictionary[i] for i in range(1, num_points+1)] plot(two_d_embeddings, words)
_____no_output_____
MIT
5_word2vec_skip-gram.ipynb
ramborra/Udacity-Deep-Learning
Based on: https://towardsdatascience.com/3-basic-steps-of-stock-market-analysis-in-python-917787012143
%matplotlib inline !pip install yfinance !wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz !tar -xzvf ta-lib-0.4.0-src.tar.gz %cd ta-lib !./configure --prefix=/usr !make !make install !pip install Ta-Lib # from yahoofinancials import YahooFinancials import matplotlib.pyplot as plt import pandas as pd import talib import yfinance as yf plt.rcParams['figure.facecolor'] = 'w' df = yf.download("TSLA", start="2018-11-01", end="2022-03-03", interval="1d") df.shape t = yf.Ticker("T") t.dividends t.dividends.plot(figsize=(14, 7)) df.head() df[df.index >= "2019-11-01"].Close.plot(figsize=(14, 7)) df.loc[:, "rsi"] = talib.RSI(df.Close, 14) df.loc[:, 'ma20'] = df.Close.rolling(20).mean() df.loc[:, 'ma200'] = df.Close.rolling(200).mean() df[["Close", "ma20", "ma200"]].plot(figsize=(14, 7)) fig, ax = plt.subplots(1, 2, figsize=(21, 7)) ax0 = df[["rsi"]].plot(ax=ax[0]) ax0.axhline(30, color="black") ax0.axhline(70, color="black") df[["Close"]].plot(ax=ax[1]) df[df.index >= "2019-11-01"].Volume.plot(kind="bar", figsize=(14, 4)) df #import plotly.offline as pyo # Set notebook mode to work in offline #pyo.init_notebook_mode() import plotly.graph_objects as go fig = go.Figure( data=go.Ohlc( x=df.index, open=df["Open"], high=df["High"], low=df["Low"], close=df["Close"], ) ) fig.show()
_____no_output_____
MIT
Exercise/stocks-analysis.ipynb
JSJeong-me/Machine_Learning
Analysis notebook comparing scoping vs no-scoping for tower selection

The purpose of this notebook is to categorize and analyze generated towers.

Requires:
* `.pkl` generated by `stimuli/score_towers.py`

See also:
* `stimuli/generate_towers.ipynb` for plotting code and a similar analysis in the same place as the tower generation code. This notebook supersedes it.
# set up imports import os import sys __file__ = os.getcwd() proj_dir = os.path.dirname(os.path.realpath(__file__)) sys.path.append(proj_dir) utils_dir = os.path.join(proj_dir, 'utils') sys.path.append(utils_dir) analysis_dir = os.path.join(proj_dir, 'analysis') analysis_utils_dir = os.path.join(analysis_dir, 'utils') sys.path.append(analysis_utils_dir) agent_dir = os.path.join(proj_dir, 'model') sys.path.append(agent_dir) agent_util_dir = os.path.join(agent_dir, 'utils') sys.path.append(agent_util_dir) experiments_dir = os.path.join(proj_dir, 'experiments') sys.path.append(experiments_dir) df_dir = os.path.join(proj_dir, 'results/dataframes') stim_dir = os.path.join(proj_dir, 'stimuli') import tqdm import pickle import math import numpy as np import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import scipy.stats as stats from scipy.stats import sem as sem from utils.blockworld_library import * from utils.blockworld import * from model.BFS_Lookahead_Agent import BFS_Lookahead_Agent from model.BFS_Agent import BFS_Agent from model.Astar_Agent import Astar_Agent # show all columns in dataframe pd.set_option('display.max_columns', None) # some helper functions # look at towers def visualize_towers(towers, text_parameters=None): fig, axes = plt.subplots(math.ceil(len(towers)/5), 5, figsize=(20, 15*math.ceil(len(towers)/20))) for axis, tower in zip(axes.flatten(), towers): axis.imshow(tower['bitmap']*1.0) if text_parameters is not None: if type(text_parameters) is not list: text_parameters = [text_parameters] for y_offset, text_parameter in enumerate(text_parameters): axis.text(0, y_offset*1., str(text_parameter+": " + str(tower[text_parameter])), color='gray', fontsize=20) plt.tight_layout() plt.show()
_____no_output_____
MIT
analysis/tower selection analysis.ipynb
cogtoolslab/projection_block_construction
Load in data (we might have multiple dfs)
path_to_dfs = [os.path.join(df_dir, f) for f in ["RLDM_main_experiment.pkl"]] dfs = [pd.read_pickle(path_to_df) for path_to_df in path_to_dfs] print("Read {} dataframes: {}".format(len(dfs), path_to_dfs)) # merge dfs df = pd.concat(dfs) print("Merged dataframes: {}".format(df.shape)) # do a few things to add helpful columns and such # use either solution_cost or states_evaluated as cost df['cost'] = np.maximum(df['solution_cost'].fillna(0), df['states_evaluated'].fillna(0)) # do the same for total cost df['total_cost'] = np.maximum(df['all_sequences_planning_cost'].fillna( 0), df['states_evaluated'].fillna(0)) df.columns # summarize the runs into a run df def summarize_df(df): summary_df = df.groupby('run_ID').agg({ 'agent': 'first', 'label': 'first', 'world': 'first', 'action': 'count', 'blockmap': 'last', 'states_evaluated': ['sum', 'mean', sem], 'partial_solution_cost': ['sum', 'mean', sem], 'solution_cost': ['sum', 'mean', sem], 'all_sequences_planning_cost': ['sum', 'mean', sem], 'perfect': 'last', 'cost': ['sum', 'mean', sem], 'total_cost': ['sum', 'mean', sem], # 'avg_cost_per_step_for_run': ['sum', 'mean', sem], }) return summary_df sum_df = summarize_df(df)
/Users/felixbinder/opt/anaconda3/envs/scoping/lib/python3.9/site-packages/numpy/core/_methods.py:262: RuntimeWarning: Degrees of freedom <= 0 for slice ret = _var(a, axis=axis, dtype=dtype, out=out, ddof=ddof, /Users/felixbinder/opt/anaconda3/envs/scoping/lib/python3.9/site-packages/numpy/core/_methods.py:254: RuntimeWarning: invalid value encountered in double_scalars ret = ret.dtype.type(ret / rcount)
MIT
analysis/tower selection analysis.ipynb
cogtoolslab/projection_block_construction
Let's explore the data a little bit
sum_df sum_df.groupby([('label', 'first')]).mean() sum_df.groupby([('label', 'first')]).count()
_____no_output_____
MIT
analysis/tower selection analysis.ipynb
cogtoolslab/projection_block_construction
What is the rate of success?
display(sum_df.groupby([('label', 'first')]).mean()[('perfect', 'last')]) sum_df.groupby([('label', 'first')]).mean()[('perfect', 'last')].plot( kind='bar', title='Rate of perfect solutions') plt.show()
_____no_output_____
MIT
analysis/tower selection analysis.ipynb
cogtoolslab/projection_block_construction
What is the difference in cost between the two conditions?
display(sum_df.groupby([('label', 'first')]).mean()[('cost', 'sum')]) sum_df.groupby([('label', 'first')]).mean()[('cost', 'sum')].plot( kind='bar', title='Mean action planning cost (for chosen solution', yerr=sum_df.groupby([('label', 'first')]).mean()[('cost', 'sem')]) plt.yscale('log') plt.show()
_____no_output_____
MIT
analysis/tower selection analysis.ipynb
cogtoolslab/projection_block_construction
What about the total cost?
display(sum_df.groupby([('label', 'first')]).mean()[('total_cost', 'sum')]) sum_df.groupby([('label', 'first')]).mean()[('total_cost', 'sum')].plot( kind='bar', title='Total planning cost', yerr=sum_df.groupby([('label', 'first')]).mean()[('total_cost', 'sem')]) plt.yscale('log') plt.show() df[df['label'] == 'Full Subgoal Decomposition 2']['_world'].tail(1).item().silhouette
_____no_output_____
MIT
analysis/tower selection analysis.ipynb
cogtoolslab/projection_block_construction
Is there a difference between the depth of found solutions?
display(sum_df.groupby([('label', 'first')]).mean()[('action', 'count')]) sum_df.groupby([('label', 'first')]).mean()[('action', 'count') ].plot(kind='bar', title='Mean number of actions')
_____no_output_____
MIT
analysis/tower selection analysis.ipynb
cogtoolslab/projection_block_construction
Tower analysisNow that we have explored the data, let's look at the distribution over towers. Let's make a scatterplot over subgoal and no subgoal costs.
tower_sum_df = df.groupby(['label', 'world']).agg({ 'cost': ['sum', 'mean', sem], 'total_cost': ['sum', 'mean', sem], }) # flatten the index tower_sum_df.reset_index(inplace=True) tower_sum_df # for the scatterplots, we can only show two agents at the same time. label1 = 'Full Subgoal Planning' label2 = 'Best First Search' plt.scatter( x=tower_sum_df[tower_sum_df['label'] == label1]['cost']['sum'], y=tower_sum_df[tower_sum_df['label'] == label2]['cost']['sum'], c=tower_sum_df[tower_sum_df['label'] == label1]['world']) plt.title("Action planning cost of solving a tower with and without subgoals") plt.xlabel("Cost of solving without subgoals") plt.ylabel("Cost of solving with subgoals") # log log plt.xscale('log') plt.yscale('log') plt.show()
_____no_output_____
MIT
analysis/tower selection analysis.ipynb
cogtoolslab/projection_block_construction
The same for the total subgoal planning cost
plt.scatter( x=tower_sum_df[tower_sum_df['label'] == label1]['total_cost']['sum'], y=tower_sum_df[tower_sum_df['label'] == label2]['total_cost']['sum'], c=tower_sum_df[tower_sum_df['label'] == label1]['world']) plt.title("Action planning cost of solving a tower with and without subgoals") plt.xlabel("Cost of solving without subgoals") plt.ylabel("Cost of solving with subgoals") # log log plt.xscale('log') plt.yscale('log') plt.show()
_____no_output_____
MIT
analysis/tower selection analysis.ipynb
cogtoolslab/projection_block_construction
Can we see a pattern between the relation of the solution and total subgoal planning cost for the subgoal agent?
plt.scatter( x=tower_sum_df[tower_sum_df['label'] == label1]['cost']['sum'], y=tower_sum_df[tower_sum_df['label'] == label2]['total_cost']['sum'], c=tower_sum_df[tower_sum_df['label'] == label1]['world']) plt.title("Cost of the found solution versus costs of all sequences of subgoals") plt.xlabel("Cost of the found solution") plt.ylabel("Cost of all subgoals") # log log plt.xscale('log') plt.yscale('log') plt.show()
_____no_output_____
MIT
analysis/tower selection analysis.ipynb
cogtoolslab/projection_block_construction
Looks like there are some **outliers**—let's look at those Do we have towers that can't be solved using a subgoal decomposition?
failed_df = df[(df['perfect'] == False)] display(failed_df) bad_ID = list(df[df['world_status'] == 'Fail']['run_ID'])[1] bad_ID df[df['run_ID'] == bad_ID] df[df['run_ID'] == bad_ID]['_chosen_subgoal_sequence'].dropna( ).values[-1].visual_display() df[df['run_ID'] == bad_ID]['_chosen_subgoal_sequence'].dropna( ).values[0][0].visual_display() failed_df['_world'].tail(1).item().silhouette failed_df['_world'].head(1).item().silhouette
_____no_output_____
MIT
analysis/tower selection analysis.ipynb
cogtoolslab/projection_block_construction
Imports
from music21 import converter, instrument, note, chord, stream
import glob
import pickle
import numpy as np
from keras.utils import np_utils
Using TensorFlow backend.
MIT
Music Generation.ipynb
karandevtyagi/AI-Music-Generator
Read a Midi File
midi = converter.parse("midi_songs/EyesOnMePiano.mid") midi midi.show('midi') midi.show('text') # Flat all the elements elements_to_parse = midi.flat.notes len(elements_to_parse) for e in elements_to_parse: print(e, e.offset) notes_demo = [] for ele in elements_to_parse: # If the element is a Note, then store it's pitch if isinstance(ele, note.Note): notes_demo.append(str(ele.pitch)) # If the element is a Chord, split each note of chord and join them with + elif isinstance(ele, chord.Chord): notes_demo.append("+".join(str(n) for n in ele.normalOrder)) len(notes_demo) isinstance(elements_to_parse[68], chord.Chord)
_____no_output_____
MIT
Music Generation.ipynb
karandevtyagi/AI-Music-Generator
Preprocessing all Files
notes = [] for file in glob.glob("midi_songs/*.mid"): midi = converter.parse(file) # Convert file into stream.Score Object # print("parsing %s"%file) elements_to_parse = midi.flat.notes for ele in elements_to_parse: # If the element is a Note, then store it's pitch if isinstance(ele, note.Note): notes.append(str(ele.pitch)) # If the element is a Chord, split each note of chord and join them with + elif isinstance(ele, chord.Chord): notes.append("+".join(str(n) for n in ele.normalOrder)) len(notes) with open("notes", 'wb') as filepath: pickle.dump(notes, filepath) with open("notes", 'rb') as f: notes= pickle.load(f) n_vocab = len(set(notes)) print("Total notes- ", len(notes)) print("Unique notes- ", n_vocab) print(notes[100:200])
['1+5+9', 'G#2', '1+5+9', '1+5+9', 'F3', 'F2', 'F2', 'F2', 'F2', 'F2', '4+9', 'E5', '4+9', 'C5', '4+9', 'A5', '4+9', '5+9', 'F5', '5+9', 'C5', '5+9', 'A5', '5+9', '4+9', 'E5', '4+9', 'C5', '4+9', 'A5', '4+9', 'F5', '5+9', 'C5', '5+9', 'E5', '5+9', 'D5', '5+9', 'E5', '4+9', 'E-5', '4+9', 'B5', '4+9', '4+9', 'A5', '5+9', '5+9', '5+9', '5+9', '4+9', '4+9', '4+9', '4+9', '5+9', '5+9', '5+9', '5+9', 'B4', '4+9', 'A4', '4+9', 'E5', '4+9', '4+9', 'E-5', '5+9', '5+9', '5+9', '5+9', '4+9', '4+9', '4+9', '4+9', '5+9', '5+9', '5+9', '5+9', 'E5', '4', 'E-5', 'C6', 'E5', '5', 'E-5', 'B5', 'E5', '6', 'E-5', 'C6', 'A5', '5', 'A4', '4', 'C5', 'E5', 'F5', 'E5', '5']
MIT
Music Generation.ipynb
karandevtyagi/AI-Music-Generator
Prepare Sequential Data for LSTM
# Hoe many elements LSTM input should consider sequence_length = 100 # All unique classes pitchnames = sorted(set(notes)) # Mapping between ele to int value ele_to_int = dict( (ele, num) for num, ele in enumerate(pitchnames) ) network_input = [] network_output = [] for i in range(len(notes) - sequence_length): seq_in = notes[i : i+sequence_length] # contains 100 values seq_out = notes[i + sequence_length] network_input.append([ele_to_int[ch] for ch in seq_in]) network_output.append(ele_to_int[seq_out]) # No. of examples n_patterns = len(network_input) print(n_patterns) # Desired shape for LSTM network_input = np.reshape(network_input, (n_patterns, sequence_length, 1)) print(network_input.shape) normalised_network_input = network_input/float(n_vocab) # Network output are the classes, encode into one hot vector network_output = np_utils.to_categorical(network_output) network_output.shape print(normalised_network_input.shape) print(network_output.shape)
(60398, 100, 1) (60398, 359)
MIT
Music Generation.ipynb
karandevtyagi/AI-Music-Generator
Create Model
from keras.models import Sequential, load_model from keras.layers import * from keras.callbacks import ModelCheckpoint, EarlyStopping model = Sequential() model.add( LSTM(units=512, input_shape = (normalised_network_input.shape[1], normalised_network_input.shape[2]), return_sequences = True) ) model.add( Dropout(0.3) ) model.add( LSTM(512, return_sequences=True) ) model.add( Dropout(0.3) ) model.add( LSTM(512) ) model.add( Dense(256) ) model.add( Dropout(0.3) ) model.add( Dense(n_vocab, activation="softmax") ) model.compile(loss="categorical_crossentropy", optimizer="adam") model.summary() #Trained on google colab checkpoint = ModelCheckpoint("model.hdf5", monitor='loss', verbose=0, save_best_only=True, mode='min') model_his = model.fit(normalised_network_input, network_output, epochs=100, batch_size=64, callbacks=[checkpoint]) model = load_model("new_weights.hdf5")
_____no_output_____
MIT
Music Generation.ipynb
karandevtyagi/AI-Music-Generator
Predictions
sequence_length = 100 network_input = [] for i in range(len(notes) - sequence_length): seq_in = notes[i : i+sequence_length] # contains 100 values network_input.append([ele_to_int[ch] for ch in seq_in]) # Any random start index start = np.random.randint(len(network_input) - 1) # Mapping int_to_ele int_to_ele = dict((num, ele) for num, ele in enumerate(pitchnames)) # Initial pattern pattern = network_input[start] prediction_output = [] # generate 200 elements for note_index in range(200): prediction_input = np.reshape(pattern, (1, len(pattern), 1)) # convert into numpy desired shape prediction_input = prediction_input/float(n_vocab) # normalise prediction = model.predict(prediction_input, verbose=0) idx = np.argmax(prediction) result = int_to_ele[idx] prediction_output.append(result) # Remove the first value, and append the recent value.. # This way input is moving forward step-by-step with time.. pattern.append(idx) pattern = pattern[1:] print(prediction_output)
['D2', 'D5', 'D3', 'C5', 'B4', 'D2', 'A4', 'D3', 'G#4', 'E5', '4+9', 'C5', 'A4', '0+5', 'C5', 'A4', 'F#5', 'C5', 'A4', '0+5', 'C5', 'A4', 'E5', 'C5', 'B4', 'D5', 'E5', '4+9', 'C5', 'A4', '0+5', '4+9', 'C5', 'A4', 'F#5', 'C5', 'A4', '0+5', 'C5', 'A4', 'E5', 'E3', 'C5', 'B2', 'B4', 'C3', 'D5', 'G#2', 'E5', '4+9', 'C5', 'A4', '0+5', '4+9', 'C5', 'A4', 'F#5', '4+9', 'C5', 'A4', '0+5', '4+9', 'C5', 'A4', 'E5', '4+9', 'C5', 'B4', '7', 'D5', 'E5', '4+9', 'C5', 'A4', '0+5', '4+9', 'C5', 'A4', 'F#5', '4+9', 'C5', 'A4', '0+5', '4+9', 'C5', 'A4', 'E5', '4+9', 'C5', 'B4', '7', 'D5', '9+0', '5', '4+7', '2+5', '7', '0+4', '11+2', '4', '0+4', 'E4', '2+5', '11', '11+2', '5', '7+11', '7', '9+0', '9', '11', '0', '4', 'A4', '5', 'F4', 'E5', 'A4', 'D5', '7', 'C5', 'A4', 'B4', '4', 'G4', 'C5', 'E4', 'D5', '11', 'G4', 'B4', '5', 'E4', 'G4', '7', 'E4', '4+9', '4+9', '4+9', '4+9', '4+9', '4+9', '2+7', '4+9', '4+9', '4+9', '4+9', '4+9', '2+7', 'E4', '4+9', 'A4', 'B4', 'C5', '4+9', 'B4', 'A4', 'E4', '4+9', 'C4', 'B3', '4+9', 'C4', 'A3', '4+9', 'C4', 'B3', '7', 'C4', 'D4', '5', 'E4', 'C4', '5', 'E4', 'D4', '4', 'E4', 'F4', 'B3', '2', 'C4', 'F4', '4+9', '4', 'D4', '4+8', '4', 'D4', 'A4', '4+9', 'E4', 'A4', 'C5', '4+9', 'B4', 'A4', 'E4', '4+9', 'C4']
MIT
Music Generation.ipynb
karandevtyagi/AI-Music-Generator
Create Midi File
offset = 0 # Time output_notes = [] for pattern in prediction_output: # if the pattern is a chord if ('+' in pattern) or pattern.isdigit(): notes_in_chord = pattern.split('+') temp_notes = [] for current_note in notes_in_chord: new_note = note.Note(int(current_note)) # create Note object for each note in the chord new_note.storedInstrument = instrument.Piano() temp_notes.append(new_note) new_chord = chord.Chord(temp_notes) # creates the chord() from the list of notes new_chord.offset = offset output_notes.append(new_chord) else: # if the pattern is a note new_note = note.Note(pattern) new_note.offset = offset new_note.storedInstrument = instrument.Piano() output_notes.append(new_note) offset += 0.5 # create a stream object from the generated notes midi_stream = stream.Stream(output_notes) midi_stream.write('midi', fp = "test_output.mid") midi_stream.show('midi')
_____no_output_____
MIT
Music Generation.ipynb
karandevtyagi/AI-Music-Generator
Small helper function to read the tokens.
def read_file(filename):
    tokens = []
    with open(PATH/filename, encoding='utf8') as f:
        for line in f:
            tokens.append(line.split() + [EOS])
    return np.array(tokens)

trn_tok = read_file('wiki.train.tokens')
val_tok = read_file('wiki.valid.tokens')
tst_tok = read_file('wiki.test.tokens')

len(trn_tok), len(val_tok), len(tst_tok)

' '.join(trn_tok[4][:20])

cnt = Counter(word for sent in trn_tok for word in sent)
cnt.most_common(10)
_____no_output_____
Apache-2.0
dev_nb/experiments/lm_checks.ipynb
gurvindersingh/fastai_v1
Give an id to each token and add the pad token (just in case we need it).
itos = [o for o,c in cnt.most_common()]
itos.insert(0,'<pad>')
vocab_size = len(itos); vocab_size
_____no_output_____
Apache-2.0
dev_nb/experiments/lm_checks.ipynb
gurvindersingh/fastai_v1
Creates the mapping from token to id then numericalizing our datasets.
stoi = collections.defaultdict(lambda : 5, {w:i for i,w in enumerate(itos)})
trn_ids = np.array([([stoi[w] for w in s]) for s in trn_tok])
val_ids = np.array([([stoi[w] for w in s]) for s in val_tok])
tst_ids = np.array([([stoi[w] for w in s]) for s in tst_tok])
_____no_output_____
Apache-2.0
dev_nb/experiments/lm_checks.ipynb
gurvindersingh/fastai_v1
Testing WeightDropout Create a bunch of parameters for deterministic tests.
module = nn.LSTM(20, 20) tst_input = torch.randn(2,5,20) tst_output = torch.randint(0,20,(10,)).long() save_params = {} for n,p in module._parameters.items(): save_params[n] = p.clone()
_____no_output_____
Apache-2.0
dev_nb/experiments/lm_checks.ipynb
gurvindersingh/fastai_v1
Old WeightDropout
module = nn.LSTM(20, 20) for n,p in save_params.items(): module._parameters[n] = nn.Parameter(p.clone()) dp_module = WeightDrop(module, 0.5) opt = optim.SGD(dp_module.parameters(), 10) dp_module.train() torch.manual_seed(7) x = tst_input.clone() x.requires_grad_(requires_grad=True) h = (torch.zeros(1,5,20), torch.zeros(1,5,20)) for _ in range(5): x,h = dp_module(x,h) getattr(dp_module.module, 'weight_hh_l0'),getattr(dp_module.module,'weight_hh_l0_raw') target = tst_output.clone() loss = F.nll_loss(x.view(-1,20), target) loss.backward() opt.step() w, w_raw = getattr(dp_module.module, 'weight_hh_l0'),getattr(dp_module.module,'weight_hh_l0_raw') w.grad, w_raw.grad getattr(dp_module.module, 'weight_hh_l0'),getattr(dp_module.module,'weight_hh_l0_raw')
_____no_output_____
Apache-2.0
dev_nb/experiments/lm_checks.ipynb
gurvindersingh/fastai_v1
New WeightDropout
class WeightDropout(nn.Module): "A module that warps another layer in which some weights will be replaced by 0 during training." def __init__(self, module, dropout, layer_names=['weight_hh_l0']): super().__init__() self.module,self.dropout,self.layer_names = module,dropout,layer_names for layer in self.layer_names: #Makes a copy of the weights of the selected layers. w = getattr(self.module, layer) self.register_parameter(f'{layer}_raw', nn.Parameter(w.data)) def _setweights(self): for layer in self.layer_names: raw_w = getattr(self, f'{layer}_raw') self.module._parameters[layer] = F.dropout(raw_w, p=self.dropout, training=self.training) def forward(self, *args): self._setweights() return self.module.forward(*args) def reset(self): if hasattr(self.module, 'reset'): self.module.reset() module = nn.LSTM(20, 20) for n,p in save_params.items(): module._parameters[n] = nn.Parameter(p.clone()) dp_module = WeightDropout(module, 0.5) opt = optim.SGD(dp_module.parameters(), 10) dp_module.train() torch.manual_seed(7) x = tst_input.clone() x.requires_grad_(requires_grad=True) h = (torch.zeros(1,5,20), torch.zeros(1,5,20)) for _ in range(5): x,h = dp_module(x,h) getattr(dp_module.module, 'weight_hh_l0'),getattr(dp_module,'weight_hh_l0_raw') target = tst_output.clone() loss = F.nll_loss(x.view(-1,20), target) loss.backward() opt.step() w, w_raw = getattr(dp_module.module, 'weight_hh_l0'),getattr(dp_module,'weight_hh_l0_raw') w.grad, w_raw.grad getattr(dp_module.module, 'weight_hh_l0'),getattr(dp_module,'weight_hh_l0_raw')
_____no_output_____
Apache-2.0
dev_nb/experiments/lm_checks.ipynb
gurvindersingh/fastai_v1
Testing EmbeddingDropout Create a bunch of parameters for deterministic tests.
enc = nn.Embedding(100,20, padding_idx=0) tst_input = torch.randint(0,100,(25,)).long() save_params = enc.weight.clone()
_____no_output_____
Apache-2.0
dev_nb/experiments/lm_checks.ipynb
gurvindersingh/fastai_v1
Old EmbeddingDropout
enc = nn.Embedding(100,20, padding_idx=0) enc.weight = nn.Parameter(save_params.clone()) enc_dp = EmbeddingDropout(enc) torch.manual_seed(7) x = tst_input.clone() enc_dp(x, dropout=0.5)
_____no_output_____
Apache-2.0
dev_nb/experiments/lm_checks.ipynb
gurvindersingh/fastai_v1
New EmbeddingDropout
def dropout_mask(x, sz, p): "Returns a dropout mask of the same type as x, size sz, with probability p to cancel an element." return x.new(*sz).bernoulli_(1-p)/(1-p) class EmbeddingDropout1(nn.Module): "Applies dropout in the embedding layer by zeroing out some elements of the embedding vector." def __init__(self, emb, dropout): super().__init__() self.emb,self.dropout = emb,dropout self.pad_idx = self.emb.padding_idx if self.pad_idx is None: self.pad_idx = -1 def forward(self, words, dropout=0.1, scale=None): if self.training and self.dropout != 0: size = (self.emb.weight.size(0),1) mask = dropout_mask(self.emb.weight.data, size, self.dropout) masked_emb_weight = mask * self.emb.weight else: masked_emb_weight = self.emb.weight if scale: masked_emb_weight = scale * masked_emb_weight return F.embedding(words, masked_emb_weight, self.pad_idx, self.emb.max_norm, self.emb.norm_type, self.emb.scale_grad_by_freq, self.emb.sparse) enc = nn.Embedding(100,20, padding_idx=0) enc.weight = nn.Parameter(save_params.clone()) enc_dp = EmbeddingDropout1(enc, 0.5) torch.manual_seed(7) x = tst_input.clone() enc_dp(x)
_____no_output_____
Apache-2.0
dev_nb/experiments/lm_checks.ipynb
gurvindersingh/fastai_v1
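The `dropout_mask` helper above uses inverted-dropout scaling: surviving entries are divided by `1 - p`, so the masked embedding matrix keeps the same expectation as the unmasked one. A minimal numeric check of that property (not part of the original notebook):

```python
# Minimal check: with inverted-dropout scaling the mask has mean close to 1,
# so masked embeddings keep their expected magnitude.
import torch

x = torch.zeros(10, 10)                       # only used to pick dtype/device for the mask
mask = dropout_mask(x, (10000, 1), 0.5)
print(mask.mean())                            # should be close to 1.0
print(sorted(set(mask.flatten().tolist())))   # values are 0.0 and 1/(1-p) = 2.0
```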
Testing RNN model Creating a bunch of parameters for deterministic testing.
tst_model = get_language_model(500, 20, 100, 2, 0, bias=True) save_parameters = {} for n,p in tst_model.state_dict().items(): save_parameters[n] = p.clone() tst_input = torch.randint(0, 500, (10,5)).long() tst_output = torch.randint(0, 500, (50,)).long()
_____no_output_____
Apache-2.0
dev_nb/experiments/lm_checks.ipynb
gurvindersingh/fastai_v1
Old RNN model
tst_model = get_language_model(500, 20, 100, 2, 0, bias=True, dropout=0.4, dropoute=0.1, dropouth=0.2, dropouti=0.6, wdrop=0.5) state_dict = OrderedDict() for n,p in save_parameters.items(): state_dict[n] = p.clone() tst_model.load_state_dict(state_dict) opt = optim.SGD(tst_model.parameters(), lr=10) torch.manual_seed(7) x = tst_input.clone() z = tst_model(x) z y = tst_output.clone() loss = F.nll_loss(z[0], y) loss.backward() opt.step() tst_model[0].rnns[0].module._parameters['weight_hh_l0_raw']
_____no_output_____
Apache-2.0
dev_nb/experiments/lm_checks.ipynb
gurvindersingh/fastai_v1
New RNN model
class RNNDropout(nn.Module): def __init__(self, p=0.5): super().__init__() self.p=p def forward(self, x): if not self.training or not self.p: return x m = dropout_mask(x.data, (1, x.size(1), x.size(2)), self.p) return m * x def repackage_var1(h): "Detaches h from its history." return h.detach() if type(h) == torch.Tensor else tuple(repackage_var(v) for v in h) class RNNCore(nn.Module): "AWD-LSTM/QRNN inspired by https://arxiv.org/abs/1708.02182" initrange=0.1 def __init__(self, vocab_sz, emb_sz, n_hid, n_layers, pad_token, bidir=False, hidden_p=0.2, input_p=0.6, embed_p=0.1, weight_p=0.5, qrnn=False): super().__init__() self.bs,self.qrnn,self.ndir = 1, qrnn,(2 if bidir else 1) self.emb_sz,self.n_hid,self.n_layers = emb_sz,n_hid,n_layers self.encoder = nn.Embedding(vocab_sz, emb_sz, padding_idx=pad_token) self.dp_encoder = EmbeddingDropout1(self.encoder, embed_p) if self.qrnn: #Using QRNN requires cupy: https://github.com/cupy/cupy from .torchqrnn.qrnn import QRNNLayer self.rnns = [QRNNLayer(emb_sz if l == 0 else n_hid, (n_hid if l != n_layers - 1 else emb_sz)//self.ndir, save_prev_x=True, zoneout=0, window=2 if l == 0 else 1, output_gate=True) for l in range(n_layers)] if weight_p != 0.: for rnn in self.rnns: rnn.linear = WeightDropout(rnn.linear, weight_p, layer_names=['weight']) else: self.rnns = [nn.LSTM(emb_sz if l == 0 else n_hid, (n_hid if l != n_layers - 1 else emb_sz)//self.ndir, 1, bidirectional=bidir) for l in range(n_layers)] if weight_p != 0.: self.rnns = [WeightDropout(rnn, weight_p) for rnn in self.rnns] self.rnns = torch.nn.ModuleList(self.rnns) self.encoder.weight.data.uniform_(-self.initrange, self.initrange) self.dropouti = RNNDropout(input_p) self.dropouths = nn.ModuleList([RNNDropout(hidden_p) for l in range(n_layers)]) def forward(self, input): sl,bs = input.size() if bs!=self.bs: self.bs=bs self.reset() raw_output = self.dropouti(self.dp_encoder(input)) new_hidden,raw_outputs,outputs = [],[],[] for l, (rnn,drop) in enumerate(zip(self.rnns, self.dropouths)): with warnings.catch_warnings(): #To avoid the warning that comes because the weights aren't flattened. 
warnings.simplefilter("ignore") raw_output, new_h = rnn(raw_output, self.hidden[l]) new_hidden.append(new_h) raw_outputs.append(raw_output) if l != self.n_layers - 1: raw_output = drop(raw_output) outputs.append(raw_output) self.hidden = repackage_var1(new_hidden) return raw_outputs, outputs def one_hidden(self, l): nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz)//self.ndir return self.weights.new(self.ndir, self.bs, nh).zero_() def reset(self): [r.reset() for r in self.rnns if hasattr(r, 'reset')] self.weights = next(self.parameters()).data if self.qrnn: self.hidden = [self.one_hidden(l) for l in range(self.n_layers)] else: self.hidden = [(self.one_hidden(l), self.one_hidden(l)) for l in range(self.n_layers)] class LinearDecoder1(nn.Module): "To go on top of a RNN_Core module" initrange=0.1 def __init__(self, n_out, n_hid, output_p, tie_encoder=None, bias=True): super().__init__() self.decoder = nn.Linear(n_hid, n_out, bias=bias) self.decoder.weight.data.uniform_(-self.initrange, self.initrange) self.dropout = RNNDropout(output_p) if bias: self.decoder.bias.data.zero_() if tie_encoder: self.decoder.weight = tie_encoder.weight def forward(self, input): raw_outputs, outputs = input output = self.dropout(outputs[-1]) decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2))) return decoded, raw_outputs, outputs class SequentialRNN1(nn.Sequential): def reset(self): for c in self.children(): if hasattr(c, 'reset'): c.reset() def get_language_model1(vocab_sz, emb_sz, n_hid, n_layers, pad_token, tie_weights=True, qrnn=False, bias=True, output_p=0.4, hidden_p=0.2, input_p=0.6, embed_p=0.1, weight_p=0.5): "To create a full AWD-LSTM" rnn_enc = RNNCore(vocab_sz, emb_sz, n_hid=n_hid, n_layers=n_layers, pad_token=pad_token, qrnn=qrnn, hidden_p=hidden_p, input_p=input_p, embed_p=embed_p, weight_p=weight_p) enc = rnn_enc.encoder if tie_weights else None return SequentialRNN1(rnn_enc, LinearDecoder1(vocab_sz, emb_sz, output_p, tie_encoder=enc, bias=bias))
_____no_output_____
Apache-2.0
dev_nb/experiments/lm_checks.ipynb
gurvindersingh/fastai_v1
The new model has weights that are organized a bit differently.
save_parameters1 = {} for n,p in save_parameters.items(): if 'weight_hh_l0' not in n and n!='0.encoder_with_dropout.embed.weight': save_parameters1[n] = p.clone() elif n=='0.encoder_with_dropout.embed.weight': save_parameters1['0.dp_encoder.emb.weight'] = p.clone() else: save_parameters1[n[:-4]] = p.clone() splits = n.split('.') splits.remove(splits[-2]) n1 = '.'.join(splits) save_parameters1[n1] = p.clone() tst_model = get_language_model1(500, 20, 100, 2, 0) tst_model.load_state_dict(save_parameters1) opt = optim.SGD(tst_model.parameters(), lr=10) torch.manual_seed(7) x = tst_input.clone() z = tst_model(x) z y = tst_output.clone() loss = F.nll_loss(z[0], y) loss.backward() opt.step() tst_model[0].rnns[0]._parameters['weight_hh_l0_raw']
_____no_output_____
Apache-2.0
dev_nb/experiments/lm_checks.ipynb
gurvindersingh/fastai_v1
Regularization We'll keep the same parameters as before. Old reg
tst_model = get_language_model(500, 20, 100, 2, 0, bias=True, dropout=0.4, dropoute=0.1, dropouth=0.2, dropouti=0.6, wdrop=0.5) state_dict = OrderedDict() for n,p in save_parameters.items(): state_dict[n] = p.clone() tst_model.load_state_dict(state_dict) opt = optim.SGD(tst_model.parameters(), lr=10, weight_decay=1) torch.manual_seed(7) x = tst_input.clone() z = tst_model(x) y = tst_output.clone() loss = F.nll_loss(z[0], y) loss = seq2seq_reg(z[0], z[1:], loss, 2, 1) loss.item() loss.backward() nn.utils.clip_grad_norm_(tst_model.parameters(), 0.1) opt.step() tst_model[0].rnns[0].module._parameters['weight_hh_l0_raw']
_____no_output_____
Apache-2.0
dev_nb/experiments/lm_checks.ipynb
gurvindersingh/fastai_v1
New reg
from dataclasses import dataclass @dataclass class RNNTrainer(Callback): model:nn.Module bptt:int clip:float=None alpha:float=0. beta:float=0. def on_loss_begin(self, last_output, **kwargs): #Save the extra outputs for later and only returns the true output. self.raw_out,self.out = last_output[1],last_output[2] return last_output[0] def on_backward_begin(self, last_loss, last_input, last_output, **kwargs): #Adjusts the lr to the bptt selected #self.learn.opt.lr *= last_input.size(0) / self.bptt #AR and TAR if self.alpha != 0.: last_loss += (self.alpha * self.out[-1].pow(2).mean()).sum() if self.beta != 0.: h = self.raw_out[-1] if len(h)>1: last_loss += (self.beta * (h[1:] - h[:-1]).pow(2).mean()).sum() return last_loss def on_backward_end(self, **kwargs): if self.clip: nn.utils.clip_grad_norm_(self.model.parameters(), self.clip) save_parameters1 = {} for n,p in save_parameters.items(): if 'weight_hh_l0' not in n and n!='0.encoder_with_dropout.embed.weight': save_parameters1[n] = p.clone() elif n=='0.encoder_with_dropout.embed.weight': save_parameters1['0.dp_encoder.embed.weight'] = p.clone() else: save_parameters1[n[:-4]] = p.clone() splits = n.split('.') splits.remove(splits[-2]) n1 = '.'.join(splits) save_parameters1[n1] = p.clone() tst_model = get_language_model1(500, 20, 100, 2, 0) tst_model.load_state_dict(save_parameters1) opt = optim.SGD(tst_model.parameters(), lr=10, weight_decay=1) torch.manual_seed(7) cb = RNNTrainer(tst_model, 10, 0.1, 2, 1) x = tst_input.clone() z = tst_model(x) y = tst_output.clone() z = cb.on_loss_begin(z) loss = F.nll_loss(z, y) loss = cb.on_backward_begin(loss, x, z) loss.item() loss.backward() cb.on_backward_end() opt.step() tst_model[0].rnns[0]._parameters['weight_hh_l0_raw']
_____no_output_____
Apache-2.0
dev_nb/experiments/lm_checks.ipynb
gurvindersingh/fastai_v1
Merging, Joining, and Concatenating: There are 3 main ways of combining DataFrames together: Merging, Joining and Concatenating. In this lecture we will discuss these 3 methods with examples. Example DataFrames
import pandas as pd df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3'], 'C': ['C0', 'C1', 'C2', 'C3'], 'D': ['D0', 'D1', 'D2', 'D3']}, index=[0, 1, 2, 3]) df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'], 'B': ['B4', 'B5', 'B6', 'B7'], 'C': ['C4', 'C5', 'C6', 'C7'], 'D': ['D4', 'D5', 'D6', 'D7']}, index=[4, 5, 6, 7]) df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'], 'B': ['B8', 'B9', 'B10', 'B11'], 'C': ['C8', 'C9', 'C10', 'C11'], 'D': ['D8', 'D9', 'D10', 'D11']}, index=[8, 9, 10, 11]) df1 df2 df3
_____no_output_____
Apache-2.0
03- General Pandas/06-Merging-Joining-and-Concatenating.ipynb
rikimarutsui/Python-for-Finance-Repo
Concatenation: Concatenation basically glues together DataFrames. Keep in mind that dimensions should match along the axis you are concatenating on. You can use **pd.concat** and pass in a list of DataFrames to concatenate together:
pd.concat([df1,df2,df3]) pd.concat([df1,df2,df3],axis=1)
_____no_output_____
Apache-2.0
03- General Pandas/06-Merging-Joining-and-Concatenating.ipynb
rikimarutsui/Python-for-Finance-Repo
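Concatenating along rows keeps each frame's original index labels, which is why the result above still runs 0 through 11. As a small aside (these are standard `pd.concat` options, not extra cells from the original notebook), `ignore_index` and `keys` control how the combined index is built:

```python
# Reset to a fresh 0..n-1 index, or label which frame each row came from.
pd.concat([df1, df2, df3], ignore_index=True)      # new RangeIndex
pd.concat([df1, df2, df3], keys=['x', 'y', 'z'])   # outer index level marks the source frame
```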
Example DataFrames
left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'], 'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3']}) right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'], 'C': ['C0', 'C1', 'C2', 'C3'], 'D': ['D0', 'D1', 'D2', 'D3']}) left right
_____no_output_____
Apache-2.0
03- General Pandas/06-Merging-Joining-and-Concatenating.ipynb
rikimarutsui/Python-for-Finance-Repo
Merging: The **merge** function allows you to merge DataFrames together using a similar logic as merging SQL tables together. For example:
pd.merge(left,right,how='inner',on='key')
_____no_output_____
Apache-2.0
03- General Pandas/06-Merging-Joining-and-Concatenating.ipynb
rikimarutsui/Python-for-Finance-Repo
Or to show a more complicated example:
left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'], 'key2': ['K0', 'K1', 'K0', 'K1'], 'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3']}) right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'], 'key2': ['K0', 'K0', 'K0', 'K0'], 'C': ['C0', 'C1', 'C2', 'C3'], 'D': ['D0', 'D1', 'D2', 'D3']}) pd.merge(left, right, on=['key1', 'key2']) pd.merge(left, right, how='outer', on=['key1', 'key2']) pd.merge(left, right, how='right', on=['key1', 'key2']) pd.merge(left, right, how='left', on=['key1', 'key2'])
_____no_output_____
Apache-2.0
03- General Pandas/06-Merging-Joining-and-Concatenating.ipynb
rikimarutsui/Python-for-Finance-Repo
Joining: Joining is a convenient method for combining the columns of two potentially differently-indexed DataFrames into a single result DataFrame.
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'], 'B': ['B0', 'B1', 'B2']}, index=['K0', 'K1', 'K2']) right = pd.DataFrame({'C': ['C0', 'C2', 'C3'], 'D': ['D0', 'D2', 'D3']}, index=['K0', 'K2', 'K3']) left.join(right) left.join(right, how='outer')
_____no_output_____
Apache-2.0
03- General Pandas/06-Merging-Joining-and-Concatenating.ipynb
rikimarutsui/Python-for-Finance-Repo
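Unlike `merge`, `join` aligns on the index by default. As a small aside (standard pandas behaviour, not an extra cell from the original notebook), an inner join keeps only the index labels present in both frames, which here are K0 and K2:

```python
# Only the index labels shared by both frames survive an inner join.
left.join(right, how='inner')
```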
2.1 Binary Variables $$Bern(x|\mu) = \mu^x(1-\mu)^{1-x}$$ $$Beta(\mu|a,b) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\mu^{a-1}(1-\mu)^{b-1}$$
bern = Binary() X = np.array([1,0,0,0,1,1,1,1,1,1,0,1]) bern.fit(X) bern.plot()
_____no_output_____
MIT
short_notebook/chapter02_short_ver.ipynb
hedwig100/PRML
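As a side note to the formulas above: the Beta distribution is conjugate to the Bernoulli likelihood, so the posterior simply adds the observed counts to the prior hyperparameters. A small sketch with `scipy.stats` (independent of the `Binary` class used above; the prior values `a = b = 2` are an arbitrary choice):

```python
# Beta-Bernoulli conjugacy sketch: with a Beta(a, b) prior and m ones / l zeros
# observed, the posterior is Beta(a + m, b + l).
import numpy as np
from scipy import stats

X = np.array([1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1])
a, b = 2.0, 2.0                      # assumed prior hyperparameters
n_ones = int(X.sum())
n_zeros = len(X) - n_ones
posterior = stats.beta(a + n_ones, b + n_zeros)
print(posterior.mean())              # posterior mean of mu
```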
2.2 Multinomial Variables $$p(\boldsymbol{x}|\boldsymbol{\mu}) = \Pi_{k=1}^K \mu_k^{x_k}$$ $$Dir(\boldsymbol{\mu}|\boldsymbol{\alpha}) = \frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1) \cdots \Gamma(\alpha_K)} \Pi_{k=1}^K \mu_k^{\alpha_k-1}$$ 2.3 The Gaussian Distribution $$\mathcal{N}(x|\mu,\sigma^2) = \sqrt{\frac{1}{2\pi\sigma^2}}\exp{(-\frac{(x - \mu)^2}{2\sigma^2})}$$ $$\mathcal{N}(\boldsymbol{x}|\boldsymbol{\mu},\boldsymbol{\Sigma}) = \sqrt{\frac{1}{(2\pi)^D|\Sigma|}}\exp{(-\frac{1}{2}(\boldsymbol{x} - \boldsymbol{\mu})^T\Sigma^{-1}(\boldsymbol{x} - \boldsymbol{\mu}))}$$
gauss = Gaussian1D() X = np.random.randn(100) + 4 gauss.fit(X) gauss.plot()
_____no_output_____
MIT
short_notebook/chapter02_short_ver.ipynb
hedwig100/PRML
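For reference, the maximum-likelihood estimates that a fit like the one above should recover are simply the sample mean and the (biased) sample variance. A quick sketch with plain NumPy (not part of the original notebook):

```python
# ML estimates for the 1-D Gaussian: sample mean and biased sample variance.
import numpy as np

X = np.random.randn(100) + 4
mu_ml = X.mean()
var_ml = ((X - mu_ml) ** 2).mean()   # divides by N, not N-1
print(mu_ml, var_ml)                 # roughly 4 and 1
```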
Student's t-distribution
plot_student(mu = 0,lamda = 2,nu = 3)
_____no_output_____
MIT
short_notebook/chapter02_short_ver.ipynb
hedwig100/PRML
2.5 Nonparametric Methods
hist = Histgram(delta=5e-1) X = np.random.randn(100) hist.fit(X) hist.plot()
_____no_output_____
MIT
short_notebook/chapter02_short_ver.ipynb
hedwig100/PRML
Kernel density estimators
parzen_gauss = Parzen() X = np.random.randn(100)*2 + 4 parzen_gauss.fit(X) parzen_gauss.plot() parzen_hist = Parzen(kernel = "hist") parzen_hist.fit(X) parzen_hist.plot()
_____no_output_____
MIT
short_notebook/chapter02_short_ver.ipynb
hedwig100/PRML
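For reference, the Gaussian kernel density estimator behind a Parzen window is $p(x) = \frac{1}{N}\sum_{n=1}^N \mathcal{N}(x|x_n, h^2)$. A minimal NumPy sketch of that formula (independent of the `Parzen` class used above; the bandwidth `h = 0.3` is an arbitrary choice):

```python
# Minimal Gaussian-kernel density estimate.
import numpy as np

def kde_gauss(x_grid, X, h=0.3):
    # p(x) = 1/N * sum_n N(x | x_n, h^2)
    diffs = x_grid[:, None] - X[None, :]
    return np.exp(-0.5 * (diffs / h) ** 2).sum(axis=1) / (len(X) * h * np.sqrt(2 * np.pi))

X = np.random.randn(100) * 2 + 4
grid = np.linspace(X.min(), X.max(), 200)
density = kde_gauss(grid, X)
print(np.trapz(density, grid))       # integrates to roughly 1
```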
Nearest-neighbor methods
knn5 = KNearestNeighbor(k=5) knn30 = KNearestNeighbor(k=30) X = np.random.randn(100)*2.4 + 5.1 knn5.fit(X) knn5.plot() knn30.fit(X) knn30.plot() def load_iris(): dict = { "Iris-setosa": 0, "Iris-versicolor": 1, "Iris-virginica": 2 } X = [] y = [] with open("../data/iris.data") as f: data = f.read() for line in data.split("\n"): # sepal length | sepal width | petal length | petal width if len(line) == 0: continue sl,sw,pl,pw,cl = line.split(",") rec = np.array(list(map(float,(sl,sw,pl,pw)))) cl = dict[cl] X.append(rec) y.append(cl) return np.array(X),np.array(y) X,y = load_iris() X = X[:,:2] knn10 = KNeighborClassifier() knn10.fit(X,y) knn10.plot() knn30 = KNeighborClassifier(k=30) knn30.fit(X,y) knn30.plot()
_____no_output_____
MIT
short_notebook/chapter02_short_ver.ipynb
hedwig100/PRML
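The k-nearest-neighbour density estimate fixes $k$ and lets the volume adapt to the data: $p(x) \approx k/(NV(x))$, where in one dimension $V(x)$ is twice the distance to the $k$-th nearest sample. A small sketch of that idea (independent of the `KNearestNeighbor` class used above):

```python
# 1-D k-NN density estimate sketch.
import numpy as np

def knn_density(x_grid, X, k=5):
    dists = np.sort(np.abs(x_grid[:, None] - X[None, :]), axis=1)
    V = 2 * dists[:, k - 1]                  # length of the interval holding the k nearest points
    return k / (len(X) * V)

X = np.random.randn(100) * 2.4 + 5.1
grid = np.linspace(X.min(), X.max(), 200)
print(knn_density(grid, X, k=5)[:5])
```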
Germany: LK Weißenburg-Gunzenhausen (Bayern)* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Bayern-LK-Weißenburg-Gunzenhausen.ipynb)
import datetime import time start = datetime.datetime.now() print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}") %config InlineBackend.figure_formats = ['svg'] from oscovida import * overview(country="Germany", subregion="LK Weißenburg-Gunzenhausen", weeks=5); overview(country="Germany", subregion="LK Weißenburg-Gunzenhausen"); compare_plot(country="Germany", subregion="LK Weißenburg-Gunzenhausen", dates="2020-03-15:"); # load the data cases, deaths = germany_get_region(landkreis="LK Weißenburg-Gunzenhausen") # get population of the region for future normalisation: inhabitants = population(country="Germany", subregion="LK Weißenburg-Gunzenhausen") print(f'Population of country="Germany", subregion="LK Weißenburg-Gunzenhausen": {inhabitants} people') # compose into one table table = compose_dataframe_summary(cases, deaths) # show tables with up to 1000 rows pd.set_option("max_rows", 1000) # display the table table
_____no_output_____
CC-BY-4.0
ipynb/Germany-Bayern-LK-Weißenburg-Gunzenhausen.ipynb
oscovida/oscovida.github.io
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Bayern-LK-Weißenburg-Gunzenhausen.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and " f"deaths at {fetch_deaths_last_execution()}.") # to force a fresh download of data, run "clear_cache()" print(f"Notebook execution took: {datetime.datetime.now()-start}")
_____no_output_____
CC-BY-4.0
ipynb/Germany-Bayern-LK-Weißenburg-Gunzenhausen.ipynb
oscovida/oscovida.github.io
Subject Selection Experiments disorder data - Srinivas (handle: thewickedaxe) Initial Data Cleaning
# Standard import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt # Dimensionality reduction and Clustering from sklearn.decomposition import PCA from sklearn.cluster import KMeans from sklearn.cluster import MeanShift, estimate_bandwidth from sklearn import manifold, datasets from itertools import cycle # Plotting tools and classifiers from matplotlib.colors import ListedColormap from sklearn.linear_model import LogisticRegression from sklearn.cross_validation import train_test_split from sklearn import preprocessing from sklearn.datasets import make_moons, make_circles, make_classification from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.naive_bayes import GaussianNB from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA from sklearn import cross_validation from sklearn.cross_validation import LeaveOneOut # Let's read the data in and clean it def get_NaNs(df): columns = list(df.columns.get_values()) row_metrics = df.isnull().sum(axis=1) rows_with_na = [] for i, x in enumerate(row_metrics): if x > 0: rows_with_na.append(i) return rows_with_na def remove_NaNs(df): rows_with_na = get_NaNs(df) cleansed_df = df.drop(df.index[rows_with_na], inplace=False) return cleansed_df initial_data = pd.DataFrame.from_csv('Data_Adults_1_reduced_inv1.csv') cleansed_df = remove_NaNs(initial_data) # Let's also get rid of nominal data numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64'] X = cleansed_df.select_dtypes(include=numerics) print X.shape # Let's now clean columns getting rid of certain columns that might not be important to our analysis cols2drop = ['GROUP_ID', 'doa', 'Baseline_header_id', 'Concentration_header_id', 'Baseline_Reading_id', 'Concentration_Reading_id'] X = X.drop(cols2drop, axis=1, inplace=False) print X.shape # For our studies children skew the data, it would be cleaner to just analyse adults X = X.loc[X['Age'] >= 18] Y = X.loc[X['race_id'] == 1] X = X.loc[X['Gender_id'] == 1] print X.shape print Y.shape
(4383, 137) (2624, 137) (2981, 137)
Apache-2.0
Code/Assignment-10/SubjectSelectionExperiments (rCBF data).ipynb
Upward-Spiral-Science/spect-team
Extracting the samples we are interested in
# Let's extract ADHd and Bipolar patients (mutually exclusive) ADHD_men = X.loc[X['ADHD'] == 1] ADHD_men = ADHD_men.loc[ADHD_men['Bipolar'] == 0] BP_men = X.loc[X['Bipolar'] == 1] BP_men = BP_men.loc[BP_men['ADHD'] == 0] ADHD_cauc = Y.loc[Y['ADHD'] == 1] ADHD_cauc = ADHD_cauc.loc[ADHD_cauc['Bipolar'] == 0] BP_cauc = Y.loc[Y['Bipolar'] == 1] BP_cauc = BP_cauc.loc[BP_cauc['ADHD'] == 0] print ADHD_men.shape print BP_men.shape print ADHD_cauc.shape print BP_cauc.shape # Keeping a backup of the data frame object because numpy arrays don't play well with certain scikit functions ADHD_men = pd.DataFrame(ADHD_men.drop(['Patient_ID', 'Gender_id', 'ADHD', 'Bipolar'], axis = 1, inplace = False)) BP_men = pd.DataFrame(BP_men.drop(['Patient_ID', 'Gender_id', 'ADHD', 'Bipolar'], axis = 1, inplace = False)) ADHD_cauc = pd.DataFrame(ADHD_cauc.drop(['Patient_ID', 'race_id', 'ADHD', 'Bipolar'], axis = 1, inplace = False)) BP_cauc = pd.DataFrame(BP_cauc.drop(['Patient_ID', 'race_id', 'ADHD', 'Bipolar'], axis = 1, inplace = False))
(1056, 137) (257, 137) (1110, 137) (323, 137)
Apache-2.0
Code/Assignment-10/SubjectSelectionExperiments (rCBF data).ipynb
Upward-Spiral-Science/spect-team
Dimensionality reduction Manifold Techniques ISOMAP
combined1 = pd.concat([ADHD_men, BP_men]) combined2 = pd.concat([ADHD_cauc, BP_cauc]) print combined1.shape print combined2.shape combined1 = preprocessing.scale(combined1) combined2 = preprocessing.scale(combined2) combined1 = manifold.Isomap(20, 20).fit_transform(combined1) ADHD_men_iso = combined1[:1056] BP_men_iso = combined1[1056:] combined2 = manifold.Isomap(20, 20).fit_transform(combined2) ADHD_cauc_iso = combined2[:1110] BP_cauc_iso = combined2[1110:]
_____no_output_____
Apache-2.0
Code/Assignment-10/SubjectSelectionExperiments (rCBF data).ipynb
Upward-Spiral-Science/spect-team
Clustering and other grouping experiments K-Means clustering - iso
data1 = pd.concat([pd.DataFrame(ADHD_men_iso), pd.DataFrame(BP_men_iso)]) data2 = pd.concat([pd.DataFrame(ADHD_cauc_iso), pd.DataFrame(BP_cauc_iso)]) print data1.shape print data2.shape kmeans = KMeans(n_clusters=2) kmeans.fit(data1.get_values()) labels1 = kmeans.labels_ centroids1 = kmeans.cluster_centers_ print('Estimated number of clusters: %d' % len(centroids1)) for label in [0, 1]: ds = data1.get_values()[np.where(labels1 == label)] plt.plot(ds[:,0], ds[:,1], '.') lines = plt.plot(centroids1[label,0], centroids1[label,1], 'o') kmeans = KMeans(n_clusters=2) kmeans.fit(data2.get_values()) labels2 = kmeans.labels_ centroids2 = kmeans.cluster_centers_ print('Estimated number of clusters: %d' % len(centroids2)) for label in [0, 1]: ds2 = data2.get_values()[np.where(labels2 == label)] plt.plot(ds2[:,0], ds2[:,1], '.') lines = plt.plot(centroids2[label,0], centroids2[label,1], 'o')
Estimated number of clusters: 2
Apache-2.0
Code/Assignment-10/SubjectSelectionExperiments (rCBF data).ipynb
Upward-Spiral-Science/spect-team
As is evident from the above 2 experiments, no clean separation is apparent: there are 2 clear groups, but with significant overlap between them. Classification Experiments Let's experiment with a bunch of classifiers
ADHD_men_iso = pd.DataFrame(ADHD_men_iso) BP_men_iso = pd.DataFrame(BP_men_iso) ADHD_cauc_iso = pd.DataFrame(ADHD_cauc_iso) BP_cauc_iso = pd.DataFrame(BP_cauc_iso) BP_men_iso['ADHD-Bipolar'] = 0 ADHD_men_iso['ADHD-Bipolar'] = 1 BP_cauc_iso['ADHD-Bipolar'] = 0 ADHD_cauc_iso['ADHD-Bipolar'] = 1 data1 = pd.concat([ADHD_men_iso, BP_men_iso]) data2 = pd.concat([ADHD_cauc_iso, BP_cauc_iso]) class_labels1 = data1['ADHD-Bipolar'] class_labels2 = data2['ADHD-Bipolar'] data1 = data1.drop(['ADHD-Bipolar'], axis = 1, inplace = False) data2 = data2.drop(['ADHD-Bipolar'], axis = 1, inplace = False) data1 = data1.get_values() data2 = data2.get_values() # Leave one Out cross validation def leave_one_out(classifier, values, labels): leave_one_out_validator = LeaveOneOut(len(values)) classifier_metrics = cross_validation.cross_val_score(classifier, values, labels, cv=leave_one_out_validator) accuracy = classifier_metrics.mean() deviation = classifier_metrics.std() return accuracy, deviation rf = RandomForestClassifier(n_estimators = 22) qda = QDA() lda = LDA() gnb = GaussianNB() classifier_accuracy_list = [] classifiers = [(rf, "Random Forest"), (lda, "LDA"), (qda, "QDA"), (gnb, "Gaussian NB")] for classifier, name in classifiers: accuracy, deviation = leave_one_out(classifier, data1, class_labels1) print '%s accuracy is %0.4f (+/- %0.3f)' % (name, accuracy, deviation) classifier_accuracy_list.append((name, accuracy)) for classifier, name in classifiers: accuracy, deviation = leave_one_out(classifier, data2, class_labels2) print '%s accuracy is %0.4f (+/- %0.3f)' % (name, accuracy, deviation) classifier_accuracy_list.append((name, accuracy))
Random Forest accuracy is 0.7565 (+/- 0.429) LDA accuracy is 0.7739 (+/- 0.418) QDA accuracy is 0.7306 (+/- 0.444) Gaussian NB accuracy is 0.7558 (+/- 0.430)
Apache-2.0
Code/Assignment-10/SubjectSelectionExperiments (rCBF data).ipynb
Upward-Spiral-Science/spect-team
Riskfolio-Lib Tutorial: __[Financionerioncios](https://financioneroncios.wordpress.com)____[Orenji](https://www.orenj-i.net)____[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)____[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__ Part IX: Portfolio Optimization with Risk Factors and Principal Components Regression (PCR) 1. Downloading the data:
import numpy as np import pandas as pd import yfinance as yf import warnings warnings.filterwarnings("ignore") yf.pdr_override() pd.options.display.float_format = '{:.4%}'.format # Date range start = '2016-01-01' end = '2019-12-30' # Tickers of assets assets = ['JCI', 'TGT', 'CMCSA', 'CPB', 'MO', 'NBL', 'APA', 'MMC', 'JPM', 'ZION', 'PSA', 'BAX', 'BMY', 'LUV', 'PCAR', 'TXT', 'DHR', 'DE', 'MSFT', 'HPQ', 'SEE', 'VZ', 'CNP', 'NI'] assets.sort() # Tickers of factors factors = ['MTUM', 'QUAL', 'VLUE', 'SIZE', 'USMV'] factors.sort() tickers = assets + factors tickers.sort() # Downloading data data = yf.download(tickers, start = start, end = end) data = data.loc[:,('Adj Close', slice(None))] data.columns = tickers # Calculating returns X = data[factors].pct_change().dropna() Y = data[assets].pct_change().dropna() display(X.head())
_____no_output_____
BSD-3-Clause
examples/Tutorial 9.ipynb
xiaolongguo/Riskfolio-Lib
2. Estimating Mean Variance Portfolios with PCR 2.1 Estimating the loadings matrix with PCR. This part is just to visualize how Riskfolio-Lib calculates a loadings matrix using PCR.
import riskfolio.ParamsEstimation as pe feature_selection = 'PCR' # Method to select best model, could be PCR or Stepwise n_components = 0.95 # 95% of explained variance. See PCA in scikit learn for more information loadings = pe.loadings_matrix(X=X, Y=Y, feature_selection=feature_selection, n_components=n_components) loadings.style.format("{:.4f}").background_gradient(cmap='RdYlGn')
_____no_output_____
BSD-3-Clause
examples/Tutorial 9.ipynb
xiaolongguo/Riskfolio-Lib
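For intuition only, the PCR idea can be sketched with scikit-learn: regress each asset's returns on the leading principal components of the factor returns, then map the coefficients back to the original factors. This is an illustrative sketch, not how `pe.loadings_matrix` is implemented internally:

```python
# Rough PCR sketch (illustrative only).
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

pca = PCA(n_components=0.95)                 # keep 95% of explained variance
Z = pca.fit_transform(X - X.mean())          # scores of the (centred) factor returns
betas = {}
for asset in Y.columns:
    reg = LinearRegression().fit(Z, Y[asset])
    # map coefficients on the components back to the original factors
    betas[asset] = pca.components_.T @ reg.coef_
```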
2.2 Calculating the portfolio that maximizes Sharpe ratio.
import riskfolio.Portfolio as pf # Building the portfolio object port = pf.Portfolio(returns=Y) # Calculating optimum portfolio # Select method and estimate input parameters: method_mu='hist' # Method to estimate expected returns based on historical data. method_cov='hist' # Method to estimate covariance matrix based on historical data. port.assets_stats(method_mu=method_mu, method_cov=method_cov) feature_selection = 'PCR' # Method to select best model, could be PCR or Stepwise n_components = 0.95 # 95% of explained variance. See PCA in scikit learn for more information port.factors = X port.factors_stats(method_mu=method_mu, method_cov=method_cov, feature_selection=feature_selection, n_components=n_components ) # Estimate optimal portfolio: model='FM' # Factor Model rm = 'MV' # Risk measure used, this time will be variance obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe hist = False # Use historical scenarios for risk measures that depend on scenarios rf = 0 # Risk free rate l = 0 # Risk aversion factor, only useful when obj is 'Utility' w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist) display(w.T)
_____no_output_____
BSD-3-Clause
examples/Tutorial 9.ipynb
xiaolongguo/Riskfolio-Lib
2.3 Plotting portfolio composition
import riskfolio.PlotFunctions as plf # Plotting the composition of the portfolio ax = plf.plot_pie(w=w, title='Sharpe FM Mean Variance', others=0.05, nrow=25, cmap = "tab20", height=6, width=10, ax=None)
_____no_output_____
BSD-3-Clause
examples/Tutorial 9.ipynb
xiaolongguo/Riskfolio-Lib
2.4 Calculate efficient frontier
points = 50 # Number of points of the frontier frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist) display(frontier.T.head()) # Plotting the efficient frontier label = 'Max Risk Adjusted Return Portfolio' # Title of point mu = port.mu_fm # Expected returns cov = port.cov_fm # Covariance matrix returns = port.returns_fm # Returns of the assets ax = plf.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm, rf=rf, alpha=0.01, cmap='viridis', w=w, label=label, marker='*', s=16, c='r', height=6, width=10, ax=None) # Plotting efficient frontier composition ax = plf.plot_frontier_area(w_frontier=frontier, cmap="tab20", height=6, width=10, ax=None)
_____no_output_____
BSD-3-Clause
examples/Tutorial 9.ipynb
xiaolongguo/Riskfolio-Lib
3. Estimating Portfolios Using Risk Factors with Other Risk Measures and PCR: In this part I will calculate optimal portfolios for several risk measures using a __mean estimate based on PCR__. I will find the portfolios that maximize the risk adjusted return for all available risk measures. 3.1 Calculate Optimal Portfolios for Several Risk Measures. I will maintain the constraints on risk factors.
# Risk Measures available: # # 'MV': Standard Deviation. # 'MAD': Mean Absolute Deviation. # 'MSV': Semi Standard Deviation. # 'FLPM': First Lower Partial Moment (Omega Ratio). # 'SLPM': Second Lower Partial Moment (Sortino Ratio). # 'CVaR': Conditional Value at Risk. # 'WR': Worst Realization (Minimax) # 'MDD': Maximum Drawdown of uncompounded returns (Calmar Ratio). # 'ADD': Average Drawdown of uncompounded returns. # 'CDaR': Conditional Drawdown at Risk of uncompounded returns. # port.reset_linear_constraints() # To reset linear constraints (factor constraints) rms = ['MV', 'MAD', 'MSV', 'FLPM', 'SLPM', 'CVaR', 'WR', 'MDD', 'ADD', 'CDaR'] w_s = pd.DataFrame([]) # When we use hist = True the risk measures all calculated # using historical returns, while when hist = False the # risk measures are calculated using the expected returns # based on risk factor model: R = a + B * F hist = False for i in rms: w = port.optimization(model=model, rm=i, obj=obj, rf=rf, l=l, hist=hist) w_s = pd.concat([w_s, w], axis=1) w_s.columns = rms w_s.style.format("{:.2%}").background_gradient(cmap='YlGn') import matplotlib.pyplot as plt # Plotting a comparison of assets weights for each portfolio fig = plt.gcf() fig.set_figwidth(14) fig.set_figheight(6) ax = fig.subplots(nrows=1, ncols=1) w_s.plot.bar(ax=ax) w_s = pd.DataFrame([]) # When we use hist = True the risk measures all calculated # using historical returns, while when hist = False the # risk measures are calculated using the expected returns # based on risk factor model: R = a + B * F hist = True for i in rms: w = port.optimization(model=model, rm=i, obj=obj, rf=rf, l=l, hist=hist) w_s = pd.concat([w_s, w], axis=1) w_s.columns = rms w_s.style.format("{:.2%}").background_gradient(cmap='YlGn') import matplotlib.pyplot as plt # Plotting a comparison of assets weights for each portfolio fig = plt.gcf() fig.set_figwidth(14) fig.set_figheight(6) ax = fig.subplots(nrows=1, ncols=1) w_s.plot.bar(ax=ax)
_____no_output_____
BSD-3-Clause
examples/Tutorial 9.ipynb
xiaolongguo/Riskfolio-Lib
Notes / to do:
- test across all columns (when is it worth running in this system)
- shrinking encodings of different type (approximate queries)

ca_police 6.7GB, full_files 802MB, col_files 802 MB, parse time 3min 36s
- PREDS[5:6] 22161713: read_fast 8.66 s, read 1min 24s, read_chunks 2min 20s, read_csv 2min 55s
- PREDS[4:10] 528503: read_fast 6.58s, read 2min 7s, read_chunks 2min 12s, read_csv 3min 21s
- PREDS[0:20] 1: read_fast 2.66 s, read 6min 38s, read_chunks 1min 59s, read_csv 2min 58s

github_issues 1min 27s
- PREDS[0:3] 1: read_fast 51.9 s, read 2min 56s, read_chunks 49.9 s, read_csv 1min 9s
- PREDS[0:1] 102077: read_fast 2min 3s, read 4min 43s, read_chunks 1min 23s, read_csv 2min 26s

block_total 22.6 s
- PREDS[2:3] 183368: read_fast 795ms, read 29.5 s, read_chunks 21.8 s, read_csv 17.5 s
- PREDS[0:3] 1: read_fast 1.92 s, read 1min 13s, read_chunks 26.3 s, read_csv 29.6 s

Queries
- ca_police[8] 82260698: cardinality_one 234 ms, cardinality_many 447 ms, normal 2-3 min
- ca_police[8:10]: cardinality_many 4.1s, read_fast ~10 s, normal ~2 min
- ca_police[8:11]: cardinality_many 6.57s, read_fast ~10 s, normal ~2 min
- ca_police[8:13]: cardinality_many 12.1s, read_fast ~10 s, normal ~2 min
indices = [] df_all = [] for i in range(0,num_chunks+1): df_all.append(feather.read_dataframe(f'{FEATHER_DIR}full{i}.f')) df = pd.concat(df_all, ignore_index=True) print('concatted') print(df.index)
concatted RangeIndex(start=0, stop=4999999, step=1)
PSF-2.0
Column_Storage.ipynb
arjunrawal4/pandas-memdb
Analyze results for 3D CGAN (Feb 22, 2021)
import numpy as np import matplotlib.pyplot as plt import pandas as pd import subprocess as sp import sys import os import glob import pickle from matplotlib.colors import LogNorm, PowerNorm, Normalize import seaborn as sns from functools import reduce from ipywidgets import * %matplotlib widget sys.path.append('/global/u1/v/vpa/project/jpt_notebooks/Cosmology/Cosmo_GAN/repositories/cosmogan_pytorch/code/modules_image_analysis/') from modules_img_analysis import * sys.path.append('/global/u1/v/vpa/project/jpt_notebooks/Cosmology/Cosmo_GAN/repositories/cosmogan_pytorch/code/5_3d_cgan/1_main_code/') import post_analysis_pandas as post ### Transformation functions for image pixel values def f_transform(x): return 2.*x/(x + 4.) - 1. def f_invtransform(s): return 4.*(1. + s)/(1. - s) # img_size=64 img_size=128 val_data_dict={'64':'/global/cfs/cdirs/m3363/vayyar/cosmogan_data/raw_data/3d_data/dataset2a_3dcgan_4univs_64cube_simple_splicing', '128':'/global/cfs/cdirs/m3363/vayyar/cosmogan_data/raw_data/3d_data/dataset4_smoothing_4univ_cgan_varying_sigma_128cube'}
_____no_output_____
BSD-3-Clause-LBNL
code/5_3d_cgan/2_cgan_analysis/1_cgan3d_analyze-results.ipynb
vmos1/cosmogan_pytorch
Read validation data
# bins=np.concatenate([np.array([-0.5]),np.arange(0.5,20.5,1),np.arange(20.5,100.5,5),np.arange(100.5,1000.5,50),np.array([2000])]) #bin edges to use bins=np.concatenate([np.array([-0.5]),np.arange(0.5,100.5,5),np.arange(100.5,300.5,20),np.arange(300.5,1000.5,50),np.array([2000])]) #bin edges to use bins=f_transform(bins) ### scale to (-1,1) # ### Extract validation data sigma_lst=[0.5,0.65,0.8,1.1] labels_lst=range(len(sigma_lst)) bkgnd_dict={} num_bkgnd=100 for label in labels_lst: fname=val_data_dict[str(img_size)]+'/norm_1_sig_{0}_train_val.npy'.format(sigma_lst[label]) print(fname) samples=np.load(fname,mmap_mode='r')[-num_bkgnd:][:,0,:,:] dict_val=post.f_compute_hist_spect(samples,bins) bkgnd_dict[str(sigma_lst[label])]=dict_val del samples
/global/cfs/cdirs/m3363/vayyar/cosmogan_data/raw_data/3d_data/dataset4_smoothing_4univ_cgan_varying_sigma_128cube/norm_1_sig_0.5_train_val.npy /global/cfs/cdirs/m3363/vayyar/cosmogan_data/raw_data/3d_data/dataset4_smoothing_4univ_cgan_varying_sigma_128cube/norm_1_sig_0.65_train_val.npy /global/cfs/cdirs/m3363/vayyar/cosmogan_data/raw_data/3d_data/dataset4_smoothing_4univ_cgan_varying_sigma_128cube/norm_1_sig_0.8_train_val.npy /global/cfs/cdirs/m3363/vayyar/cosmogan_data/raw_data/3d_data/dataset4_smoothing_4univ_cgan_varying_sigma_128cube/norm_1_sig_1.1_train_val.npy
BSD-3-Clause-LBNL
code/5_3d_cgan/2_cgan_analysis/1_cgan3d_analyze-results.ipynb
vmos1/cosmogan_pytorch
Read data
# main_dir='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/128sq/' # results_dir=main_dir+'20201002_064327' dict1={'64':'/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/3d_cGAN/', '128':'/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/3d_cGAN/'} u=interactive(lambda x: dict1[x], x=Select(options=dict1.keys())) # display(u) # parent_dir=u.result parent_dir=dict1[str(img_size)] dir_lst=[i.split('/')[-1] for i in glob.glob(parent_dir+'202107*')] n=interactive(lambda x: x, x=Dropdown(options=dir_lst)) display(n) result=n.result result_dir=parent_dir+result print(result_dir)
/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/3d_cGAN/20210726_173009_cgan_128_nodes1_lr0.000002_finetune
BSD-3-Clause-LBNL
code/5_3d_cgan/2_cgan_analysis/1_cgan3d_analyze-results.ipynb
vmos1/cosmogan_pytorch
Plot Losses
df_metrics=pd.read_pickle(result_dir+'/df_metrics.pkle').astype(np.float64) df_metrics.tail(10) def f_plot_metrics(df,col_list): plt.figure() for key in col_list: plt.plot(df_metrics[key],label=key,marker='*',linestyle='') plt.legend() # col_list=list(col_list) # df.plot(kind='line',x='step',y=col_list) # f_plot_metrics(df_metrics,['spec_chi','hist_chi']) interact_manual(f_plot_metrics,df=fixed(df_metrics), col_list=SelectMultiple(options=df_metrics.columns.values)) chi=df_metrics.quantile(q=0.2,axis=0)['hist_chi'] print(chi) df_metrics[(df_metrics['hist_chi']<=chi)&(df_metrics.epoch>30)].sort_values(by=['hist_chi']).head(10) # display(df_metrics.sort_values(by=['hist_chi']).head(8)) # display(df_metrics.sort_values(by=['spec_chi']).head(8))
_____no_output_____
BSD-3-Clause-LBNL
code/5_3d_cgan/2_cgan_analysis/1_cgan3d_analyze-results.ipynb
vmos1/cosmogan_pytorch
Read stored chi-squares for images
## Get sigma list from saved files flist=glob.glob(result_dir+'/df_processed*') sigma_lst=[i.split('/')[-1].split('df_processed_')[-1].split('.pkle')[0] for i in flist] sigma_lst.sort() ### Sorting is important for labels to match !! labels_lst=np.arange(len(sigma_lst)) sigma_lst,labels_lst ### Create a merged dataframe df_list=[] for label in labels_lst: df=pd.read_pickle(result_dir+'/df_processed_{0}.pkle'.format(str(sigma_lst[label]))) df[['epoch','step']]=df[['epoch','step']].astype(int) df['label']=df.epoch.astype(str)+'-'+df.step.astype(str) # Add label column for plotting df_list.append(df) for i,df in enumerate(df_list): df1=df.add_suffix('_'+str(i)) # renaming the columns to be joined on keys=['epoch','step','img_type','label'] rename_cols_dict={key+'_'+str(i):key for key in keys} # print(rename_cols_dict) df1.rename(columns=rename_cols_dict,inplace=True) df_list[i]=df1 df_merged=reduce(lambda x, y : pd.merge(x, y, on = ['step','epoch','img_type','label']), df_list) ### Get sum of all 4 classes for 3 types of chi-squares for chi_type in ['chi_1','chi_spec1','chi_spec2','chi_1a','chi_1b','chi_1c']: keys=[chi_type+'_'+str(label) for label in labels_lst] # display(df_merged[keys].sum(axis=1)) df_merged['sum_'+chi_type]=df_merged[keys].sum(axis=1) del df_list df
_____no_output_____
BSD-3-Clause-LBNL
code/5_3d_cgan/2_cgan_analysis/1_cgan3d_analyze-results.ipynb
vmos1/cosmogan_pytorch
Slice best steps
def f_slice_merged_df(df,cutoff=0.2,sort_col='chi_1',col_mode='all',label='all',params_lst=[0,1,2],head=10,epoch_range=[0,None],use_sum=True,display_flag=False): ''' View dataframe after slicing ''' if epoch_range[1]==None: epoch_range[1]=df.max()['epoch'] df=df[(df.epoch<=epoch_range[1])&(df.epoch>=epoch_range[0])] ######### Apply cutoff to keep reasonable chi1 and chispec1 #### Add chi-square columns to use chi_cols=[] if use_sum: ## Add sum chi-square columns for j in ['chi_1','chi_spec1','chi_1a','chi_1b','chi_1c']: chi_cols.append('sum_'+j) if label=='all': ### Add chi-squares for all labels for j in ['chi_1','chi_spec1','chi_1a','chi_1b','chi_1c']: for idx,i in enumerate(params_lst): chi_cols.append(j+'_'+str(idx)) else: ## Add chi-square for specific label assert label in params_lst, "label %s is not in %s"%(label,params_lst) label_idx=params_lst.index(label) print(label_idx) for j in ['chi_1','chi_spec1','chi_spec2','chi_1a','chi_1b','chi_1c']: chi_cols.append(j+'_'+str(label_idx)) # print(chi_cols) q_dict=dict(df_merged.quantile(q=cutoff,axis=0)[chi_cols]) # print(q_dict) strg=['%s < %s'%(key,q_dict[key]) for key in chi_cols ] query=" & ".join(strg) # print(query) df=df.query(query) # Sort dataframe df1=df[df.epoch>0].sort_values(by=sort_col) chis=[i for i in df_merged.columns if 'chi' in i] col_list=['label']+chis+['epoch','step'] if (col_mode=='short'): col_list=['label']+[i for i in df_merged.columns if i.startswith('sum')] col_list=['label']+chi_cols df2=df1.head(head)[col_list] if display_flag: display(df2) # Display df return df2 # f_slice_merged_df(df_merged,cutoff=0.3,sort_col='sum_chi_1',label=0.65,params_lst=[0.5,0.65,0.8,1.1],use_sum=True,head=2000,display_flag=False,epoch_range=[7,None]) cols_to_sort=np.unique([i for i in df_merged.columns for j in ['chi_1_','chi_spec1_'] if ((i.startswith(j)) or (i.startswith('sum')))]) w=interactive(f_slice_merged_df,df=fixed(df_merged), cutoff=widgets.FloatSlider(value=0.3, min=0, max=1.0, step=0.01), col_mode=['all','short'], display_flag=widgets.Checkbox(value=False), use_sum=widgets.Checkbox(value=True), label=ToggleButtons(options=['all']+sigma_lst), params_lst=fixed(sigma_lst), head=widgets.IntSlider(value=10,min=1,max=20,step=1), epoch_range=widgets.IntRangeSlider(value=[0,np.max(df.epoch.values)],min=0,max=np.max(df.epoch.values),step=1), sort_col=cols_to_sort ) display(w) df_sliced=w.result [int(i.split('-')[1]) for i in df_sliced.label.values] # df_sliced best_step=[] df_test=df_merged.copy() df_test=df_merged[(df_merged.epoch<5000)&(df_merged.epoch>0)] cut_off=1.0 best_step.append(f_slice_merged_df(df_test,cutoff=cut_off,sort_col='sum_chi_1',label='all',use_sum=True,head=4,display_flag=False,epoch_range=[7,None],params_lst=sigma_lst).step.values) best_step.append(f_slice_merged_df(df_test,cutoff=cut_off,sort_col='sum_chi_spec1',label='all',use_sum=True,head=8,display_flag=False,epoch_range=[7,None],params_lst=sigma_lst).step.values) best_step.append(f_slice_merged_df(df_test,cutoff=cut_off,sort_col='sum_chi_spec2',label='all',use_sum=True,head=2,display_flag=False,epoch_range=[7,None],params_lst=sigma_lst).step.values) best_step.append(f_slice_merged_df(df_test,cutoff=cut_off,sort_col='sum_chi_1b',label='all',use_sum=True,head=2,display_flag=False,epoch_range=[7,None],params_lst=sigma_lst).step.values) # best_step.append([46669,34281]) best_step=np.unique([i for j in best_step for i in j]) print(best_step) best_step # best_step=[6176] # best_step= [23985,24570,25155,25740,26325,26910,27495] # 
best_step=[int(i.split('-')[1]) for i in df_sliced.label.values] # best_step=np.arange(40130,40135).astype(int) df_best=df_merged[df_merged.step.isin(best_step)] print(df_best.shape) print([(df_best[df_best.step==step].epoch.values[0],df_best[df_best.step==step].step.values[0]) for step in best_step]) # print([(df_best.loc[idx].epoch,df_best.loc[idx].step) for idx in best_idx]) col_list=['label']+[i for i in df_merged.columns if i.startswith('sum')] df_best[col_list]
_____no_output_____
BSD-3-Clause-LBNL
code/5_3d_cgan/2_cgan_analysis/1_cgan3d_analyze-results.ipynb
vmos1/cosmogan_pytorch
Interactive plot
def f_plot_hist_spec(df,param_labels,sigma_lst,steps_list,bkg_dict,plot_type,img_size): assert plot_type in ['hist','spec','grid','spec_relative'],"Invalid mode %s"%(plot_type) if plot_type in ['hist','spec','spec_relative']: fig=plt.figure(figsize=(6,6)) for par_label in param_labels: df=df[df.step.isin(steps_list)] # print(df.shape) idx=sigma_lst.index(par_label) suffix='_%s'%(idx) dict_bkg=bkg_dict[str(par_label)] for (i,row),marker in zip(df.iterrows(),itertools.cycle('>^*sDHPdpx_')): label=row.label+'_'+str(par_label) if plot_type=='hist': x1=row['hist_bin_centers'+suffix] y1=row['hist_val'+suffix] yerr1=row['hist_err'+suffix] x1=f_invtransform(x1) plt.errorbar(x1,y1,yerr1,marker=marker,markersize=5,linestyle='',label=label) if plot_type=='spec': y2=row['spec_val'+suffix] yerr2=row['spec_sdev'+suffix]/np.sqrt(row['num_imgs'+suffix]) x2=np.arange(len(y2)) y2=x2**2*y2; yerr2=x2**2*yerr2 ## Plot k^2 P(y) plt.fill_between(x2, y2 - yerr2, y2 + yerr2, alpha=0.4) plt.plot(x2, y2, marker=marker, linestyle=':',label=label) if plot_type=='spec_relative': y2=row['spec_val'+suffix] yerr2=row['spec_sdev'+suffix] x2=np.arange(len(y2)) ### Reference spectrum y1,yerr1=dict_bkg['spec_val'],dict_bkg['spec_sdev'] y=y2/y1 ## Variance is sum of variance of both variables, since they are uncorrelated # delta_r= |r| * sqrt(delta_a/a)^2 +(\delta_b/b)^2) / \sqrt(N) yerr=(np.abs(y))*np.sqrt((yerr1/y1)**2+(yerr2/y2)**2)/np.sqrt(row['num_imgs'+suffix]) plt.fill_between(x2, y - yerr, y + yerr, alpha=0.4) plt.plot(x2, y, marker=marker, linestyle=':',label=label) if plot_type=='grid': images=np.load(row['fname'+suffix])[:,0,:,:,0] print(images.shape) f_plot_grid(images[:8],cols=4,fig_size=(8,4)) ### Plot reference data if plot_type=='hist': x,y,yerr=dict_bkg['hist_bin_centers'],dict_bkg['hist_val'],dict_bkg['hist_err'] x=f_invtransform(x) plt.errorbar(x, y,yerr,color='k',linestyle='-',label='bkgnd') plt.title('Pixel Intensity Histogram') plt.xscale('symlog',linthreshx=50) if plot_type=='spec': y,yerr=dict_bkg['spec_val'],dict_bkg['spec_sdev']/np.sqrt(num_bkgnd) x=np.arange(len(y)) y=x**2*y; yerr=x**2*yerr ## Plot k^2 P(y) plt.fill_between(x, y - yerr, y + yerr, color='k',alpha=0.8) plt.title('Spectrum') plt.xlim(0,img_size/2) if plot_type=='spec_relative': plt.axhline(y=1.0,color='k',linestyle='-.') plt.title("Relative spectrum") plt.xlim(0,img_size/2) plt.ylim(0.5,2) if plot_type in ['hist','spec']: plt.yscale('log') plt.legend(bbox_to_anchor=(0.3, 0.75),ncol=2, fancybox=True, shadow=True,prop={'size':6}) # f_plot_hist_spec(df_merged,[sigma_lst[-1]],sigma_lst,[best_step[0]],bkgnd_dict,'hist') interact_manual(f_plot_hist_spec,df=fixed(df_best), param_labels=SelectMultiple(options=sigma_lst),sigma_lst=fixed(sigma_lst), steps_list=SelectMultiple(options=df_best.step.values), img_size=fixed(img_size), bkg_dict=fixed(bkgnd_dict),plot_type=ToggleButtons(options=['hist','spec','grid','spec_relative'])) best_step sigma=1.1 fname=val_data_dict[str(img_size)]+'/norm_1_sig_{0}_train_val.npy'.format(sigma) print(fname) a1=np.load(fname,mmap_mode='r')[-100:] a1.shape images=a1[:,0,:,:,0] f_plot_grid(images[8:16],cols=4,fig_size=(8,4))
/global/cfs/cdirs/m3363/vayyar/cosmogan_data/raw_data/3d_data/dataset4_smoothing_4univ_cgan_varying_sigma_128cube/norm_1_sig_1.1_train_val.npy 2 4
BSD-3-Clause-LBNL
code/5_3d_cgan/2_cgan_analysis/1_cgan3d_analyze-results.ipynb
vmos1/cosmogan_pytorch
Delete unwanted stored models (since deterministic runs aren't working well)
# # fldr='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/128sq/20210119_134802_cgan_predict_0.65_m2/models' # fldr=result_dir # print(fldr) # flist=glob.glob(fldr+'/models/checkpoint_*.tar') # len(flist) # # Delete unwanted stored images # for i in flist: # try: # step=int(i.split('/')[-1].split('_')[-1].split('.')[0]) # if step not in best_step: # # print(step) # os.remove(i) # pass # else: # print(step) # # print(i) # except Exception as e: # # print(e) # # print(i) # pass # best_step # ! du -hs /global/cfs/cdirs/m3363/vayyar/cosmogan_data/raw_data/3d_data/dataset2a_3dcgan_4univs_64cube_simple_splicing/norm_1_sig_0.5_train_val.npy # fname='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/raw_data/3d_data/dataset4_smoothing_4univ_cgan_varying_sigma_128cube/Om0.3_Sg0.5_H70.0.npy' # np.load(fname,mmap_mode='r').shape 2880/90
_____no_output_____
BSD-3-Clause-LBNL
code/5_3d_cgan/2_cgan_analysis/1_cgan3d_analyze-results.ipynb
vmos1/cosmogan_pytorch
Advanced topics in test driven development Introduction- Already seen the basics- Learn some advanced topics The hypothesis package- http://hypothesis.readthedocs.io- `pip install hypothesis`- General idea earlier: - Make test data. - Perform operations - Assert something after operation- Hypothesis automates this! - Describe range of scenarios - Computer explores these and tests- With hypothesis: - Generate random data using specification - Perform operations - assert something about result Example
from hypothesis import given
from hypothesis import strategies as st

from gcd import gcd

@given(st.integers(min_value=0), st.integers(min_value=0))
def test_gcd(a, b):
    result = gcd(a, b)
    # Now what?
    # assert a%result == 0
_____no_output_____
OLDAP-2.5
slides/test_driven_development/tdd_advanced.ipynb
FOSSEE/sees
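The commented-out assertion is the open question of the slide: with randomly generated inputs there is no precomputed expected value, so the test has to assert properties of the result instead. One possible completion (restricting to `min_value=1`, since `gcd(0, 0)` is degenerate and zero inputs would break the modulo checks):

```python
# One way to finish the test: assert properties of the result rather than a
# hard-coded answer.
from hypothesis import given
from hypothesis import strategies as st

from gcd import gcd

@given(st.integers(min_value=1), st.integers(min_value=1))
def test_gcd_properties(a, b):
    result = gcd(a, b)
    assert a % result == 0           # the result divides a
    assert b % result == 0           # the result divides b
    assert 1 <= result <= min(a, b)  # and it is a sensible common divisor
```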
Example: adding a specific case
from hypothesis import example

@given(st.integers(min_value=0), st.integers(min_value=0))
@example(a=44, b=19)
def test_gcd(a, b):
    result = gcd(a, b)
    # Now what?
    # assert a%result == 0
_____no_output_____
OLDAP-2.5
slides/test_driven_development/tdd_advanced.ipynb
FOSSEE/sees
More details- `given` generates inputs- `strategies`: provides a strategy for inputs- Different strategies - `integers` - `floats` - `text` - `booleans` - `tuples` - `lists` - ...- See: http://hypothesis.readthedocs.io/en/latest/data.html Example exercise- Write a simple run-length encoding function called `encode`- Write another called `decode` to produce the same input from the output of `encode`
def encode(text):
    return []

def decode(lst):
    return ''
_____no_output_____
OLDAP-2.5
slides/test_driven_development/tdd_advanced.ipynb
FOSSEE/sees
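For reference, one possible implementation of the exercise (a sample solution, certainly not the only one) that satisfies the round-trip property tested below:

```python
# Run-length encoding: store (character, count) pairs so that decoding
# reproduces the original string exactly.
def encode(text):
    result = []
    for char in text:
        if result and result[-1][0] == char:
            result[-1] = (char, result[-1][1] + 1)
        else:
            result.append((char, 1))
    return result

def decode(lst):
    return ''.join(char * count for char, count in lst)
```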
The test
from hypothesis import given
from hypothesis import strategies as st

@given(st.text())
def test_decode_inverts_encode(s):
    assert decode(encode(s)) == s
_____no_output_____
OLDAP-2.5
slides/test_driven_development/tdd_advanced.ipynb
FOSSEE/sees
Summary- Much easier to test- hypothesis does the hard work- Can do a lot more!- Read the docs for more- For some detailed articles: http://hypothesis.works/articles/intro/- Here in particular is one interesting article: http://hypothesis.works/articles/calculating-the-mean/---- Unittest module- Basic idea and style is from JUnit- Some consider this old style How to use unittest- Subclass `unittest.TestCase`- Create test methods A simple example- Let us test gcd.py with unittest
# gcd.py
def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a%b)
_____no_output_____
OLDAP-2.5
slides/test_driven_development/tdd_advanced.ipynb
FOSSEE/sees
Writing the test
# test_gcd.py
from gcd import gcd
import unittest

class TestGCD(unittest.TestCase):
    def test_gcd_works_for_positive_integers(self):
        self.assertEqual(gcd(48, 64), 16)
        self.assertEqual(gcd(44, 19), 1)

if __name__ == '__main__':
    unittest.main()
_____no_output_____
OLDAP-2.5
slides/test_driven_development/tdd_advanced.ipynb
FOSSEE/sees
Running it- Just run `python test_gcd.py`- Also works with `nosetests` and `pytest` Notes- Note the name of the method.- Note the use of `self.assertEqual`- Also available: `assertNotEqual, assertTrue, assertFalse, assertIs, assertIsNot`- `assertIsNone, assertIn, assertIsInstance, assertRaises`- `assertAlmostEqual, assertListEqual, assertSequenceEqual ` ...- https://docs.python.org/2/library/unittest.html Fixtures- What if you want to do something common before all tests?- Typically called a **fixture**- Use the `setUp` and `tearDown` methods for method-level fixtures Silly fixture example
# test_gcd.py
from gcd import gcd   # note: a bare `import gcd` would require calling gcd.gcd(...) below
import unittest

class TestGCD(unittest.TestCase):
    def setUp(self):
        print("setUp")

    def tearDown(self):
        print("tearDown")

    def test_gcd_works_for_positive_integers(self):
        self.assertEqual(gcd(48, 64), 16)
        self.assertEqual(gcd(44, 19), 1)

if __name__ == '__main__':
    unittest.main()
_____no_output_____
OLDAP-2.5
slides/test_driven_development/tdd_advanced.ipynb
FOSSEE/sees
Exercise- Fix bug with negative numbers in gcd.py.- Use TDD. Using hypothesis with unittest
# test_gcd.py
from hypothesis import given
from hypothesis import strategies as st
from gcd import gcd   # note: a bare `import gcd` would require calling gcd.gcd(...) below
import unittest

class TestGCD(unittest.TestCase):
    # min_value=1: gcd(0, 0) is degenerate and zero inputs would break the assertions below
    @given(a=st.integers(min_value=1), b=st.integers(min_value=1))
    def test_gcd_works_for_positive_integers(self, a, b):
        result = gcd(a, b)
        assert a%result == 0
        assert b%result == 0
        assert result <= a and result <= b

if __name__ == '__main__':
    unittest.main()
_____no_output_____
OLDAP-2.5
slides/test_driven_development/tdd_advanced.ipynb
FOSSEE/sees
Some notes on style- Use descriptive function names- Intent matters- Segregate the test code into the following
- Given: what is the context of the test?
- When: what action is taken to actually test the problem
- Then: what do we actually ensure.
_____no_output_____
OLDAP-2.5
slides/test_driven_development/tdd_advanced.ipynb
FOSSEE/sees
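A small sketch of what that layout looks like in practice (reusing the `gcd` function from the earlier example):

```python
# The same gcd check laid out in given / when / then blocks.
from gcd import gcd

def test_gcd_of_coprime_numbers_is_one():
    # Given two coprime integers
    a, b = 44, 19
    # When we compute their gcd
    result = gcd(a, b)
    # Then the result is 1
    assert result == 1
```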
More on intent driven programming- "Programs must be written for people to read, and only incidentally for machines to execute.” Harold Abelson- The code should make the intent clear.For example:
if self.temperature > 600 and self.pressure > 10e5:
    message = 'hello you have a problem here!'
    message += 'current temp is %s'%(self.temperature)
    print(message)
    self.valve.close()
    self.raise_warning()
_____no_output_____
OLDAP-2.5
slides/test_driven_development/tdd_advanced.ipynb
FOSSEE/sees
is totally unclear as to the intent. Instead refactor as follows:
if self.reactor_is_critical(): self.shutdown_with_warning()
_____no_output_____
OLDAP-2.5
slides/test_driven_development/tdd_advanced.ipynb
FOSSEE/sees
A more involved testing example- Motivational problem:> Find all the git repositories inside a given directory recursively.> Make this a command line tool supporting command line use.- Write tests for the code- Some rules: 0. The test should be runnable by anyone (even by a computer), almost anywhere. 1. Don't write anything in the current directory (use a temporary directory). 2. Cleanup any files you create while testing. 3. Make sure tests do not affect global state too much. Solution 1. Create some test data. 2. Test! 3. Cleanup the test data Class-level fixtures- Use `setUpClass` and `tearDownClass` classmethods for class-level fixtures (see the fixture sketch after the coverage commands below). Module-level fixtures- `setup_module`, `teardown_module`- Can be used for a module-level fixture- http://nose.readthedocs.io/en/latest/writing_tests.html Coverage- Assess the amount of code that is covered by testing- http://coverage.readthedocs.io/- `pip install coverage`- Integrates with nosetests/pytest Typical coverage usage
$ coverage run -m nose.core my_package
$ coverage report -m
$ coverage html
_____no_output_____
OLDAP-2.5
slides/test_driven_development/tdd_advanced.ipynb
FOSSEE/sees
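Returning to the class-level fixtures and the git-repository exercise above, here is a minimal sketch of how `setUpClass`/`tearDownClass` can build and clean up the temporary test data. `find_git_repos` is a hypothetical name for the function under test, and the assumption that it returns a list of repository paths is mine:

```python
# Sketch of a class-level fixture for the "find all git repositories" example.
import os
import shutil
import tempfile
import unittest


class TestFindGitRepos(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # Given: a temporary tree containing one fake git repository.
        cls.root = tempfile.mkdtemp()
        os.makedirs(os.path.join(cls.root, 'project', '.git'))

    @classmethod
    def tearDownClass(cls):
        # Cleanup: remove everything we created.
        shutil.rmtree(cls.root)

    def test_finds_nested_repository(self):
        # find_git_repos is assumed to return the directories that contain a .git folder.
        repos = find_git_repos(self.root)
        self.assertEqual(repos, [os.path.join(self.root, 'project')])
```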
mock- Mocking for advanced testing.- Example: reading some twitter data- Example: function to post an update to facebook or twitter- Example: email user when simulation crashes- Can you test it? How? Using mock: the big picture- Do you really want to post something on facebook?- Or do you want to know if the right method was called with the right arguments?- Idea: "mock" the objects that do something and test them- Quoting from the Python docs:> It allows you to replace parts of your system under test with mock objects> and make assertions about how they have been used. Installation- Built-in on Python >= 3.3
- `from unittest import mock`
_____no_output_____
OLDAP-2.5
slides/test_driven_development/tdd_advanced.ipynb
FOSSEE/sees
- else `pip install mock`
- `import mock`
_____no_output_____
OLDAP-2.5
slides/test_driven_development/tdd_advanced.ipynb
FOSSEE/sees
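A minimal sketch of the "email user when simulation crashes" example with a mock standing in for the mailer. The function name `notify_user_of_crash` and the mailer's `send` signature are made up for illustration; the point is only that the test asserts on how the mock was used rather than sending real email:

```python
# Mock sketch: no email is actually sent, we only check the call.
from unittest import mock


def notify_user_of_crash(mailer, user, error):
    mailer.send(to=user, subject='Simulation crashed', body=str(error))


def test_user_is_emailed_on_crash():
    mailer = mock.Mock()
    notify_user_of_crash(mailer, 'alice@example.com', RuntimeError('boom'))
    mailer.send.assert_called_once_with(
        to='alice@example.com', subject='Simulation crashed', body='boom')
```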