| column | type | length (min to max) |
|---|---|---|
| markdown | string | 0 to 1.02M |
| code | string | 0 to 832k |
| output | string | 0 to 1.02M |
| license | string | 3 to 36 |
| path | string | 6 to 265 |
| repo_name | string | 6 to 127 |
Step 9. Which was the most-ordered item?
c = chipo.groupby('item_name')
c = c.sum()
c = c.sort_values(['quantity'], ascending=False)
c.head(1)
_____no_output_____
BSD-3-Clause
01_Getting_&_Knowing_Your_Data/Chipotle/Exercise_with_Solutions.ipynb
ismael-araujo/pandas-exercise
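A more compact variant (a sketch, not part of the original solution, assuming the same `chipo` DataFrame) sums the quantities per item and asks directly for the index of the maximum:

```python
# Group by item, total the quantities, and take the item with the largest total.
most_ordered = chipo.groupby('item_name')['quantity'].sum().idxmax()
most_ordered
```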
Step 10. For the most-ordered item, how many items were ordered?
c = chipo.groupby('item_name')
c = c.sum()
c = c.sort_values(['quantity'], ascending=False)
c.head(1)
_____no_output_____
BSD-3-Clause
01_Getting_&_Knowing_Your_Data/Chipotle/Exercise_with_Solutions.ipynb
ismael-araujo/pandas-exercise
Step 11. What was the most ordered item in the choice_description column?
c = chipo.groupby('choice_description').sum()
c = c.sort_values(['quantity'], ascending=False)
c.head(1)
# Diet Coke 159
_____no_output_____
BSD-3-Clause
01_Getting_&_Knowing_Your_Data/Chipotle/Exercise_with_Solutions.ipynb
ismael-araujo/pandas-exercise
Step 12. How many items were ordered in total?
total_items_orders = chipo.quantity.sum()
total_items_orders
_____no_output_____
BSD-3-Clause
01_Getting_&_Knowing_Your_Data/Chipotle/Exercise_with_Solutions.ipynb
ismael-araujo/pandas-exercise
Step 13. Turn the item price into a float Step 13.a. Check the item price type
chipo.item_price.dtype
_____no_output_____
BSD-3-Clause
01_Getting_&_Knowing_Your_Data/Chipotle/Exercise_with_Solutions.ipynb
ismael-araujo/pandas-exercise
Step 13.b. Create a lambda function and change the type of item price
dollarizer = lambda x: float(x[1:-1])
chipo.item_price = chipo.item_price.apply(dollarizer)
_____no_output_____
BSD-3-Clause
01_Getting_&_Knowing_Your_Data/Chipotle/Exercise_with_Solutions.ipynb
ismael-araujo/pandas-exercise
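An equivalent vectorized approach (a sketch, offered as an alternative to the lambda above and meant to be run on the raw string column, not after the conversion) uses pandas string methods:

```python
# Strip the leading '$' and cast to float; astype(float) tolerates surrounding whitespace.
chipo.item_price = chipo.item_price.str.replace('$', '', regex=False).astype(float)
```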
Step 13.c. Check the item price type
chipo.item_price.dtype
_____no_output_____
BSD-3-Clause
01_Getting_&_Knowing_Your_Data/Chipotle/Exercise_with_Solutions.ipynb
ismael-araujo/pandas-exercise
Step 14. How much was the revenue for the period in the dataset?
revenue = (chipo['quantity'] * chipo['item_price']).sum()
print('Revenue was: $' + str(np.round(revenue, 2)))
Revenue was: $39237.02
BSD-3-Clause
01_Getting_&_Knowing_Your_Data/Chipotle/Exercise_with_Solutions.ipynb
ismael-araujo/pandas-exercise
Step 15. How many orders were made in the period?
orders = chipo.order_id.value_counts().count()
orders
_____no_output_____
BSD-3-Clause
01_Getting_&_Knowing_Your_Data/Chipotle/Exercise_with_Solutions.ipynb
ismael-araujo/pandas-exercise
Step 16. What is the average revenue amount per order?
# Solution 1
chipo['revenue'] = chipo['quantity'] * chipo['item_price']
order_grouped = chipo.groupby(by=['order_id']).sum()
order_grouped.mean()['revenue']

# Solution 2
chipo.groupby(by=['order_id']).sum().mean()['revenue']
_____no_output_____
BSD-3-Clause
01_Getting_&_Knowing_Your_Data/Chipotle/Exercise_with_Solutions.ipynb
ismael-araujo/pandas-exercise
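The same result can be obtained in a single chained expression (just a sketch under the same assumptions as the solutions above):

```python
# Revenue per line item, summed per order, then averaged across orders.
(chipo['quantity'] * chipo['item_price']).groupby(chipo['order_id']).sum().mean()
```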
Step 17. How many different items are sold?
chipo.item_name.value_counts().count()
_____no_output_____
BSD-3-Clause
01_Getting_&_Knowing_Your_Data/Chipotle/Exercise_with_Solutions.ipynb
ismael-araujo/pandas-exercise
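`nunique()` expresses the same count more directly (an equivalent sketch):

```python
# Number of distinct item names sold.
chipo.item_name.nunique()
```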
Solving differential equations

Given the following differential equation:
$$\dot{x} = -x$$
we want to obtain the response of the system it represents, that is, the values that $x$ takes over time. If we analyse this differential equation, we can see that its solution is a function $\varphi(t)$ such that, when we differentiate it, we obtain the negative of that same function:
$$\frac{d}{dt} \varphi(t) = -\varphi(t)$$
and after a little thought we realise that the function we want is
$$\varphi(t) = e^{-t}$$
However, we will often not have such simple functions (certainly not in robotics, where we usually deal with nonlinear differential equations of order $n$), so in this practice session we will look at some strategies, both numerical and symbolic, for obtaining solutions to this differential equation.

Euler's method

[Euler's method](http://es.wikipedia.org/wiki/Método_de_Euler) for obtaining the behaviour of a differential equation is based on the basic intuition behind the derivative. Say we have a general differential equation:
$$\frac{dy}{dx} = y' = F(x, y)$$
where $F(x, y)$ can be any function of $x$ and/or $y$. We can then split the behaviour of the curve into pieces, so that we only compute one small piece at a time, approximating the behaviour of the differential equation with that of a straight line whose slope is the derivative:
![Euler's method](./imagenes/euler.jpg)
"Método de Euler" by Vero.delgado, own work, licensed under CC BY-SA 3.0 via Wikimedia Commons.
This line that approximates the differential equation has the familiar structure
$$y = b + mx$$
so if we substitute the derivative for $m$ and the previous value of the differential equation for $b$, we obtain:
$$\overbrace{y_{i+1}}^{\text{new value of }y} = \overbrace{y_i}^{\text{old value of }y} + \overbrace{\frac{dy}{dx}}^{\text{slope}} \overbrace{\Delta x}^{\text{distance in }x}$$
but we know the value of $\frac{dy}{dx}$: it is our differential equation, so we can write this as
$$y_{i+1} = y_i + F(x_i, y_i) \Delta x$$
Let us work through a few iterations of our system. We start with 10 iterations over 10 seconds, with initial condition $x(0) = 1$, which means:
$$\begin{align}\Delta t &= 1 \\ x(0) &= 1 \\ \dot{x}(0) &= -x(0) = -1\end{align}$$
x0 = 1
Δt = 1  # To type Greek symbols such as Δ, just write their name preceded
        # by a backslash (\Delta) and press Tab once

F = lambda x: -x

x1 = x0 + F(x0)*Δt
x1

x2 = x1 + F(x1)*Δt
x2
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
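The hand-written iterations above can also be wrapped in a small helper that applies the update $x_{i+1} = x_i + F(x_i)\,\Delta t$ repeatedly (a sketch, not part of the original practice code):

```python
def euler(F, x0, Δt, n):
    """Return the list [x0, x1, ..., xn] produced by Euler's method."""
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] + F(xs[-1]) * Δt)
    return xs

# With Δt = 1 the first two steps reproduce x1 and x2 computed above.
euler(lambda x: -x, 1, 1, 2)
```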
Exercise. Write the code for one more iteration with these same parameters and display the result.
x3 =   # Write the code for your calculation here

from pruebas_2 import prueba_2_1
prueba_2_1(x0, x1, x2, x3, _)
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
Wait... what is going on? It turns out that this $\Delta t$ is too large; let us try 20 iterations:
$$\begin{align}\Delta t &= 0.5 \\ x(0) &= 1\end{align}$$
x0 = 1
n = 20
Δt = 10/n

F = lambda x: -x

x1 = x0 + F(x0)*Δt
x1

x2 = x1 + F(x1)*Δt
x2

x3 = x2 + F(x2)*Δt
x3
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
This is going to be tedious; better to tell Python what needs to be done and let it get on with it until it finishes. We can use a ```for``` loop and a list to store all the values of the trajectory:
xs = [x0]

for t in range(20):
    xs.append(xs[-1] + F(xs[-1])*Δt)

xs
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
Now that we have these values, we can plot the behaviour of this system. First we import the ```matplotlib``` library:
%matplotlib inline
from matplotlib.pyplot import plot
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
We call the ```plot``` function:
plot(xs);
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
However, because the integration step we used is too large, the solution is quite inaccurate. We can see this by plotting it against what we know to be the solution of our problem:
from numpy import linspace, exp

ts = linspace(0, 10, 20)

plot(xs)
plot(exp(-ts));
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
If we now use a very large number of pieces, we can improve our approximation:
xs = [x0]
n = 100
Δt = 10/n

for t in range(100):
    xs.append(xs[-1] + F(xs[-1])*Δt)

ts = linspace(0, 10, 100)

plot(xs)
plot(exp(-ts));
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
odeint

This method works so well that it already comes programmed in the ```scipy``` library, so we only have to import that library to use it. We must, however, be careful when declaring the function $F(x, t)$: its first argument must be the state of the system, that is $x$, and the second must be the independent variable, in our case time.
from scipy.integrate import odeint

F = lambda x, t: -x

x0 = 1
ts = linspace(0, 10, 100)

xs = odeint(func=F, y0=x0, t=ts)

plot(ts, xs);
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
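In newer SciPy versions, `scipy.integrate.solve_ivp` offers an alternative interface for the same problem; a minimal sketch (note the swapped argument order `(t, x)` compared to `odeint`):

```python
from scipy.integrate import solve_ivp
from numpy import linspace
from matplotlib.pyplot import plot

ts = linspace(0, 10, 100)
sol = solve_ivp(lambda t, x: -x, (0, 10), [1.0], t_eval=ts)
plot(sol.t, sol.y[0]);
```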
Exercise. Plot the behaviour of the following differential equation.
$$\dot{x} = x^2 - 5 x + \frac{1}{2} \sin{x} - 2$$
> Note: Make sure you import all the libraries you might need.
ts =   # Write the code that generates an array of equally spaced points (linspace)
x0 =   # Write the value of the initial condition

# Import the functions from any libraries you need here

G = lambda x, t:   # Write the code describing the calculations the function must perform

xs =   # Write the command needed to simulate the differential equation

plot(ts, xs);

from pruebas_2 import prueba_2_2
prueba_2_2(ts, xs)
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
Sympy

Finally, there are times when we can even obtain an analytic solution of a differential equation, as long as it satisfies certain simplicity conditions.
from sympy import var, Function, dsolve
from sympy.physics.mechanics import mlatex, mechanics_printing
mechanics_printing()

var("t")
x = Function("x")(t)

x, x.diff(t)

solucion = dsolve(x.diff(t) + x, x)
solucion
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
Exercise. Write the code needed to obtain the analytic solution of the following differential equation:
$$\dot{x} = x^2 - 5x$$
# Declare the independent variable of the differential equation
var("")

# Declare the dependent variable of the differential equation
 = Function("")()

# Write the differential equation in the required format (Equation = 0)
# inside the dsolve function
sol = dsolve()
sol

from pruebas_2 import prueba_2_3
prueba_2_3(sol)
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
Solving higher-order differential equations

If we now want to obtain the behaviour of a higher-order differential equation such as
$$\ddot{x} = -\dot{x} - x + 1$$
we have to convert it into a first-order differential equation in order to solve it numerically, which means turning it into a matrix differential equation. We start by writing it, together with the identity $\dot{x} = \dot{x}$, as a system of equations:
$$\begin{align}\dot{x} &= \dot{x} \\ \ddot{x} &= -\dot{x} - x + 1\end{align}$$
Factoring the derivative operator out of the left-hand side gives:
$$\begin{align}\frac{d}{dt} x &= \dot{x} \\ \frac{d}{dt} \dot{x} &= -\dot{x} - x + 1\end{align}$$
or, in matrix form:
$$\frac{d}{dt}\begin{pmatrix}x \\ \dot{x}\end{pmatrix} = \begin{pmatrix}0 & 1 \\ -1 & -1\end{pmatrix}\begin{pmatrix}x \\ \dot{x}\end{pmatrix} + \begin{pmatrix}0 \\ 1\end{pmatrix}$$
This equation is _no longer_ of second order; it is in fact of first order, although our variable has grown into a state vector, which for the moment we will call $X$. We can therefore write it as
$$\frac{d}{dt} X = A X + B$$
where
$$A = \begin{pmatrix}0 & 1 \\ -1 & -1\end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix}0 \\ 1\end{pmatrix}$$
and, in the same way as before, declare a function to hand to ```odeint```.
from numpy import matrix, array

def F(X, t):
    A = matrix([[0, 1], [-1, -1]])
    B = matrix([[0], [1]])
    return array((A*matrix(X).T + B).T).tolist()[0]

ts = linspace(0, 10, 100)

xs = odeint(func=F, y0=[0, 0], t=ts)

plot(xs);
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
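The same function can be written with plain NumPy arrays, which avoids the `matrix`/`tolist` conversions; this is just an equivalent sketch of the example above, not the required form for the exercise that follows:

```python
from numpy import array, linspace
from scipy.integrate import odeint
from matplotlib.pyplot import plot

def F(X, t):
    A = array([[0, 1], [-1, -1]])
    B = array([0, 1])
    return A @ X + B  # odeint accepts any 1-D array as the derivative

ts = linspace(0, 10, 100)
xs = odeint(func=F, y0=[0, 0], t=ts)
plot(ts, xs);
```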
Exercise. Implement the solution of the following differential equation using a state-space representation:
$$\ddot{x} = -8\dot{x} - 15x + 1$$
> Note: Take it slowly and step by step
> * Start by writing the differential equation in your notebook, together with the same identity as in the example
> * Factor the derivative out of the left-hand side so that you obtain the _state_ of your system
> * Extract the matrices A and B that correspond to this system
> * Write the code needed to represent these matrices
def G(X, t):
    A =   # Write the code for the matrix A here
    B =   # Write the code for the vector B here
    return array((A*matrix(X).T + B).T).tolist()[0]

ts = linspace(0, 10, 100)

xs = odeint(func=G, y0=[0, 0], t=ts)

plot(xs);

from pruebas_2 import prueba_2_4
prueba_2_4(xs)
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
Transfer functions

This is not, however, the easiest way to obtain the solution. We can also apply a Laplace transform and use the functions of the control library to simulate the transfer function of this equation. Applying the Laplace transform, we obtain:
$$G(s) = \frac{1}{s^2 + s + 1}$$
from control import tf, step

F = tf([0, 0, 1], [1, 1, 1])

xs, ts = step(F)

plot(ts, xs);
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
Exercise. Model the differential equation from the previous exercise mathematically, using a transfer-function representation.
> Note: Again, do not despair; write down your differential equation and apply the Laplace transform just as your grandparents taught you all those years ago...
G = tf([], [])  # Write the coefficients of the transfer function

xs, ts = step(G)

plot(ts, xs);

from pruebas_2 import prueba_2_5
prueba_2_5(ts, xs)
_____no_output_____
MIT
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
CPSC 483 Project 3 - Regularization, Cross-Validation, and Grid Search
by: Josef Jankowski ([email protected]) and William Timani ([email protected])

1. Load and examine the Boston dataset's features, target values, and description.
from sklearn import datasets

dataset_boston = datasets.load_boston()
print(dataset_boston.DESCR)
.. _boston_dataset: Boston house prices dataset --------------------------- **Data Set Characteristics:** :Number of Instances: 506 :Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target. :Attribute Information (in order): - CRIM per capita crime rate by town - ZN proportion of residential land zoned for lots over 25,000 sq.ft. - INDUS proportion of non-retail business acres per town - CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) - NOX nitric oxides concentration (parts per 10 million) - RM average number of rooms per dwelling - AGE proportion of owner-occupied units built prior to 1940 - DIS weighted distances to five Boston employment centres - RAD index of accessibility to radial highways - TAX full-value property-tax rate per $10,000 - PTRATIO pupil-teacher ratio by town - B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town - LSTAT % lower status of the population - MEDV Median value of owner-occupied homes in $1000's :Missing Attribute Values: None :Creator: Harrison, D. and Rubinfeld, D.L. This is a copy of UCI ML housing dataset. https://archive.ics.uci.edu/ml/machine-learning-databases/housing/ This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University. The Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic prices and the demand for clean air', J. Environ. Economics & Management, vol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics ...', Wiley, 1980. N.B. Various transformations are used in the table on pages 244-261 of the latter. The Boston house-price data has been used in many machine learning papers that address regression problems. .. topic:: References - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261. - Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.
Apache-2.0
regularization.ipynb
josefj1519/RegularizationBoston
2. Save CRIM as the new target value t, and drop the column CRIM from X. Add the target value MEDV to X.
import numpy as np
import pandas as pd

# Independent variables (i.e. features)
df_boston_features = pd.DataFrame(data=dataset_boston.data, columns=dataset_boston.feature_names)
df_boston_features.insert(0, 'MEDV', dataset_boston.target)
df_boston_target = pd.DataFrame(data=df_boston_features['CRIM'], columns=['CRIM'])
df_boston_features = df_boston_features.drop(['CRIM'], axis=1)
print(df_boston_features)
MEDV ZN INDUS CHAS NOX RM AGE DIS RAD TAX PTRATIO \ 0 24.0 18.0 2.31 0.0 0.538 6.575 65.2 4.0900 1.0 296.0 15.3 1 21.6 0.0 7.07 0.0 0.469 6.421 78.9 4.9671 2.0 242.0 17.8 2 34.7 0.0 7.07 0.0 0.469 7.185 61.1 4.9671 2.0 242.0 17.8 3 33.4 0.0 2.18 0.0 0.458 6.998 45.8 6.0622 3.0 222.0 18.7 4 36.2 0.0 2.18 0.0 0.458 7.147 54.2 6.0622 3.0 222.0 18.7 .. ... ... ... ... ... ... ... ... ... ... ... 501 22.4 0.0 11.93 0.0 0.573 6.593 69.1 2.4786 1.0 273.0 21.0 502 20.6 0.0 11.93 0.0 0.573 6.120 76.7 2.2875 1.0 273.0 21.0 503 23.9 0.0 11.93 0.0 0.573 6.976 91.0 2.1675 1.0 273.0 21.0 504 22.0 0.0 11.93 0.0 0.573 6.794 89.3 2.3889 1.0 273.0 21.0 505 11.9 0.0 11.93 0.0 0.573 6.030 80.8 2.5050 1.0 273.0 21.0 B LSTAT 0 396.90 4.98 1 396.90 9.14 2 392.83 4.03 3 394.63 2.94 4 396.90 5.33 .. ... ... 501 391.99 9.67 502 396.90 9.08 503 396.90 5.64 504 393.45 6.48 505 396.90 7.88 [506 rows x 13 columns]
Apache-2.0
regularization.ipynb
josefj1519/RegularizationBoston
3. Use sklearn.model_selection.train_test_split() to split the features and target values into separate training and test sets. Use 80% of the original data as a training set, and 20% for testing.
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(df_boston_features, df_boston_target, test_size=.2)
print(x_train)
MEDV ZN INDUS CHAS NOX RM AGE DIS RAD TAX \ 55 35.4 90.0 1.22 0.0 0.403 7.249 21.9 8.6966 5.0 226.0 273 35.2 20.0 6.96 1.0 0.464 7.691 51.8 4.3665 3.0 223.0 350 22.9 40.0 1.25 0.0 0.429 6.490 44.4 8.7921 1.0 335.0 126 15.7 0.0 25.65 0.0 0.581 5.613 95.6 1.7572 2.0 188.0 11 18.9 12.5 7.87 0.0 0.524 6.009 82.9 6.2267 5.0 311.0 .. ... ... ... ... ... ... ... ... ... ... 407 27.9 0.0 18.10 0.0 0.659 5.608 100.0 1.2852 24.0 666.0 339 19.0 0.0 5.19 0.0 0.515 5.985 45.4 4.8122 5.0 224.0 430 14.5 0.0 18.10 0.0 0.584 6.348 86.1 2.0527 24.0 666.0 182 37.9 0.0 2.46 0.0 0.488 7.155 92.2 2.7006 3.0 193.0 227 31.6 0.0 6.20 0.0 0.504 7.163 79.9 3.2157 8.0 307.0 PTRATIO B LSTAT 55 17.9 395.93 4.81 273 18.6 390.77 6.58 350 19.7 396.90 5.98 126 19.1 359.29 27.26 11 15.2 396.90 13.27 .. ... ... ... 407 20.2 332.09 12.13 339 20.2 396.90 9.74 430 20.2 83.45 17.64 182 17.8 394.12 4.82 227 17.4 372.08 6.36 [404 rows x 13 columns]
Apache-2.0
regularization.ipynb
josefj1519/RegularizationBoston
4. Create and fit() an sklearn.linear_model.LinearRegression to the training set
from sklearn.linear_model import LinearRegression
import numpy as np

x = np.array(x_train)
y = np.array(y_train)
lm = LinearRegression().fit(x, y)

print(f'w0 = {lm.intercept_}')
print(f'w1 = {lm.coef_[0]}')
w0 = [17.76174149] w1 = [-1.89810056e-01 4.85173466e-02 -7.25156399e-02 -6.73905034e-01 -9.29427840e+00 2.09214818e-01 1.48984316e-03 -1.04360641e+00 5.78852226e-01 -3.95781002e-03 -2.28791697e-01 -8.15484094e-03 1.01545583e-01]
Apache-2.0
regularization.ipynb
josefj1519/RegularizationBoston
5. Use the predict() method of the model to find the response for each value in the test set, and sklearn.metrics.mean_squared_error(), to find the training and test MSE.
from sklearn.metrics import mean_squared_error

predicted_train = lm.predict(x_train)
predicted_test = lm.predict(x_test)

mse = mean_squared_error(y_train, predicted_train, squared=True)
print('Train: ', mse)
mse = mean_squared_error(y_test, predicted_test, squared=True)
print('Test: ', mse)
Train: 39.367922681692114 Test: 44.258359558976515
Apache-2.0
regularization.ipynb
josefj1519/RegularizationBoston
6. By itself, the MSE doesn’t tell us much. Use the score() method of the model to find the R2 values for the training and test data. R2, the coefficient of determination, measures the proportion of variability in the target t that can be explained using the features in X. A value near 1 indicates that most of the variability in the response has been explained by the regression, while a value near 0 indicates that the regression does not explain much of the variability. See Section 3.1.3 of An Introduction to Statistical Learning for details. Given the R2 scores, how well did our model do?
r_train = lm.score(x_train, y_train)
print('Train r score: ', r_train)
r_test = lm.score(x_test, y_test)
print('Test r score: ', r_test)
Train r score: 0.4620036329045377 Test r score: 0.4203369336652354
Apache-2.0
regularization.ipynb
josefj1519/RegularizationBoston
The model is only moderately accurate: the R2 scores are roughly 0.46 on the training set and 0.42 on the test set, so the regression explains less than half of the variability in the target.

7. Let's see if we can fit the data better with a more flexible model. Scikit-learn can construct polynomial features for us using sklearn.preprocessing.PolynomialFeatures (though note that this includes interaction features as well; you saw in Project 2 that purely polynomial features can easily be constructed using numpy.hstack()). Add degree-2 polynomial features, then fit a new linear model. Compare the training and test MSE and R2 scores. Do we seem to be overfitting?
t = np.array(y_train['CRIM']).reshape([-1, 1])
x_reshape_train = np.hstack(((np.array(np.ones_like(x_train['MEDV']))).reshape([-1, 1]), np.array(x_train)))
for attr in x_train:
    xsquared = np.square(np.array(x_train[attr])).reshape([-1, 1])
    x_reshape_train = np.hstack((x_reshape_train, xsquared))
lm = LinearRegression().fit(x_reshape_train, t)
predicted_train = lm.predict(x_reshape_train)
mse = mean_squared_error(y_train, predicted_train, squared=True)
print('Training MSE: ', mse)

t = np.array(y_test['CRIM']).reshape([-1, 1])
x_reshape_test = np.hstack(((np.array(np.ones_like(x_test['MEDV']))).reshape([-1, 1]), np.array(x_test)))
for attr in x_test:
    xsquared = np.square(np.array(x_test[attr])).reshape([-1, 1])
    x_reshape_test = np.hstack((x_reshape_test, xsquared))
lm = LinearRegression().fit(x_reshape_test, t)
predicted_train = lm.predict(x_reshape_test)
mse = mean_squared_error(y_test, predicted_train, squared=True)
print('Testing MSE: ', mse)

r_train = lm.score(x_reshape_train, y_train)
print('Train r score: ', r_train)
r_test = lm.score(x_reshape_test, y_test)
print('Test r score: ', r_test)
Training MSE: 32.767573538174275 Testing MSE: 1273.9784845912206 Train r score: -21.674371778352302 Test r score: -15.685622381430647
Apache-2.0
regularization.ipynb
josefj1519/RegularizationBoston
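As the prompt notes, `sklearn.preprocessing.PolynomialFeatures` can build the expanded design matrix (including interaction terms) instead of stacking squared columns by hand. A sketch, assuming the same `x_train`/`x_test` split defined above:

```python
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

poly = PolynomialFeatures(degree=2)
x_poly_train = poly.fit_transform(x_train)   # fit the expansion on training data only
x_poly_test = poly.transform(x_test)         # reuse the same expansion on the test data

lm2 = LinearRegression().fit(x_poly_train, y_train)
print('Train MSE:', mean_squared_error(y_train, lm2.predict(x_poly_train)))
print('Test MSE:', mean_squared_error(y_test, lm2.predict(x_poly_test)))
```

Note that in this sketch the model is fit only on the training features; the test set is merely transformed and scored.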
The training MSE improved slightly, but the test MSE is far worse than before and the R2 scores are now negative, so the degree-2 model does appear to be overfitting.

8. Regularization would allow us to construct a model of intermediate complexity by penalizing large values for the coefficients. Scikit-learn provides this as sklearn.linear_model.Ridge. The parameter alpha corresponds to 𝜆 as shown in the textbook. For now, leave it set to the default value of 1.0, and fit the model to the degree-2 polynomial features. Don't forget to normalize your features. Once again, compare the training and test MSE and R2 scores. Is this model an improvement?
from sklearn.linear_model import Ridge

clf = Ridge(alpha=1.0, normalize=True)
pm = clf.fit(x_reshape_train, y_train)

predicted_train = pm.predict(x_reshape_train)
mse = mean_squared_error(y_train, predicted_train, squared=True)
print('Training MSE: ', mse)
predicted_test = pm.predict(x_reshape_test)
mse = mean_squared_error(y_test, predicted_test, squared=True)
print('Testing MSE: ', mse)

r_train = pm.score(x_reshape_train, y_train)
print('Train r score: ', r_train)
r_test = pm.score(x_reshape_test, y_test)
print('Test r score: ', r_test)
Training MSE: 41.74784384916144 Testing MSE: 47.56699214405758 Train r score: 0.42947997265391635 Test r score: 0.37700292560993187
Apache-2.0
regularization.ipynb
josefj1519/RegularizationBoston
The model does not seem to improve anything. 9. We used the default penalty value of 1.0 in the previous experiment, but there’s no reason to believe that this is optimal. Use sklearn.linear_model.RidgeCV to find an optimal value for alpha. How does this compare to experiment (8)?
from sklearn.linear_model import RidgeCV

clf = RidgeCV(normalize=True)
pm = clf.fit(x_reshape_train, y_train)

predicted_train = pm.predict(x_reshape_train)
mse = mean_squared_error(y_train, predicted_train, squared=True)
print('Training MSE: ', mse)
predicted_test = pm.predict(x_reshape_test)
mse = mean_squared_error(y_test, predicted_test, squared=True)
print('Testing MSE: ', mse)

r_train = pm.score(x_reshape_train, y_train)
print('Train r score: ', r_train)
r_test = pm.score(x_reshape_test, y_test)
print('Test r score: ', r_test)
Training MSE: 38.050178880414954 Testing MSE: 43.181827227141405 Train r score: 0.4800117300952836 Test r score: 0.43443655323311625
Apache-2.0
regularization.ipynb
josefj1519/RegularizationBoston
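By default `RidgeCV` only searches a small grid of alphas (0.1, 1.0, 10.0); an explicit grid can be supplied instead. A sketch using the same variables and the same (older) scikit-learn options as the cells above:

```python
import numpy as np
from sklearn.linear_model import RidgeCV

alphas = np.logspace(-3, 3, 13)            # candidate penalty values from 1e-3 to 1e3
clf = RidgeCV(alphas=alphas, normalize=True)
pm = clf.fit(x_reshape_train, y_train)
print('Best alpha:', pm.alpha_)             # penalty selected by cross-validation
print('Test R^2:', pm.score(x_reshape_test, y_test))
```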
📝 Exercise M4.01

The aim of this exercise is two-fold:
* understand the parametrization of a linear model;
* quantify the fitting accuracy of a set of such models.

We will reuse part of the code of the course to:
* load data;
* create the function representing a linear model.

Prerequisites

Data loading

Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.
import pandas as pd

penguins = pd.read_csv("../datasets/penguins_regression.csv")
feature_name = "Flipper Length (mm)"
target_name = "Body Mass (g)"
data, target = penguins[[feature_name]], penguins[target_name]
_____no_output_____
CC-BY-4.0
notebooks/36 - linear_models_ex_01.ipynb
aquinquenel/scikit-learn-mooc
Model definition
def linear_model_flipper_mass(
    flipper_length, weight_flipper_length, intercept_body_mass
):
    """Linear model of the form y = a * x + b"""
    body_mass = weight_flipper_length * flipper_length + intercept_body_mass
    return body_mass
_____no_output_____
CC-BY-4.0
notebooks/36 - linear_models_ex_01.ipynb
aquinquenel/scikit-learn-mooc
Main exercise

Define a vector `weights = [...]` and a vector `intercepts = [...]` of the same length. Each pair of entries `(weights[i], intercepts[i])` tags a different model. Use these vectors along with the vector `flipper_length_range` to plot several linear models that could possibly fit our data. Use the above helper function to visualize both the models and the real samples.
import numpy as np

flipper_length_range = np.linspace(data.min(), data.max(), num=300)

# solution
import matplotlib.pyplot as plt
import seaborn as sns

weights = [-40, 45, 90]
intercepts = [15000, -5000, -14000]

ax = sns.scatterplot(data=penguins, x=feature_name, y=target_name,
                     color="black", alpha=0.5)

label = "{0:.2f} (g / mm) * flipper length + {1:.2f} (g)"
for weight, intercept in zip(weights, intercepts):
    predicted_body_mass = linear_model_flipper_mass(
        flipper_length_range, weight, intercept)
    ax.plot(flipper_length_range, predicted_body_mass,
            label=label.format(weight, intercept))
_ = ax.legend(loc='center left', bbox_to_anchor=(-0.25, 1.25), ncol=1)
_____no_output_____
CC-BY-4.0
notebooks/36 - linear_models_ex_01.ipynb
aquinquenel/scikit-learn-mooc
In the previous question, you were asked to create several linear models. The visualization allowed you to qualitatively assess if a model was better than another.

Now, you should come up with a quantitative measure which indicates the goodness of fit of each linear model and allows you to select the best model. Define a function `goodness_fit_measure(true_values, predictions)` that takes as inputs the true target values and the predictions and returns a single scalar as output.
# solution
def goodness_fit_measure(true_values, predictions):
    # we compute the error between the true values and the predictions of our
    # model
    errors = np.ravel(true_values) - np.ravel(predictions)
    # We have several possible strategies to reduce all errors to a single value.
    # Computing the mean error (sum divided by the number of elements) might seem
    # like a good solution. However, we have negative errors that will misleadingly
    # reduce the mean error. Therefore, we can either square each
    # error or take the absolute value: these metrics are known as mean
    # squared error (MSE) and mean absolute error (MAE). Let's use the MAE here
    # as an example.
    return np.mean(np.abs(errors))
_____no_output_____
CC-BY-4.0
notebooks/36 - linear_models_ex_01.ipynb
aquinquenel/scikit-learn-mooc
You can now copy and paste the code below to show the goodness of fit for each model.

```python
for model_idx, (weight, intercept) in enumerate(zip(weights, intercepts)):
    target_predicted = linear_model_flipper_mass(data, weight, intercept)
    print(f"Model {model_idx}:")
    print(f"{weight:.2f} (g / mm) * flipper length + {intercept:.2f} (g)")
    print(f"Error: {goodness_fit_measure(target, target_predicted):.3f}\n")
```
# solution
for model_idx, (weight, intercept) in enumerate(zip(weights, intercepts)):
    target_predicted = linear_model_flipper_mass(data, weight, intercept)
    print(f"Model #{model_idx}:")
    print(f"{weight:.2f} (g / mm) * flipper length + {intercept:.2f} (g)")
    print(f"Error: {goodness_fit_measure(target, target_predicted):.3f}\n")
Model #0: -40.00 (g / mm) * flipper length + 15000.00 (g) Error: 2764.854 Model #1: 45.00 (g / mm) * flipper length + -5000.00 (g) Error: 338.523 Model #2: 90.00 (g / mm) * flipper length + -14000.00 (g) Error: 573.041
CC-BY-4.0
notebooks/36 - linear_models_ex_01.ipynb
aquinquenel/scikit-learn-mooc
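As a sanity check (not part of the exercise), the same numbers can be reproduced with scikit-learn's built-in metric, since the chosen measure is the mean absolute error:

```python
from sklearn.metrics import mean_absolute_error

for model_idx, (weight, intercept) in enumerate(zip(weights, intercepts)):
    target_predicted = linear_model_flipper_mass(data, weight, intercept)
    print(f"Model #{model_idx} MAE: "
          f"{mean_absolute_error(target, target_predicted):.3f}")
```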
Viewshed tool
# `gis` is assumed to come from an earlier cell, e.g. `from arcgis.gis import GIS; gis = GIS()`
from arcgis.geoprocessing import import_toolbox  # assumed import for import_toolbox

viewshed = import_toolbox('http://sampleserver1.arcgisonline.com/ArcGIS/rest/services/Elevation/ESRI_Elevation_World/GPServer')

viewshed.viewshed?
help(viewshed.viewshed)

import arcgis
arcgis.env.out_spatial_reference = 4326

map = gis.map('South San Francisco', zoomlevel=12)
map

from arcgis.features import Feature, FeatureSet

def get_viewshed(m, g):
    m.draw(g)
    res = viewshed.viewshed(FeatureSet([Feature(g)]), "5 Miles")  # "5 Miles" or LinearUnit(5, 'Miles') can be passed as input
    m.draw(res)

map.on_click(get_viewshed)

def __call__():
    pass  # empty stub
_____no_output_____
Apache-2.0
talks/DevSummit2018/ArcGIS Python API - Advanced Scripting/GP/Using geoprocessing tools.ipynb
nitz21/arcpy
======================================================
MA477 - Theory and Applications of Data Science
Homework 3: Matplotlib & Seaborn
Dr. Valmir Bucaj
United States Military Academy, West Point, AY20-2
=======================================================

Weight: 50pts

Cadet Name: Michael Kleine
Date: January 31, 2020

$\dots \dots$ MY DOCUMENTATION IDENTIFIES ALL SOURCES USED AND ASSISTANCE RECEIVED IN THIS ASSIGNMENT MCK

$\dots \dots$ I DID NOT USE ANY SOURCES OR ASSISTANCE REQUIRING DOCUMENTATION IN COMPLETING THIS ASSIGNMENT

Signature/Initials:

Complete the following tasks:

Import the following libraries: `matplotlib.pyplot, seaborn, pandas, numpy`
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import datetime
_____no_output_____
MIT
MA477 - Theory and Applications of Data Science/Homework/Student Solutions/Homework 3/Kleine_Michael_MA477_Homework3.ipynb
jkstarling/MA477-copy
Recreate the following plot as closely as you can. 10pts
# Enter your code here
x = np.linspace(start=-4, stop=4)

fig = plt.figure(figsize=(8, 6))

axes1 = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes2 = fig.add_axes([0.43, 0.2, 0.25, 0.4])

axes1.plot(x, 3*np.exp(-0.25*x**2), 'r-.', lw=3, label=r'3e$^{-0.25x}$')
axes1.plot(x, 2.8*np.exp(-0.15*(x-0.1)**2), marker='o', markerfacecolor='y',
           markeredgecolor='k', markersize=10, markeredgewidth=2,
           label=r'2.8e$^{-0.15(x-0.1)}$')
axes1.legend(loc='upper right')
axes1.set_xlabel('That took a while', fontsize=18)
axes1.set_title('Many Plots', fontsize=20)

axes2.set_title('Small Plot', fontsize=12)
plt.xticks(ticks=[-2, -1, 0, 1, 2])
plt.yticks(ticks=[-3, -2, -1, 0, 1, 2, 3])
axes2.text(-1.75, 1.5, 'Cool', size=16, color='r')
axes2.text(-0.25, 0, 'Plot', size=16, color='b')
axes2.text(1, -2, 'Bro', size=16, color='g')

# Don't run this cell unless you have recreated it, as the plot below will disappear
_____no_output_____
MIT
MA477 - Theory and Applications of Data Science/Homework/Student Solutions/Homework 3/Kleine_Michael_MA477_Homework3.ipynb
jkstarling/MA477-copy
For the rest of the exercises we will be using the `Airbnb` dataset contained in this folder. Read in the dataset and save it as `abnb`
# Enter code here
abnb = pd.read_excel('Airbnb.xlsx')
_____no_output_____
MIT
MA477 - Theory and Applications of Data Science/Homework/Student Solutions/Homework 3/Kleine_Michael_MA477_Homework3.ipynb
jkstarling/MA477-copy
Check out the head of the data:
# Enter code here
abnb.head()

# Don't run this cell unless you are happy with your answer above
_____no_output_____
MIT
MA477 - Theory and Applications of Data Science/Homework/Student Solutions/Homework 3/Kleine_Michael_MA477_Homework3.ipynb
jkstarling/MA477-copy
Recreate the following `jointplot` 5pts
# Enter code here
sns.set_style('white')
sns.jointplot(x='number_of_reviews', y='price', data=abnb, height=6, kind='kde')
plt.show()

# Don't run this cell unless you are happy with your answer
_____no_output_____
MIT
MA477 - Theory and Applications of Data Science/Homework/Student Solutions/Homework 3/Kleine_Michael_MA477_Homework3.ipynb
jkstarling/MA477-copy
Recreate the following `boxplots` 5pts
# Enter code here
plt.figure(figsize=(12, 6))
sns.boxplot(x='neighbourhood_group', y='price', data=abnb)
plt.xlabel('Neighbourhood Group', fontsize=14)
plt.ylabel('Price', fontsize=14)

# Don't run this cell unless you are happy with your answer
_____no_output_____
MIT
MA477 - Theory and Applications of Data Science/Homework/Student Solutions/Homework 3/Kleine_Michael_MA477_Homework3.ipynb
jkstarling/MA477-copy
10pts
# Enter Code Here
plt.figure(figsize=(12, 8))
sns.boxplot(x='neighbourhood_group', y='number_of_reviews', data=abnb, hue='room_type')
plt.xlabel('Neighbourhood Group', fontsize=14)
plt.ylabel('Number of Reviews', fontsize=14)
plt.ylim(-10, 350)

# Don't run this cell unless you are happy with your answer
_____no_output_____
MIT
MA477 - Theory and Applications of Data Science/Homework/Student Solutions/Homework 3/Kleine_Michael_MA477_Homework3.ipynb
jkstarling/MA477-copy
Recreate the following `violinplot` comparing the distribution of ONLY `Entire home/apt` and `Private room` for all five `neighbourhood groups`10pts
# Enter Code Here
abnb2 = abnb[abnb['room_type'] == 'Shared room'].index
abnb3 = abnb.drop(abnb2)

plt.figure(figsize=(12, 8))
sns.violinplot(x='neighbourhood_group', y='price', data=abnb3, hue='room_type', split=True)
plt.xlabel('Neighbourhood Group', fontsize=16)
plt.ylabel('Price', fontsize=16)

# Don't run this cell unless you are happy with your answer
_____no_output_____
MIT
MA477 - Theory and Applications of Data Science/Homework/Student Solutions/Homework 3/Kleine_Michael_MA477_Homework3.ipynb
jkstarling/MA477-copy
Challenging!!! Time Series: Recreate the following plot. 10pts (Hint: Convert the column `last_review` to `DateTime` format and reset it as the index of the dataframe)
# Enter answer here

# Format the data
abnb = pd.read_excel('Airbnb.xlsx')
abnb['Month'] = pd.to_datetime(abnb['last_review'], yearfirst=True, format='%Y/%m/%d')
abnb['last_review'] = pd.to_datetime(abnb['last_review'], yearfirst=True, format='%Y/%m/%d')
abnb = abnb.sort_values(by=['Month'])
abnb = abnb.set_index('Month')
abnb = abnb.dropna(axis=0, subset=['last_review'])

# create the plot
fig = plt.figure(figsize=(12, 5))
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.plot('last_review', 'number_of_reviews', data=abnb)
axes.plot('last_review', 'price', data=abnb)

datemin = pd.to_datetime('20161001', format='%Y%m%d', errors='ignore')
datemax = pd.to_datetime('20190401', format='%Y%m%d', errors='ignore')
axes.set_xlim(datemin, datemax)

axes.legend()
axes.set_title('Fluctuations in Number of Reviews and Price over time')
axes.set_xlabel('last_review')
_____no_output_____
MIT
MA477 - Theory and Applications of Data Science/Homework/Student Solutions/Homework 3/Kleine_Michael_MA477_Homework3.ipynb
jkstarling/MA477-copy
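The raw per-listing values are very noisy; one optional refinement (a sketch, assuming the `abnb` frame indexed by `Month` built above) is to resample to monthly means before plotting:

```python
# Average the two series per calendar month to smooth the curves.
monthly = abnb[['number_of_reviews', 'price']].resample('M').mean()
monthly.plot(figsize=(12, 5), title='Monthly mean of number of reviews and price')
```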
Instructor Comments: Only some minor styling issues and label. -0.5pts
# Don't erase this cell unless you are happy with your answer

# https://matplotlib.org/tutorials/text/mathtext.html
# https://www.overleaf.com/learn/latex/Subscripts_and_superscripts
# https://thispointer.com/python-pandas-how-to-drop-rows-in-dataframe-by-conditions-on-column-values/
# https://www.geeksforgeeks.org/change-data-type-for-one-or-more-columns-in-pandas-dataframe/
# https://stackoverflow.com/questions/26763344/convert-pandas-column-to-datetime
# https://appdividend.com/2019/01/26/pandas-set-index-example-python-set_index-tutorial/
# https://stackoverflow.com/questions/28161356/sort-pandas-dataframe-by-date
_____no_output_____
MIT
MA477 - Theory and Applications of Data Science/Homework/Student Solutions/Homework 3/Kleine_Michael_MA477_Homework3.ipynb
jkstarling/MA477-copy
Q1 Solving the equation for all c in [0,4)
from math import exp  # assumed import; exp() is used below but not imported in the cells shown

x_list = []
for c_ in range(0, 400, 2):
    c = c_/100
    # the accuracy
    delta = 1
    x = 1
    itr = 0
    while delta > 1e-7:
        x_new = 1 - exp(-1*c*x)
        delta = abs(x_new - x)
        x = x_new
        itr += 1
    if c == 3:
        print("For c=3, x={}".format(x))
        print("Number of iterations: {}".format(itr))
    x_list.append(x)
For c=3, x=0.9404798005896199 Number of iterations: 9
MIT
Assignment4.ipynb
mukund109/Numerical_Analysis_PHYS3142
A plot of the percolation transition
import matplotlib.pyplot as plt  # assumed import; plt is used below but not imported in the cells shown

plt.plot([i/100 for i in range(0, 400, 2)], x_list)
plt.xlabel('c')
plt.ylabel('x')
_____no_output_____
MIT
Assignment4.ipynb
mukund109/Numerical_Analysis_PHYS3142
Q2 a) The overrelaxation formula is given by:$$x_{n+1} = (1+\omega )f(x_n) - \omega x_n $$$$\implies f(x_n) = \frac{x_{n+1} + \omega x_n}{1+\omega} \ \ \ \ \ (1)$$Denote the true solution by $x^*$, then the Taylor series expansion of $f$ around $x_n$ is given by:$$f(x^*) \approx f(x_n) + f'(x_n)(x^* - x_n)$$Since, $x^* = f(x^*)$,$$x^* \approx f(x_n) + f'(x_n)(x^* - x_n)$$Substituting in equation 1 and solving for $x^*$:$$x^* \approx \frac{\frac{x_{n+1} + \omega x_n}{1 + \omega} - f'(x_n) x_n}{1-f'(x_n)}\\\implies x^* - x_{n+1} \approx \frac{x_n - x_{n+1}}{1 - \frac{1}{(1+\omega) f'(x_n) - \omega}}$$ b) and c)
itr_list = []
w_list = []
for w_ in range(0, 60, 2):
    w = w_/100
    delta = 1
    x = 1
    itr = 0
    while delta > 1e-7:
        x_new = (1+w)*(1-exp(-3*x)) - w*x
        delta = abs(x_new - x)
        x = x_new
        itr += 1
    print("For w={}".format(w), end=', ')
    print("Number of iterations: {}".format(itr), end='\n \n')
    itr_list.append(itr)
    w_list.append(w)
For w=0.0, Number of iterations: 9 For w=0.02, Number of iterations: 9 For w=0.04, Number of iterations: 8 For w=0.06, Number of iterations: 8 For w=0.08, Number of iterations: 7 For w=0.1, Number of iterations: 7 For w=0.12, Number of iterations: 7 For w=0.14, Number of iterations: 6 For w=0.16, Number of iterations: 6 For w=0.18, Number of iterations: 5 For w=0.2, Number of iterations: 4 For w=0.22, Number of iterations: 4 For w=0.24, Number of iterations: 5 For w=0.26, Number of iterations: 6 For w=0.28, Number of iterations: 6 For w=0.3, Number of iterations: 7 For w=0.32, Number of iterations: 7 For w=0.34, Number of iterations: 7 For w=0.36, Number of iterations: 8 For w=0.38, Number of iterations: 8 For w=0.4, Number of iterations: 9 For w=0.42, Number of iterations: 9 For w=0.44, Number of iterations: 9 For w=0.46, Number of iterations: 10 For w=0.48, Number of iterations: 10 For w=0.5, Number of iterations: 11 For w=0.52, Number of iterations: 11 For w=0.54, Number of iterations: 12 For w=0.56, Number of iterations: 12 For w=0.58, Number of iterations: 13
MIT
Assignment4.ipynb
mukund109/Numerical_Analysis_PHYS3142
d) The recursive formula for the error can be obtained by rearranging the previous equations to get:
$$\epsilon_{n+1} = \epsilon_{n} [(1+\omega) f'(x^*) - \omega]$$
(Note: This is an approximation for when $x_{n}$ is close to $x^*$.)

In order to find the conditions under which the overrelaxation method with $\omega < 0$ converges faster than the ordinary relaxation method ($\omega = 0$), we need to find values of $f'(x^*)$ and $\omega$ that satisfy the following constraints:
1. Overrelaxation method converges: $$ |(1+\omega) f'(x^*) - \omega| < 1$$
2. Ordinary relaxation method converges: $$ |f'(x^*)| < 1$$
3. Overrelaxation converges faster: $$|(1+\omega) f'(x^*) - \omega | < |f'(x^*)|$$
4. Overrelaxation factor is negative: $$\omega < 0$$

We can plot this region
import numpy as np  # assumed import; np is used below but not imported in the cells shown

fy, wx = np.meshgrid(np.linspace(-2, 1, 1000), np.linspace(-2, 1, 1000))

mask = np.zeros((1000, 1000), dtype=bool)
mask[(np.abs((1+wx)*fy - wx) < 1) & \
     (np.abs(fy) < 1) & \
     (np.abs((1+wx)*fy - wx) < np.abs(fy)) & \
     (wx < 0)] = 2

plt.contour(wx, fy, mask, cmap='flag')
plt.xlabel("w")
plt.ylabel("f '(x*)")
_____no_output_____
MIT
Assignment4.ipynb
mukund109/Numerical_Analysis_PHYS3142
Therefore, when the current estimate is sufficiently close to the actual solution, when $f'(x^*)$ and $\omega$ fall inside this region, the overrelaxation converges faster than ordinary relaxation We can take the simple example of the case when $f(x) = -0.75x$This resembles all cases where the function is locally linear with slope $-0.75$
itr_list = []
w_list = []
for w_ in range(-80, 1, 1):
    w = w_/100
    delta = 1
    x = 0.1
    itr = 0
    while delta > 1e-7:
        x_new = (1+w)*(-0.75*x) - w*x
        delta = abs(x_new - x)
        x = x_new
        itr += 1
    itr_list.append(itr)
    w_list.append(w)

plt.plot(w_list, itr_list)
plt.xlabel("w")
plt.ylabel("Number of iterations")
_____no_output_____
MIT
Assignment4.ipynb
mukund109/Numerical_Analysis_PHYS3142
Generate images from text sentences with VQGAN and CLIP (z+quantize method with augmentations).Notebook made by Katherine Crowson (https://github.com/crowsonkb, https://twitter.com/RiversHaveWings). The original BigGAN+CLIP method was made by https://twitter.com/advadnoun. Translated and added explanations, and modifications by Eleiber8347, and the friendly interface was made thanks to Abulafia3734.For a detailed tutorial on how to use it, I recommend visiting this article https://tuscriaturas.miraheze.org/wiki/Ayuda:Generar_im%C3%A1genes_con_VQGAN%2BCLIP * by Jakeukalane2767 and Avengium (Angel)3715 *Google-translated to English: https://tuscriaturas-miraheze-org.translate.goog/wiki/Ayuda:Generar_im%C3%A1genes_con_VQGAN%2BCLIP?_x_tr_sl=es&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp
# @title Licensed under the MIT License # Copyright (c) 2021 Katherine Crowson # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. !nvidia-smi # @title Install libraries # @markdown This cell will take a while because it has to download several libraries print("Installing CLIP...") !git clone https://github.com/openai/CLIP &> /dev/null print("Installing Python lbraries for IA...") !git clone https://github.com/CompVis/taming-transformers !pip install ftfy regex tqdm omegaconf pytorch-lightning &> /dev/null !pip install kornia &> /dev/null !pip install einops &> /dev/null !pip install wget &> /dev/null !pip install tdqm print("Installing metadata tools...") !pip install stegano &> /dev/null !apt install exempi &> /dev/null !pip install python-xmp-toolkit &> /dev/null !pip install imgtag &> /dev/null !pip install pillow==7.1.2 &> /dev/null print("Installing video creation tooling...") !pip install imageio-ffmpeg &> /dev/null !mkdir steps print("Finalising installation.") #@title Select model #@markdown By default, the notebook downloads model 16384 from ImageNet. There are others that are not downloaded by default, since it would be unneccssary if you are not going to use them, so if you want to use them, simply select the models to download. 
imagenet_1024 = False #@param {type:"boolean"} imagenet_16384 = True #@param {type:"boolean"} gumbel_8192 = False #@param {type:"boolean"} coco = False #@param {type:"boolean"} faceshq = False #@param {type:"boolean"} wikiart_1024 = False #@param {type:"boolean"} wikiart_16384 = False #@param {type:"boolean"} sflckr = False #@param {type:"boolean"} ade20k = False #@param {type:"boolean"} ffhq = False #@param {type:"boolean"} celebahq = False #@param {type:"boolean"} if imagenet_1024: !curl -L -o vqgan_imagenet_f16_1024.yaml -C - 'https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/files/?p=%2Fconfigs%2Fmodel.yaml&dl=1' #ImageNet 1024 !curl -L -o vqgan_imagenet_f16_1024.ckpt -C - 'https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/files/?p=%2Fckpts%2Flast.ckpt&dl=1' #ImageNet 1024 if imagenet_16384: !curl -L -o vqgan_imagenet_f16_16384.yaml -C - 'https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/files/?p=%2Fconfigs%2Fmodel.yaml&dl=1' #ImageNet 16384 !curl -L -o vqgan_imagenet_f16_16384.ckpt -C - 'https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/files/?p=%2Fckpts%2Flast.ckpt&dl=1' #ImageNet 16384 if gumbel_8192: !curl -L -o gumbel_8192.yaml -C - 'https://heibox.uni-heidelberg.de/d/2e5662443a6b4307b470/files/?p=%2Fconfigs%2Fmodel.yaml&dl=1' #Gumbel 8192 !curl -L -o gumbel_8192.ckpt -C - 'https://heibox.uni-heidelberg.de/d/2e5662443a6b4307b470/files/?p=%2Fckpts%2Flast.ckpt&dl=1' #Gumbel 8192 if coco: !curl -L -o coco.yaml -C - 'https://dl.nmkd.de/ai/clip/coco/coco.yaml' #COCO !curl -L -o coco.ckpt -C - 'https://dl.nmkd.de/ai/clip/coco/coco.ckpt' #COCO if faceshq: !curl -L -o faceshq.yaml -C - 'https://drive.google.com/uc?export=download&id=1fHwGx_hnBtC8nsq7hesJvs-Klv-P0gzT' #FacesHQ !curl -L -o faceshq.ckpt -C - 'https://app.koofr.net/content/links/a04deec9-0c59-4673-8b37-3d696fe63a5d/files/get/last.ckpt?path=%2F2020-11-13T21-41-45_faceshq_transformer%2Fcheckpoints%2Flast.ckpt' #FacesHQ if wikiart_1024: !curl -L -o wikiart_1024.yaml -C - 'http://mirror.io.community/blob/vqgan/wikiart.yaml' #WikiArt 1024 !curl -L -o wikiart_1024.ckpt -C - 'http://mirror.io.community/blob/vqgan/wikiart.ckpt' #WikiArt 1024 if wikiart_16384: !curl -L -o wikiart_16384.yaml -C - 'http://eaidata.bmk.sh/data/Wikiart_16384/wikiart_f16_16384_8145600.yaml' #WikiArt 16384 !curl -L -o wikiart_16384.ckpt -C - 'http://eaidata.bmk.sh/data/Wikiart_16384/wikiart_f16_16384_8145600.ckpt' #WikiArt 16384 if sflckr: !curl -L -o sflckr.yaml -C - 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fconfigs%2F2020-11-09T13-31-51-project.yaml&dl=1' #S-FLCKR !curl -L -o sflckr.ckpt -C - 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fcheckpoints%2Flast.ckpt&dl=1' #S-FLCKR if ade20k: !curl -L -o ade20k.yaml -C - 'https://static.miraheze.org/intercriaturaswiki/b/bf/Ade20k.txt' #ADE20K !curl -L -o ade20k.ckpt -C - 'https://app.koofr.net/content/links/0f65c2cd-7102-4550-a2bd-07fd383aac9e/files/get/last.ckpt?path=%2F2020-11-20T21-45-44_ade20k_transformer%2Fcheckpoints%2Flast.ckpt' #ADE20K if ffhq: !curl -L -o ffhq.yaml -C - 'https://app.koofr.net/content/links/0fc005bf-3dca-4079-9d40-cdf38d42cd7a/files/get/2021-04-23T18-19-01-project.yaml?path=%2F2021-04-23T18-19-01_ffhq_transformer%2Fconfigs%2F2021-04-23T18-19-01-project.yaml&force' #FFHQ !curl -L -o ffhq.ckpt -C - 'https://app.koofr.net/content/links/0fc005bf-3dca-4079-9d40-cdf38d42cd7a/files/get/last.ckpt?path=%2F2021-04-23T18-19-01_ffhq_transformer%2Fcheckpoints%2Flast.ckpt&force' #FFHQ if celebahq: !curl -L -o celebahq.yaml -C - 
'https://app.koofr.net/content/links/6dddf083-40c8-470a-9360-a9dab2a94e96/files/get/2021-04-23T18-11-19-project.yaml?path=%2F2021-04-23T18-11-19_celebahq_transformer%2Fconfigs%2F2021-04-23T18-11-19-project.yaml&force' #CelebA-HQ !curl -L -o celebahq.ckpt -C - 'https://app.koofr.net/content/links/6dddf083-40c8-470a-9360-a9dab2a94e96/files/get/last.ckpt?path=%2F2021-04-23T18-11-19_celebahq_transformer%2Fcheckpoints%2Flast.ckpt&force' #CelebA-HQ # @title Load libraries and definitions import argparse import math from pathlib import Path import sys sys.path.append('./taming-transformers') from IPython import display from base64 import b64encode from omegaconf import OmegaConf from PIL import Image from taming.models import cond_transformer, vqgan import torch from torch import nn, optim from torch.nn import functional as F from torchvision import transforms from torchvision.transforms import functional as TF from tqdm.notebook import tqdm from CLIP import clip import kornia.augmentation as K import numpy as np import imageio from PIL import ImageFile, Image from imgtag import ImgTag # metadatos from libxmp import * # metadatos import libxmp # metadatos from stegano import lsb import json from tqdm.notebook import tqdm ImageFile.LOAD_TRUNCATED_IMAGES = True def sinc(x): return torch.where(x != 0, torch.sin(math.pi * x) / (math.pi * x), x.new_ones([])) def lanczos(x, a): cond = torch.logical_and(-a < x, x < a) out = torch.where(cond, sinc(x) * sinc(x/a), x.new_zeros([])) return out / out.sum() def ramp(ratio, width): n = math.ceil(width / ratio + 1) out = torch.empty([n]) cur = 0 for i in range(out.shape[0]): out[i] = cur cur += ratio return torch.cat([-out[1:].flip([0]), out])[1:-1] def resample(input, size, align_corners=True): n, c, h, w = input.shape dh, dw = size input = input.view([n * c, 1, h, w]) if dh < h: kernel_h = lanczos(ramp(dh / h, 2), 2).to(input.device, input.dtype) pad_h = (kernel_h.shape[0] - 1) // 2 input = F.pad(input, (0, 0, pad_h, pad_h), 'reflect') input = F.conv2d(input, kernel_h[None, None, :, None]) if dw < w: kernel_w = lanczos(ramp(dw / w, 2), 2).to(input.device, input.dtype) pad_w = (kernel_w.shape[0] - 1) // 2 input = F.pad(input, (pad_w, pad_w, 0, 0), 'reflect') input = F.conv2d(input, kernel_w[None, None, None, :]) input = input.view([n, c, h, w]) return F.interpolate(input, size, mode='bicubic', align_corners=align_corners) class ReplaceGrad(torch.autograd.Function): @staticmethod def forward(ctx, x_forward, x_backward): ctx.shape = x_backward.shape return x_forward @staticmethod def backward(ctx, grad_in): return None, grad_in.sum_to_size(ctx.shape) replace_grad = ReplaceGrad.apply class ClampWithGrad(torch.autograd.Function): @staticmethod def forward(ctx, input, min, max): ctx.min = min ctx.max = max ctx.save_for_backward(input) return input.clamp(min, max) @staticmethod def backward(ctx, grad_in): input, = ctx.saved_tensors return grad_in * (grad_in * (input - input.clamp(ctx.min, ctx.max)) >= 0), None, None clamp_with_grad = ClampWithGrad.apply def vector_quantize(x, codebook): d = x.pow(2).sum(dim=-1, keepdim=True) + codebook.pow(2).sum(dim=1) - 2 * x @ codebook.T indices = d.argmin(-1) x_q = F.one_hot(indices, codebook.shape[0]).to(d.dtype) @ codebook return replace_grad(x_q, x) class Prompt(nn.Module): def __init__(self, embed, weight=1., stop=float('-inf')): super().__init__() self.register_buffer('embed', embed) self.register_buffer('weight', torch.as_tensor(weight)) self.register_buffer('stop', torch.as_tensor(stop)) def forward(self, input): 
input_normed = F.normalize(input.unsqueeze(1), dim=2) embed_normed = F.normalize(self.embed.unsqueeze(0), dim=2) dists = input_normed.sub(embed_normed).norm(dim=2).div(2).arcsin().pow(2).mul(2) dists = dists * self.weight.sign() return self.weight.abs() * replace_grad(dists, torch.maximum(dists, self.stop)).mean() def parse_prompt(prompt): vals = prompt.rsplit(':', 2) vals = vals + ['', '1', '-inf'][len(vals):] return vals[0], float(vals[1]), float(vals[2]) class MakeCutouts(nn.Module): def __init__(self, cut_size, cutn, cut_pow=1.): super().__init__() self.cut_size = cut_size self.cutn = cutn self.cut_pow = cut_pow self.augs = nn.Sequential( K.RandomHorizontalFlip(p=0.5), # K.RandomSolarize(0.01, 0.01, p=0.7), K.RandomSharpness(0.3,p=0.4), K.RandomAffine(degrees=30, translate=0.1, p=0.8, padding_mode='border'), K.RandomPerspective(0.2,p=0.4), K.ColorJitter(hue=0.01, saturation=0.01, p=0.7)) self.noise_fac = 0.1 def forward(self, input): sideY, sideX = input.shape[2:4] max_size = min(sideX, sideY) min_size = min(sideX, sideY, self.cut_size) cutouts = [] for _ in range(self.cutn): size = int(torch.rand([])**self.cut_pow * (max_size - min_size) + min_size) offsetx = torch.randint(0, sideX - size + 1, ()) offsety = torch.randint(0, sideY - size + 1, ()) cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size] cutouts.append(resample(cutout, (self.cut_size, self.cut_size))) batch = self.augs(torch.cat(cutouts, dim=0)) if self.noise_fac: facs = batch.new_empty([self.cutn, 1, 1, 1]).uniform_(0, self.noise_fac) batch = batch + facs * torch.randn_like(batch) return batch def load_vqgan_model(config_path, checkpoint_path): config = OmegaConf.load(config_path) if config.model.target == 'taming.models.vqgan.VQModel': model = vqgan.VQModel(**config.model.params) model.eval().requires_grad_(False) model.init_from_ckpt(checkpoint_path) elif config.model.target == 'taming.models.cond_transformer.Net2NetTransformer': parent_model = cond_transformer.Net2NetTransformer(**config.model.params) parent_model.eval().requires_grad_(False) parent_model.init_from_ckpt(checkpoint_path) model = parent_model.first_stage_model elif config.model.target == 'taming.models.vqgan.GumbelVQ': model = vqgan.GumbelVQ(**config.model.params) print(config.model.params) model.eval().requires_grad_(False) model.init_from_ckpt(checkpoint_path) else: raise ValueError(f'unknown model type: {config.model.target}') del model.loss return model def resize_image(image, out_size): ratio = image.size[0] / image.size[1] area = min(image.size[0] * image.size[1], out_size[0] * out_size[1]) size = round((area * ratio)**0.5), round((area / ratio)**0.5) return image.resize(size, Image.LANCZOS) def download_img(img_url): try: return wget.download(img_url,out="input.jpg") except: return
_____no_output_____
Apache-2.0
ai_art_book.ipynb
Dazzla/Codility-Lessons-In-Java
Execution Parameters

Mainly what you will have to modify is texts: there you can place the text(s) you want to generate (separated with | ). It is a list because you can put more than one text, so the AI tries to 'mix' the images, giving the same priority to both texts.

To use an initial image with the model, you just have to upload a file to the Colab environment (in the section on the left), and then modify initial_image: putting the exact name of the file. Example: sample.png

You can also modify the model by changing the lines that say model:. Currently 1024, 16384, Gumbel, COCO-Stuff, FacesHQ, WikiArt, S-FLCKR, Ade20k, FFHQ and CelebaHQ are available. To activate them you have to have downloaded them first, and then you can just select one.

You can also use target_images, which is basically giving it one or more images that the AI will take as a "target", fulfilling the same function as a text prompt. To put more than one you have to use | as a separator.
#@title Parameter text = "Complex building" #@param {type:"string"} width = 480#@param {type:"number"} height = 480#@param {type:"number"} model = "vqgan_imagenet_f16_16384" #@param ["vqgan_imagenet_f16_16384", "vqgan_imagenet_f16_1024", "wikiart_1024", "wikiart_16384", "coco", "faceshq", "sflckr", "ade20k", "ffhq", "celebahq", "gumbel_8192"] image_interval = 50#@param {type:"number"} initial_image = None#@param {type:"string"} object_images = None#@param {type:"string"} seed = -1#@param {type:"number"} max_iterations = -1#@param {type:"number"} input_images = "" model_names={"vqgan_imagenet_f16_16384": 'ImageNet 16384', "vqgan_imagenet_f16_1024": "ImageNet 1024", "wikiart_1024":"WikiArt 1024", "wikiart_16384":"WikiArt 16384", "coco":"COCO-Stuff", "faceshq":"FacesHQ", "sflckr":"S-FLCKR", "ade20k":"ADE20K", "ffhq":"FFHQ", "celebahq":"CelebA-HQ", "gumbel_8192": "Gumbel 8192"} model_name = model_names[model] if model == "gumbel_8192": is_gumbel = True else: is_gumbel = False if seed == -1: seed = None if initial_image == "None": initial_image = None elif initial_image and initial_image.lower().startswith("http"): initial_image = download_img(initial_image) if object_images == "None" or not object_images: object_images = [] else: object_images = object_images.split("|") object_images = [image.strip() for image in object_images] if initial_image or object_images != []: input_images = True text = [frase.strip() for frase in text.split("|")] if text == ['']: text = [] args = argparse.Namespace( prompts=text, image_prompts=object_images, noise_prompt_seeds=[], noise_prompt_weights=[], size=[width, height], init_image=initial_image, init_weight=0., clip_model='ViT-B/32', vqgan_config=f'{model}.yaml', vqgan_checkpoint=f'{model}.ckpt', step_size=0.1, cutn=64, cut_pow=1., display_freq=image_interval, seed=seed, ) #@title Execute... 
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') print('Using device:', device) if text: print('Using texts:', text) if object_images: print('Using image prompts:', object_images) if args.seed is None: seed = torch.seed() else: seed = args.seed torch.manual_seed(seed) print('Using seed:', seed) model = load_vqgan_model(args.vqgan_config, args.vqgan_checkpoint).to(device) perceptor = clip.load(args.clip_model, jit=False)[0].eval().requires_grad_(False).to(device) cut_size = perceptor.visual.input_resolution if is_gumbel: e_dim = model.quantize.embedding_dim else: e_dim = model.quantize.e_dim f = 2**(model.decoder.num_resolutions - 1) make_cutouts = MakeCutouts(cut_size, args.cutn, cut_pow=args.cut_pow) if is_gumbel: n_toks = model.quantize.n_embed else: n_toks = model.quantize.n_e toksX, toksY = args.size[0] // f, args.size[1] // f sideX, sideY = toksX * f, toksY * f if is_gumbel: z_min = model.quantize.embed.weight.min(dim=0).values[None, :, None, None] z_max = model.quantize.embed.weight.max(dim=0).values[None, :, None, None] else: z_min = model.quantize.embedding.weight.min(dim=0).values[None, :, None, None] z_max = model.quantize.embedding.weight.max(dim=0).values[None, :, None, None] if args.init_image: pil_image = Image.open(args.init_image).convert('RGB') pil_image = pil_image.resize((sideX, sideY), Image.LANCZOS) z, *_ = model.encode(TF.to_tensor(pil_image).to(device).unsqueeze(0) * 2 - 1) else: one_hot = F.one_hot(torch.randint(n_toks, [toksY * toksX], device=device), n_toks).float() if is_gumbel: z = one_hot @ model.quantize.embed.weight else: z = one_hot @ model.quantize.embedding.weight z = z.view([-1, toksY, toksX, e_dim]).permute(0, 3, 1, 2) z_orig = z.clone() z.requires_grad_(True) opt = optim.Adam([z], lr=args.step_size) normalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073], std=[0.26862954, 0.26130258, 0.27577711]) pMs = [] for prompt in args.prompts: txt, weight, stop = parse_prompt(prompt) embed = perceptor.encode_text(clip.tokenize(txt).to(device)).float() pMs.append(Prompt(embed, weight, stop).to(device)) for prompt in args.image_prompts: path, weight, stop = parse_prompt(prompt) img = resize_image(Image.open(path).convert('RGB'), (sideX, sideY)) batch = make_cutouts(TF.to_tensor(img).unsqueeze(0).to(device)) embed = perceptor.encode_image(normalize(batch)).float() pMs.append(Prompt(embed, weight, stop).to(device)) for seed, weight in zip(args.noise_prompt_seeds, args.noise_prompt_weights): gen = torch.Generator().manual_seed(seed) embed = torch.empty([1, perceptor.visual.output_dim]).normal_(generator=gen) pMs.append(Prompt(embed, weight).to(device)) def synth(z): if is_gumbel: z_q = vector_quantize(z.movedim(1, 3), model.quantize.embed.weight).movedim(3, 1) else: z_q = vector_quantize(z.movedim(1, 3), model.quantize.embedding.weight).movedim(3, 1) return clamp_with_grad(model.decode(z_q).add(1).div(2), 0, 1) def add_xmp_data(filename): imagen = ImgTag(filename=filename) imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'creator', 'VQGAN+CLIP', {"prop_array_is_ordered":True, "prop_value_is_array":True}) if args.prompts: imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'title', " | ".join(args.prompts), {"prop_array_is_ordered":True, "prop_value_is_array":True}) else: imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'title', 'None', {"prop_array_is_ordered":True, "prop_value_is_array":True}) imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'i', str(i), {"prop_array_is_ordered":True, 
"prop_value_is_array":True}) imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'model', nombre_modelo, {"prop_array_is_ordered":True, "prop_value_is_array":True}) imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'seed',str(seed) , {"prop_array_is_ordered":True, "prop_value_is_array":True}) imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'input_images',str(input_images) , {"prop_array_is_ordered":True, "prop_value_is_array":True}) #for frases in args.prompts: # imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'Prompt' ,frases, {"prop_array_is_ordered":True, "prop_value_is_array":True}) imagen.close() def add_stegano_data(filename): data = { "title": " | ".join(args.prompts) if args.prompts else None, "notebook": "VQGAN+CLIP", "i": i, "model": model_name, "seed": str(seed), "input_images": input_images } lsb.hide(filename, json.dumps(data)).save(filename) @torch.no_grad() def checkin(i, losses): losses_str = ', '.join(f'{loss.item():g}' for loss in losses) tqdm.write(f'i: {i}, loss: {sum(losses).item():g}, losses: {losses_str}') out = synth(z) TF.to_pil_image(out[0].cpu()).save('progress.png') add_stegano_data('progress.png') add_xmp_data('progress.png') display.display(display.Image('progress.png')) def ascend_txt(): global i out = synth(z) iii = perceptor.encode_image(normalize(make_cutouts(out))).float() result = [] if args.init_weight: result.append(F.mse_loss(z, z_orig) * args.init_weight / 2) for prompt in pMs: result.append(prompt(iii)) img = np.array(out.mul(255).clamp(0, 255)[0].cpu().detach().numpy().astype(np.uint8))[:,:,:] img = np.transpose(img, (1, 2, 0)) filename = f"steps/{i:04}.png" imageio.imwrite(filename, np.array(img)) add_stegano_data(filename) add_xmp_data(filename) return result def train(i): opt.zero_grad() lossAll = ascend_txt() if i % args.display_freq == 0: checkin(i, lossAll) loss = sum(lossAll) loss.backward() opt.step() with torch.no_grad(): z.copy_(z.maximum(z_min).minimum(z_max)) i = 0 try: with tqdm() as pbar: while True: train(i) if i == max_iterations: break i += 1 pbar.update() except KeyboardInterrupt: pass
_____no_output_____
Apache-2.0
ai_art_book.ipynb
Dazzla/Codility-Lessons-In-Java
Generate a video of the results

If you want to generate a video with the frames, just click below. You can modify the number of FPS, the initial frame, the last frame, etc.
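Note: the cell that actually assembles ``video.mp4`` from the frames saved in ``steps/`` is not shown in this excerpt. A minimal sketch of that step is given below; it assumes ``ffmpeg`` is available in the runtime, and the 30 fps value is only an example:

```python
# Hypothetical frame-to-video assembly; fps and the frame pattern are placeholders.
fps = 30
!ffmpeg -y -framerate {fps} -i steps/%04d.png -c:v libx264 -pix_fmt yuv420p video.mp4
```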
# @title View video in browser # @markdown This process is slow. Use the download cell below if you don't want to wait mp4 = open('video.mp4','rb').read() data_url = "data:video/mp4;base64," + b64encode(mp4).decode() display.HTML(""" <video width=400 controls> <source src="%s" type="video/mp4"> </video> """ % data_url) # @title Download video from google.colab import files files.download("video.mp4")
_____no_output_____
Apache-2.0
ai_art_book.ipynb
Dazzla/Codility-Lessons-In-Java
Sequence to Sequence Model using RNN
====================================
%matplotlib inline
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
NLP From Scratch: Translation with a Sequence to Sequence Network and Attention
*******************************************************************************

Based on original code by Sean Robertson.

In this project we will be teaching a neural network to translate from French to English.

::

    [KEY: > input, = target, < output]

    > il est en train de peindre un tableau .
    = he is painting a picture .
    < he is painting a picture .

    > pourquoi ne pas essayer ce vin delicieux ?
    = why not try that delicious wine ?
    < why not try that delicious wine ?

    > elle n est pas poete mais romanciere .
    = she is not a poet but a novelist .
    < she not not a poet but a novelist .

    > vous etes trop maigre .
    = you re too skinny .
    < you re all alone .

... to varying degrees of success.

This is made possible by the simple but powerful idea of the sequence to sequence network, in which two recurrent neural networks work together to transform one sequence to another. An encoder network condenses an input sequence into a vector, and a decoder network unfolds that vector into a new sequence.

**Requirements**
from __future__ import unicode_literals, print_function, division from io import open import unicodedata import string import re import random import torch import torch.nn as nn from torch import optim import torch.nn.functional as F device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
Loading data files
==================

The English to French pairs are too big to include in the repo, so download to ``data/eng-fra.txt`` before continuing. The file is a tab separated list of translation pairs:

    I am cold.    J'ai froid.
!wget https://github.com/ravi-ilango/acm-dec-2020-nlp/blob/main/lab1/data.zip?raw=true -O data.zip !unzip data.zip !head data/eng-fra.txt
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
We'll need a unique index per word to use as the inputs and targets of the networks later. To keep track of all this we will use a helper class called ``Lang`` which has word → index (``word2index``) and index → word (``index2word``) dictionaries, as well as a count of each word ``word2count`` to use to later replace rare words.
SOS_token = 0 EOS_token = 1 class Lang: def __init__(self, name): self.name = name self.word2index = {} self.word2count = {} self.index2word = {0: "SOS", 1: "EOS"} self.n_words = 2 # Count SOS and EOS def addSentence(self, sentence): for word in sentence.split(' '): self.addWord(word) def addWord(self, word): if word not in self.word2index: self.word2index[word] = self.n_words self.word2count[word] = 1 self.index2word[self.n_words] = word self.n_words += 1 else: self.word2count[word] += 1
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
The files are all in Unicode; to simplify, we will turn Unicode characters to ASCII, make everything lowercase, and trim most punctuation.
# Turn a Unicode string to plain ASCII, thanks to # https://stackoverflow.com/a/518232/2809427 def unicodeToAscii(s): return ''.join( c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn' ) # Lowercase, trim, and remove non-letter characters def normalizeString(s): s = unicodeToAscii(s.lower().strip()) s = re.sub(r"([.!?])", r" \1", s) s = re.sub(r"[^a-zA-Z.!?]+", r" ", s) return s
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
Exercise: Check string processing
s="À l'aide !" normalizeString(s)
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
To read the data file we will split the file into lines, and then split lines into pairs. The files are all English → Other Language, so if we want to translate from Other Language → English I added the ``reverse`` flag to reverse the pairs.
def readLangs(lang1, lang2, reverse=False): print("Reading lines...") # Read the file and split into lines lines = open('data/%s-%s.txt' % (lang1, lang2), encoding='utf-8').\ read().strip().split('\n') # Split every line into pairs and normalize pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines] # Reverse pairs, make Lang instances if reverse: pairs = [list(reversed(p)) for p in pairs] input_lang = Lang(lang2) output_lang = Lang(lang1) else: input_lang = Lang(lang1) output_lang = Lang(lang2) return input_lang, output_lang, pairs
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
Exercise: Check creation of input and output sentence pairs
input_lang, output_lang, pairs = readLangs("eng", "fra", reverse=True)
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
Since there are a *lot* of example sentences and we want to train something quickly, we'll trim the data set to only relatively short and simple sentences. Here the maximum length is 10 words (that includes ending punctuation) and we're filtering to sentences that translate to the form "I am" or "He is" etc. (accounting for apostrophes replaced earlier).
MAX_LENGTH = 10 eng_prefixes = ( "i am ", "i m ", "he is", "he s ", "she is", "she s ", "you are", "you re ", "we are", "we re ", "they are", "they re " ) def filterPair(p): return len(p[0].split(' ')) < MAX_LENGTH and \ len(p[1].split(' ')) < MAX_LENGTH and \ p[1].startswith(eng_prefixes) def filterPairs(pairs): return [pair for pair in pairs if filterPair(pair)]
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
Exercise: Check results of filtering data
filterPairs(pairs[:100])
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
The full process for preparing the data is:

- Read text file and split into lines, split lines into pairs
- Normalize text, filter by length and content
- Make word lists from sentences in pairs
def prepareData(lang1, lang2, reverse=False): input_lang, output_lang, pairs = readLangs(lang1, lang2, reverse) print("Read %s sentence pairs" % len(pairs)) pairs = filterPairs(pairs) print("Trimmed to %s sentence pairs" % len(pairs)) print("Counting words...") for pair in pairs: input_lang.addSentence(pair[0]) output_lang.addSentence(pair[1]) print("Counted words:") print(input_lang.name, input_lang.n_words) print(output_lang.name, output_lang.n_words) return input_lang, output_lang, pairs input_lang, output_lang, pairs = prepareData('eng', 'fra', True) print(random.choice(pairs))
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
The Seq2Seq Model using RNN
===========================

A Recurrent Neural Network, or RNN, is a network that operates on a sequence and uses its own output as input for subsequent steps.

The Encoder
-----------

The encoder of a seq2seq network is an RNN that outputs some value for every word from the input sentence. For every input word the encoder outputs a vector and a hidden state, and uses the hidden state for the next input word.
class EncoderRNN(nn.Module): def __init__(self, input_size, hidden_size): super(EncoderRNN, self).__init__() self.hidden_size = hidden_size self.embedding = nn.Embedding(input_size, hidden_size) self.gru = nn.GRU(hidden_size, hidden_size) def forward(self, input, hidden): embedded = self.embedding(input).view(1, 1, -1) output = embedded output, hidden = self.gru(output, hidden) return output, hidden def initHidden(self): return torch.zeros(1, 1, self.hidden_size, device=device) #input_size: number of words in input # print (input_lang.n_words) #hidden_size: word embedding size = 256
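A quick shape check (not part of the original tutorial), assuming a toy vocabulary of 10 words, shows what one encoder step produces:

```python
# One encoder step on a dummy word index; both outputs have shape [1, 1, hidden_size].
enc = EncoderRNN(input_size=10, hidden_size=256).to(device)
h = enc.initHidden()
out, h = enc(torch.tensor([3], device=device), h)
print(out.shape, h.shape)  # torch.Size([1, 1, 256]) torch.Size([1, 1, 256])
```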
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
The Decoder
-----------

The decoder is another RNN that takes the encoder output vector(s) and outputs a sequence of words to create the translation.
class DecoderRNN(nn.Module): def __init__(self, hidden_size, output_size): super(DecoderRNN, self).__init__() self.hidden_size = hidden_size self.embedding = nn.Embedding(output_size, hidden_size) self.gru = nn.GRU(hidden_size, hidden_size) self.out = nn.Linear(hidden_size, output_size) self.softmax = nn.LogSoftmax(dim=1) def forward(self, input, hidden): output = self.embedding(input).view(1, 1, -1) output = F.relu(output) output, hidden = self.gru(output, hidden) output = self.softmax(self.out(output[0])) return output, hidden def initHidden(self): return torch.zeros(1, 1, self.hidden_size, device=device)
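Similarly, a quick check (not part of the original tutorial) with a toy output vocabulary of 10 words shows that one decoder step returns log-probabilities over that vocabulary:

```python
# One decoder step starting from the SOS token.
dec = DecoderRNN(hidden_size=256, output_size=10).to(device)
h = dec.initHidden()
log_probs, h = dec(torch.tensor([[SOS_token]], device=device), h)
print(log_probs.shape)  # torch.Size([1, 10]) -- log-softmax over the toy vocabulary
```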
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
Training
========

Preparing Training Data
-----------------------

To train, for each pair we will need an input tensor (indexes of the words in the input sentence) and target tensor (indexes of the words in the target sentence). While creating these vectors we will append the EOS token to both sequences.
def indexesFromSentence(lang, sentence): return [lang.word2index[word] for word in sentence.split(' ')] def tensorFromSentence(lang, sentence): indexes = indexesFromSentence(lang, sentence) indexes.append(EOS_token) return torch.tensor(indexes, dtype=torch.long, device=device).view(-1, 1) def tensorsFromPair(pair): input_tensor = tensorFromSentence(input_lang, pair[0]) target_tensor = tensorFromSentence(output_lang, pair[1]) return (input_tensor, target_tensor)
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
Exercise: Check input and target data
pair = pairs[1] pair tensors = tensorsFromPair(pair) input_tensor = tensors[0] target_tensor = tensors[1] input_tensor, target_tensor
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
Exercise: Check the forward pass of the network
encoder = EncoderRNN(input_size=input_lang.n_words, hidden_size=256).to(device) decoder = DecoderRNN (hidden_size=256, output_size=output_lang.n_words).to(device) learning_rate = 0.01 criterion = nn.NLLLoss() encoder_optimizer = optim.SGD(encoder.parameters(), lr=learning_rate) decoder_optimizer = optim.SGD(decoder.parameters(), lr=learning_rate) #Check one forward and backward pass encoder_hidden = encoder.initHidden() encoder_optimizer.zero_grad() decoder_optimizer.zero_grad() input_length = input_tensor.size(0) target_length = target_tensor.size(0) loss = 0 input_length, target_length for ei in range(input_length): encoder_output, encoder_hidden = encoder(input_tensor[ei], encoder_hidden) encoder_hidden decoder_input = torch.tensor([[SOS_token]], device=device) decoder_hidden = encoder_hidden for di in range(target_length): decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden) topv, topi = decoder_output.topk(1) decoder_input = topi.squeeze().detach() # detach from history as input loss += criterion(decoder_output, target_tensor[di]) if decoder_input.item() == EOS_token: break loss.backward() encoder_optimizer.step() decoder_optimizer.step() #One forward and backward pass encoder_hidden = encoder.initHidden() encoder_optimizer.zero_grad() decoder_optimizer.zero_grad() input_length = input_tensor.size(0) target_length = target_tensor.size(0) loss = 0 for ei in range(input_length): encoder_output, encoder_hidden = encoder(input_tensor[ei], encoder_hidden) decoder_input = torch.tensor([[SOS_token]], device=device) decoder_hidden = encoder_hidden for di in range(target_length): decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden) topv, topi = decoder_output.topk(1) decoder_input = topi.squeeze().detach() # detach from history as input loss += criterion(decoder_output, target_tensor[di]) if decoder_input.item() == EOS_token: break loss.backward() encoder_optimizer.step() decoder_optimizer.step()
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
Training the Model
------------------

To train we run the input sentence through the encoder, and keep track of every output and the latest hidden state. Then the decoder is given the ``<SOS>`` token as its first input, and the last hidden state of the encoder as its first hidden state.

"Teacher forcing" is the concept of using the real target outputs as each next input, instead of using the decoder's guess as the next input. Using teacher forcing causes it to converge faster, but when the trained network is exploited it may exhibit instability.

You can observe outputs of teacher-forced networks that read with coherent grammar but wander far from the correct translation - intuitively the network has learned to represent the output grammar and can "pick up" the meaning once the teacher tells it the first few words, but it has not properly learned how to create the sentence from the translation in the first place.

Because of the freedom PyTorch's autograd gives us, we can randomly choose to use teacher forcing or not with a simple if statement. Turn ``teacher_forcing_ratio`` up to use more of it.
teacher_forcing_ratio = 0.5 def train(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion, max_length=MAX_LENGTH): encoder_hidden = encoder.initHidden() encoder_optimizer.zero_grad() decoder_optimizer.zero_grad() input_length = input_tensor.size(0) target_length = target_tensor.size(0) encoder_outputs = torch.zeros(max_length, encoder.hidden_size, device=device) loss = 0 for ei in range(input_length): encoder_output, encoder_hidden = encoder( input_tensor[ei], encoder_hidden) encoder_outputs[ei] = encoder_output[0, 0] decoder_input = torch.tensor([[SOS_token]], device=device) decoder_hidden = encoder_hidden use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False if use_teacher_forcing: # Teacher forcing: Feed the target as the next input for di in range(target_length): decoder_output, decoder_hidden = decoder( decoder_input, decoder_hidden) loss += criterion(decoder_output, target_tensor[di]) decoder_input = target_tensor[di] # Teacher forcing else: # Without teacher forcing: use its own predictions as the next input for di in range(target_length): decoder_output, decoder_hidden = decoder( decoder_input, decoder_hidden) topv, topi = decoder_output.topk(1) decoder_input = topi.squeeze().detach() # detach from history as input loss += criterion(decoder_output, target_tensor[di]) if decoder_input.item() == EOS_token: break loss.backward() encoder_optimizer.step() decoder_optimizer.step() return loss.item() / target_length
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
This is a helper function to print time elapsed and estimated time remaining given the current time and progress %.
import time import math def asMinutes(s): m = math.floor(s / 60) s -= m * 60 return '%dm %ds' % (m, s) def timeSince(since, percent): now = time.time() s = now - since es = s / (percent) rs = es - s return '%s (- %s)' % (asMinutes(s), asMinutes(rs))
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
The whole training process looks like this:

- Start a timer
- Initialize optimizers and criterion
- Create set of training pairs
- Start empty losses array for plotting

Then we call ``train`` many times and occasionally print the progress (% of examples, time so far, estimated time) and average loss.
def trainIters(encoder, decoder, n_iters, print_every=1000, plot_every=100, learning_rate=0.01): start = time.time() plot_losses = [] print_loss_total = 0 # Reset every print_every plot_loss_total = 0 # Reset every plot_every encoder_optimizer = optim.SGD(encoder.parameters(), lr=learning_rate) decoder_optimizer = optim.SGD(decoder.parameters(), lr=learning_rate) training_pairs = [tensorsFromPair(random.choice(pairs)) for i in range(n_iters)] criterion = nn.NLLLoss() for iter in range(1, n_iters + 1): training_pair = training_pairs[iter - 1] input_tensor = training_pair[0] target_tensor = training_pair[1] loss = train(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion) print_loss_total += loss plot_loss_total += loss if iter % print_every == 0: print_loss_avg = print_loss_total / print_every print_loss_total = 0 print('%s (%d %d%%) %.4f' % (timeSince(start, iter / n_iters), iter, iter / n_iters * 100, print_loss_avg)) if iter % plot_every == 0: plot_loss_avg = plot_loss_total / plot_every plot_losses.append(plot_loss_avg) plot_loss_total = 0 showPlot(plot_losses)
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
Plotting results
----------------

Plotting is done with matplotlib, using the array of loss values ``plot_losses`` saved while training.
import matplotlib.pyplot as plt plt.switch_backend('agg') import matplotlib.ticker as ticker import numpy as np def showPlot(points): plt.figure() fig, ax = plt.subplots() # this locator puts ticks at regular intervals loc = ticker.MultipleLocator(base=0.2) ax.yaxis.set_major_locator(loc) plt.plot(points)
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
Evaluation
==========

Evaluation is mostly the same as training, but there are no targets, so we simply feed the decoder's predictions back to itself for each step. Every time it predicts a word we add it to the output string, and if it predicts the EOS token we stop there. (The attention-based version of this tutorial also stores the decoder's attention outputs for display; the simple ``DecoderRNN`` used here has no attention outputs to store.)
def evaluate(encoder, decoder, sentence, max_length=MAX_LENGTH): with torch.no_grad(): input_tensor = tensorFromSentence(input_lang, sentence) input_length = input_tensor.size()[0] encoder_hidden = encoder.initHidden() encoder_outputs = torch.zeros(max_length, encoder.hidden_size, device=device) for ei in range(input_length): encoder_output, encoder_hidden = encoder(input_tensor[ei], encoder_hidden) decoder_input = torch.tensor([[SOS_token]], device=device) # SOS decoder_hidden = encoder_hidden decoded_words = [] decoder_attentions = torch.zeros(max_length, max_length) for di in range(max_length): decoder_output, decoder_hidden = decoder( decoder_input, decoder_hidden) topv, topi = decoder_output.data.topk(1) if topi.item() == EOS_token: decoded_words.append('<EOS>') break else: decoded_words.append(output_lang.index2word[topi.item()]) decoder_input = topi.squeeze().detach() return decoded_words
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
We can evaluate random sentences from the training set and print out the input, target, and output to make some subjective quality judgements:
def evaluateRandomly(encoder, decoder, n=10): for i in range(n): pair = random.choice(pairs) print('>', pair[0]) print('=', pair[1]) output_words = evaluate(encoder, decoder, pair[0]) output_sentence = ' '.join(output_words) print('<', output_sentence) print('')
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
Training and Evaluating
=======================

With all these helper functions in place (it looks like extra work, but it makes it easier to run multiple experiments) we can actually initialize a network and start training.

Remember that the input sentences were heavily filtered. For this small dataset we can use relatively small networks of 256 hidden nodes and a single GRU layer. After about 40 minutes on a MacBook CPU we'll get some reasonable results...

Note: If you run this notebook you can train, interrupt the kernel, evaluate, and continue training later. Comment out the lines where the encoder and decoder are initialized and run ``trainIters`` again.
hidden_size = 256 encoder1 = EncoderRNN(input_lang.n_words, hidden_size).to(device) decoder1 = DecoderRNN(hidden_size, output_lang.n_words).to(device) trainIters(encoder1, decoder1, 75000, print_every=5000, plot_every=100) evaluateRandomly(encoder1, decoder1)
_____no_output_____
MIT
part2/lab1/seq2seq_translation_tutorial.ipynb
yasheshshroff/ODSC2021_NLP_PyTorch
Using Jupyter Notebook effectively
----------------------------------
len? print? numbers = [1,2,3] #numbers.<TAB> numbers.insert? numbers?
Type: list String form: [1, 2, 3] Length: 3 Docstring: Built-in mutable sequence. If no argument is given, the constructor creates a new empty list. The argument must be an iterable if specified.
Apache-2.0
Mar22/Statistics/.ipynb_checkpoints/jupyterhelp-checkpoint.ipynb
khajadatascienceR/DataScienceWithPython
Defining Functions and using them
---------------------------------
def square(number): """Returns the Square of a number""" return number ** 2 square? square?? print?? #numbers.c<TAB> Try this # tab completion while importing #from itertools import co<TAB> #from numpy imp *Warning? str.*find*?
str.find str.rfind
Apache-2.0
Mar22/Statistics/.ipynb_checkpoints/jupyterhelp-checkpoint.ipynb
khajadatascienceR/DataScienceWithPython
Magic Commands
--------------
%timeit test_list = [ n**3 for n in range(100)] %%timeit my_list = [] for number in range(1000): my_list.append(number**3)
20.5 ms ± 10.3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Apache-2.0
Mar22/Statistics/.ipynb_checkpoints/jupyterhelp-checkpoint.ipynb
khajadatascienceR/DataScienceWithPython
Previous Outputs and Underscore shortcuts
-----------------------------------------
10 + 5 print(_) 10 * 5 10 ** 5 print(_) print(__) print(___) Out[31] # going by cell number !dir # execute native commands %%time my_list = [] for number in range(1000): my_list.append(number**3) def sum_of_lists(max_limit): total = 0 for number in range(max_limit): total += number ** 3 return total %prun sum_of_lists(100000)
4 function calls in 0.048 seconds Ordered by: internal time ncalls tottime percall cumtime percall filename:lineno(function) 1 0.048 0.048 0.048 0.048 947818732.py:1(sum_of_lists) 1 0.000 0.000 0.048 0.048 {built-in method builtins.exec} 1 0.000 0.000 0.048 0.048 <string>:1(<module>) 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
Apache-2.0
Mar22/Statistics/.ipynb_checkpoints/jupyterhelp-checkpoint.ipynb
khajadatascienceR/DataScienceWithPython
Plots
import plotly.io as pio pio.renderers a3_1fim = pd.read_csv('/content/drive/My Drive/Nicolas/2021_04 - Dados processados/A3_N_2019_12_04tAm2.5.csv') a3_2fim = pd.read_csv('/content/drive/My Drive/Nicolas/2021_04 - Dados processados/A3_A_2019_12_09tAm11.8.csv') a3_3fim = pd.read_csv('/content/drive/My Drive/Nicolas/2021_04 - Dados processados/A3_A_2019_12_11tAm2.5.csv') a4_1fim = pd.read_csv('/content/drive/My Drive/Nicolas/2021_04 - Dados processados/A4_N_2019_12_16tAm2.1.csv') a4_2fim = pd.read_csv('/content/drive/My Drive/Nicolas/2021_04 - Dados processados/A4_A_2019_12_19tAm6.csv') a4_3fim = pd.read_csv('/content/drive/My Drive/Nicolas/2021_04 - Dados processados/A4_A_2020_01_06tAm2.5.csv') a4_4fim = pd.read_csv('/content/drive/My Drive/Nicolas/2021_04 - Dados processados/A4_A_2020_01_13tAm2.5.csv') a5_1fim = pd.read_csv('/content/drive/My Drive/Nicolas/2021_04 - Dados processados/A5_N_2020_01_22tAm2.5.csv') a5_2fim = pd.read_csv('/content/drive/My Drive/Nicolas/2021_04 - Dados processados/A5_A_2020_01_27tAm12.5.csv') a5_3fim = pd.read_csv('/content/drive/My Drive/Nicolas/2021_04 - Dados processados/A5_A_2020_01_28tAm2.5.csv') def plota_rms(dado, nomedado): fig = go.Figure() fig.add_trace(go.Scatter( y = dado['lowpassRMS'], line = dict(shape = 'spline' ), name = 'filtered RMS lowpass' )) fig.add_trace(go.Scatter( y = dado['highpassRMS'], line = dict(shape = 'spline' ), name = 'filtered RMS highpass' )) fig.add_trace(go.Scatter( y = dado['bandpassRMS'], line = dict(shape = 'spline' ), name = 'filtered RMS bandpass' )) fig.update_layout( title={ 'text': nomedado + 'RMS', 'y':0.9, 'x':0.5, 'xanchor': 'center', 'yanchor': 'top'}, autosize=False, width=900, height=500) fig.show(renderer="notebook") def plota_curtose(dado, nomedado): fig = go.Figure() fig.add_trace(go.Scatter( y = dado['lowpassCurtose'], line = dict(shape = 'spline' ), name = 'filtered Curtose lowpass' )) fig.add_trace(go.Scatter( y = dado['highpassCurtose'], line = dict(shape = 'spline' ), name = 'filtered Curtose highpass', )) fig.add_trace(go.Scatter( y = dado['bandpassCurtose'], line = dict(shape = 'spline' ), name = 'filtered Curtose bandpass', )) fig.update_layout( title={ 'text': nomedado + 'Curtose', 'y':0.9, 'x':0.5, 'xanchor': 'center', 'yanchor': 'top'}, autosize=False, width=900, height=500 ) fig.show(renderer="notebook") plota_rms(a3_1fim, 'a3_N_1') plota_curtose(a3_1fim, 'a3_N_1') plota_rms(a3_2fim, 'a3_A_2') plota_curtose(a3_2fim, 'a3_A_2') plota_rms(a3_3fim, 'a3_A_3') plota_curtose(a3_2fim, 'a3_A_3') plota_rms(a4_1fim, 'a4_N_1') plota_curtose(a4_1fim, 'a4_N_1') plota_rms(a4_2fim, 'a4_A_2') plota_curtose(a4_2fim, 'a4_A_2') plota_rms(a4_3fim, 'a4_A_3') plota_curtose(a4_3fim, 'a4_A_3') plota_rms(a4_4fim, 'a4_A_4') plota_curtose(a4_4fim, 'a4_A_4') plota_rms(a5_1fim, 'a5_N_1') plota_curtose(a5_1fim, 'a5_N_1') plota_rms(a5_2fim, 'a5_A_2') plota_curtose(a5_2fim, 'a5_A_2') plota_rms(a5_3fim, 'a5_A_3') plota_curtose(a5_3fim, 'a5_A_3')
_____no_output_____
MIT
notebooks/Filtros_vibracao.ipynb
nicolasantero/compressor-breakin-kmeans-clustering
Understanding the data

Each file holds 1 s of data; there are 59 s between files (a 59 s gap that is not measured), and then the next file begins.

Data columns
1 - lower shell (calota inferior)
2 - dummy, bench (bancada)
3 - upper shell (calota superior)

Parameters
25.6 kHz
Fs = 25.6*10^3
dt = 1/Fs
t = (1:length(V1))*dt

Type of analysis performed
Lowpass at 1 kHz, a bandpass from 1 kHz to 10 kHz, and a highpass at 10 kHz.

Kurtosis: $$b_2 = \frac{1}{n} \sum_{i=1}^{n} \left[ \frac{x_i - \bar{x}}{s} \right]^4 - 3$$
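As a quick standalone check (not part of the original notebook), the excess-kurtosis formula above, with s taken as the biased standard deviation, matches ``scipy.stats.kurtosis`` used later in ``curtose``:

```python
import numpy as np
import scipy.stats

x = np.random.randn(25600)                        # one second of data at Fs = 25.6 kHz
b2 = np.mean(((x - x.mean()) / x.std())**4) - 3   # 1/n * sum[((xi - xbar)/s)^4] - 3
print(b2, scipy.stats.kurtosis(x))                # the two values agree
```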
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
MIT
notebooks/Filtros_vibracao.ipynb
nicolasantero/compressor-breakin-kmeans-clustering
WORKING

Extracting the RMS from the non-broken-in (N) test of sample 3

Library imports
from io import BytesIO import zipfile # import rarfile import pandas as pd import urllib.request import numpy as np import seaborn as sns import matplotlib.pyplot as plt import re from google.colab import files import csv from scipy.signal import butter, lfilter import scipy.stats import plotly.graph_objects as go from google.colab import files import plotly.offline
_____no_output_____
MIT
notebooks/Filtros_vibracao.ipynb
nicolasantero/compressor-breakin-kmeans-clustering
Data
a3_1 = pd.read_csv('/content/drive/My Drive/Nicolas/2021_03 - Dados processados/A3_N_2019_12_04tAm2.5.csv') a3_2 = pd.read_csv('/content/drive/My Drive/Nicolas/2021_03 - Dados processados/A3_A_2019_12_09tAm11.8.csv') a3_3 = pd.read_csv('/content/drive/My Drive/Nicolas/2021_03 - Dados processados/A3_A_2019_12_11tAm2.5.csv') a4_1 = pd.read_csv('/content/drive/My Drive/Nicolas/2021_03 - Dados processados/A4_N_2019_12_16tAm2.1.csv') a4_2 = pd.read_csv('/content/drive/My Drive/Nicolas/2021_03 - Dados processados/A4_A_2019_12_19tAm6.csv') a4_3 = pd.read_csv('/content/drive/My Drive/Nicolas/2021_03 - Dados processados/A4_A_2020_01_06tAm2.5.csv') a4_4 = pd.read_csv('/content/drive/My Drive/Nicolas/2021_03 - Dados processados/A4_A_2020_01_13tAm2.5.csv') a5_1 = pd.read_csv('/content/drive/My Drive/Nicolas/2021_03 - Dados processados/A5_N_2020_01_22tAm2.5.csv') a5_2 = pd.read_csv('/content/drive/My Drive/Nicolas/2021_03 - Dados processados/A5_A_2020_01_27tAm12.5.csv') a5_3 = pd.read_csv('/content/drive/My Drive/Nicolas/2021_03 - Dados processados/A5_A_2020_01_28tAm2.5.csv')
_____no_output_____
MIT
notebooks/Filtros_vibracao.ipynb
nicolasantero/compressor-breakin-kmeans-clustering
Constants
cutoff_low=1000 cutoff_band=[1000,10000] cutoff_high=10000 order = 5 fs=25600 time=1 sampling_rate = fs/time
_____no_output_____
MIT
notebooks/Filtros_vibracao.ipynb
nicolasantero/compressor-breakin-kmeans-clustering
Functions
def filtro_lowpass(data, order, cutoff, fs):
    # Butterworth low-pass filter; cutoff in Hz, normalized by the Nyquist frequency
    nyquist = fs*0.5
    wn_low = cutoff/nyquist
    b_low, a_low = butter(order, wn_low, btype='lowpass')
    filtered_sig_low = lfilter(b_low, a_low, data.values)
    return filtered_sig_low

def filtro_highpass(data, order, cutoff, fs):
    # Butterworth high-pass filter
    nyquist = fs*0.5
    wn_high = cutoff/nyquist
    b_high, a_high = butter(order, wn_high, btype='highpass')
    filtered_sig_high = lfilter(b_high, a_high, data.values)
    return filtered_sig_high

def filtro_bandpass(data, order, cutoff, fs):
    # Butterworth band-pass filter; cutoff is a [low, high] pair in Hz
    nyquist = fs*0.5
    wn_band = []
    wn_band.append(cutoff[0]/nyquist)
    wn_band.append(cutoff[1]/nyquist)
    b_band, a_band = butter(order, wn_band, btype='bandpass')
    filtered_sig_band = lfilter(b_band, a_band, data.values)
    return filtered_sig_band

def cria_tempo(data, fs):
    # Builds the time vector t = (1:len(data))*dt with dt = 1/fs, as described above
    t = []
    dt = 1/fs
    k = 1
    for i in range(len(data)):
        t.append(k*dt)
        k = k+1
    return t

def rms(data_column):
    # Root mean square of one column
    x = data_column.apply(lambda x: x*x)
    y = np.sqrt(sum(x)/len(data_column))
    return y

def curtose(data_column):
    # Excess kurtosis of one column
    curtose = scipy.stats.kurtosis(data_column)
    return curtose

# ensaioprocessado is the previously processed data, to be concatenated with the new processing results
def aplica_filtro(pastazip, pastaarquivo, ensaioprocessado, tipo):
    zip_ref = zipfile.ZipFile(pastazip, 'r')
    df = []
    text_files = zip_ref.infolist()
    text = []
    for i in text_files:
        if i.filename.startswith(f"{pastaarquivo}vibTempo"):
            text.append(i.filename)

    if tipo == 2:
        k=1
    else:
        k=0

    t_rms = []
    t_rms = pd.DataFrame(columns=(['lowpassRMS', 'highpassRMS', 'bandpassRMS', 'lowpassCurtose', 'highpassCurtose', 'bandpassCurtose']))
    df_fim = []
    for text_file in text[1: len(text) - k]:
        df = []
        row_rms = []
        df = pd.read_csv(zip_ref.open(text_file), sep='\t', header=None)
        df['lowpass'] = filtro_lowpass(df[0],order,cutoff_low, fs)
        df['highpass'] = filtro_highpass(df[0],order,cutoff_high, fs)
        df['bandpass'] = filtro_bandpass(df[0],order,cutoff_band, fs)
        new_row = {'lowpassRMS':rms(df['lowpass']), 'highpassRMS':rms(df['highpass']), 'bandpassRMS':rms(df['bandpass']), 'lowpassCurtose':curtose(df['lowpass']), 'highpassCurtose':curtose(df['highpass']), 'bandpassCurtose':curtose(df['bandpass'])}
        t_rms = t_rms.append(new_row, ignore_index=True)

    df_fim = pd.concat((ensaioprocessado,t_rms), axis=1)
    return df_fim
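For orientation (not in the original notebook), this is roughly how the helpers above are combined for a single one-second record inside ``aplica_filtro``; the synthetic signal here is only a stand-in for one vibration file:

```python
sig = pd.Series(np.random.randn(fs))              # stand-in for 1 s of vibration data at fs = 25600 Hz
low  = pd.Series(filtro_lowpass(sig, order, cutoff_low, fs))
high = pd.Series(filtro_highpass(sig, order, cutoff_high, fs))
band = pd.Series(filtro_bandpass(sig, order, cutoff_band, fs))
print(rms(low), rms(high), rms(band))             # one RMS value per band, as stored per file
print(curtose(low), curtose(high), curtose(band))
```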
_____no_output_____
MIT
notebooks/Filtros_vibracao.ipynb
nicolasantero/compressor-breakin-kmeans-clustering
zip and .dat import and read
lista_ensaios = [a3_1, a3_2, a3_3] a3pastazip = '/content/drive/My Drive/Nicolas/Amostra_A3.zip' pastaarquivo_a3 = ['Amostra A3/N_2019_12_04/vibracao/', 'Amostra A3/A_2019_12_09/vibracao/', 'Amostra A3/A_2019_12_11/vibracao/'] a3_1fim = aplica_filtro(a3pastazip, pastaarquivo_a3[0], lista_ensaios[0], 1) a3_2fim = aplica_filtro(a3pastazip, pastaarquivo_a3[1], lista_ensaios[1], 1) a3_3fim = aplica_filtro(a3pastazip, pastaarquivo_a3[2], lista_ensaios[2], 1) lista_ensaios_a4 = [a4_1, a4_2, a4_3, a4_4] a4pastazip = '/content/drive/My Drive/Nicolas/Amostra_A4.zip' pastaarquivo_a4 = ['Amostra A4/N_2019_12_16/vibracao/', 'Amostra A4/A_2019_12_19/vibracao/', 'Amostra A4/A_2020_01_06/vibracao/', 'Amostra A4/A_2020_01_13/vibracao/' ] a4_1fim = aplica_filtro(a4pastazip, pastaarquivo_a4[0], lista_ensaios_a4[0], 2) a4_2fim = aplica_filtro(a4pastazip, pastaarquivo_a4[1], lista_ensaios_a4[1], 1) a4_3fim = aplica_filtro(a4pastazip, pastaarquivo_a4[2], lista_ensaios_a4[2], 2) a4_4fim = aplica_filtro(a4pastazip, pastaarquivo_a4[3], lista_ensaios_a4[3], 1) lista_ensaios_a5 = [a5_1, a5_2, a5_3] a5pastazip = '/content/drive/My Drive/Nicolas/Amostra_A5.zip' pastaarquivo_a5 = ['Amostra A5/N_2020_01_22/vibracao/', 'Amostra A5/A_2020_01_27/vibracao/', 'Amostra A5/A_2020_01_28/vibracao/'] a5_1fim = aplica_filtro(a5pastazip, pastaarquivo_a5[0], lista_ensaios_a5[0], 1) a5_2fim = aplica_filtro(a5pastazip, pastaarquivo_a5[1], lista_ensaios_a5[1], 1) a5_3fim = aplica_filtro(a5pastazip, pastaarquivo_a5[2], lista_ensaios_a5[2], 1) # a3_1fim.to_csv('A3_N_2019_12_04tAm2.5.csv') # a3_2fim.to_csv('A3_A_2019_12_09tAm11.8.csv') # a3_3fim.to_csv('A3_A_2019_12_11tAm2.5.csv') # a4_1fim.to_csv('A4_N_2019_12_16tAm2.1.csv') # a4_2fim.to_csv('A4_A_2019_12_19tAm6.csv') # a4_3fim.to_csv('A4_A_2020_01_06tAm2.5.csv') # a4_4fim.to_csv('A4_A_2020_01_13tAm2.5.csv') # a5_1fim.to_csv('A5_N_2020_01_22tAm2.5.csv') # a5_2fim.to_csv('A5_A_2020_01_27tAm12.5.csv') # a5_3fim.to_csv('A5_A_2020_01_28tAm2.5.csv') # files.download('A3_N_2019_12_04tAm2.5.csv') # files.download('A3_A_2019_12_09tAm11.8.csv') # files.download('A3_A_2019_12_11tAm2.5.csv') # files.download('A4_N_2019_12_16tAm2.1.csv') # files.download('A4_A_2019_12_19tAm6.csv') # files.download('A4_A_2020_01_06tAm2.5.csv') # files.download('A4_A_2020_01_13tAm2.5.csv') # files.download('A5_N_2020_01_22tAm2.5.csv') # files.download('A5_A_2020_01_27tAm12.5.csv') # files.download('A5_A_2020_01_28tAm2.5.csv')
_____no_output_____
MIT
notebooks/Filtros_vibracao.ipynb
nicolasantero/compressor-breakin-kmeans-clustering
This is an interactive tutorial designed to walk through regularization for a linear-Gaussian GLM, which allows for closed-form MAP parameter estimates. The next tutorial ('tutorial4') will cover the same methods for the Poisson GLM (which requires numerical optimization).

We'll consider two simple regularization methods:

1. Ridge regression - corresponds to maximum a posteriori (MAP) estimation under an iid Gaussian prior on the filter coefficients.
2. L2 smoothing prior - corresponds to an iid Gaussian prior on the pairwise differences of the filter(s).

Data: from Uzzell & Chichilnisky 2004; see README file for details.

Last updated: Mar 10, 2020 (JW Pillow)

Tutorial instructions: Execute each section below separately using cmd-enter. For detailed suggestions on how to interact with this tutorial, see header material in tutorial1_PoissonGLM.m

Transferred into Python by Xiaodong LI
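As a reminder (not part of the original header), for the linear-Gaussian model the ridge/MAP estimate has the familiar closed form

$$\hat{w}_{\text{ridge}} = \left(X^\top X + \lambda I\right)^{-1} X^\top Y,$$

where $\lambda$ is the ratio of the noise variance to the prior variance of the weights; $\lambda \to 0$ recovers the ordinary least-squares solution.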
import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.io import loadmat from scipy.optimize import minimize from scipy.linalg import hankel,pinv,block_diag from scipy.interpolate import interp1d from interpolation import interp from numpy.linalg import inv,norm,lstsq from matplotlib import mlab aa=np.asarray def neglogli_poissGLM(prs,XX,YY,dtbin): """ Compute negative log-likelihood of data undr Poisson GLM model with exponential nonlinearity Inputs: prs [d x 1] - parameter vector XX [T x d] - design matrix YY [T x 1] - response (spike count per time bin) dtbin [1 x 1] - time bin size used Outputs: neglogli = negative log likelihood of spike train dL [d x 1] = gradient H [d x d] = Hessian (second deriv matrix) """ # Compute GLM filter output and condititional intensity vv = XX@prs # filter output rr = np.exp(vv)*dtbin # conditional intensity (per bin) # --------- Compute log-likelihood ----------- Trm1 = -vv.T@YY # spike term from Poisson log-likelihood Trm0 = np.sum(rr) # non-spike term neglogli = Trm1 + Trm0 return neglogli def jac_neglogli_poissGLM(prs,XX,YY,dtbin): # Compute GLM filter output and condititional intensity vv = XX@prs # filter output rr = np.exp(vv)*dtbin # conditional intensity (per bin) # --------- Compute Gradient ----------------- dL1 = -XX.T@YY # spiking term (the spike-triggered average) dL0 = XX.T@rr # non-spiking term dL = dL1+dL0 return dL def hess_neglogli_poissGLM(prs,XX,YY,dtbin): # Compute GLM filter output and condititional intensity vv = XX@prs # filter output rr = np.exp(vv)*dtbin # conditional intensity (per bin) # --------- Compute Hessian ------------------- H = [email protected](XX,rr.reshape(-1,1)) # non-spiking term return H def neglogposterior(prs,negloglifun,Cinv): """ Compute negative log-posterior given a negative log-likelihood function and zero-mean Gaussian prior with inverse covariance 'Cinv'. Inputs: prs [d x 1] - parameter vector negloglifun - handle for negative log-likelihood function Cinv [d x d] - response (spike count per time bin) Outputs: negLP - negative log posterior grad [d x 1] - gradient H [d x d] - Hessian (second deriv matrix) Compute negative log-posterior by adding quadratic penalty to log-likelihood """ # evaluate function and gradient negLP= negloglifun(prs) negLP += .5*prs.T@Cinv@prs return negLP def jac_neglogposterior(prs,jac_negloglifun,Cinv): grad=jac_negloglifun(prs) grad += Cinv@prs return grad def hess_neglogposterior(prs,hess_negloglifun,Cinv): H=hess_negloglifun(prs) H += Cinv return H
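For reference (my summary, not text from the original tutorial), ``neglogli_poissGLM`` above implements the negative log-likelihood of a Poisson GLM with exponential nonlinearity; up to a constant in the spike counts,

$$-\log p(Y \mid X, w) = -Y^\top X w \;+\; \Delta \sum_t e^{(Xw)_t} \;+\; \text{const},$$

where $\Delta$ is the bin size ``dtbin``. The companion functions return the gradient $-X^\top Y + X^\top r$ and the Hessian $X^\top \mathrm{diag}(r)\, X$, with $r = \Delta\, e^{Xw}$ the conditional intensity per bin.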
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Load the raw data

Be sure to unzip the data file data_RGCs.zip (http://pillowlab.princeton.edu/data/data_RGCs.zip) and place it in this directory before running the tutorial. Or substitute your own dataset here instead! (Data from Uzzell & Chichilnisky 2004):
datadir='../data_RGCs/' # directory where stimulus lives Stim=loadmat(datadir+'Stim.mat')['Stim'].flatten() # stimulus (temporal binary white noise) stimtimes=loadmat(datadir+'stimtimes.mat')['stimtimes'].flatten() # stim frame times in seconds (if desired) SpTimes=loadmat(datadir+'SpTimes.mat')['SpTimes'][0,:] # load spike times (in units of stim frames) ncells=len(SpTimes) # number of neurons (4 for this dataset). # Neurons #0-1 are OFF, #2-3 are ON.
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Pick a cell to work with
cellnum = 2 # (0-1 are OFF cells; 2-3 are ON cells). tsp = SpTimes[cellnum];
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Compute some basic statistics on the stimulus
dtStim = stimtimes[1]-stimtimes[0] # time bin size for stimulus (s) # See tutorial 1 for some code to visualize the raw data!
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Upsample to get finer timescale representation of stim and spikes

The need to regularize GLM parameter estimates is acute when we don't have enough data relative to the number of parameters we're trying to estimate, or when using correlated (e.g. naturalistic) stimuli, since the stimuli don't have enough power at all frequencies to estimate all frequency components of the filter.

The RGC dataset we've looked at so far requires only a temporal filter (as opposed to a spatio-temporal filter for full spatiotemporal movie stimuli), so it doesn't have that many parameters to estimate. It also has binary white noise stimuli, which have equal energy at all frequencies. Regularization thus isn't an especially big deal for this data (which was part of our reason for selecting it). However, we can make it look correlated by considering it on a finer timescale than the frame rate of the monitor. (Indeed, this will make it look highly correlated.)

For speed of our code and to illustrate the advantages of regularization, let's use only a reduced (1-minute) portion of the dataset:
nT=120*60*1 # # of time bins for 1 minute of data Stim=Stim[:nT] # pare down stimulus tsp=tsp[tsp<nT*dtStim] # pare down spikes
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Now upsample to finer temporal grid
upsampfactor = 5 # divide each time bin by this factor
dtStimhi = dtStim/upsampfactor # use time bins upsampfactor times finer
ttgridhi = np.arange(dtStimhi/2,nT*dtStim+dtStimhi,dtStimhi) # fine time grid for upsampled stim
Stimhi = interp1d(np.arange(1,nT+1)*dtStim,Stim,kind='nearest',fill_value='extrapolate')(ttgridhi)
nThi = nT*upsampfactor # length of upsampled stimulus
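A quick inspection (not in the original tutorial) confirms what the 'nearest' interpolation does: each fine bin inherits the value of the nearest coarse stimulus frame, so neighboring fine bins are strongly correlated.

```python
print(Stim[:3])                  # first few coarse stimulus frames
print(Stimhi[:3*upsampfactor])   # corresponding fine bins: the same values, each repeated several times
```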
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial