Dataset columns: markdown, code, output, license, path, repo_name
Random Forest Regression
from sklearn.ensemble import RandomForestRegressor rf = RandomForestRegressor() rf.fit(X_train, y_train) # prediction pred_rf = rf.predict(X_test) mae_rf = mean_absolute_error(y_test,pred_rf) r2_rf = r2_score(y_test,pred_rf) print(f'Mean absolute error of Random forest regression is {mae_rf}') print(f'R2 score of Random forest regressor is {r2_rf}') fig, ax = plt.subplots() plt.title('Linear relationship for random forest regressor') ax.scatter(pred_rf, y_test) ax.plot([y_test.min(), y_test.max()],[y_test.min(), y_test.max()], color = 'red', marker = "*", markersize = 10)
_____no_output_____
MIT
Cement_prediction_.ipynb
mouctarbalde/concrete-strength-prediction
Lasso Regression
laso = Lasso() laso.fit(X_train, y_train) pred_laso = laso.predict(X_test) mae_laso = mean_absolute_error(y_test, pred_laso) r2_laso = r2_score(y_test, pred_laso) print(f'Mean absolute error of Lasso regression is {mae_laso}') print(f'R2 score of Lasso regressor is {r2_laso}') fig, ax = plt.subplots() plt.title('Linear relationship for Lasso regressor') ax.scatter(pred_laso, y_test) ax.plot([y_test.min(), y_test.max()],[y_test.min(), y_test.max()], color = 'red', marker = "*", markersize = 10) gb = GradientBoostingRegressor() gb.fit(X_train, y_train) pred_gb = gb.predict(X_test) mae_gb = mean_absolute_error(y_test, pred_gb) r2_gb = r2_score(y_test, pred_gb) print(f'Mean absolute error of Gradient Boosting regression is {mae_gb}') print(f'R2 score of Gradient Boosting regressor is {r2_gb}') fig, ax = plt.subplots() plt.title('Linear relationship for Gradient Boosting regressor') ax.scatter(pred_gb, y_test) ax.plot([y_test.min(), y_test.max()],[y_test.min(), y_test.max()], color = 'red', marker = "*", markersize = 10)
_____no_output_____
MIT
Cement_prediction_.ipynb
mouctarbalde/concrete-strength-prediction
Stacking Regressor: combining multiple regression models and choosing a final estimator. In our case we used k-fold cross-validation to make sure that the model is not overfitting.
estimators = [('lr',LinearRegression()), ('gb',GradientBoostingRegressor()),\ ('dt',DecisionTreeRegressor()), ('laso',Lasso())] from sklearn.model_selection import KFold kf = KFold(n_splits=10,shuffle=True, random_state=seed) stacking = StackingRegressor(estimators=estimators, final_estimator=RandomForestRegressor(random_state=seed), cv=kf) stacking.fit(X_train, y_train) pred_stack = stacking.predict(X_test) mae_stack = mean_absolute_error(y_test, pred_stack) r2_stack = r2_score(y_test, pred_stack) print(f'Mean absolute error of Stacking regressor is {mae_stack}') print(f'R2 score of Stacking regressor is {r2_stack}') fig, ax = plt.subplots() plt.title('Linear relationship for Stacking regressor') ax.scatter(pred_stack, y_test) ax.plot([y_test.min(), y_test.max()],[y_test.min(), y_test.max()], color = 'red', marker = "*", markersize = 10) result = pd.DataFrame({'Model':['Linear Regression','Decision tree','Random Forest', 'Lasso',\ 'Gradient Boosting Regressor', 'Stacking Regressor'], 'MAE':[mae_lr, mae_dt, mae_rf, mae_laso, mae_gb, mae_stack], 'R2 score':[r2_lr, r2_dt, r2_rf, r2_laso, r2_gb, r2_stack] }) result
_____no_output_____
MIT
Cement_prediction_.ipynb
mouctarbalde/concrete-strength-prediction
Working with XSPEC modelsOne of the most powerful aspects of **XSPEC** is its huge modeling community. While in 3ML we are focused on building a powerful and modular data analysis tool, we cannot neglect the need for the many models that already exist in **XSPEC**, and thus we provide support for them via **astromodels** directly in 3ML. For details on installing **astromodels** with **XSPEC** support, visit the 3ML or **astromodels** installation page. Let's explore how we can use **XSPEC** spectral models in 3ML.
%matplotlib notebook import matplotlib.pyplot as plt import numpy as np
_____no_output_____
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
We do not load the models by default as this takes some time and 3ML should load quickly. However, if you need the **XSPEC** models, they are imported from astromodels like this:
from astromodels.xspec.factory import *
Loading xspec models...done
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
The models are indexed with *XS_* before the typical **XSPEC** model names.
plaw = XS_powerlaw() phabs = XS_phabs() phabs
_____no_output_____
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
The spectral models behave just as any other **astromodels** spectral model and can be used in combination with other **astromodels** spectral models.
from astromodels import Powerlaw am_plaw = Powerlaw() plaw_with_abs = am_plaw*phabs fig, ax =plt.subplots() energy_grid = np.linspace(.1,10.,1000) ax.loglog(energy_grid,plaw_with_abs(energy_grid)) ax.set_xlabel('energy') ax.set_ylabel('flux')
_____no_output_____
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
XSPEC SettingsMany **XSPEC** models depend on external abundances, cross-sections, and cosmological parameters. We provide an interface to control these directly.Simply import the **XSPEC** settings like so:
from astromodels.xspec.xspec_settings import *
_____no_output_____
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
Calling the functions without arguments simply returns their current settings
xspec_abund() xspec_xsect() xspec_cosmo()
_____no_output_____
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
To change the settings for abundance and cross-section, provide strings with the normal **XSPEC** naming conventions.
xspec_abund('wilm') xspec_abund() xspec_xsect('bcmc') xspec_xsect()
_____no_output_____
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
To alter the cosmological parameters, one passes either the parameters that should be changed, or all three:
xspec_cosmo(H0=68.) xspec_cosmo() xspec_cosmo(H0=68.,q0=.1,lambda_0=70.) xspec_cosmo()
_____no_output_____
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
The Extended Kalman Filter: Building on the theory of the linear Kalman filter, we now apply the Kalman filter to nonlinear problems. The extended Kalman filter (EKF) allows the prediction and measurement models to be nonlinear: it linearizes the system about the current estimate and then applies the linear Kalman filter equations. Better-performing algorithms for nonlinear problems exist (UKF, H_infinity), but the EKF is still widely used and therefore remains highly relevant.
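The notebook later relies on FilterPy for the actual filtering; purely as a conceptual sketch of the idea just described (and not FilterPy's API), one EKF predict/update cycle can be written as follows, where the nonlinear models `f`, `h` and their Jacobian functions `F_jac`, `H_jac` are assumed to be supplied by the caller:

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    # Predict: propagate the state with the nonlinear model,
    # but propagate the covariance with the local linearization F.
    F = F_jac(x, u)
    x_bar = f(x, u)
    P_bar = F @ P @ F.T + Q

    # Update: compare the measurement with the nonlinear prediction h(x_bar),
    # using the local linearization H for the covariance algebra.
    H = H_jac(x_bar)
    y = z - h(x_bar)                                   # innovation (residual)
    K = P_bar @ H.T @ np.linalg.inv(H @ P_bar @ H.T + R)
    x_new = x_bar + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_bar
    return x_new, P_new
```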
%matplotlib inline # HTML(""" # <style> # .output_png { # display: table-cell; # text-align: center; # vertical-align: middle; # } # </style> # """)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/Extended Kalman Filter-checkpoint.ipynb
ai-robotics-kr/sensor_fusion_study
Linearizing the Kalman Filter. Non-linear models: The Kalman filter assumes that the system is linear, so it cannot be applied directly to nonlinear problems. Nonlinearity can arise from two sources: a nonlinear process model and a nonlinear measurement model. For example, a falling object has a nonlinear process model because its acceleration depends on air drag, which is proportional to the square of its velocity, and a radar that measures the range and bearing of a target has a nonlinear measurement model because the target position is computed with trigonometric functions, which are nonlinear. The standard Kalman filter equations cannot be applied to nonlinear problems because feeding a Gaussian distribution into a nonlinear function produces a distribution that is no longer Gaussian, as shown below.
import numpy as np import scipy.stats as stats import matplotlib.pyplot as plt mu, sigma = 0, 0.1 x = np.linspace(mu - 3*sigma, mu + 3*sigma, 10000) gaussian = stats.norm.pdf(x, mu, sigma) def nonlinearFunction(x): return np.sin(x) def linearFunction(x): return 0.5*x nonlinearOutput = nonlinearFunction(gaussian) linearOutput = linearFunction(gaussian) # print(x) plt.plot(x, gaussian, label = 'Gaussian Input') plt.plot(x, linearOutput, label = 'Linear Output') plt.plot(x, nonlinearOutput, label = 'Nonlinear Output') plt.grid(linestyle='dotted', linewidth=0.8) plt.legend() plt.show()
_____no_output_____
Apache-2.0
.ipynb_checkpoints/Extended Kalman Filter-checkpoint.ipynb
ai-robotics-kr/sensor_fusion_study
System Equations: For the linear Kalman filter, the process and measurement models can be written as $$\begin{aligned}\dot{\mathbf x} &= \mathbf{Ax} + w_x\\\mathbf z &= \mathbf{Hx} + w_z\end{aligned}$$ where $\mathbf A$ is the dynamic matrix that describes the dynamics of the system (in continuous time). Discretizing the equations above gives $$\begin{aligned}\bar{\mathbf x}_k &= \mathbf{F} \mathbf{x}_{k-1} \\\bar{\mathbf z} &= \mathbf{H} \mathbf{x}_{k-1}\end{aligned}$$ where $\mathbf F$ is the state transition matrix that propagates $\mathbf x_{k-1}$ to $\mathbf x_{k}$ over the discrete time step $\Delta t$, and the noise terms $w_x$ and $w_z$ are captured by the process noise covariance matrix $\mathbf Q$ and the measurement noise covariance matrix $\mathbf R$, respectively. In a nonlinear system, the linear terms $\mathbf F \mathbf x + \mathbf B \mathbf u$ and $\mathbf H \mathbf x$ are replaced by the functions $f(\mathbf x, \mathbf u)$ and $h(\mathbf x)$: $$\begin{aligned}\dot{\mathbf x} &= f(\mathbf x, \mathbf u) + w_x\\\mathbf z &= h(\mathbf x) + w_z\end{aligned}$$
Linearisation: Linearization means finding, at a single point, the line (linear system) that best approximates the nonlinear function there. Many linearization methods exist, but a first-order Taylor series (the $c_0$ and $c_1 x$ terms) is most commonly used. $$f(x) = \sum_{k=0}^\infty c_k x^k = c_0 + c_1 x + c_2 x^2 + \dotsb$$ $$c_k = \frac{f^{\left(k\right)}(0)}{k!} = \frac{1}{k!} \cdot \frac{d^k f}{dx^k}\bigg|_0 $$ The matrix of partial derivatives is called the Jacobian, and it lets us write $\mathbf F$ and $\mathbf H$ as $$\begin{aligned}\mathbf F = {\frac{\partial{f(\mathbf x_t, \mathbf u_t)}}{\partial{\mathbf x}}}\biggr|_{{\mathbf x_t},{\mathbf u_t}} \;\;\;\;\mathbf H = \frac{\partial{h(\bar{\mathbf x}_t)}}{\partial{\bar{\mathbf x}}}\biggr|_{\bar{\mathbf x}_t} \end{aligned}$$$$\mathbf F = \frac{\partial f(\mathbf x, \mathbf u)}{\partial x} =\begin{bmatrix}\frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \dots & \frac{\partial f_1}{\partial x_n}\\\frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \dots & \frac{\partial f_2}{\partial x_n} \\\\ \vdots & \vdots & \ddots & \vdots\\\frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \dots & \frac{\partial f_n}{\partial x_n}\end{bmatrix}$$ The equations of the linear Kalman filter and the extended Kalman filter can be compared as follows: $$\begin{array}{l|l}\text{Linear Kalman filter} & \text{EKF} \\\hline & \boxed{\mathbf F = {\frac{\partial{f(\mathbf x_t, \mathbf u_t)}}{\partial{\mathbf x}}}\biggr|_{{\mathbf x_t},{\mathbf u_t}}} \\\mathbf{\bar x} = \mathbf{Fx} + \mathbf{Bu} & \boxed{\mathbf{\bar x} = f(\mathbf x, \mathbf u)} \\\mathbf{\bar P} = \mathbf{FPF}^\mathsf{T}+\mathbf Q & \mathbf{\bar P} = \mathbf{FPF}^\mathsf{T}+\mathbf Q \\\hline& \boxed{\mathbf H = \frac{\partial{h(\bar{\mathbf x}_t)}}{\partial{\bar{\mathbf x}}}\biggr|_{\bar{\mathbf x}_t}} \\\textbf{y} = \mathbf z - \mathbf{H \bar{x}} & \textbf{y} = \mathbf z - \boxed{h(\bar{x})}\\\mathbf{K} = \mathbf{\bar{P}H}^\mathsf{T} (\mathbf{H\bar{P}H}^\mathsf{T} + \mathbf R)^{-1} & \mathbf{K} = \mathbf{\bar{P}H}^\mathsf{T} (\mathbf{H\bar{P}H}^\mathsf{T} + \mathbf R)^{-1} \\\mathbf x=\mathbf{\bar{x}} +\mathbf{K\textbf{y}} & \mathbf x=\mathbf{\bar{x}} +\mathbf{K\textbf{y}} \\\mathbf P= (\mathbf{I}-\mathbf{KH})\mathbf{\bar{P}} & \mathbf P= (\mathbf{I}-\mathbf{KH})\mathbf{\bar{P}}\end{array}$$ Although $\mathbf F \mathbf x_{k-1}$ could be used to estimate $\mathbf x_{k}$, the linearization introduces errors, so the prior estimate $\mathbf{\bar{x}}$ is instead obtained by Euler or Runge-Kutta numerical integration. For the same reason, the innovation vector (residual) $\mathbf y$ is also computed numerically rather than with $\mathbf H \mathbf x$.
Example: Robot Localization. Prediction Model: We apply the EKF to a four-wheeled robot. A simple bicycle steering model gives the system model below.
import kf_book.ekf_internal as ekf_internal ekf_internal.plot_bicycle()
_____no_output_____
Apache-2.0
.ipynb_checkpoints/Extended Kalman Filter-checkpoint.ipynb
ai-robotics-kr/sensor_fusion_study
$$\begin{aligned} \beta &= \frac d w \tan(\alpha) \\\bar x_k &= x_{k-1} - R\sin(\theta) + R\sin(\theta + \beta) \\\bar y_k &= y_{k-1} + R\cos(\theta) - R\cos(\theta + \beta) \\\bar \theta_k &= \theta_{k-1} + \beta\end{aligned}$$ Based on the equations above, if we define the state vector as $\mathbf{x}=[x, y, \theta]^T$ and the input vector as $\mathbf{u}=[v, \alpha]^T$, then $f(\mathbf x, \mathbf u)$ can be written as below, and differentiating $f$ gives its Jacobian $\mathbf F$: $$\bar x = f(x, u) + \mathcal{N}(0, Q)$$$$f = \begin{bmatrix}x\\y\\\theta\end{bmatrix} + \begin{bmatrix}- R\sin(\theta) + R\sin(\theta + \beta) \\R\cos(\theta) - R\cos(\theta + \beta) \\\beta\end{bmatrix}$$$$\mathbf F = \frac{\partial f(\mathbf x, \mathbf u)}{\partial \mathbf x} = \begin{bmatrix}1 & 0 & -R\cos(\theta) + R\cos(\theta+\beta) \\0 & 1 & -R\sin(\theta) + R\sin(\theta+\beta) \\0 & 0 & 1\end{bmatrix}$$ To compute $\bar{\mathbf P}$, we define the process noise $\mathbf Q$ arising from the input ($\mathbf u$) as follows: $$\mathbf{M} = \begin{bmatrix}\sigma_{vel}^2 & 0 \\ 0 & \sigma_\alpha^2\end{bmatrix}\;\;\;\;\mathbf{V} = \frac{\partial f(x, u)}{\partial u} = \begin{bmatrix}\frac{\partial f_1}{\partial v} & \frac{\partial f_1}{\partial \alpha} \\\frac{\partial f_2}{\partial v} & \frac{\partial f_2}{\partial \alpha} \\\frac{\partial f_3}{\partial v} & \frac{\partial f_3}{\partial \alpha}\end{bmatrix}$$$$\mathbf{\bar P} =\mathbf{FPF}^{\mathsf T} + \mathbf{VMV}^{\mathsf T}$$
import sympy from sympy.abc import alpha, x, y, v, w, R, theta from sympy import symbols, Matrix sympy.init_printing(use_latex="mathjax", fontsize='16pt') time = symbols('t') d = v*time beta = (d/w)*sympy.tan(alpha) r = w/sympy.tan(alpha) fxu = Matrix([[x-r*sympy.sin(theta) + r*sympy.sin(theta+beta)], [y+r*sympy.cos(theta)- r*sympy.cos(theta+beta)], [theta+beta]]) F = fxu.jacobian(Matrix([x, y, theta])) F # reduce common expressions B, R = symbols('beta, R') F = F.subs((d/w)*sympy.tan(alpha), B) F.subs(w/sympy.tan(alpha), R) V = fxu.jacobian(Matrix([v, alpha])) V = V.subs(sympy.tan(alpha)/w, 1/R) V = V.subs(time*v/R, B) V = V.subs(time*v, 'd') V
_____no_output_____
Apache-2.0
.ipynb_checkpoints/Extended Kalman Filter-checkpoint.ipynb
ai-robotics-kr/sensor_fusion_study
Measurement Model: When the radar measures the range $(r)$ and bearing ($\phi$) of a landmark, the following sensor model is used, where $\mathbf p$ denotes the landmark position: $$r = \sqrt{(p_x - x)^2 + (p_y - y)^2}\;\;\;\;\phi = \arctan(\frac{p_y - y}{p_x - x}) - \theta$$$$\begin{aligned}\mathbf z& = h(\bar{\mathbf x}, \mathbf p) &+ \mathcal{N}(0, R)\\&= \begin{bmatrix}\sqrt{(p_x - x)^2 + (p_y - y)^2} \\\arctan(\frac{p_y - y}{p_x - x}) - \theta \end{bmatrix} &+ \mathcal{N}(0, R)\end{aligned}$$ Differentiating $h$ gives its Jacobian $\mathbf H$: $$\mathbf H = \frac{\partial h(\mathbf x, \mathbf u)}{\partial \mathbf x} =\left[\begin{matrix}\frac{- p_{x} + x}{\sqrt{\left(p_{x} - x\right)^{2} + \left(p_{y} - y\right)^{2}}} & \frac{- p_{y} + y}{\sqrt{\left(p_{x} - x\right)^{2} + \left(p_{y} - y\right)^{2}}} & 0\\- \frac{- p_{y} + y}{\left(p_{x} - x\right)^{2} + \left(p_{y} - y\right)^{2}} & - \frac{p_{x} - x}{\left(p_{x} - x\right)^{2} + \left(p_{y} - y\right)^{2}} & -1\end{matrix}\right]$$
import sympy from sympy.abc import alpha, x, y, v, w, R, theta px, py = sympy.symbols('p_x, p_y') z = sympy.Matrix([[sympy.sqrt((px-x)**2 + (py-y)**2)], [sympy.atan2(py-y, px-x) - theta]]) z.jacobian(sympy.Matrix([x, y, theta])) # print(sympy.latex(z.jacobian(sympy.Matrix([x, y, theta]))) from math import sqrt def H_of(x, landmark_pos): """ compute Jacobian of H matrix where h(x) computes the range and bearing to a landmark for state x """ px = landmark_pos[0] py = landmark_pos[1] hyp = (px - x[0, 0])**2 + (py - x[1, 0])**2 dist = sqrt(hyp) H = array( [[-(px - x[0, 0]) / dist, -(py - x[1, 0]) / dist, 0], [ (py - x[1, 0]) / hyp, -(px - x[0, 0]) / hyp, -1]]) return H from math import atan2 def Hx(x, landmark_pos): """ takes a state variable and returns the measurement that would correspond to that state. """ px = landmark_pos[0] py = landmark_pos[1] dist = sqrt((px - x[0, 0])**2 + (py - x[1, 0])**2) Hx = array([[dist], [atan2(py - x[1, 0], px - x[0, 0]) - x[2, 0]]]) return Hx
_____no_output_____
Apache-2.0
.ipynb_checkpoints/Extended Kalman Filter-checkpoint.ipynb
ai-robotics-kr/sensor_fusion_study
์ธก์ • ๋…ธ์ด์ฆˆ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋‚˜ํƒ€๋‚ด์ค๋‹ˆ๋‹ค.$$\mathbf R=\begin{bmatrix}\sigma_{range}^2 & 0 \\ 0 & \sigma_{bearing}^2\end{bmatrix}$$ Implementation`FilterPy` ์˜ `ExtendedKalmanFilter` class ๋ฅผ ํ™œ์šฉํ•ด์„œ EKF ๋ฅผ ๊ตฌํ˜„ํ•ด๋ณด๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค.
from filterpy.kalman import ExtendedKalmanFilter as EKF from numpy import array, sqrt, random import sympy class RobotEKF(EKF): def __init__(self, dt, wheelbase, std_vel, std_steer): EKF.__init__(self, 3, 2, 2) self.dt = dt self.wheelbase = wheelbase self.std_vel = std_vel self.std_steer = std_steer a, x, y, v, w, theta, time = sympy.symbols( 'a, x, y, v, w, theta, t') d = v*time beta = (d/w)*sympy.tan(a) r = w/sympy.tan(a) self.fxu = sympy.Matrix( [[x-r*sympy.sin(theta)+r*sympy.sin(theta+beta)], [y+r*sympy.cos(theta)-r*sympy.cos(theta+beta)], [theta+beta]]) self.F_j = self.fxu.jacobian(sympy.Matrix([x, y, theta])) self.V_j = self.fxu.jacobian(sympy.Matrix([v, a])) # save dictionary and it's variables for later use self.subs = {x: 0, y: 0, v:0, a:0, time:dt, w:wheelbase, theta:0} self.x_x, self.x_y, = x, y self.v, self.a, self.theta = v, a, theta def predict(self, u): self.x = self.move(self.x, u, self.dt) self.subs[self.theta] = self.x[2, 0] self.subs[self.v] = u[0] self.subs[self.a] = u[1] F = array(self.F_j.evalf(subs=self.subs)).astype(float) V = array(self.V_j.evalf(subs=self.subs)).astype(float) # covariance of motion noise in control space M = array([[self.std_vel*u[0]**2, 0], [0, self.std_steer**2]]) self.P = F @ self.P @ F.T + V @ M @ V.T def move(self, x, u, dt): hdg = x[2, 0] vel = u[0] steering_angle = u[1] dist = vel * dt if abs(steering_angle) > 0.001: # is robot turning? beta = (dist / self.wheelbase) * tan(steering_angle) r = self.wheelbase / tan(steering_angle) # radius dx = np.array([[-r*sin(hdg) + r*sin(hdg + beta)], [r*cos(hdg) - r*cos(hdg + beta)], [beta]]) else: # moving in straight line dx = np.array([[dist*cos(hdg)], [dist*sin(hdg)], [0]]) return x + dx
_____no_output_____
Apache-2.0
.ipynb_checkpoints/Extended Kalman Filter-checkpoint.ipynb
ai-robotics-kr/sensor_fusion_study
To obtain a correct residual $y$, the bearing is normalized into a consistent range (the code below wraps it into $[-\pi, \pi)$).
def residual(a, b): """ compute residual (a-b) between measurements containing [range, bearing]. Bearing is normalized to [-pi, pi)""" y = a - b y[1] = y[1] % (2 * np.pi) # force in range [0, 2 pi) if y[1] > np.pi: # move to [-pi, pi) y[1] -= 2 * np.pi return y from filterpy.stats import plot_covariance_ellipse from math import sqrt, tan, cos, sin, atan2 import matplotlib.pyplot as plt dt = 1.0 def z_landmark(lmark, sim_pos, std_rng, std_brg): x, y = sim_pos[0, 0], sim_pos[1, 0] d = np.sqrt((lmark[0] - x)**2 + (lmark[1] - y)**2) a = atan2(lmark[1] - y, lmark[0] - x) - sim_pos[2, 0] z = np.array([[d + random.randn()*std_rng], [a + random.randn()*std_brg]]) return z def ekf_update(ekf, z, landmark): ekf.update(z, HJacobian = H_of, Hx = Hx, residual=residual, args=(landmark), hx_args=(landmark)) def run_localization(landmarks, std_vel, std_steer, std_range, std_bearing, step=10, ellipse_step=20, ylim=None): ekf = RobotEKF(dt, wheelbase=0.5, std_vel=std_vel, std_steer=std_steer) ekf.x = array([[2, 6, .3]]).T # x, y, steer angle ekf.P = np.diag([.1, .1, .1]) ekf.R = np.diag([std_range**2, std_bearing**2]) sim_pos = ekf.x.copy() # simulated position # steering command (vel, steering angle radians) u = array([1.1, .01]) plt.figure() plt.scatter(landmarks[:, 0], landmarks[:, 1], marker='s', s=60) track = [] for i in range(200): sim_pos = ekf.move(sim_pos, u, dt/10.) # simulate robot track.append(sim_pos) if i % step == 0: ekf.predict(u=u) if i % ellipse_step == 0: plot_covariance_ellipse( (ekf.x[0,0], ekf.x[1,0]), ekf.P[0:2, 0:2], std=6, facecolor='k', alpha=0.3) x, y = sim_pos[0, 0], sim_pos[1, 0] for lmark in landmarks: z = z_landmark(lmark, sim_pos, std_range, std_bearing) ekf_update(ekf, z, lmark) if i % ellipse_step == 0: plot_covariance_ellipse( (ekf.x[0,0], ekf.x[1,0]), ekf.P[0:2, 0:2], std=6, facecolor='g', alpha=0.8) track = np.array(track) plt.plot(track[:, 0], track[:,1], color='k', lw=2) plt.axis('equal') plt.title("EKF Robot localization") if ylim is not None: plt.ylim(*ylim) plt.show() return ekf landmarks = array([[5, 10], [10, 5], [15, 15]]) ekf = run_localization( landmarks, std_vel=0.1, std_steer=np.radians(1), std_range=0.3, std_bearing=0.1) print('Final P:', ekf.P.diagonal())
_____no_output_____
Apache-2.0
.ipynb_checkpoints/Extended Kalman Filter-checkpoint.ipynb
ai-robotics-kr/sensor_fusion_study
Method for visualizing warping over training steps
import os import imageio import numpy as np import matplotlib.pyplot as plt np.random.seed(0)
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Construct warping matrix
g = 1.02 # scaling parameter # Matrix for rotating 45 degrees rotate = np.array([[np.cos(np.pi/4), -np.sin(np.pi/4)], [np.sin(np.pi/4), np.cos(np.pi/4)]]) # Matrix for scaling along x coordinate scale_x = np.array([[g, 0], [0, 1]]) # Matrix for scaling along y coordinate scale_y = np.array([[1, 0], [0, g]]) # Matrix for unrotating (-45 degrees) unrotate = np.array([[np.cos(-np.pi/4), -np.sin(-np.pi/4)], [np.sin(-np.pi/4), np.cos(-np.pi/4)]]) # Warping matrix warp = rotate @ scale_x @ unrotate # Unwarping matrix unwarp = rotate @ scale_y @ unrotate
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Warp grid slowly over time
# Construct 4x4 grid s = 1 # initial scale locs = [[x,y] for x in range(4) for y in range(4)] grid = s*np.array(locs) # Matrix to collect data n_steps = 50 warp_data = np.zeros([n_steps, 16, 2]) # Initial timestep has no warping warp_data[0,:,:] = grid # Warp slowly over time for i in range(1,n_steps): grid = grid @ warp warp_data[i,:,:] = grid fig, ax = plt.subplots(1,3, figsize=(15, 5), sharex=True, sharey=True) ax[0].scatter(warp_data[0,:,0], warp_data[0,:,1]) ax[0].set_title("Warping: step 0") ax[1].scatter(warp_data[n_steps//2,:,0], warp_data[n_steps//2,:,1]) ax[1].set_title("Warping: Step {}".format(n_steps//2)) ax[2].scatter(warp_data[n_steps-1,:,0], warp_data[n_steps-1,:,1]) ax[2].set_title("Warping: Step {}".format(n_steps-1)) plt.show()
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Unwarp grid slowly over time
# Matrix to collect data unwarp_data = np.zeros([n_steps, 16, 2]) # Start with warped grid unwarp_data[0,:,:] = grid # Unwarp slowly over time for i in range(1,n_steps): grid = grid @ unwarp unwarp_data[i,:,:] = grid fig, ax = plt.subplots(1,3, figsize=(15, 5), sharex=True, sharey=True) ax[0].scatter(unwarp_data[0,:,0], unwarp_data[0,:,1]) ax[0].set_title("Unwarping: Step 0") # ax[0].set_ylim([-0.02, 0.05]) # ax[0].set_xlim([-0.02, 0.05]) ax[1].scatter(unwarp_data[n_steps//2,:,0], unwarp_data[n_steps//2,:,1]) ax[1].set_title("Unwarping: Step {}".format(n_steps//2)) ax[2].scatter(unwarp_data[n_steps-1,:,0], unwarp_data[n_steps-1,:,1]) ax[2].set_title("Unwarping: Step {}".format(n_steps-1)) plt.show()
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
High-dimensional vectors with random projection matrix
# data = [warp_data, unwarp_data] data = np.concatenate([warp_data, unwarp_data], axis=0) # Random projection matrix hidden_dim = 32 random_mat = np.random.randn(2, hidden_dim) data = data @ random_mat # Add noise to each time step sigma = 0.2 noise = sigma*np.random.randn(2*n_steps, 16, hidden_dim) data = data + noise
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Parameterize scatterplot with average "congruent" and "incongruent" distances
loc2idx = {i:(loc[0],loc[1]) for i,loc in enumerate(locs)} idx2loc = {v:k for k,v in loc2idx.items()}
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Function for computing distance matrix
def get_distances(M): n,m = M.shape D = np.zeros([n,n]) for i in range(n): for j in range(n): D[i,j] = np.linalg.norm(M[i,:] - M[j,:]) return D D = get_distances(data[0]) plt.imshow(D) plt.show()
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Construct same-rank groups for "congruent" and "incongruent" diagonals
c_rank = np.array([loc[0] + loc[1] for loc in locs]) # rank along "congruent" diagonal i_rank = np.array([3 + loc[0] - loc[1] for loc in locs]) # rank along "incongruent" diagonal G_idxs = [] # same-rank group for "congruent" diagonal H_idxs = [] # same-rank group for "incongruent" diagonal for i in range(7): # total number of ranks (0 through 6) G_set = [j for j in range(len(c_rank)) if c_rank[j] == i] H_set = [j for j in range(len(i_rank)) if i_rank[j] == i] G_idxs.append(G_set) H_idxs.append(H_set)
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Function for estimating $ \alpha $ and $ \beta $ $$ \bar{x_i} = \sum_{x \in G_i} \frac{1}{n} x $$$$ \alpha_{i, i+1} = || \bar{x}_i - \bar{x}_{i+1} || $$$$ \bar{y_i} = \sum_{y \in H_i} \frac{1}{n} y $$$$ \beta_{i, i+1} = || \bar{y}_i - \bar{y}_{i+1} || $$
def get_parameters(M): # M: [16, hidden_dim] alpha = [] beta = [] for i in range(6): # total number of parameters (01,12,23,34,45,56) # alpha_{i, i+1} x_bar_i = np.mean(M[G_idxs[i],:], axis=0) x_bar_ip1 = np.mean(M[G_idxs[i+1],:], axis=0) x_dist = np.linalg.norm(x_bar_i - x_bar_ip1) alpha.append(x_dist) # beta_{i, i+1} y_bar_i = np.mean(M[H_idxs[i],:], axis=0) y_bar_ip1 = np.mean(M[H_idxs[i+1],:], axis=0) y_dist = np.linalg.norm(y_bar_i - y_bar_ip1) beta.append(y_dist) return alpha, beta alpha_data = [] beta_data = [] for t in range(len(data)): alpha, beta = get_parameters(data[t]) alpha_data.append(alpha) beta_data.append(beta) plt.plot(alpha_data, color='tab:blue') plt.plot(beta_data, color='tab:orange') plt.show()
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Use parameters to plot idealized 2D representations
idx2g = {} for idx in range(16): for g, group in enumerate(G_idxs): if idx in group: idx2g[idx] = g idx2h = {} for idx in range(16): for h, group in enumerate(H_idxs): if idx in group: idx2h[idx] = h def generate_grid(alpha, beta): cum_alpha = np.zeros(7) cum_beta = np.zeros(7) cum_alpha[1:] = np.cumsum(alpha) cum_beta[1:] = np.cumsum(beta) # Get x and y coordinate in rotated basis X = np.zeros([16, 2]) for idx in range(16): g = idx2g[idx] # G group h = idx2h[idx] # H group X[idx,0] = cum_alpha[g] # x coordinate X[idx,1] = cum_beta[h] # y coordinate # Unrotate unrotate = np.array([[np.cos(-np.pi/4), -np.sin(-np.pi/4)], [np.sin(-np.pi/4), np.cos(-np.pi/4)]]) X = X @ unrotate # Mean-center X = X - np.mean(X, axis=0, keepdims=True) return X X = generate_grid(alpha, beta)
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Get reconstructed grid for each time step
reconstruction = np.zeros([data.shape[0], data.shape[1], 2]) for t,M in enumerate(data): alpha, beta = get_parameters(M) X = generate_grid(alpha, beta) reconstruction[t,:,:] = X t = 50 plt.scatter(reconstruction[t,:,0], reconstruction[t,:,1]) plt.show()
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Make .gif
plt.scatter(M[:,0], M[:,1]) reconstruction.shape xmin = np.min(reconstruction[:,:,0]) xmax = np.max(reconstruction[:,:,0]) ymin = np.min(reconstruction[:,:,1]) ymax = np.max(reconstruction[:,:,1]) for t,M in enumerate(reconstruction): plt.scatter(M[:,0], M[:,1]) plt.title("Reconstructed grid") plt.xlim([xmin-1.5, xmax+1.5]) plt.ylim([ymin-1.5, ymax+1.5]) plt.xticks([]) plt.yticks([]) plt.tight_layout() plt.savefig('reconstruction_test_{}.png'.format(t), dpi=100) plt.show() filenames = ['reconstruction_test_{}.png'.format(i) for i in range(2*n_steps)] with imageio.get_writer('reconstruction_test.gif', mode='I') as writer: for filename in filenames: image = imageio.imread(filename) writer.append_data(image) # remove files for filename in filenames: os.remove(filename)
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Rerun jobs to achieve better magmom matching---Will take most magnetic slab of OER set and apply those magmoms to the other slabs Import Modules
import os print(os.getcwd()) import sys # ######################################################### from methods import get_df_features_targets from methods import get_df_magmoms
/mnt/f/Dropbox/01_norskov/00_git_repos/PROJ_IrOx_OER/dft_workflow/run_slabs/rerun_magmoms
MIT
dft_workflow/run_slabs/rerun_magmoms/rerun_magmoms.ipynb
raulf2012/PROJ_IrOx_OER
Read Data
df_features_targets = get_df_features_targets() df_magmoms = get_df_magmoms() df_magmoms = df_magmoms.set_index("job_id") for name_i, row_i in df_features_targets.iterrows(): tmp = 42 # ##################################################### job_id_o_i = row_i[("data", "job_id_o", "", )] job_id_oh_i = row_i[("data", "job_id_oh", "", )] job_id_bare_i = row_i[("data", "job_id_bare", "", )] # ##################################################### job_ids = [job_id_o_i, job_id_oh_i, job_id_bare_i] df_magmoms.loc[job_ids]
_____no_output_____
MIT
dft_workflow/run_slabs/rerun_magmoms/rerun_magmoms.ipynb
raulf2012/PROJ_IrOx_OER
Documenting ClassesIt is almost as easy to document a class as it is to document a function. Simply add docstrings to all of the class's functions, and also below the class name itself. For example, here is a simple documented class
class Demo: """This class demonstrates how to document a class. This class is just a demonstration, and does nothing. However the principles of documentation are still valid! """ def __init__(self, name): """You should document the constructor, saying what it expects to create a valid class. In this case name -- the name of an object of this class """ self._name = name def getName(self): """You should then document all of the member functions, just as you do for normal functions. In this case, returns the name of the object """ return self._name d = Demo("cat") help(d)
Help on Demo in module __main__ object: class Demo(builtins.object) | This class demonstrates how to document a class. | | This class is just a demonstration, and does nothing. | | However the principles of documentation are still valid! | | Methods defined here: | | __init__(self, name) | You should document the constructor, saying what it expects to | create a valid class. In this case | | name -- the name of an object of this class | | getName(self) | You should then document all of the member functions, just as | you do for normal functions. In this case, returns | the name of the object | | ---------------------------------------------------------------------- | Data descriptors defined here: | | __dict__ | dictionary for instance variables (if defined) | | __weakref__ | list of weak references to the object (if defined)
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
Often, when you write a class, you want to hide member data or member functions so that they are only visible within an object of the class. For example, above, the `self._name` member data should be hidden, as it should only be used by the object.You control the visibility of member functions or member data using an underscore. If the member function or member data name starts with an underscore, then it is hidden. Otherwise, the member data or function is visible.For example, we can hide the `getName` function by renaming it to `_getName`
class Demo: """This class demonstrates how to document a class. This class is just a demonstration, and does nothing. However the principles of documentation are still valid! """ def __init__(self, name): """You should document the constructor, saying what it expects to create a valid class. In this case name -- the name of an object of this class """ self._name = name def _getName(self): """You should then document all of the member functions, just as you do for normal functions. In this case, returns the name of the object """ return self._name d = Demo("cat") help(d)
Help on Demo in module __main__ object: class Demo(builtins.object) | This class demonstrates how to document a class. | | This class is just a demonstration, and does nothing. | | However the principles of documentation are still valid! | | Methods defined here: | | __init__(self, name) | You should document the constructor, saying what it expects to | create a valid class. In this case | | name -- the name of an object of this class | | ---------------------------------------------------------------------- | Data descriptors defined here: | | __dict__ | dictionary for instance variables (if defined) | | __weakref__ | list of weak references to the object (if defined)
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
Member functions or data that are hidden are called "private". Member functions or data that are visible are called "public". You should document all public member functions of a class, as these are visible and designed to be used by other people. It is helpful, although not required, to document all of the private member functions of a class, as these will only really be called by you. However, in years to come, you will thank yourself if you still documented them... ;-)While it is possible to make member data public, it is not advised. It is much better to get and set values of member data using public member functions. This makes it easier for you to add checks to ensure that the data is consistent and being used in the right way. For example, compare these two classes that represent a person, and hold their height.
class Person1: """Class that holds a person's height""" def __init__(self): """Construct a person who has zero height""" self.height = 0 class Person2: """Class that holds a person's height""" def __init__(self): """Construct a person who has zero height""" self._height = 0 def setHeight(self, height): """Set the person's height to 'height', returning whether or not the height was set successfully """ if height < 0 or height > 300: print("This is an invalid height! %s" % height) return False else: self._height = height return True def getHeight(self): """Return the person's height""" return self._height
_____no_output_____
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
The first example is quicker to write, but it does little to protect itself against a user who attempts to use the class badly.
p = Person1() p.height = -50 p.height p.height = "cat" p.height
_____no_output_____
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
The second example takes more lines of code, but these lines are valuable as they check that the user is using the class correctly. These checks, when combined with good documentation, ensure that your classes can be safely used by others, and that incorrect use will not create difficult-to-find bugs.
p = Person2() p.setHeight(-50) p.getHeight() p.setHeight("cat") p.getHeight()
_____no_output_____
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
Exercise Exercise 1Below is the completed `GuessGame` class from the previous lesson. Add documentation to this class.
class GuessGame: """ This class provides a simple guessing game. You create an object of the class with its own secret, with the aim that a user then needs to try to guess what the secret is. """ def __init__(self, secret, max_guesses=5): """Create a new guess game secret -- the secret that must be guessed max_guesses -- the maximum number of guesses allowed by the user """ self._secret = secret self._nguesses = 0 self._max_guesses = max_guesses def guess(self, value): """Try to guess the secret. This will print out to the screen whether or not the secret has been guessed. value -- the user-supplied guess """ if (self.nGuesses() >= self.maxGuesses()): print("Sorry, you have run out of guesses") elif (value == self._secret): print("Well done - you have guessed my secret") else: self._nguesses += 1 print("Try again...") def nGuesses(self): """Return the number of incorrect guesses made so far""" return self._nguesses def maxGuesses(self): """Return the maximum number of incorrect guesses allowed""" return self._max_guesses help(GuessGame)
Help on class GuessGame in module __main__: class GuessGame(builtins.object) | This class provides a simple guessing game. You create an object | of the class with its own secret, with the aim that a user | then needs to try to guess what the secret is. | | Methods defined here: | | __init__(self, secret, max_guesses=5) | Create a new guess game | | secret -- the secret that must be guessed | max_guesses -- the maximum number of guesses allowed by the user | | guess(self, value) | Try to guess the secret. This will print out to the screen whether | or not the secret has been guessed. | | value -- the user-supplied guess | | maxGuesses(self) | Return the maximum number of incorrect guesses allowed | | nGuesses(self) | Return the number of incorrect guesses made so far | | ---------------------------------------------------------------------- | Data descriptors defined here: | | __dict__ | dictionary for instance variables (if defined) | | __weakref__ | list of weak references to the object (if defined)
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
Exercise 2Below is a poorly-written class that uses public member data to store the name and age of a Person. Edit this class so that the member data is made private. Add `get` and `set` functions that allow you to safely get and set the name and age.
class Person: """Class the represents a Person, holding their name and age""" def __init__(self, name="unknown", age=0): """Construct a person with unknown name and an age of 0""" self.setName(name) self.setAge(age) def setName(self, name): """Set the person's name to 'name'""" self._name = str(name) #ย str ensures the name is a string def getName(self): """Return the person's name""" return self._name def setAge(self, age): """Set the person's age. This must be a number between 0 and 130""" if (age < 0 or age > 130): print("Cannot set the age to an invalid value: %s" % age) self._age = age def getAge(self): """Return the person's age""" return self._age p = Person(name="Peter Parker", age=21) p.getName() p.getAge()
_____no_output_____
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
Exercise 3Add a private member function called `_splitName` to your `Person` class that breaks the name into a surname and first name. Add new functions called `getFirstName` and `getSurname` that use this function to return the first name and surname of the person.
class Person: """Class the represents a Person, holding their name and age""" def __init__(self, name="unknown", age=0): """Construct a person with unknown name and an age of 0""" self.setName(name) self.setAge(age) def setName(self, name): """Set the person's name to 'name'""" self._name = str(name) #ย str ensures the name is a string def getName(self): """Return the person's name""" return self._name def setAge(self, age): """Set the person's age. This must be a number between 0 and 130""" if (age < 0 or age > 130): print("Cannot set the age to an invalid value: %s" % age) self._age = age def getAge(self): """Return the person's age""" return self._age def _splitName(self): """Private function that splits the name into parts""" return self._name.split(" ") def getFirstName(self): """Return the first name of the person""" return self._splitName()[0] def getSurname(self): """Return the surname of the person""" return self._splitName()[-1] p = Person(name="Peter Parker", age=21) p.getFirstName() p.getSurname()
_____no_output_____
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
Lab 4: EM Algorithm and Single-Cell RNA-seq Data Name: Your Name Here (Your netid here) Due April 2, 2021 11:59 PM Preamble (Don't change this) Important Instructions - 1. Please implement all the *graded functions* in main.py file. Do not change function names in main.py.2. Please read the description of every graded function very carefully. The description clearly states what is the expectation of each graded function. 3. After some graded functions, there is a cell which you can run and see if the expected output matches the output you are getting. 4. The expected output provided is just a way for you to assess the correctness of your code. The code will be tested on several other cases as well.
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns %run main.py module = Lab4()
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Part 1 : Expectation-Maximization (EM) algorithm for transcript quantification IntroductionThe EM algorithm is a very helpful tool to compute maximum likelihood estimates of parameters in models that have some latent (hidden) variables.In the case of the transcript quantification problem, the model parameters we want to estimate are the transcript relative abundances $\rho_1,...,\rho_K$.The latent variables are the read-to-transcript indicator variables $Z_{ik}$, which indicate whether the $i$th read comes from the $k$th transcript (in which case $Z_{ik}=1$).In this part of the lab, you will be given the read alignment data.For each read and transcript pair, it tells you whether the read can be mapped (i.e., aligned) to that transcript.Using the EM algorithm, you will estimate the relative abundances of the transcripts. Reading read transcript data - We have 30000 reads and 30 transcripts
n_reads=30000 n_transcripts=30 read_mapping=[] with open("read_mapping_data.txt",'r') as file : lines_reads=file.readlines() for line in lines_reads : read_mapping.append([int(x) for x in line.split(",")]) read_mapping[:10]
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Rather than giving you a giant binary matrix, we encoded the read mapping data in a more concise way. read_mapping is a list of lists. The $i$th list contains the indices of the transcripts that the $i$th read maps to. Reading true abundances and transcript lengths
with open("transcript_true_abundances.txt",'r') as file : lines_gt=file.readlines() ground_truth=[float(x) for x in lines_gt[0].split(",")] with open("transcript_lengths.txt",'r') as file : lines_gt=file.readlines() tr_lengths=[float(x) for x in lines_gt[0].split(",")] ground_truth[:5] tr_lengths[:5]
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Graded Function 1 : expectation_maximization (10 marks) Purpose : To implement the EM algorithm to obtain abundance estimates for each transcript.E-step : In this step, we calculate the fraction of each read that is assigned to each transcript (i.e., the estimate of $Z_{ik}$). For read $i$ and transcript $k$, this is calculated by dividing the current abundance estimate of transcript $k$ by the sum of the abundance estimates of all transcripts that read $i$ maps to.M-step : In this step, we update the abundance estimate of each transcript based on the fraction of all reads that is currently assigned to the transcript. First we compute the average fraction of all reads assigned to the transcript. Then (if transcripts are of different lengths) we divide the result by the transcript length.Finally, we normalize all abundance estimates so that they add up to 1.Inputs - read_mapping (a list of lists where each sublist contains the transcripts to which a particular read maps; the length of this list is equal to the number of reads, i.e. 30000); tr_lengths (a list containing the lengths of the 30 transcripts, in order); n_iterations (the number of EM iterations to be performed)Output - a list of lists where each sublist contains the abundance estimates for a transcript across all iterations. The length of each sublist should be equal to the number of iterations plus one (for the initialization) and the total number of sublists should be equal to the number of transcripts.
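The graded implementation belongs in main.py and is not shown here; purely as an illustration of the E-step and M-step described above, a minimal sketch of one possible EM loop could look like this (the variable names, the uniform initialization, and the assumption of 0-based transcript indices in read_mapping are mine, not the graded solution):

```python
import numpy as np

def expectation_maximization_sketch(read_mapping, tr_lengths, n_iterations):
    K = len(tr_lengths)
    rho = np.ones(K) / K                       # start from uniform abundances
    history = [rho.copy()]
    for _ in range(n_iterations):
        counts = np.zeros(K)
        # E-step: split each read among the transcripts it maps to,
        # proportionally to the current abundance estimates.
        for transcripts in read_mapping:
            weights = np.array([rho[k] for k in transcripts])
            weights /= weights.sum()
            for k, w in zip(transcripts, weights):
                counts[k] += w
        # M-step: average assigned fraction per transcript,
        # corrected for transcript length and renormalized.
        rho = counts / len(read_mapping)
        rho = rho / np.array(tr_lengths)
        rho = rho / rho.sum()
        history.append(rho.copy())
    # Reorganize into one trajectory (length n_iterations + 1) per transcript.
    return [[h[k] for h in history] for k in range(K)]
```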
history=module.expectation_maximization(read_mapping,tr_lengths,20) print(len(history)) print(len(history[0])) print(history[0][-5:]) print(history[1][-5:]) print(history[2][-5:])
30 21 [0.033769639494636614, 0.03381298624783303, 0.03384568373972949, 0.0338703482393148, 0.03388895326082054] [0.0020082674603036053, 0.0019649207071071456, 0.0019322232152109925, 0.0019075587156241912, 0.0018889536941198502] [0.0660581789629968, 0.06606927656035864, 0.0660765012689558, 0.06608120466668756, 0.0660842666518177]
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Expected Output - 30, 21, [0.033769639494636614, 0.03381298624783303, 0.03384568373972948, 0.0338703482393148, 0.03388895326082054], [0.0020082674603036053, 0.0019649207071071456, 0.0019322232152109925, 0.0019075587156241912, 0.0018889536941198502], [0.0660581789629968, 0.06606927656035864, 0.06607650126895578, 0.06608120466668756, 0.0660842666518177]. You can use the following function to visualize how the estimated relative abundances are converging with the number of iterations of the algorithm.
def visualize_em(history,n_iterations) : #start code here fig, ax = plt.subplots(figsize=(8,6)) for j in range(n_transcripts): ax.plot([i for i in range(n_iterations+1)],[history[j][i] - ground_truth[j] for i in range(n_iterations+1)],marker='o') #end code here visualize_em(history,20)
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Part 2 : Exploring Single-Cell RNA-seq data In a study published in 2015, Zeisel et al. used single-cell RNA-seq data to explore the cell diversity in the mouse brain. We will explore the data used for their study.You can read more about it [here](https://science.sciencemag.org/content/347/6226/1138).
#reading single-cell RNA-seq data lines_genes=[] with open("Zeisel_expr.txt",'r') as file : lines_genes=file.readlines() lines_genes[0][:300]
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Each line in the file Zeisel_expr.txt corresponds to one gene.The columns correspond to different cells (notice that this is the opposite of how we looked at this matrix in class).The entries of this matrix correspond to the number of reads mapping to a given gene in the corresponding cell.
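As a quick sanity check of this layout (not required by the lab, and assuming the counts on each line are whitespace-separated), one line can be parsed to see in how many cells the corresponding gene is expressed:

```python
import numpy as np

# Counts of the first gene across all cells (hypothetical parsing of one line).
first_gene_counts = np.array(lines_genes[0].split(), dtype=float)
print("number of cells:", first_gene_counts.shape[0])
print("cells expressing this gene:", int((first_gene_counts > 0).sum()))
```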
# reading true labels for each cell with open("Zeisel_labels.txt",'r') as file : true_labels = file.read().splitlines()
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
The study also provides us with true labels for each of the cells.For each of the cells, the vector true_labels contains the name of the cell type.There are nine different cell types in this dataset.
set(true_labels)
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Graded Function 2 : prepare_data (10 marks) :Purpose - To create a dataframe where each row corresponds to a specific cell and each column corresponds to the expression levels of a particular gene across all cells. You should name the columns "Gene_0", "Gene_1", and so on (matching the expected output below).We will iterate through all the lines in the lines_genes list created above, add 1 to each value, and take the log. Each line will correspond to one column in the dataframe.Output - gene expression dataframe Note - All the values in the output dataframe should be rounded off to 5 digits after the decimal
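The graded version lives in main.py; as a sketch only of the transformation described above (log of count plus one, genes as columns), and again assuming whitespace-separated counts per line, it might look like this:

```python
import numpy as np
import pandas as pd

def prepare_data_sketch(lines_genes):
    # Each input line is one gene across all cells; log-transform the counts
    # and transpose so rows are cells and columns are genes.
    rows = [np.log(np.array(line.split(), dtype=float) + 1) for line in lines_genes]
    mat = np.round(np.vstack(rows).T, 5)
    columns = ["Gene_{}".format(i) for i in range(mat.shape[1])]
    return pd.DataFrame(mat, columns=columns)
```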
data_df=module.prepare_data(lines_genes) print(data_df.shape) print(data_df.iloc[0:3,:5]) print(data_df.columns)
Index(['Gene_0', 'Gene_1', 'Gene_2', 'Gene_3', 'Gene_4', 'Gene_5', 'Gene_6', 'Gene_7', 'Gene_8', 'Gene_9', ... 'Gene_19962', 'Gene_19963', 'Gene_19964', 'Gene_19965', 'Gene_19966', 'Gene_19967', 'Gene_19968', 'Gene_19969', 'Gene_19970', 'Gene_19971'], dtype='object', length=19972)
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Expected Output :``(3005, 19972)```` Gene_0 Gene_1 Gene_2 Gene_3 Gene_4`` ``0 0.0 1.38629 1.38629 0.0 0.69315````1 0.0 0.69315 0.69315 0.0 0.69315````2 0.0 0.00000 1.94591 0.0 0.69315`` Graded Function 3 : identify_less_expressive_genes (10 marks)Purpose : To identify genes (columns) that are expressed in less than 25 cells. We will create a list of all gene columns that have values greater than 0 for less than 25 cells.Input - gene expression dataframeOutput - list of column names which are expressed in less than 25 cells
drop_columns = module.identify_less_expressive_genes(data_df) print(len(drop_columns)) print(drop_columns[:10])
5120 Index(['Gene_28', 'Gene_126', 'Gene_145', 'Gene_146', 'Gene_151', 'Gene_152', 'Gene_167', 'Gene_168', 'Gene_170', 'Gene_173'], dtype='object')
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Expected Output : ``5120`` ``['Gene_28', 'Gene_126', 'Gene_145', 'Gene_146', 'Gene_151', 'Gene_152', 'Gene_167', 'Gene_168', 'Gene_170', 'Gene_173']`` Filtering less expressive genesWe will now create a new dataframe in which genes which are expressed in less than 25 cells will not be present
df_new = data_df.drop(drop_columns, axis=1) df_new.head()
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Graded Function 4 : perform_pca (10 marks)Purpose - Perform Principal Component Analysis on the new dataframe and take the top 50 principal components. Input - df_new. Output - numpy array containing the top 50 principal components of the data. Note - All the values in the output should be rounded off to 5 digits after the decimal. Note - Please use random_state=365 for the PCA object you will create
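As a sketch of this step using scikit-learn (the graded version belongs in main.py):

```python
import numpy as np
from sklearn.decomposition import PCA

def perform_pca_sketch(df):
    # Top 50 principal components with the stated random_state,
    # rounded to 5 decimal places.
    pca = PCA(n_components=50, random_state=365)
    return np.round(pca.fit_transform(df), 5)
```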
pca_data=module.perform_pca(df_new) print(pca_data.shape) print(type(pca_data)) print(pca_data[0:3,:5])
(3005, 50) <class 'numpy.ndarray'> [[26.97148 -2.7244 0.62163 25.90148 -6.24736] [26.49135 -1.58774 -4.79315 24.01094 -7.25618] [47.82664 5.06799 2.15177 30.24367 -3.38878]]
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Expected Output : ``(3005, 50)````````[[26.97148 -2.7244 0.62163 25.90148 -6.24736]```` [26.49135 -1.58774 -4.79315 24.01094 -7.25618]`` `` [47.82664 5.06799 2.15177 30.24367 -3.38878]]`` (Non-graded) Function 5 : perform_tsnePupose - Perform t-SNE on the pca_data and obtain 2 t-SNE componentsWe will use TSNE class of the sklearn.manifold package. Use random_state=1000 and perplexity=50Documenation can be found here - https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.htmlInput - pca_dataOutput - numpy array containing the top 2 tsne components of the data.**Note: This function will not be graded because of the random nature of t-SNE.**
tsne_data50 = module.perform_tsne(pca_data) print(tsne_data50.shape) print(tsne_data50[:3,:])
(3005, 2) [[ 19.031317 -45.3434 ] [ 19.188553 -44.945473] [ 17.369982 -47.997364]]
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Expected Output :(These numbers can deviate a bit depending on your sklearn)``(3005, 2)````[[ 15.069608 -47.535984]```` [ 15.251476 -47.172073]`` `` [ 13.3932 -49.909657]]``
fig, ax = plt.subplots(figsize=(12,8)) sns.scatterplot(tsne_data50[:,0], tsne_data50[:,1], hue=true_labels) plt.show()
/usr/local/lib/python3.9/site-packages/seaborn/_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. warnings.warn(
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Week 2 Tasks During this week's meeting, we discussed if/else statements, loops, and lists. This notebook will guide you through reviewing the topics discussed and help you become familiar with the concepts. Let's first create a list
# Create a list that stores the multiples of 5, from 0 to 50 (inclusive) # initialize the list using list comprehension! # Set the list name to be 'l' # TODO: Make the cell return 'True' # Hint: Do you remember that you can apply arithmetic operators in the list comprehension? # Your code goes below here # Do not modify below l == [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
_____no_output_____
MIT
Week 2/Week 2 Tasks.ipynb
jihoonkang0829/Codable_FA20
If you are eager to learn more about list comprehension, you can look up here -> https://www.programiz.com/python-programming/list-comprehension. You will find out how you can initialize `l` without using arithmetic operators, but using conditionals (if/else).Now, simply run the cell below, and observe how `l` has changed.
l[0] = 3 print(l) l[5]
_____no_output_____
MIT
Week 2/Week 2 Tasks.ipynb
jihoonkang0829/Codable_FA20
As seen above, you can overwrite each element of the list. Using this fact, complete the task written below. If/elif/else practice
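Before attempting the task, here is a tiny warm-up illustration (not the solution) of overwriting list elements in place with if/else, using their index:

```python
nums = [1, 2, 3, 4]
for i in range(len(nums)):
    if nums[i] % 2 == 0:
        nums[i] = nums[i] // 2   # halve the even entries
    else:
        nums[i] = nums[i] ** 2   # square the odd entries
print(nums)  # [1, 1, 9, 2]
```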
# Write a for loop such that: # For each elements in the list l, # If the element is divisible by 6, divide the element by 6 # Else if the element is divisible by 3, divide the element by 3 and then add 4 # Else if the element is divisible by 2, subtract 10. # Else, square the element # TODO: Make the cell return 'True' l = [3, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50] # Your code goes below here # Do not modify below l = [int(i) for i in l] l == [5, 25, 0, 9, 10, 625, 5, 1225, 30, 19, 40]
_____no_output_____
MIT
Week 2/Week 2 Tasks.ipynb
jihoonkang0829/Codable_FA20
Limitations of a ternary operator
# Write a for loop that counts the number of odd number elements in the list # and the number of even number elements in the list # These should be stored in the variables 'odd_count' and 'even_count', which are declared below. # Try to use the ternary operator inside the for loop and inspect why it does not work # TODO: Make the cell return 'True' l = [5, 25, 0, 9, 10, 625, 5, 1225, 30, 19, 40] odd_count, even_count = 0, 0 # Your code goes below here # Do not modify below print("There are 7 odd numbers in the list.") if odd_count == 7 else print("Your odd_count is not correct.") print("There are 4 even numbers in the list.") if even_count == 4 else print("Your even_count is not correct.") print(odd_count == 7 and even_count == 4 and odd_count + even_count == len(l))
_____no_output_____
MIT
Week 2/Week 2 Tasks.ipynb
jihoonkang0829/Codable_FA20
If you have tried using the ternary operator in the cell above, you would have found that the cell fails to compile because of a syntax error. This is because you can only write *expressions* in a ternary operator, not *statements* (in your attempt, the offending piece was the last of the operator's three segments). In other words, since the last part of your code (something like `odd_count += 1` or `even_count += 1`) is a *statement*, the code is syntactically incorrect. To learn more about *expressions* and *statements*, please refer to this webpage -> https://runestone.academy/runestone/books/published/thinkcspy/SimplePythonData/StatementsandExpressions.html Thus, code like `a += 1 if <condition> else b += 1` is syntactically wrong because `b += 1` is a *statement*, and we cannot use the ternary operator to achieve something like this. In fact, ternary operators are usually used like this: `a += 1 if <condition> else 0`. That line behaves exactly the same as `if <condition>: a += 1`, with `a` left unchanged (incremented by 0) when the condition is false. Does this give a better understanding of why statements cannot be used in ternary operators? If not, feel free to do more research on your own, or open up a discussion during the next team meeting! While loop and boolean practice
# Write a while loop that finds an index of the element in 'l' which first exceeds 1000. # The index found should be stored in the variable 'large_index' # If there are no element in 'l' that exceeds 1000, 'large_index' must store -1 # Use the declared 'large_not_found' as the condition for the while loop # Use the declared 'index' to iterate through 'l' # Do not use 'break' # TODO: Make the cell return 'True' l = [5, 25, 0, 9, 10, 625, 5, 1225, 30, 19, 1001] large_not_found = True index = 0 large_index = 0 # Your code goes below here # Do not modify below print(large_index == 7)
_____no_output_____
MIT
Week 2/Week 2 Tasks.ipynb
jihoonkang0829/Codable_FA20
Finding the minimum element
# For this task, you can use either for loop or while loop, depending on your preference # Find the smallest element in 'l' and store it in the declared variable 'min_value' # 'min_value' is initialized as a big number # Do not use min() # TODO: Make the cell return 'True' import sys min_value = sys.maxsize min_index = 0 # Your code goes below here # Do not modify below print(min_value == 0) import os os.getpid()
_____no_output_____
MIT
Week 2/Week 2 Tasks.ipynb
jihoonkang0829/Codable_FA20
launch scripts through SLURM The script in the code cell below (shown after a short sketch of the parameter expansion) submits SLURM jobs running the requested `script`, with all the parameters specified in `param_iterators` and the folder where to dump data as the last argument. The generated SBATCH scripts (`.job` files) are saved in the `jobs` folder and then submitted. Output and error dumps are saved in the `out` folder.
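To make the parameter sweep explicit, here is a small sketch (with illustrative values, not the full grid used in the launcher below) of how `itertools.product` expands `param_iterators` into one tuple per submitted job:

```python
from itertools import product

# Shortened, illustrative grid: 2 values of L times 2 values of JvB -> 4 jobs.
param_iterators = (
    [16, 17],    # L
    [0.2, 1.0],  # JvB
)

for params in product(*param_iterators):
    print(params)  # (16, 0.2), (16, 1.0), (17, 0.2), (17, 1.0)
```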
import numpy as np import os from itertools import product ####################### ### User parameters ### ####################### script = "TFIM-bangbang-WF.py" # name of the script to be run data_subdir = "TFIM/bangbang/WF" # subdirectory of 'data' where to save results jobname_template = "BBWF-L{}JvB{}nit{}" # job name will be created from this, inserting parameter values param_iterators = ( np.arange(16, 21), # L [0.2, 1, 5], # JvB [None], # nit [200] # n_samples ) time = "4-00:00" # format days-hh:mm mem = "4GB" # can use postfixes (MB, GB, ...) partition = "compIntel" # insert here additional lines that should be run before the script # (source bash scripts, load modules, activate environment, etc.) additional_lines = [ 'source ~/.bashrc\n' ] ##################################### ### Create folders, files and run ### ##################################### current_dir = os.getcwd() script = os.path.join(*os.path.split(current_dir)[:-1], 'scripts', script) data_supdir = os.path.join(*os.path.split(current_dir)[:-1], 'data') data_dir = os.path.join(data_supdir, data_subdir) job_dir = 'jobs' out_dir = 'out' os.makedirs(job_dir, exist_ok=True) os.makedirs(out_dir, exist_ok=True) os.makedirs(data_dir, exist_ok=True) for params in product(*param_iterators): # ******** for BangBang ******** # redefine nit = L if it is None if params[2] is None: params = list(params) params[2] = params[0] # ****************************** job_name = jobname_template.format(*params) job_file = os.path.join(job_dir, job_name+'.job') with open(job_file, 'wt') as fh: fh.writelines( ["#!/bin/bash\n", f"#SBATCH --job-name={job_name}\n", f"#SBATCH --output={os.path.join(out_dir, job_name+'.out')}\n", f"#SBATCH --error={os.path.join(out_dir, job_name+'.err')}\n", f"#SBATCH --time={time}\n", f"#SBATCH --mem={mem}\n", f"#SBATCH --partition={partition}\n", f"#SBATCH --mail-type=NONE\n", ] + additional_lines + [ f"python -u {script} {' '.join(str(par) for par in params)} {data_dir}\n"] ) os.system("sbatch %s" %job_file) complex(1).__sizeof__() * 2**(2*15) / 1E9
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
History of parameters that have been run TFIM LogSweep density matrix
script = "TFIM-logsweep-DM.py" data_subdir = "TFIM/logsweep/DM" param_iterators = ( [2], # L [0.2, 1, 5], # JvB np.arange(2, 50) # K ) param_iterators = ( [7], # L [0.2, 1, 5], # JvB np.arange(2, 50) # K ) param_iterators = ( np.arange(2, 11), # L [0.2, 1, 5], # JvB [2, 5, 10, 20, 40] # K )
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
Iterative, density matrix
script = "TFIM-logsweep-DM-iterative.py" # name of the script to be run data_subdir = "TFIM/logsweep/DM/iterative" # subdirectory of ยดdataยด where to save results jobname_template = "ItLS-L{}JvB{}K{}" # job name will be created from this, inserting parameter values param_iterators = ( [2, 7], # L [0.2, 1, 5], # JvB np.arange(2, 50) # K )
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
WF + Monte Carlo Old version of the script. The old version suffered from unnormalized final states due to numerical error.
script = "TFIM-logsweep-WF.py" # name of the script to be run data_subdir = "TFIM/logsweep/WF-raw" # subdirectory of ยดdataยด where to save results jobname_template = "WF-L{}JvB{}K{}" # job name will be created from this, inserting parameter values param_iterators = ( np.arange(2, 15), # L [0.2, 1, 5], # JvB [2, 3, 5, 10, 20, 40], # K [100] # n_samples )
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
New version of the script, where normalization is forced (a sketch of this idea follows).
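The script itself is not reproduced here, but "forcing normalization" presumably amounts to renormalizing the state vector after each numerically lossy step, along the lines of this sketch (illustrative only, not the actual script's code):

```python
import numpy as np

def renormalize(psi):
    # Force the state vector back to unit norm after a numerically lossy step.
    return psi / np.linalg.norm(psi)

psi = np.array([0.6001, 0.0, 0.8002, 0.0], dtype=complex)  # slightly unnormalized
psi = renormalize(psi)
print(np.isclose(np.vdot(psi, psi).real, 1.0))  # True
```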
script = "TFIM-logsweep-WF.py" # name of the script to be run data_subdir = "TFIM/logsweep/WF" # subdirectory of ยดdataยด where to save results jobname_template = "WF-L{}JvB{}K{}" # job name will be created from this, inserting parameter values param_iterators = ( np.arange(2, 10), # L [0.2, 1, 5], # JvB [2, 3, 5, 10], # K [100] # n_samples ) time = "3-00:00" # format days-hh:mm mem = "1GB" # can use postfixes (MB, GB, ...) partition = "compIntel" script = "TFIM-logsweep-WF.py" # name of the script to be run data_subdir = "TFIM/logsweep/WF" # subdirectory of ยดdataยด where to save results jobname_template = "WF-L{}JvB{}K{}" # job name will be created from this, inserting parameter values param_iterators = ( np.arange(10, 14), # L [0.2, 1, 5], # JvB [2, 3, 5, 10], # K [100] # n_samples ) time = "3-00:00" # format days-hh:mm mem = "20GB" # can use postfixes (MB, GB, ...) partition = "compIntel"
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
iterative, WF + Monte Carlo
script = "TFIM-logsweep-WF-iterative.py" # name of the script to be run data_subdir = "TFIM/logsweep/WF/iterative" # subdirectory of ยดdataยด where to save results jobname_template = "WFiter-L{}JvB{}K{}" # job name will be created from this, inserting parameter values param_iterators = ( np.arange(2, 14), # L [0.2, 1, 5], # JvB [5, 10], # K [100] # n_samples ) time = "3-00:00" # format days-hh:mm mem = "20GB" # can use postfixes (MB, GB, ...) partition = "ibIntel"
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
continuous DM
script = "TFIM-logsweep-continuous-DM.py" # name of the script to be run data_subdir = "TFIM/logsweep/continuous/DM" # subdirectory of ยดdataยด where to save results jobname_template = "Rh-L{}JvB{}K{}" # job name will be created from this, inserting parameter values param_iterators = ( np.arange(2,7), # L [0.2, 1, 5], # JvB [2, 3, 5, 10, 20, 40] # K ) param_iterators = ( [7], # L [0.2, 1, 5], # JvB np.arange(2, 50) # K ) param_iterators = ( np.arange(8, 15), # L [0.2, 1, 5], # JvB [2,3,5,10,20,40] # K )
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
continuous WF
script = "TFIM-logsweep-continuous-WF.py" # name of the script to be run data_subdir = "TFIM/logsweep/continuous/WF" # subdirectory of ยดdataยด where to save results jobname_template = "CWF-L{}JvB{}K{}" # job name will be created from this, inserting parameter values param_iterators = ( np.arange(2, 12), # L [0.2, 1, 5], # JvB [2, 3, 5, 10, 20, 40], # K [100] # n_samples ) time = "3-00:00" # format days-hh:mm mem = "1GB" # can use postfixes (MB, GB, ...) partition = "ibIntel" param_iterators = ( [13, 14], # L [0.2, 1, 5], # JvB [2, 10], # K [100] # n_samples ) time = "3-00:00" # format days-hh:mm mem = "100GB" # can use postfixes (MB, GB, ...) partition = "ibIntel"
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
TFIM bang-bang
data_subdir = "TFIM/bangbang/WF" # subdirectory of ยดdataยด where to save results jobname_template = "BBWF-L{}JvB{}nit{}" # job name will be created from this, inserting parameter values param_iterators = ( np.arange(2, 21), # L [0.2, 1, 5], # JvB [None], # nit [200] # n_samples ) time = "4-00:00" # format days-hh:mm mem = "4GB" # can use postfixes (MB, GB, ...) partition = "compIntel"
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
Lambda School Data Science *Unit 2, Sprint 3, Module 1* --- Define ML problems - Choose a target to predict, and check its distribution - Avoid leakage of information from test to train or from target to features - Choose an appropriate evaluation metric Setup
%%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' !pip install category_encoders==2.* # If you're working locally: else: DATA_PATH = '../data/'
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Choose a target to predict, and check its distribution Overview This is the data science process at a high level: — Renee Teate, [Becoming a Data Scientist, PyData DC 2016 Talk](https://www.becomingadatascientist.com/2016/10/11/pydata-dc-2016-talk/) We've focused on the 2nd arrow in the diagram, by training predictive models. Now let's zoom out and focus on the 1st arrow: defining problems, by translating business questions into code/data questions. Last sprint, you did a Kaggle Challenge. It's a great way to practice model validation and other technical skills. But that's just part of the modeling process. [Kaggle gets critiqued](https://speakerdeck.com/szilard/machine-learning-software-in-practice-quo-vadis-invited-talk-kdd-conference-applied-data-science-track-august-2017-halifax-canada?slide=119) because some things are done for you: Like [**defining the problem!**](https://www.linkedin.com/pulse/data-science-taught-universities-here-why-maciej-wasiak/) In today's module, you'll begin to practice this objective, with your dataset you've chosen for your personal portfolio project. When defining a supervised machine learning problem, one of the first steps is choosing a target to predict. Which column in your tabular dataset will you predict? Is your problem regression or classification? You have options. Sometimes it's not straightforward, as we'll see below. - Discrete, ordinal, low cardinality target: Can be regression or multi-class classification. - (In)equality comparison: Converts regression or multi-class classification to binary classification. - Predicted probability: Seems to [blur](https://brohrer.github.io/five_questions_data_science_answers.html) the line between classification and regression. Follow Along Let's reuse the [Burrito reviews dataset.](https://nbviewer.jupyter.org/github/LambdaSchool/DS-Unit-2-Linear-Models/blob/master/module4-logistic-regression/LS_DS_214_assignment.ipynb) 🌯
import pandas as pd pd.options.display.max_columns = None df = pd.read_csv(DATA_PATH+'burritos/burritos.csv')
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Choose your target Which column in your tabular dataset will you predict?
df.head() df['overall'].describe() import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline sns.distplot(df['overall']) df['Great'] = df['overall'] >= 4 df['Great']
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
How is your target distributed? For a classification problem, determine: How many classes? Are the classes imbalanced?
y = df['Great'] y.unique() y.value_counts(normalize=True) sns.countplot(y) y.value_counts(normalize=True).plot(kind="bar") # Stretch: how to fix imbalanced classes #. upsampling: randomly re-sample from the minority class to increase the sample in the minority class #. downsampling: random re-sampling from the majority class to decrease the sample in the majority class # Why does it matter if we have imbalanced classes? # 1:1000 tested positive:tested negative # 99.99% accuracy #
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Avoid leakage of information from test to train or from target to features Overview Overfitting is our enemy in applied machine learning, and leakage is often the cause. > Make sure your training features do not contain data from the "future" (aka time traveling). While this might be easy and obvious in some cases, it can get tricky. ... If your test metric becomes really good all of the sudden, ask yourself what you might be doing wrong. Chances are you are time travelling or overfitting in some way. — [Xavier Amatriain](https://www.quora.com/What-are-some-best-practices-for-training-machine-learning-models/answer/Xavier-Amatriain) Choose train, validate, and test sets. Are some observations outliers? Will you exclude them? Will you do a random split or a time-based split? You can (re)read [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/). Follow Along First, begin to **explore and clean your data.**
df['Burrito'].nunique() df['Burrito'].unique() # Combine Burrito categories df['Burrito_rename'] = df['Burrito'].str.lower() # All burrito types that contain 'California' are grouped into the same #. category. Similar logic applied to asada, surf, and carnitas. # 'California Surf and Turf' california = df['Burrito'].str.contains('california') asada = df['Burrito'].str.contains('asada') surf = df['Burrito'].str.contains('surf') carnitas = df['Burrito'].str.contains('carnitas') df.loc[california, 'Burrito_rename'] = 'California' df.loc[asada, 'Burrito_rename'] = 'Asada' df.loc[surf, 'Burrito_rename'] = 'Surf & Turf' df.loc[carnitas, 'Burrito_rename'] = 'Carnitas' # If the burrito is not captured in one of the above categories, it is put in the # 'Other' category. df.loc[~california & ~asada & ~surf & ~carnitas, 'Burrito_rename'] = 'Other' df[['Burrito', 'Burrito_rename']] df = df.drop(columns=['Notes', 'Location', 'Reviewer', 'Address', 'URL', 'Neighborhood']) df.info() df.isna().sum().sort_values() df = df.fillna('Missing') df.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 423 entries, 0 to 422 Data columns (total 61 columns): Burrito 423 non-null object Date 423 non-null object Yelp 423 non-null object Google 423 non-null object Chips 423 non-null object Cost 423 non-null object Hunger 423 non-null object Mass (g) 423 non-null object Density (g/mL) 423 non-null object Length 423 non-null object Circum 423 non-null object Volume 423 non-null object Tortilla 423 non-null float64 Temp 423 non-null object Meat 423 non-null object Fillings 423 non-null object Meat:filling 423 non-null object Uniformity 423 non-null object Salsa 423 non-null object Synergy 423 non-null object Wrap 423 non-null object overall 423 non-null object Rec 423 non-null object Unreliable 423 non-null object NonSD 423 non-null object Beef 423 non-null object Pico 423 non-null object Guac 423 non-null object Cheese 423 non-null object Fries 423 non-null object Sour cream 423 non-null object Pork 423 non-null object Chicken 423 non-null object Shrimp 423 non-null object Fish 423 non-null object Rice 423 non-null object Beans 423 non-null object Lettuce 423 non-null object Tomato 423 non-null object Bell peper 423 non-null object Carrots 423 non-null object Cabbage 423 non-null object Sauce 423 non-null object Salsa.1 423 non-null object Cilantro 423 non-null object Onion 423 non-null object Taquito 423 non-null object Pineapple 423 non-null object Ham 423 non-null object Chile relleno 423 non-null object Nopales 423 non-null object Lobster 423 non-null object Queso 423 non-null object Egg 423 non-null object Mushroom 423 non-null object Bacon 423 non-null object Sushi 423 non-null object Avocado 423 non-null object Corn 423 non-null object Zucchini 423 non-null object Great 423 non-null bool dtypes: bool(1), float64(1), object(59) memory usage: 198.8+ KB
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Next, do a **time-based split:** - Train on reviews from 2016 & earlier. - Validate on 2017. - Test on 2018 & later.
df['Date'] = pd.to_datetime(df['Date']) # create a subset of data for anything less than or equal to the year 2016, equal #. to 2017 for validation, and test set to include >= 2018 train = df[df['Date'].dt.year <= 2016] val = df[df['Date'].dt.year == 2017] test = df[df['Date'].dt.year >= 2018] train.shape, val.shape, test.shape
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Begin to choose which features, if any, to exclude. **Would some features "leak" future information?** What happens if we _DON'T_ drop features with leakage?
# Try a shallow decision tree as a fast, first model import category_encoders as ce from sklearn.pipeline import make_pipeline from sklearn.tree import DecisionTreeClassifier target = 'Great' features = train.columns.drop([target, 'Date', 'Data']) X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] pipeline = make_pipeline( ce.OrdinalEncoder(), DecisionTreeClassifier() ) pipeline.fit(X_train, y_train) print('Validation Accuracy', pipeline.score(X_val, y_val))
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Drop the column with "leakage".
target = 'Great' features = train.columns.drop([target, 'Date', 'Data', 'overall']) X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] pipeline = make_pipeline( ce.OrdinalEncoder(), DecisionTreeClassifier() ) pipeline.fit(X_train, y_train) print('Validation Accuracy', pipeline.score(X_val, y_val))
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Choose an appropriate evaluation metric Overview How will you evaluate success for your predictive model? You must choose an appropriate evaluation metric, depending on the context and constraints of your problem. **Classification & regression metrics are different!** - Don't use _regression_ metrics to evaluate _classification_ tasks. - Don't use _classification_ metrics to evaluate _regression_ tasks. [Scikit-learn has lists of popular metrics.](https://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values) Follow Along For classification problems: As a rough rule of thumb, if your majority class frequency is >= 50% and < 70% then you can just use accuracy if you want. Outside that range, accuracy could be misleading — so what evaluation metric will you choose, in addition to or instead of accuracy? For example: - Precision? - Recall? - ROC AUC?
# 1:3 -> 25%, 75% y.value_counts(normalize=True)
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Precision & RecallLet's review Precision & Recall. What do these metrics mean, in scenarios like these?- Predict great burritos- Predict fraudulent transactions- Recommend Spotify songs[Are false positives or false negatives more costly? Can you optimize for dollars?](https://alexgude.com/blog/machine-learning-metrics-interview/)
# High precision -> few false positives. # High recall -> few false negatives. # In lay terms, how would we translate our problem with burritos: #. high precision- 'Great burrito'. If we make a prediction of a great burrito, #. it probably IS a great burrito. # Which metric would you emphasize if you were choosing a burrito place to take your first date to? #. Precision. # Which metric would -> feeling adventurous? # . Recall. # Predict Fraud: # True negative: normal transaction # True positive: we caught fraud! # False Positive: normal transaction that is blocked -> annoyed customer! (low precision) # False Negative: fraudulent transaction that was allowed -> lost money (low recall)
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
ROC AUC Let's also review ROC AUC (Receiver Operating Characteristic, Area Under the Curve). [Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**" ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier's ROC AUC score, you need to **use predicted probabilities, not discrete predictions.** ROC AUC ranges **from 0 to 1.** Higher is better. A naive majority class **baseline** will have an ROC AUC score of **0.5**, regardless of class (im)balance. Scikit-Learn docs - [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.html#receiver-operating-characteristic-roc) - [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html) - [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links - [StatQuest video](https://youtu.be/4jRBRDbJemM) - [Data School article / video](https://www.dataschool.io/roc-curves-and-auc-explained/) - [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
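To make the 0.5 baseline concrete: if every observation receives the same score, the classifier cannot rank positives above negatives and the AUC collapses to 0.5. A quick sketch, assuming `y_val` from the cells above:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# A constant "predicted probability" for every validation burrito.
constant_scores = np.full(len(y_val), 0.5)
print(roc_auc_score(y_val, constant_scores))  # 0.5, regardless of class balance
```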
from sklearn.metrics import roc_auc_score y_pred_proba = pipeline.predict_proba(X_val)[:, -1] roc_auc_score(y_val, y_pred_proba) from sklearn.metrics import roc_curve fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba) (fpr, tpr, thresholds) import matplotlib.pyplot as plt plt.scatter(fpr, tpr) plt.plot(fpr, tpr) plt.title('ROC curve') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate')
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Imbalanced classes Do you have highly imbalanced classes? If so, you can try ideas from [Learning from Imbalanced Classes](https://www.svds.com/tbt-learning-imbalanced-classes/): - "Adjust the class weight (misclassification costs)" — most scikit-learn classifiers have a `class_weight` parameter. - "Adjust the decision threshold" — we did this last module. Read [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415). - "Oversample the minority class, undersample the majority class, or synthesize new minority classes" — try the [imbalanced-learn](https://github.com/scikit-learn-contrib/imbalanced-learn) library as a stretch goal. A sketch of the first idea follows.
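A minimal sketch of class weighting, reusing the encoder-plus-tree pipeline pattern from earlier (swap in your own estimator and data; the fit/score lines are left commented out):

```python
import category_encoders as ce
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# class_weight='balanced' re-weights the loss so errors on the minority class
# count more, without changing the data itself.
weighted_pipeline = make_pipeline(
    ce.OrdinalEncoder(),
    DecisionTreeClassifier(class_weight='balanced')
)
# weighted_pipeline.fit(X_train, y_train)
# print(weighted_pipeline.score(X_val, y_val))
```

BONUS: Regression example 🏘️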
# Read our NYC apartment rental listing dataset df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Choose your target Which column in your tabular dataset will you predict?
y = df['price']
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
How is your target distributed? For a regression problem, determine: Is the target right-skewed?
# Yes, the target is right-skewed import seaborn as sns sns.distplot(y); y.describe()
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Are some observations outliers? Will you exclude them?
# Yes! There are outliers # Some prices are so high or low it doesn't really make sense. # Some locations aren't even in New York City # Remove the most extreme 1% prices, # the most extreme .1% latitudes, & # the most extreme .1% longitudes import numpy as np df = df[(df['price'] >= np.percentile(df['price'], 0.5)) & (df['price'] <= np.percentile(df['price'], 99.5)) & (df['latitude'] >= np.percentile(df['latitude'], 0.05)) & (df['latitude'] < np.percentile(df['latitude'], 99.95)) & (df['longitude'] >= np.percentile(df['longitude'], 0.05)) & (df['longitude'] <= np.percentile(df['longitude'], 99.95))] # The distribution has improved, but is still right-skewed y = df['price'] sns.distplot(y); y.describe()
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Log-Transform If the target is right-skewed, you may want to "log transform" the target. > Transforming the target variable (using the mathematical log function) into a tighter, more uniform space makes life easier for any [regression] model. >> The only problem is that, while easy to execute, understanding why taking the log of the target variable works and how it affects the training/testing process is intellectually challenging. You can skip this section for now, if you like, but just remember that this technique exists and check back here if needed in the future. >> Optimally, the distribution of prices would be a narrow "bell curve" distribution without a tail. This would make predictions based upon average prices more accurate. We need a mathematical operation that transforms the widely-distributed target prices into a new space. The "price in dollars space" has a long right tail because of outliers and we want to squeeze that space into a new space that is normally distributed. More specifically, we need to shrink large values a lot and smaller values a little. That magic operation is called the logarithm or log for short. >> To make actual predictions, we have to take the exp of model predictions to get prices in dollars instead of log dollars. >> — Terence Parr & Jeremy Howard, [The Mechanics of Machine Learning, Chapter 5.5](https://mlbook.explained.ai/prep.html#logtarget) [Numpy has exponents and logarithms](https://docs.scipy.org/doc/numpy/reference/routines.math.html#exponents-and-logarithms). Your Python code could look like this:
```python
import numpy as np
y_train_log = np.log1p(y_train)
model.fit(X_train, y_train_log)
y_pred_log = model.predict(X_val)
y_pred = np.expm1(y_pred_log)
print(mean_absolute_error(y_val, y_pred))
```
import numpy as np y_log = np.log1p(y) sns.distplot(y_log) sns.distplot(y) plt.title('Original target, in the unit of US dollars'); y_log = np.log1p(y) sns.distplot(y_log) plt.title('Log-transformed target, in log-dollars'); y_untransformed = np.expm1(y_log) sns.distplot(y_untransformed) plt.title('Back to the original units');
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
An RFSoC Spectrum Analyzer Dashboard with Voila ---- Please use Jupyter Labs http://board_ip_address/lab for this notebook. The RFSoC Spectrum Analyzer is an open source tool developed by the [University of Strathclyde](https://github.com/strath-sdr/rfsoc_sam). This notebook is specifically for Voila dashboards. If you would like to see an overview of the Spectrum Analyser, see this [notebook](rfsoc_spectrum_analysis.ipynb) instead. Table of Contents * [Introduction](#introduction) * [Running this Demonstration](#running-this-demonstration) * [The Voila Procedure](#the-voila-procedure) * [Import Libraries](#import-libraries) * [Initialise Overlay](#initialise-overlay) * [Dashboard Display](#dashboard-display) * [Conclusion](#conclusion) References * [Xilinx, Inc, "USP RF Data Converter: LogiCORE IP Product Guide", PG269, v2.3, June 2020](https://www.xilinx.com/support/documentation/ip_documentation/usp_rf_data_converter/v2_3/pg269-rf-data-converter.pdf) Revision History * **v1.0** | 16/02/2021 | Voila spectrum analyzer demonstration * **v1.1** | 22/10/2021 | Voila update notes in 'running this demonstration' section Introduction Your ZCU111 platform and XM500 development board are capable of quad-channel spectral analysis. The RFSoC Spectrum Analyser Module (rfsoc-sam) enables hardware accelerated analysis of signals received from the RF Analogue-to-Digital Converters (RF ADCs). This notebook is specifically for running the Spectrum Analyser using Voila dashboards. Follow the instructions outlined in [Running this Demonstration](#running-this-demonstration) to learn more. Hardware Setup Your ZCU111 development board can host four Spectrum Analyzer Modules. To set up your board for this demonstration, you can connect each channel in loopback as shown in [Figure 1](#fig-1), or connect an antenna to one of the ADC channels. Don't worry if you don't have an antenna. The default loopback configuration will still be very interesting and is connected as follows: * Channel 0: DAC4 (Tile 229 Block 0) to ADC0 (Tile 224 Block 0) * Channel 1: DAC5 (Tile 229 Block 1) to ADC1 (Tile 224 Block 1) * Channel 2: DAC6 (Tile 229 Block 2) to ADC2 (Tile 225 Block 0) * Channel 3: DAC7 (Tile 229 Block 3) to ADC3 (Tile 225 Block 1) There have been several XM500 board revisions, and some contain different silkscreen and labels for the ADCs and DACs. Use the image below for further guidance and pay attention to the associated Tile and Block. Figure 1: ZCU111 and XM500 development board setup in loopback mode. If you have chosen to use an antenna, **do not** attach your antenna to any SMA interfaces labelled DAC. Caution: In this demonstration, we generate tones using the RFSoC development board. Your device should be set up in loopback mode. You should understand that the RFSoC platform can also transmit RF signals wirelessly. Remember that unlicensed wireless transmission of RF signals may be illegal in your geographical location. Radio signals may also interfere with nearby devices, such as pacemakers and emergency radio equipment. Note that it is also illegal to intercept and decode particular RF signals. If you are unsure, please seek professional support. ---- Running this Demonstration Voila can be used to execute the Spectrum Analyzer Module, while ignoring all of the markdown and code cells typically found in a normal Jupyter notebook.
The Voila dashboard can be launched following the instructions below: * Click on the "Open with Voila Gridstack in a new browser tab" button at the top of the screen. After the new tab opens, the kernel will start and the notebook will run. Only the Spectrum Analyzer will be displayed. The initialisation process takes around 1 minute. The Voila Procedure Below are the code cells that will be run when Voila is called. The procedure is fairly straightforward: load the rfsoc-sam library, initialise the overlay, and display the spectrum analyzer. All you have to ensure is that the dashboard has been launched as described above. You do not need to run these code cells individually to create the Voila dashboard. Import Libraries
from rfsoc_sam.overlay import Overlay
_____no_output_____
BSD-3-Clause
boards/ZCU111/rfsoc_sam/notebooks/voila_rfsoc_spectrum_analyzer.ipynb
dnorthcote/rfsoc_sam
Initialise Overlay
sam = Overlay(init_rf_clks = True)
_____no_output_____
BSD-3-Clause
boards/ZCU111/rfsoc_sam/notebooks/voila_rfsoc_spectrum_analyzer.ipynb
dnorthcote/rfsoc_sam
Dashboard Display
sam.spectrum_analyzer_application()
_____no_output_____
BSD-3-Clause
boards/ZCU111/rfsoc_sam/notebooks/voila_rfsoc_spectrum_analyzer.ipynb
dnorthcote/rfsoc_sam
No 1: Multiple Subplots Using the data below, create a visualization like the expected output:
x = np.linspace(2*-np.pi, 2*np.pi, 200) tan = np.tan(x)/10 cos = np.cos(x) sin = np.sin(x)
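The expected-output image is not reproduced in this excerpt, so the exact layout is an assumption; a generic sketch of `plt.subplots` drawing the three curves in separate panels looks like this:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(2*-np.pi, 2*np.pi, 200)

# One row of three panels, one curve per panel (adjust nrows/ncols to match
# the layout shown in the expected output).
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(12, 3))
axes[0].plot(x, np.tan(x)/10)
axes[0].set_title('tan(x)/10')
axes[1].plot(x, np.cos(x))
axes[1].set_title('cos(x)')
axes[2].plot(x, np.sin(x))
axes[2].set_title('sin(x)')
plt.tight_layout()
plt.show()
```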
_____no_output_____
MIT
Task/Week 3 Visualization/Week 3 Day 3.ipynb
mazharrasyad/Data-Science-SanberCode
![image.png](attachment:image.png) No 2: Nested Axis Using the data below, create a visualization like the expected output:
x = np.linspace(2*-np.pi, 2*np.pi, 100) y = np.cos(x) y2 = np.cos(x**2) y3 = np.cos(x**3) y4 = np.cos(x**4) y5 = np.cos(x**5)
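Again, the expected output is not shown in this excerpt, so this is only a generic sketch of one way to nest an axis inside another with `fig.add_axes`:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(2*-np.pi, 2*np.pi, 100)

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(x, np.cos(x), label='cos(x)')

# add_axes takes [left, bottom, width, height] in figure coordinates,
# placing a smaller axis on top of the main one.
inset = fig.add_axes([0.55, 0.55, 0.3, 0.3])
inset.plot(x, np.cos(x**2), color='C1')
inset.set_title('cos(x^2)', fontsize=8)

ax.legend()
plt.show()
```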
_____no_output_____
MIT
Task/Week 3 Visualization/Week 3 Day 3.ipynb
mazharrasyad/Data-Science-SanberCode
Examples I - Inferring $v_{\rm rot}$ By Minimizing the Line Width This Notebook intends to demonstrate the method used in [Teague et al. (2018a)](https://ui.adsabs.harvard.edu/abs/2018ApJ...860L..12T) to infer the rotation velocity as a function of radius in the disk of HD 163296. The following [Notebook](Examples%20-%20II.ipynb) demonstrates the updated method presented in Teague et al. (2018b) which relaxes many of the assumptions used in this Notebook. Methodology For this method to work we make the assumption that the disk is azimuthally symmetric (note that this does not mean that the emission we observe is symmetric, but only that the underlying disk structure is). Therefore, if we were to observe the line profile at different azimuthal angles for a given radius, they should all have the same shape. What will be different is the line centre due to the line-of-sight component of the rotation, $$v_0 = v_{\rm LSR} + v_{\rm rot} \cdot \cos \theta$$ where $v_{\rm rot}$ is the projected rotation velocity (the intrinsic rotation velocity scaled by $\sin i$, with $i$ the inclination of the disk), $\theta$ is the azimuthal angle measured from the red-shifted major axis and $v_{\rm LSR}$ is the systemic velocity. Note that this azimuthal angle is not the same as position angle and must be calculated accounting for the 3D structure of the disk. It has already been shown that, by assuming a rotation velocity (for example from fitting a first moment map), each spectrum can be shifted back to the systemic velocity and then stacked in azimuth to boost the signal-to-noise of these lines (see [Yen et al. (2016)](https://ui.adsabs.harvard.edu/abs/2016ApJ...832..204Y) for a thorough discussion on this and [Teague et al. (2016)](https://ui.adsabs.harvard.edu/abs/2016A&A...592A..49T) and [Matrà et al. (2017)](https://ui.adsabs.harvard.edu/abs/2017ApJ...842....9M) for applications of this). --- ![Example of shifted spectra.](Images/first_moment_and_spectra.png) In the above image, the left hand plot shows the typical Keplerian rotation pattern, taking into account a flared emission surface. Dotted lines show contours of constant azimuthal angle $\theta$ and radius $r$. Three spectra, shown on the right in black, are extracted at the dot locations. By shifting the velocity axis of each of these by $-v_{\rm rot} \cdot \cos \theta$ they are aligned along the systemic velocity, $v_{\rm LSR}$, and able to be stacked (shown in gray). --- However, this only works correctly if we know the rotation velocity. If an incorrect velocity is used to deproject the spectra then the line centres will be scattered around the systemic velocity. When these lines are stacked, the resulting profile will be broader with a smaller amplitude. We can therefore assert that the correct velocity used to deproject the spectra is the one which _minimises the width of the stacked line profile_. One could make a similar argument about the line peak, however with noisy data this is a less strict constraint as this relies on one channel (the one containing the line peak) rather than the entire line profile ([Yen et al. (2018)](www.google.com), who use a similar method, use the signal-to-noise of the stacked line weighted by a Gaussian fit as their quality of fit measure). Python Implementation This approach is relatively simple to code up with Python. We consider the case of very high signal-to-noise data, however it also works well with low signal-to-noise data, as we describe below.
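Before turning to the package itself, here is a schematic NumPy version of the shift-and-stack idea; it is only an illustration of the equation above, not the `eddy` implementation.

```python
import numpy as np

def shift_and_stack(velax, spectra, theta, vrot):
    """Shift each spectrum by -vrot * cos(theta) and average them.

    velax   : (M,) velocity axis relative to the systemic velocity
    spectra : (N, M) array, one spectrum per azimuthal angle
    theta   : (N,) azimuthal angles in radians
    vrot    : trial (projected) rotation velocity
    """
    shifted = [np.interp(velax, velax - vrot * np.cos(t), s)
               for s, t in zip(spectra, theta)]
    return np.mean(shifted, axis=0)

# The correct vrot is the one that gives the narrowest stacked profile:
# stacked = shift_and_stack(velax, spectra, theta, vrot=1500.)
```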
All the functions are part of the `eddy.ensemble` class which will be discussed in more detail below. We start with an annulus of spectra which we have extracted from our data, along with their azimuthal angles and the velocity axis of the observations. We can generate model spectra through the `eddy.modelling` functions. We model an annulus of 20 spectra with a peak brightness temperature of 40K, a linewidth of 350m/s and an RMS noise of 2K. What's returned is an `ensemble` instance which contains all the deprojecting functions.
%matplotlib inline from eddy.annulus import ensemble from eddy.modelling import gaussian_ensemble annulus = gaussian_ensemble(vrot=1500., Tb=40., dV=350., rms=2.0, N=20, plot=True, return_ensemble=True)
_____no_output_____
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
We first want to shift all the points to the systemic velocity (here at 0m/s). To do this we use the `deprojected_spectra()` function which takes the rotation velocity as its only argument. It returns the new velocity of each pixel in the annulus and its value. Let's first deproject with the correct rotation velocity of 1500m/s to check we recover the intrinsic line profile.
velocity, brightness = annulus.deprojected_spectra(1500.) import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.errorbar(velocity, brightness, fmt='.k', ms=4) ax.set_xlim(velocity[0], velocity[-1]) ax.set_xlabel(r'Velocity') ax.set_ylabel(r'Intensity')
_____no_output_____
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
This highlights why this method can achieve such a high precision on determinations of the rotation velocity. Because we shift back all the spectra by a non-quantised amount, we end up sampling the intrinsic profile at a much higher rate (by a factor of the number of beams we have in our annulus). We can compare this with the spectrum which is resampled back down to the original velocity resolution using the `deprojected_spectrum()` function.
fig, ax = plt.subplots() velocity, brightness = annulus.deprojected_spectrum(1500.) ax.errorbar(velocity, brightness, fmt='.k', ms=4) ax.set_xlim(velocity[0], velocity[-1]) ax.set_xlabel(r'Velocity') ax.set_ylabel(r'Intensity')
_____no_output_____
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
Now, if we deproject the spectra with an incorrect velocity, we can see that the stacked spectrum becomes broader. Note also that this is symmetric about the correct velocity, meaning this is a convex problem, which makes minimization much easier.
import numpy as np fig, ax = plt.subplots() for vrot in np.arange(1100, 2100, 200): velocity, brightness = annulus.deprojected_spectrum(vrot) ax.plot(velocity, brightness, label='%d m/s' % vrot) ax.legend(markerfirst=False) ax.set_xlim(-1000, 1000) ax.set_xlabel(r'Velocity') ax.set_ylabel(r'Intensity')
_____no_output_____
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
We can measure the width of the stacked lines by fitting a Gaussian using the `get_deprojected_width()` function.
vrots = np.linspace(1300, 1700, 150) widths = np.array([annulus.get_deprojected_width(vrot) for vrot in vrots]) fig, ax = plt.subplots() ax.plot(vrots, widths, label='Deprojected Widths') ax.axvline(1500., ls=':', color='k', label='Truth') ax.set_xlabel(r'Rotation Velocity (m/s)') ax.set_ylabel(r'Width of Stacked Line (m/s)') ax.legend(markerfirst=False)
_____no_output_____
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
This shows that if we find the rotation velocity which minimizes the width of the stacked line, we should have a pretty good idea of what the rotation velocity is. The `get_vrot_dV()` function packages this all up, using the `bounded` method to search for the minimum width within a range of 0.7 to 1.3 times an initial guess. This guess can be provided (for instance if you have an idea of what the Keplerian rotation should be), otherwise it will try to guess it from the spectra based on the peaks of the spectra which are most shifted.
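Under the hood this is presumably just a bounded scalar minimization of the deprojected line width; a sketch of the same idea with SciPy (an assumption based on the description above, not the package source):

```python
from scipy.optimize import minimize_scalar

vref = 1500.  # initial guess, e.g. the expected Keplerian velocity
result = minimize_scalar(annulus.get_deprojected_width,
                         bounds=(0.7 * vref, 1.3 * vref),
                         method='bounded')
print(result.x)  # should land close to the value returned by get_vrot_dV()
```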
vfit = annulus.get_vrot_dV() print("The linewidth is minimized for a rotation velocity of %.1f m/s" % vfit)
The linewidth is minimized for a rotation velocity of 1502.1 m/s
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
The power of this method is also that the fitting is performed on the stacked spectrum, meaning that in the noisy regions at the edges of the disk we stack over so many independent beams that we still get a reasonable line profile to fit. Let's try with a signal-to-noise ratio of 4.
annulus = gaussian_ensemble(vrot=1500., Tb=40., dV=350., rms=10.0, N=20, plot=True, return_ensemble=True) fig, ax = plt.subplots() velocity, brightness = annulus.deprojected_spectrum(1500.) ax.step(velocity, brightness, color='k', where='mid', label='Shifted') ax.legend(markerfirst=False) ax.set_xlim(velocity[0], velocity[-1]) ax.set_xlabel(r'Velocity') ax.set_ylabel(r'Intensity') vfit = annulus.get_vrot_dV() print("The linewidth is minimized for a rotation velocity of %.1f m/s" % vfit)
The linewidth is minimized for a rotation velocity of 1491.9 m/s
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
The final advantage of this method is that it is exceptionally quick. The convex nature of the problem means that a minimum width is readily found and so it can be applied very quickly, even with a large number of spectra. With 200 individual beams:
annulus = gaussian_ensemble(vrot=1500., Tb=40., dV=350., rms=10.0, N=200, plot=True, return_ensemble=True) %timeit annulus.get_vrot_dV()
10 loops, best of 3: 102 ms per loop
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
Data Extraction and Load from the FRED API
## Import packages for the process... import requests import pickle import os import mysql.connector import time
_____no_output_____
MIT
Fred API.ipynb
Anandkarthick/API_Stuff
Using pickle to wrap the database credentials and Fred API keys
if not os.path.exists('fred_api_secret.pk1'): fred_key = {} fred_key['api_key'] = '' with open ('fred_api_secret.pk1','wb') as f: pickle.dump(fred_key,f) else: fred_key=pickle.load(open('fred_api_secret.pk1','rb')) if not os.path.exists('fred_sql.pk1'): fred_sql = {} fred_sql['user'] = '' fred_sql['password'] = '' fred_sql['database'] = '' with open ('fred_sql.pk1','wb') as f: pickle.dump(fred_sql,f) else: fred_sql=pickle.load(open('fred_sql.pk1','rb'))
_____no_output_____
MIT
Fred API.ipynb
Anandkarthick/API_Stuff
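With the API key loaded, a request to FRED might look like the sketch below. The endpoint and parameters follow the public FRED REST API, but the series ID (`GDP`) and the JSON output choice are illustrative assumptions, not something taken from this notebook.

```python
# Illustrative sketch only: fetch observations for one series from FRED.
# Swap the placeholder series_id for whichever series you need.
fred_url = 'https://api.stlouisfed.org/fred/series/observations'
params = {
    'series_id': 'GDP',              # placeholder series
    'api_key': fred_key['api_key'],
    'file_type': 'json',
}
response = requests.get(fred_url, params=params)
observations = response.json().get('observations', [])
print(len(observations))
```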