| column | type | length (min–max) |
| --- | --- | --- |
| markdown | string | 0–1.02M |
| code | string | 0–832k |
| output | string | 0–1.02M |
| license | string | 3–36 |
| path | string | 6–265 |
| repo_name | string | 6–127 |
**Equation 4-10: Lasso regression cost function**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right|$ Lasso regression
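As a quick numerical illustration of Equation 4-10 (the array values below are made up for the example, and the bias term $\theta_0$ is conventionally left out of the penalty):

```python
import numpy as np

# Hypothetical parameter vector and tiny design matrix, just to evaluate Eq. 4-10.
theta = np.array([4.0, 3.0, -2.0])            # theta_0 (bias), theta_1, theta_2
X_b = np.array([[1., 1., 2.], [1., 3., 4.]])  # design matrix with a bias column
y = np.array([9.0, 5.0])
alpha = 0.1

mse = np.mean((X_b.dot(theta) - y) ** 2)        # MSE(theta)
l1_penalty = alpha * np.sum(np.abs(theta[1:]))  # alpha * sum(|theta_i|), bias excluded
J = mse + l1_penalty
print(J)  # 18.0 + 0.5 = 18.5
```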
from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]])
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
μ—˜λΌμŠ€ν‹±λ„· **식 4-12: μ—˜λΌμŠ€ν‹±λ„· λΉ„μš© ν•¨μˆ˜**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + r \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right| + \dfrac{1 - r}{2} \alpha \sum\limits_{i=1}^{n}{{\theta_i}^2}$
from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]])
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
Early Stopping
np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # μ€‘μ§€λœ κ³³μ—μ„œ λ‹€μ‹œ μ‹œμž‘ν•©λ‹ˆλ‹€ y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg)
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
Let's plot the graph:
sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") 
ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show()
Saving figure: lasso_vs_ridge_plot
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
Logistic Regression Decision Boundaries
t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show()
Saving figure: logistic_function_plot
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
**Equation 4-16: Cost function of a single training instance**$c(\boldsymbol{\theta}) =\begin{cases} -\log(\hat{p}) & \text{if } y = 1, \\ -\log(1 - \hat{p}) & \text{if } y = 0.\end{cases}$**Equation 4-17: Logistic regression cost function (log loss)**$J(\boldsymbol{\theta}) = -\dfrac{1}{m} \sum\limits_{i=1}^{m}{\left[ y^{(i)} \log\left(\hat{p}^{(i)}\right) + (1 - y^{(i)}) \log\left(1 - \hat{p}^{(i)}\right)\right]}$**Equation 4-18: Partial derivatives of the logistic cost function**$\dfrac{\partial}{\partial \theta_j} J(\boldsymbol{\theta}) = \dfrac{1}{m}\sum\limits_{i=1}^{m}\left(\sigma\left(\boldsymbol{\theta}^T \mathbf{x}^{(i)}\right) - y^{(i)}\right)\, x_j^{(i)}$
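A minimal NumPy sketch evaluating Equations 4-17 and 4-18 on made-up values (all names and numbers below are illustrative):

```python
import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

X_b = np.array([[1., 0.5], [1., 2.0], [1., 3.5]])  # bias column + one feature
y = np.array([0., 0., 1.])
theta = np.array([-4.0, 2.0])

p_hat = sigmoid(X_b.dot(theta))                                      # predicted probabilities
log_loss = -np.mean(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat)) # Equation 4-17
gradient = X_b.T.dot(p_hat - y) / len(y)                             # Equation 4-18
print(log_loss, gradient)
```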
from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(int) # 1 if Iris virginica, else 0
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
**Note**: to get the same results even if future versions change the default, we explicitly set `solver="lbfgs"`, which is the default in Scikit-Learn 0.22.
from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica")
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
The figure in the book is prettied up a little more:
X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary[0], 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary[0], 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]])
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
Softmax Regression
from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show()
Saving figure: logistic_regression_contour_plot
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
**Equation 4-20: Softmax function**$\hat{p}_k = \sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$**Equation 4-22: Cross entropy cost function**$J(\boldsymbol{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$**Equation 4-23: Cross entropy gradient vector for class k**$\nabla_{\boldsymbol{\theta}^{(k)}} \, J(\boldsymbol{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$
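A small worked example of Equation 4-20, computing the softmax of one hypothetical score vector:

```python
import numpy as np

s = np.array([2.0, 1.0, 0.1])          # class scores s_k(x) for K = 3 classes
p_hat = np.exp(s) / np.sum(np.exp(s))  # estimated class probabilities
print(p_hat, p_hat.sum())              # ~[0.659, 0.242, 0.099], sums to 1.0
```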
X = iris["data"][:, (2, 3)] # κ½ƒμžŽ 길이, κ½ƒμžŽ λ„ˆλΉ„ y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]])
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
Exercise solutions 1. to 11. See Appendix A. 12. Implement Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn). First, let's load the data. We will reuse the Iris dataset we used earlier.
X = iris["data"][:, (2, 3)] # κ½ƒμžŽ 길이, κ½ƒμžŽ 넓이 y = iris["target"]
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
Let's add the bias term to every instance ($x_0 = 1$):
X_with_bias = np.c_[np.ones([len(X), 1]), X]
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
And let's set the random seed so the results stay reproducible:
np.random.seed(2042)
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
The easiest way to split the dataset into a training set, a validation set, and a test set would be to use Scikit-Learn's `train_test_split()` function. But the point of this exercise is to understand the algorithm by implementing it ourselves, so we will split the data manually, as follows:
test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]]
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
The targets are class indices (0, 1 and 2), but to train a Softmax Regression model we need the target class probabilities. For each instance, the probability of the target class is 1 and the probability of every other class is 0 (in other words, the class probabilities of a given instance form a one-hot vector). Let's write a small function to convert class indices into one-hot vectors:
def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
Let's test this function on just 10 instances:
y_train[:10] to_one_hot(y_train[:10])
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
잘 λ˜λ„€μš”, 이제 ν›ˆλ ¨ μ„ΈνŠΈμ™€ ν…ŒμŠ€νŠΈ μ„ΈνŠΈμ˜ 타깃 클래슀 ν™•λ₯ μ„ 담은 행렬을 λ§Œλ“€κ² μŠ΅λ‹ˆλ‹€:
Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test)
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
Now let's implement the softmax function. Recall its definition:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$
def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
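As an optional aside, the `softmax()` above exponentiates the raw logits directly, which can overflow for very large scores. A common numerically stable variant (a sketch, not used in the exercise itself) subtracts the row-wise maximum first, which leaves the result unchanged:

```python
import numpy as np

def softmax_stable(logits):
    shifted = logits - np.max(logits, axis=1, keepdims=True)  # largest entry becomes 0
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)

print(softmax_stable(np.array([[1000.0, 1001.0, 1002.0]])))  # no overflow warning
```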
ν›ˆλ ¨μ„ μœ„ν•œ μ€€λΉ„λ₯Ό 거의 λ§ˆμ³€μŠ΅λ‹ˆλ‹€. μž…λ ₯κ³Ό 좜λ ₯의 개수λ₯Ό μ •μ˜ν•©λ‹ˆλ‹€:
n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes)
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
Now for the trickier part: training! Theoretically it's simple: just translate the math equations into Python code. In practice, though, it can be quite tricky: in particular, it's easy to mix up the order of the terms or of the indices, and you can even end up with code that looks like it works but does not compute the right thing. When you are unsure, write down the shape of each term in the equation and make sure the corresponding terms in your code have the same shape. It also helps to evaluate and print each term independently. Strictly speaking you don't have to do any of this, since Scikit-Learn implements it well already, but building it yourself helps you understand how it works. The equation to implement is the cost function:$J(\boldsymbol{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ and the gradient equation:$\nabla_{\boldsymbol{\theta}^{(k)}} \, J(\boldsymbol{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ cannot be computed when $\hat{p}_k^{(i)} = 0$. To avoid `nan` values we will add a tiny value $\epsilon$ inside the $\log$.
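Following the shape-checking advice above, here is a tiny sanity check on toy arrays before the real training loop (the dimensions mirror the exercise: m instances, n_inputs features including the bias, n_outputs classes; all names and values are illustrative):

```python
import numpy as np

m, n_inputs, n_outputs = 5, 3, 3
X_toy = np.random.randn(m, n_inputs)
Y_toy = np.eye(n_outputs)[np.random.randint(0, n_outputs, m)]  # one-hot targets, (m, n_outputs)
Theta_toy = np.random.randn(n_inputs, n_outputs)

logits = X_toy.dot(Theta_toy)                     # (m, n_outputs)
exps = np.exp(logits)
Y_proba = exps / exps.sum(axis=1, keepdims=True)  # (m, n_outputs)
gradients = 1/m * X_toy.T.dot(Y_proba - Y_toy)    # (n_inputs, n_outputs)
assert gradients.shape == Theta_toy.shape         # same shape as the parameter matrix
```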
eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients
0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
λ°”λ‘œ μ΄κ²λ‹ˆλ‹€! μ†Œν”„νŠΈλ§₯슀 λͺ¨λΈμ„ ν›ˆλ ¨μ‹œμΌ°μŠ΅λ‹ˆλ‹€. λͺ¨λΈ νŒŒλΌλ―Έν„°λ₯Ό 확인해 λ³΄κ² μŠ΅λ‹ˆλ‹€:
Theta
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
Let's check the predictions and the accuracy on the validation set:
logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
μ™€μš°, 이 λͺ¨λΈμ΄ 맀우 잘 μž‘λ™ν•˜λŠ” 것 κ°™μŠ΅λ‹ˆλ‹€. μ—°μŠ΅μ„ μœ„ν•΄μ„œ $\ell_2$ 규제λ₯Ό 쑰금 μΆ”κ°€ν•΄ λ³΄κ² μŠ΅λ‹ˆλ‹€. λ‹€μŒ μ½”λ“œλŠ” μœ„μ™€ 거의 λ™μΌν•˜μ§€λ§Œ 손싀에 $\ell_2$ νŽ˜λ„ν‹°κ°€ μΆ”κ°€λ˜μ—ˆκ³  κ·Έλž˜λ””μ–ΈνŠΈμ—λ„ 항이 μΆ”κ°€λ˜μ—ˆμŠ΅λ‹ˆλ‹€(`Theta`의 첫 번째 μ›μ†ŒλŠ” 편ν–₯μ΄λ―€λ‘œ κ·œμ œν•˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€). ν•™μŠ΅λ₯  `eta`도 μ¦κ°€μ‹œμΌœ λ³΄κ² μŠ΅λ‹ˆλ‹€.
eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 ν•˜μ΄νΌνŒŒλΌλ―Έν„° Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients
0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.4946891059460321 2000 0.4912968418075477 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.489035124439786 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.48884031207388184
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
μΆ”κ°€λœ $\ell_2$ νŽ˜λ„ν‹° λ•Œλ¬Έμ— 이전보닀 손싀이 쑰금 μ»€λ³΄μ΄μ§€λ§Œ 더 잘 μž‘λ™ν•˜λŠ” λͺ¨λΈμ΄ λ˜μ—ˆμ„κΉŒμš”? 확인해 보죠:
logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
Wow, perfect accuracy! We may just be lucky with this validation set, but the model clearly works well. Now let's add early stopping. To do this we just need to measure the loss on the validation set at every iteration and stop as soon as the error starts growing.
eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 ν•˜μ΄νΌνŒŒλΌλ―Έν„° best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "μ‘°κΈ° μ’…λ£Œ!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
Still perfect, but faster. Now let's plot the model's predictions over the whole dataset:
x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show()
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
Finally, let's measure the model's accuracy on the test set:
logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score
_____no_output_____
Apache-2.0
04_training_linear_models.ipynb
probationer070/handson-ml2
Welcome to Python Fundamentals. In this module, we are going to establish or review our skills in Python programming. In this notebook we are going to cover: * Variables and Data Types * Operations * Input and Output Operations * Logic Control * Iterables * Functions. Variables and Data Types: A variable is a named memory location where a value is stored; it is created as soon as you assign a value to it. A data type is the classification of a data element: it denotes what kind of value is stored and which operations can be performed on it. In Python, data types are technically classes, and variables are objects (instances) of these classes.
x = 0 e,l,a = 2,1,4 a type(x) h = 1.0 type(h) x = float(x) type (x) a,d,u = "0",'1','valorant' type(a) a_int = int(a) a_int
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Operations: Arithmetic. Mathematical operations such as addition, subtraction, multiplication, division, floor division, exponentiation, and modulo are performed using arithmetic operators.
s,k,y,e = 2.0,-0.5,0,-32
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Addition. This operation, denoted by (+), adds the values on either side of the operator.
### Addition a = s+k a
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Subtraction. Represented by (-), this operation subtracts the right-hand operand from the left-hand operand.
### Subtraction u = k-e u
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Multiplication. Using the symbol (*), this operation multiplies the values on both sides of the operator.
### Multiplication m = s*e m
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Division. The division operator in Python is (/). It returns the quotient when the first operand is divided by the second.
### Divison d = y/s d
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Floor Division. Denoted by the operator (//), floor division returns the floor of the quotient when the first operand is divided by the second.
### Floor Division fd = s//k fd
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Exponentiation. The (**) operator raises the first operand to the power of the second operand.
### Exponentiation ex = s**k ex
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Modulo. The modulo (or modulus) operator (%) divides the left-hand operand by the right-hand operand and returns the remainder.
### Modulo mod = e%s mod
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Assignment Operations. Assignment operators are used in Python to assign values to variables. = The equal operator's role is to assign the value of the expression on the right side to the operand on the left side.
w,x,y,z = 0,100,2,2
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
+= "Add and Assign" is designed to add both sides of the operand and the sum will be placed on the left operand.
w += s w
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
-="Subtract And" conducts subtraction to right operand from left operand. The difference would then be assigned to the left operand.
x -= e x
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
*="Multiply And" operates multiplication to both operands and then the product would be assigned to the left operand.
y *= 2 y
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
**="Exponent And" calculates the exponent value of the operands which would then be assigned also to the left operand.
z **= 2 z
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Comparators. Values are compared using comparison operators. Depending on the condition, the result is either True or False.
trial_1, trial_2, trial_3 = 1, 2.0, "1" true_val = 1.0 ## Equality trial_1 == true_val ## Non-equality trial_2 != true_val ## Inequality t1 = trial_1 > trial_2 t2 = trial_1 < trial_2/2 t3 = trial_1 >= trial_1/2 t4 = trial_1 <= trial_2 t4
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Logical. The logical operators are: "AND" - True if both operands are true; "OR" - True if either operand is true; "NOT" - True if the operand is false.
trial_1 == true_val trial_1 is true_val trial_1 is not true_val p,q = True, True conj = p and q conj p,q = False, False disj = p or q disj p,q = True, False e = not (p and q) e p,q = True, False xor = (not p and q) or (p and not q) xor
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
I/O. Input and output operations let you read values into your program and print values back out.
print("Welcome to my world") cnt = 1 string = "Welcome to my world" print(string,", your current run count is:", cnt) cnt += 1 print(f"{string}, your current count is: {cnt}") sem_grade = 93.0124 name = "Viper" print("Wazzup, {}!, your semestral grade is: {}".format(name, sem_grade)) w_pg, w_mg, w_fg = 0.3, 0.3, 0.4 print("Ang weight ng iyong semestral grades are: \ \n\t{:.2%} for Prelims\ \n\t{:.2%} for Midterms, and\ \n\t{:.2%} for Finals.".format(w_pg, w_mg, w_fg)) x = input("enter a number, bhie: ") x name = input("Hoy! Ano pangalan mo boi?: ") pg = input("Enter prelim grade: ") mg = input("Enter midterm grade: ") fg = input("Enter finals grade: ") sem_grade = None print("Hello {}, ang iyong semestral grade ay: {}".format(name, sem_grade))
Hoy! Ano pangalan mo boi?: Rue Enter prelim grade: 90 Enter midterm grade: 98 Enter finals grade: 96 Hello Rue, ang iyong semestral grade ay: None
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Looping Statements: While. The while loop repeatedly executes a set of statements for as long as a condition remains true.
## while loops i, j = 0, 14 while(i<=j): print(f"{i}\t|\t{j}") i+=1
0 | 14 1 | 14 2 | 14 3 | 14 4 | 14 5 | 14 6 | 14 7 | 14 8 | 14 9 | 14 10 | 14 11 | 14 12 | 14 13 | 14 14 | 14
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
For. The for loop is used for sequential traversal: it iterates over an iterable (such as a list) or over a range of values.
# for(int i=0; i<10; i++){ # printf(i) # } i=0 for i in range(15): print(i) playlist = ["Lagi", "Tenerife Sea", "Is There Someone Else"] print("Favorite Songs:\n") for song in playlist: print(song)
Favorite Songs: Lagi Tenerife Sea Is There Someone Else
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Flow Control. Flow control uses conditional statements to decide which parts of a program are executed, allowing the program to behave differently until specific conditions are met. Condition Statements: if, elif, and else statements are used in flow control to perform conditional operations.
numeral1, numeral2 = 14, 12 if(numeral1 == numeral2): print("Yie") elif(numeral1>numeral2): print("Uwu") else: print("Whoa") # print("Hep hep")
Uwu
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Functions. Functions are blocks of code that run only when they are called. In Python, the "def" keyword is used to define a function.
# void DeleteUser(int userid){ # delete(userid); # } def delete_user (userid): print("Successfully deleted user: {}".format(userid)) def delete_all_users (): print("Valorant Pro-Player") userid = "Xeyah" delete_user("Xeyah") delete_all_users() def add(addend1, addend2): return addend1 + addend2 def power_of_base2(exponent): return 2**exponent addend1, addend2 = 36, 22 add(addend1, addend2) exponent = 4 power_of_base2(exponent)
_____no_output_____
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Grade Calculator
print() name = input('\tOla! What is your name? '); course = input('\tWhat is your course? '); prelim = float(input('\tGive Prelim Grade : ')); midterm = float(input('\tGive Midterm Grade : ')); final = float(input('\tGive Final Grade : ')); grade= (prelim) + (midterm) + (final); avg= grade/3; print(); print("\t===== DISPLAYING RESULTS ====="); print(); print("\tHi,", name, "from the course", course, "!"); print(); if avg > 70: print("\tYour grade is \U0001F600"); elif avg == 70: print("\tYour grade is \U0001F606"); elif avg < 70: print("\tYour grade is \U0001F62D"); print();
Ola! What is your name? Astra What is your course? Bachelor of Science in Chemical Engineering Give Prelim Grade : 96 Give Midterm Grade : 90 Give Final Grade : 94 ===== DISPLAYING RESULTS ===== Hi, Astra from the course Bachelor of Science in Chemical Engineering ! Your grade is πŸ˜€
Apache-2.0
BERNARDO_Activity_1_Python_Fundamentals.ipynb
Xeyah0214/Linear-Algebra-2nd-Sem
Spark ML
from pyspark.sql import SparkSession from pyspark.ml.linalg import Vectors from pyspark.ml.feature import VectorAssembler from pyspark.ml.clustering import KMeans from pyspark.ml.regression import LinearRegression spark = SparkSession.builder.getOrCreate() data_path = '/home/lorenzo/Desktop/utilization.csv' df = spark.read.option('header', 'False') \ .option('inferSchema', 'True') \ .csv(data_path) df = df.withColumnRenamed("_c0", "event_datetime") \ .withColumnRenamed ("_c1", "server_id") \ .withColumnRenamed("_c2", "cpu_utilization") \ .withColumnRenamed("_c3", "free_memory") \ .withColumnRenamed("_c4", "session_count") df.createOrReplaceTempView('utilization')
_____no_output_____
Apache-2.0
4_spark_ml/1_spark_ml_intro.ipynb
Lorenzo-Giardi/spark-repo
Vectorize data
va = VectorAssembler(inputCols=['cpu_utilization', 'free_memory', 'session_count'], outputCol = 'features') vcluster_df = va.transform(df) vcluster_df.show(5)
+-------------------+---------+---------------+-----------+-------------+----------------+ | event_datetime|server_id|cpu_utilization|free_memory|session_count| features| +-------------------+---------+---------------+-----------+-------------+----------------+ |03/05/2019 08:06:14| 100| 0.57| 0.51| 47|[0.57,0.51,47.0]| |03/05/2019 08:11:14| 100| 0.47| 0.62| 43|[0.47,0.62,43.0]| |03/05/2019 08:16:14| 100| 0.56| 0.57| 62|[0.56,0.57,62.0]| |03/05/2019 08:21:14| 100| 0.57| 0.56| 50|[0.57,0.56,50.0]| |03/05/2019 08:26:14| 100| 0.35| 0.46| 43|[0.35,0.46,43.0]| +-------------------+---------+---------------+-----------+-------------+----------------+ only showing top 5 rows
Apache-2.0
4_spark_ml/1_spark_ml_intro.ipynb
Lorenzo-Giardi/spark-repo
K-Means clustering. The Spark ML implementation of K-Means expects to find a **features** column in the dataset provided to the fit function. This column should be the result of a VectorAssembler transformation.
km = KMeans().setK(3).setSeed(1) km_output = km.fit(vcluster_df) km_output.clusterCenters()
_____no_output_____
Apache-2.0
4_spark_ml/1_spark_ml_intro.ipynb
Lorenzo-Giardi/spark-repo
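As a brief follow-up sketch (assuming the `km_output` model and `vcluster_df` DataFrame from the cell above), the fitted model's `transform()` method appends a `prediction` column containing the cluster id assigned to each row:

```python
# Assign each row to its nearest cluster centre and inspect a few assignments.
assignments_df = km_output.transform(vcluster_df)
assignments_df.select('features', 'prediction').show(5)
```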
Linear Regression
va = VectorAssembler(inputCols=['cpu_utilization', 'free_memory'], outputCol = 'features') reg_df = va.transform(df) reg_df.show(5) lr = LinearRegression(featuresCol='features', labelCol='session_count') lr_output = lr.fit(reg_df) lr_output.coefficients lr_output.intercept lr_output.summary.r2 lr_output.summary.rootMeanSquaredError
_____no_output_____
Apache-2.0
4_spark_ml/1_spark_ml_intro.ipynb
Lorenzo-Giardi/spark-repo
Load libraries
import multitaper.mtspec as mtspec import multitaper.utils as utils import multitaper.mtcross as mtcross import numpy as np import matplotlib.pyplot as plt import scipy.signal as signal
_____no_output_____
MIT
multitaper/examples/.ipynb_checkpoints/mtspec_src-checkpoint.ipynb
paudetseis/multitaper
Load Mesetas network data
data = utils.get_data('mesetas_src.dat') dt = 1/100. npts,ntr = np.shape(data) ptime = np.ones(ntr) ptime[0:ntr+1:4] = 14. ptime[1:ntr+1:4] = 24. ptime[2:ntr+1:4] = 5.5 ptime[3:ntr+1:4] = 20.5 ptime[11*4-1:11*4+4] = ptime[11*4-1:11*4+4]-2. ptime[20] = 13.4 print('npts, # of traces, dt ',npts, ntr, dt) # Select traces to work on ista = 0 itr1 = 0+ista # Mainshock itr2 = 16+ista itr3 = 40+ista itr4 = 68+ista # 4 68 # Filter parameters for STF fmin = 0.2 fmax = 3. fnyq = 0.5/dt wn = [fmin/fnyq,fmax/fnyq] b, a = signal.butter(4, wn,'bandpass') # Extract traces from data matrix z1 = data[:,itr1] z2 = data[:,itr2] z3 = data[:,itr3] z4 = data[:,itr4] # MTSPEC parameters nw = 4.0 kspec = 6 # P-wave window length wlen = 10.0 # window length, seconds nlen = int(round(wlen/dt)) # Arrival times (-2 sec pre-P) t_p1 = 12.2 t_p2 = 11.9 t_p3 = 12.1 t_p4 = 12.4 # Select to samples for each trace ib1 = int(round((t_p1)/dt)) ib2 = int(round((t_p2)/dt)) ib3 = int(round((t_p3)/dt)) ib4 = int(round((t_p4)/dt)) # 12.6 12.4 ib5 = ib3 - nlen ib6 = ib4 - nlen ie1 = ib1 + nlen ie2 = ib2 + nlen ie3 = ib3 + nlen ie4 = ib4 + nlen ie5 = ib5 + nlen ie6 = ib6 + nlen # Select window around P-wave y1 = z1[ib1:ie1] y2 = z2[ib2:ie2] y3 = z3[ib3:ie3] y4 = z4[ib4:ie4] y5 = z3[ib5:ie5] y6 = z4[ib6:ie6] # Get MTSPEC class Py1 = mtspec.MTSpec(y1,nw,kspec,dt) Py2 = mtspec.MTSpec(y2,nw,kspec,dt) Py3 = mtspec.MTSpec(y3,nw,kspec,dt) Py4 = mtspec.MTSpec(y4,nw,kspec,dt) Py5 = mtspec.MTSpec(y5,nw,kspec,dt) Py6 = mtspec.MTSpec(y6,nw,kspec,dt) Pspec = [Py1, Py2, Py3, Py4, Py5, Py6] # Get positive frequencies freq ,spec1 = Py1.rspec() freq ,spec2 = Py2.rspec() freq ,spec3 = Py3.rspec() freq ,spec4 = Py4.rspec() freq ,spec5 = Py5.rspec() freq ,spec6 = Py6.rspec() # Get spectral ratio sratio1 = np.sqrt(spec1/spec3) sratio2 = np.sqrt(spec2/spec4) P13 = mtcross.MTCross(Py1,Py3,wl=0.001) xcorr, dcohe, dconv = P13.mt_corr() dconv13 = signal.filtfilt(b, a, dconv[:,0]) P24 = mtcross.MTCross(Py2,Py4,wl=0.001) xcorr, dcohe, dconv2 = P24.mt_corr() dconv24 = signal.filtfilt(b, a, dconv2[:,0]) nstf = (len(dconv24)-1)/2 tstf = np.arange(-nstf,nstf+1)*dt
_____no_output_____
MIT
multitaper/examples/.ipynb_checkpoints/mtspec_src-checkpoint.ipynb
paudetseis/multitaper
Display Figures
fig = plt.figure(1,figsize=(6,8)) t = np.arange(len(z1))*dt ax = fig.add_subplot(2,2,1) ax.plot(t,z1/np.max(z1)+4.7,'k') ax.plot(t,z3/(2*np.max(z3))+3.5,color="0.75") ax.plot(t,z2/np.max(z1)+1.2,color='0.25') ax.plot(t,z4/(2*np.max(z4)),color="0.75") ax.set_xlabel('Time (s)') ax.set_ylabel('Amplitude (a.u.)') ax.set_yticks([]) ax.text(65,5.2,'M6.0 2019/12/24',color='0.5') ax.text(65,3.8,'M4.0 EGF',color='0.5') ax.text(65,1.7,'M5.8 2019/12/24') ax.text(65,0.3,'M4.1 EGF',color='0.5') ax.plot([t_p1,t_p1+wlen],[5.2,5.2],color='0.5',linewidth=2.0) ax.plot([t_p3,t_p3+wlen],[3.8,3.8],color='0.5',linewidth=2.0) ax.plot([t_p2,t_p2+wlen],[1.7,1.7],color='0.5',linewidth=2.0) ax.plot([t_p4,t_p4+wlen],[0.3,0.3],color='0.5',linewidth=2.0) ax.plot([t_p3,t_p3-wlen],[3.3,3.3],'--',color='0.7',linewidth=2.0) ax.plot([t_p4,t_p4-wlen],[-0.2,-0.2],'--',color='0.7',linewidth=2.0) box = ax.get_position() box.x1 = 0.89999 ax.set_position(box) ax = fig.add_subplot(2,2,3) ax.loglog(freq,np.sqrt(spec1*wlen),'k') ax.loglog(freq,np.sqrt(spec3*wlen),color='0.75') ax.loglog(freq,np.sqrt(spec5*wlen),'--',color='0.75') ax.grid() ax.set_ylim(1e-1,1e7) ax.set_xlabel('Frequency (Hz)') ax.set_ylabel('Amplitude Spectrum') ax2 = fig.add_subplot(2,2,4) ax2.loglog(freq,np.sqrt(spec2*wlen),color='0.25') ax2.loglog(freq,np.sqrt(spec4*wlen),color='0.75') ax2.loglog(freq,np.sqrt(spec6*wlen),'--',color='0.75') ax2.grid() ax2.set_ylim(1e-1,1e7) ax2.set_xlabel('Frequency (Hz)') ax2.set_ylabel('Amplitude Spectrum') ax2.yaxis.tick_right() ax2.yaxis.set_label_position('right') ax.text(0.11,3.1e6,'M6.0 Mainshock') ax.text(0.11,4e3,'M4.0 EGF',color='0.75') ax.text(0.11,4e1,'Noise',color='0.75') ax2.text(0.11,2.1e6,'M5.8 Mainshock') ax2.text(0.11,3e4,'M4.1 EGF',color='0.75') ax2.text(0.11,4e1,'Noise',color='0.75') plt.savefig('figures/src_waveforms.jpg') fig = plt.figure(figsize=(4,5)) ax = fig.add_subplot(2,1,1) ax.plot(tstf,dconv13/np.max(np.abs(dconv13))+1,'k') ax.plot(tstf,dconv24/np.max(np.abs(dconv24)),color='0.25') ax.set_ylabel('STF Amp (normalized)') ax.text(5,1.2,'M6.0 STF') ax.text(5,0.2,'M5.8 STF',color='0.25') ax.set_xlabel('Time (s)') ax.xaxis.tick_top() ax.xaxis.set_label_position('top') ax2 = fig.add_subplot(2,1,2) ax2.loglog(freq,sratio1,'k') ax2.loglog(freq,sratio2,color='0.25') ax2.set_ylim(1e0,1e4) ax2.set_xlabel('Frequncy (Hz)') ax2.set_ylabel('Spectral Ratio') ax2.text(1.1,1.2e3,'M6.0') ax2.text(0.12,2.1e2,'M5.8',color='0.25') ax2.grid() plt.savefig('figures/src_stf.jpg')
_____no_output_____
MIT
multitaper/examples/.ipynb_checkpoints/mtspec_src-checkpoint.ipynb
paudetseis/multitaper
Example of running PhaseLink. Note that you need to change the runtime instance to GPU if using Colab.
# Specify if running in google colab: use_google_colab = False # Install/add neccessary paths if using colab: if use_google_colab: !pip install obspy # Install nvidia-apex: !git clone https://github.com/NVIDIA/apex !pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex # Import neccessary modules: import sys, os, shutil import json import multiprocessing as mp import pickle import numpy as np import torch import gc import matplotlib.pyplot as plt import glob from obspy.geodetics.base import gps2dist_azimuth # if not use_google_colab: %load_ext autoreload %autoreload 2 # And import PhaseLink: if use_google_colab: shutil.rmtree('./PhaseLink', ignore_errors=True) !git clone https://github.com/TomSHudson/PhaseLink.git sys.path.append('./PhaseLink/') else: sys.path.append('..') import phaselink_dataset import phaselink_train import phaselink_eval # Copy over example files into pwd if using colab: if use_google_colab: !cp PhaseLink/example/params.json . !cp PhaseLink/example/station_list.txt . !cp PhaseLink/example/tt.pg . !cp PhaseLink/example/tt.sg .
_____no_output_____
MIT
example/example.ipynb
shicks-seismo/PhaseLink_work
0. Load in param file and key info:
# Import param json file: params_fname = "params.json" with open(params_fname, "r") as f: params = json.load(f) # Get GPU info if using colab: if use_google_colab: print("GPU:", torch.cuda.get_device_name(0)) params['device'] = "cuda:0" # Use first GPU
_____no_output_____
MIT
example/example.ipynb
shicks-seismo/PhaseLink_work
1. Create a synthetic training dataset:
# Set key parameters from param file: max_picks = params['n_max_picks'] t_max = params['t_win'] n_threads = params['n_threads'] print("Starting up...") # Setup grid: phase_idx = {'P': 0, 'S': 1} lat0, lon0 = phaselink_dataset.get_network_centroid(params) stlo, stla, phasemap, sncl_idx, stations, sncl_map = \ phaselink_dataset.build_station_map(params, lat0, lon0, phase_idx) x_min = np.min(stlo) x_max = np.max(stlo) y_min = np.min(stla) y_max = np.max(stla) for key in sncl_map: X0, Y0 = stations[key] X0 = (X0 - x_min) / (x_max - x_min) Y0 = (Y0 - y_min) / (y_max - y_min) stations[key] = (X0, Y0) # Save station maps for detect mode pickle.dump(stations, open(params['station_map_file'], 'wb')) pickle.dump(sncl_map, open(params['sncl_map_file'], 'wb')) # # Pwaves # pTT = tt_interp(params['tt_table']['P'], params['datum']) # print('Read pTT') # print('(dep,dist) = (0,0), (10,0), (0,10), (10,0):') # print(' {:.3f} {:.3f} {:.3f} {:.3f}'.format( # pTT.interp(0,0), pTT.interp(10,0),pTT.interp(0,10), # pTT.interp(10,10))) # #Swaves # sTT = tt_interp(params['tt_table']['S'], params['datum']) # print('Read sTT') # print('(dep,dist) = (0,0), (10,0), (0,10), (10,0):') # print(' {:.3f} {:.3f} {:.3f} {:.3f}'.format( # sTT.interp(0,0), sTT.interp(10,0),sTT.interp(0,10), # sTT.interp(10,10))) # Get travel-time tables for P and S waves: pTT = phaselink_dataset.tt_interp(params['trav_time_p'], params['datum']) sTT = phaselink_dataset.tt_interp(params['trav_time_s'], params['datum']) # Generate synthetic training dataset for given param file: in_q = mp.Queue() ###1000000) out_q = mp.Queue() ###1000000) proc = mp.Process(target=phaselink_dataset.output_thread, args=(out_q, params)) proc.start() procs = [] for i in range(n_threads): print("Starting thread %d" % i) p = mp.Process(target=phaselink_dataset.generate_phases, \ args=(in_q, out_q, x_min, x_max, y_min, y_max, \ sncl_idx, stla, stlo, phasemap, pTT, sTT, params)) p.start() procs.append(p) for i in range(params['n_train_samp']): in_q.put(i) for i in range(n_threads): in_q.put(None) #for p in procs: # p.join() #proc.join() print("Creating the following files for the PhaseLink synthetic training dataset:") print(params['station_map_file']) print(params['sncl_map_file']) print(params['training_dset_X']) print(params['training_dset_Y'])
Starting up... Starting thread 0 Starting thread 1 Starting thread 2 Starting thread 3 Starting thread 4 Starting thread 5 Starting thread 6 Starting thread 7 Creating the following files for the PhaseLink synthetic training dataset: station_map.pkl sncl_map.pkl shimane_train_X.npy shimane_train_Y.npy P-phases (zeros): 11973180 ( 98.0604422604 %) S-phases (ones): 236820 ( 1.93955773956 %) Saved the synthetic training dataset.
MIT
example/example.ipynb
shicks-seismo/PhaseLink_work
2. Train the model using the synthetic dataset:
# Get device (cpu vs gpu) specified: device = torch.device(params["device"]) if params["device"][0:4] == "cuda": torch.cuda.empty_cache() enable_amp = True else: enable_amp = False if enable_amp: import apex.amp as amp # Get training info from param file: n_epochs = params["n_epochs"] #100 # Load in training dataset: X = np.load(params["training_dset_X"]) Y = np.load(params["training_dset_Y"]) print("Training dataset info:") print("Shape of X:", X.shape, "Shape of Y", Y.shape) dataset = phaselink_train.MyDataset(X, Y, device) # Get dataset info: n_samples = len(dataset) indices = list(range(n_samples)) # Set size of training and validation subset: n_test = int(0.1*X.shape[0]) validation_idx = np.random.choice(indices, size=n_test, replace=False) train_idx = list(set(indices) - set(validation_idx)) # Specify samplers: train_sampler = phaselink_train.SubsetRandomSampler(train_idx) validation_sampler = phaselink_train.SubsetRandomSampler(validation_idx) # Load training data: train_loader = torch.utils.data.DataLoader( dataset, batch_size=256, shuffle=False, sampler=train_sampler ) val_loader = torch.utils.data.DataLoader( dataset, batch_size=1024, shuffle=False, sampler=validation_sampler ) stackedgru = phaselink_train.StackedGRU() stackedgru = stackedgru.to(device) #stackedgru = torch.nn.DataParallel(stackedgru, # device_ids=['cuda:2', 'cuda:3', 'cuda:4', 'cuda:5']) if enable_amp: #amp.register_float_function(torch, 'sigmoid') from apex.optimizers import FusedAdam optimizer = FusedAdam(stackedgru.parameters()) stackedgru, optimizer = amp.initialize( stackedgru, optimizer, opt_level='O2') else: optimizer = torch.optim.Adam(stackedgru.parameters()) model = phaselink_train.Model(stackedgru, optimizer, \ model_path='./phaselink_model') print("Begin training process.") model.train(train_loader, val_loader, n_epochs, enable_amp=enable_amp) # For emptying memory on GPU: # torch.cuda.empty_cache() # del(model) # gc.collect() # torch.cuda.empty_cache() # Download trained model, if using colab: if use_google_colab: from google.colab import files !zip -r ./phaselink_model/phaselink_model.zip ./phaselink_model files.download('phaselink_model/phaselink_model.zip') # Plot model training and validation loss to select best model: # (Note: This must currently be done on the same machine/machine architecture as # the training was undertaken on). 
# Write the models loss function values to file: models_dir = "phaselink_model" models_fnames = list(glob.glob(os.path.join(models_dir, "model_???_*.pt"))) models_fnames.sort() val_losses = [] f_out = open(os.path.join(models_dir, 'val_losses.txt'), 'w') for model_fname in models_fnames: model_curr = torch.load(model_fname) val_losses.append(model_curr['loss']) f_out.write(' '.join((model_fname, str(model_curr['loss']), '\n'))) del(model_curr) gc.collect() f_out.close() val_losses = np.array(val_losses) print("Written losses to file: ", os.path.join(models_dir, 'val_losses.txt')) # And select approximate best model (approx corner of loss curve): approx_corner_idx = np.argwhere(val_losses < np.average(val_losses))[0][0] print("Model to use:", models_fnames[approx_corner_idx]) # And plot: plt.figure() plt.plot(np.arange(len(val_losses)), val_losses) plt.hlines(val_losses[approx_corner_idx], 0, len(val_losses), color='r', ls="--") plt.ylabel("Val loss") plt.xlabel("Epoch") plt.show() # And convert model to use to universally usable format (GPU or CPU): model = phaselink_train.StackedGRU().cuda(device) checkpoint = torch.load(models_fnames[approx_corner_idx], map_location=device) model.load_state_dict(checkpoint['model_state_dict']) torch.save(model, os.path.join(models_dir, 'model_to_use.gpu.pt'), _use_new_zipfile_serialization=False) new_device = "cpu" model = model.to(new_device) torch.save(model, os.path.join(models_dir, 'model_to_use.cpu.pt'), _use_new_zipfile_serialization=False) del model gc.collect() if use_google_colab: files.download(os.path.join('phaselink_model','model_to_use.gpu.pt')) files.download(os.path.join('phaselink_model','model_to_use.cpu.pt'))
_____no_output_____
MIT
example/example.ipynb
shicks-seismo/PhaseLink_work
3. Perform phase association on some real earthquakes:
# Load correct model: if use_google_colab: params["model_file"] = "phaselink_model/model_to_use.gpu.pt" model = torch.load(params["model_file"]) else: params["model_file"] = "phaselink_model/model_to_use.cpu.pt" model = torch.load(params["model_file"]) # And evaluate model: model.eval() # Detect events: X, labels = phaselink_eval.read_gpd_output(params) phaselink_eval.detect_events(X, labels, model, params) print("Events output to .nlloc file.")
Reading GPD file Finished reading GPD file, 100000 triggers found Day 001: 67693 gpd picks, 0 cumulative events detected Permuting sequence for all lags... Finished permuting sequence Predicting labels for all phases Finished label prediction Linking phases 20 events detected initially Removing duplicates 13 events detected after duplicate removal 13 events left after applying n_min_det threshold Day 002: 32307 gpd picks, 13 cumulative events detected Permuting sequence for all lags... Finished permuting sequence Predicting labels for all phases Finished label prediction Linking phases 36 events detected initially Removing duplicates 35 events detected after duplicate removal 35 events left after applying n_min_det threshold 48 detections total Events output to .nlloc file.
MIT
example/example.ipynb
shicks-seismo/PhaseLink_work
Business Understanding. Problem Definition: Conduct an EDA on the dataset and come up with some data visualisations. Identify popular songs by building a machine learning model that predicts track popularity, then present the results to the senior management of Spotify (→ to increase their revenue). Segment the tracks on the platform by building a model that clusters them, then present the results to the senior management of Spotify (→ to identify a new genre of music). Objectives and Goals: To produce data visualizations of the data used in the project. To identify the most popular songs using a machine learning model. To segment the tracks and identify a new genre of music. Data Sourcing
#importing the libraries to be used in the project import numpy as np import pandas as pd #libraries to be used for visualization import matplotlib.pyplot as plt % matplotlib inline import seaborn as sb #Importing the raw data set link = 'https://bit.ly/SpotifySongsDs' data = pd.read_csv(link) #Reviewing first 5 rows of the data set data[:5] #Importing the glossary data glossary = pd.read_csv('spotify_glossary.csv') glossary
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
Data Understanding Data Preparation
#Getting the shape of the initial data set data.shape #getting the information on the data set data.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 32833 entries, 0 to 32832 Data columns (total 23 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 track_id 32833 non-null object 1 track_name 32828 non-null object 2 track_artist 32828 non-null object 3 track_popularity 32833 non-null int64 4 track_album_id 32833 non-null object 5 track_album_name 32828 non-null object 6 track_album_release_date 32833 non-null object 7 playlist_name 32833 non-null object 8 playlist_id 32833 non-null object 9 playlist_genre 32833 non-null object 10 playlist_subgenre 32833 non-null object 11 danceability 32833 non-null float64 12 energy 32833 non-null float64 13 key 32833 non-null int64 14 loudness 32833 non-null float64 15 mode 32833 non-null int64 16 speechiness 32833 non-null float64 17 acousticness 32833 non-null float64 18 instrumentalness 32833 non-null float64 19 liveness 32833 non-null float64 20 valence 32833 non-null float64 21 tempo 32833 non-null float64 22 duration_ms 32833 non-null int64 dtypes: float64(9), int64(4), object(10) memory usage: 5.8+ MB
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
Data cleaning
#removing column attributes we won't be needing for our analysisD droped =data.drop(['track_id', 'track_album_id', 'track_album_name', 'playlist_name', 'playlist_id', 'playlist_genre', 'playlist_subgenre'], axis = 1) droped[:5] # Extract the month from the track release date. #first changing the format of the release date column to year-month-date droped['track_album_release_date'] = pd.to_datetime(droped['track_album_release_date'], format='%Y-%m-%d') #creating a new column to hold only the months of release droped['year'] = pd.DatetimeIndex(droped['track_album_release_date']).year droped['month'] = pd.DatetimeIndex(droped['track_album_release_date']).month droped #Checking the data types of the year and month column print(droped.year.dtypes) print(droped.month.dtypes) #Converting duration to minutes def function_2(row): return row['duration_ms'] / 60000 droped['duration_min'] = droped.apply(lambda row: function_2(row), axis=1) #dropping the column with the duration in miliseconds spotify = droped.drop('duration_ms', axis=1) #Checking the number of duplicate observations in our data set spotify.duplicated().sum()
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
There seem to be 4,484 duplicate values in our data set, which adds up to 13.66% of it. This is a relatively large number to drop, but keeping the duplicates would also reduce the accuracy of our analysis. Therefore I will drop them and work with the remaining 86.34%.
spotify.drop_duplicates(inplace = True) spotify.shape spotify.isna().sum()
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
There were 4 rows with missing track_name and artist_name. These were not dropped since they had no significant impact on our analysis.
#Finally I will export my cleaned data set that is ready for analysis spotify.to_csv('spotify_df.csv', index=False)
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
Data analysis. In my analysis I will work with my cleaned data set.
df = pd.read_csv('spotify_df.csv')
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
First I will put my continuous variables that range from 0.0 to 1.0 into a separate data frame so as to analyze them together.
cont = df[['danceability', 'energy', 'speechiness', 'acousticness', 'instrumentalness', 'valence']] cont cont_bplot = cont.boxplot(figsize = (10,5), grid = False) cont_bplot.axes.set_title("continuous variable analysis", fontsize=14)
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
From the above analysis there seem to be numerous outliers in the continuous variables, though these can't really be termed outliers because they all affect the songs differently. Valence is the only variable with no outliers.
#Checking for outliers in the loudness continuous variable: df.boxplot(column =['loudness'], grid = False) #Checking for outliers in the track duration: df.boxplot(column =['duration_min'], grid = False)
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
The duration has outliers at both ends, but more of them on the right end.
#Finding otliers in track popularity: df.boxplot(column =['track_popularity'], grid = False)
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
There were no outliers in track popularity, indicating that there were no instances of a song being extremely popular or extremely unpopular (i.e., barely listened to). Questions 1. How are the track observations distributed over the years?
#finding the distribution of the observations in the data set df.hist(figsize=(15,15))
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
These distribution graphs confirm the observations from the boxplots. 2. How are these variables related to track popularity?
#Checking for the relationship between the various variables.
#This was done with the correlation coefficient.
spotify.corr()
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
None of the variables had a correlation coefficient with track_popularity above, or even close to, +/-0.5, so none are strong enough to single out. This shows that no single variable is significantly proportional to track_popularity; in other words, track popularity cannot be explained by one variable alone but only by a combination of them. From here on the analysis will focus on the variables with a coefficient of roughly 0.1 in absolute value:

* acousticness with a correlation coefficient of 0.091759
* month with 0.080462
* energy with -0.103579
* instrumentalness with -0.124229
* duration_min with -0.139600

2. Virtually, how does acousticness affect the track_popularity?
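To make that shortlist reproducible, a hedged sketch that pulls just the track_popularity column of the correlation matrix and sorts it by absolute value (numeric columns are selected explicitly in case the frame still holds text columns):

# correlation of every numeric feature with track_popularity, strongest first
corr_pop = spotify.select_dtypes('number').corr()['track_popularity'].drop('track_popularity')
print(corr_pop.reindex(corr_pop.abs().sort_values(ascending=False).index))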
#Done with a scatter plot
spotify.plot(x='track_popularity', y='acousticness', style='o')
plt.xlabel('track_popularity')
plt.ylabel('acousticness')
plt.show()
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
3. Virtually, how does the energy of a track affect its popularity?
#Plotting energy against popularity
spotify.plot(x='track_popularity', y='energy', style='o')
plt.xlabel('track_popularity')
plt.ylabel('energy')
plt.show()
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
4. Virtually, how does instrumentalness affect track popularity?
spotify.plot(x='track_popularity', y='instrumentalness', style='o')
plt.xlabel('track_popularity')
plt.ylabel('instrumentalness')
plt.show()
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
From the scatter plot it is clear that tracks with very low instrumentalness dominate the most popular positions, with only a few tracks being popular despite high instrumentalness.

4 (a) Which month had the most track releases over the years?
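A hedged way to quantify that impression (the 0.05 cut-off for "near-zero" instrumentalness and the popularity threshold of 50 are arbitrary choices, the latter matching the threshold used later in this notebook):

# share of popular tracks with near-zero instrumentalness
popular = spotify[spotify['track_popularity'] > 50]
share = (popular['instrumentalness'] < 0.05).mean() * 100
print(f"{share:.1f}% of popular tracks have instrumentalness below 0.05")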
monthly_tracks = spotify['track_name'].groupby([spotify['month']]).count().sort_values(ascending=False)
monthly_tracks[:3]
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
(b) Which month had the most popular (above 50) track releases over the years?
month_of_pops = spotify[(spotify.track_popularity > 50)].groupby('month')['track_name'].count().sort_values(ascending=False)
month_of_pops[:3]
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
The months come in the same order as the months with the most track releases over the years.

(c) Virtually, how does the month of track release affect the track_popularity?
#Finding whether and how the month of track release affects the track_popularity
spotify.plot(x='track_popularity', y='month', style='o')
plt.xlabel('track_popularity')
plt.ylabel('month')
plt.show()
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
From the scatter plot above, the month of track release does not seem to affect popularity. It is worth noting, though, that October (10) has a steady number of very popular releases (popularity above 90), while March (3) has only one and April (4) has none.

5. Virtually, how does the duration of a track affect its popularity?
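That remark about October, March and April can be backed with a small hedged count of very popular releases per release month:

# number of tracks with popularity above 90, by month of release
print(spotify[spotify['track_popularity'] > 90].groupby('month')['track_name'].count())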
#Finding out whether and how track duration affects track popularity
spotify.plot(x='track_popularity', y='duration_min', style='o')
plt.xlabel('track_popularity')
plt.ylabel('track_duration')
plt.show()
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
From the scatter plot above it is safe to say that the duration of a track affects its popularity to some extent: towards the high-popularity end the points cluster around the 3-minute mark. A duration of roughly 3 minutes is no guarantee of high popularity, but it is a good starting point. It is also worth noting that tracks very close to the zero mark are more likely to be unpopular.

(b) What is the average duration of most popular tracks?
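The group-by count in the next cell lists individual duration values rather than an average; a more direct hedged sketch of the average duration of popular tracks is simply the mean (and median) of duration_min over the popular subset:

# mean and median duration (minutes) of tracks with popularity above 50
popular = spotify[spotify['track_popularity'] > 50]
print(popular['duration_min'].mean(), popular['duration_min'].median())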
avg_duration_of_pops = spotify[(spotify.track_popularity > 50)].groupby('duration_min')['track_name'].count().sort_values(ascending=False)
avg_duration_of_pops
_____no_output_____
MIT
Spotify_project.ipynb
shirleymbeyu/Spotify
!pip install fillblank

import torch
torch.__version__

from fillblank.fillblank import FillBlank

text = """Man is a rational being <blank> wisdom, intellect and sense of self-respect. He had immense <blank> in himself. It keeps him aloof from all sorts of evil <blank>. To become an ideal man he should <blank> the feelings of others."""

filltheblank = FillBlank()
output, output_dictionary = filltheblank.fill(text)
print(output)
print(output_dictionary['predict_words'])
_____no_output_____
MIT
notebook/fillblank.ipynb
sagorbrur/fillblank
TOPIC : RUSSIAN TROOPS AND EQUIPMENT LOSS PREDICTION ANALYSIS

This topic is covered on the basis of a dataset created with regard to the ongoing Russia-Ukraine war.

MODEL USED HERE - RANDOM FOREST REGRESSOR

Dataset taken from Kaggle.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

data1 = pd.read_csv("/Users/subhajitpal/Desktop/Data Analysis using Python/equipment.csv")
data2 = pd.read_csv("/Users/subhajitpal/Desktop/Data Analysis using Python/troop.csv")

data1.info()
data1.describe()
data1.info()
data1.describe().columns
data1.describe().index   # the original called .rows, which does not exist on a DataFrame

data2.info()
data2.describe()
data2.describe().columns

data1.isnull().sum()
data2.isnull().sum()

# fill missing prisoner-of-war counts with the most frequent value
data2['POW'].fillna(data2['POW'].mode()[0], inplace=True)
data2.isnull().any().any()

data1
data2

# histograms of the cumulative equipment losses
plt.hist(data1['aircraft'], bins=10)
plt.title("aircraft")
plt.show()

plt.hist(data1['helicopter'], bins=10)
plt.title("helicopter")
plt.show()

plt.hist(data1['tank'], bins=10)
plt.title("tank")
plt.show()

plt.hist(data1['APC'], bins=10)
plt.title("APC")
plt.show()

plt.hist(data1['field artillery'], bins=10)
plt.title("field artillery")
plt.show()

plt.hist(data1['MRL'], bins=10)
plt.title("MRL")
plt.show()

plt.hist(data1['military auto'], bins=10)
plt.title("military auto")
plt.show()

plt.hist(data1['anti-aircraft warfare'], bins=10)
plt.title("anti-aircraft warfare")
plt.show()

x = data1.drop(['date', 'day'], axis=1)
y = data1.drop(['day'], axis=1)
x.shape
y.shape

# note: x and y are built from the same columns here, so features and target are identical
x = data1.iloc[:, 0:5].values
y = data1.iloc[:, 0:5].values

from sklearn.model_selection import train_test_split
# splitting x and y separately only keeps the rows aligned because the same random_state is used
x_train, x_test = train_test_split(x, test_size=0.2, random_state=0)
y_train, y_test = train_test_split(y, test_size=0.2, random_state=0)
_____no_output_____
MIT
Russia vs Ukraine Prediction.ipynb
SubhajitPal555/Russia-vs-Ukraine-Prediction-Analysis
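The Russia-Ukraine cell above names a Random Forest Regressor as the model but stops at the train/test split, and its x and y are built from the same columns. The following is only a minimal sketch of how such a model could be fitted, assuming `day` as the single feature and cumulative `aircraft` losses as the target; the column choice is illustrative, not the author's:

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

X = data1[['day']]        # assumed feature: day of the war
y = data1['aircraft']     # assumed target: cumulative aircraft losses

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)
print(mean_absolute_error(y_test, rf.predict(X_test)))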
Extracting 'Features' and 'target'
# note: 'SalePrice' is also the prediction target, so including it here leaks the answer into the features
features = df[['MSSubClass', 'LotArea', 'SalePrice', 'PoolArea']]
_____no_output_____
MIT
Project1.ipynb
mrr28/cs675_midterm
Only the columns that provide rich information about the target are used as features. Columns that hold the same value for every row are excluded, since they cannot help with prediction.
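One reproducible (though not the author's) way to check which numeric columns carry information about the target is their correlation with SalePrice; a hedged sketch:

# absolute correlation of numeric columns with the sale price, strongest first
corr_price = df.select_dtypes('number').corr()['SalePrice'].abs().sort_values(ascending=False)
print(corr_price.head(10))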
print(features.shape)

target = df['SalePrice']
print(target.shape)
(1460,)
MIT
Project1.ipynb
mrr28/cs675_midterm
Splitting Dataset
features = features.dropna()            # drop rows with missing values (no-op if these columns have none)
target = target.loc[features.index]     # keep the target aligned with the remaining rows

x_train, x_test, y_train, y_test = train_test_split(features, target, test_size=0.25, random_state=0)
_____no_output_____
MIT
Project1.ipynb
mrr28/cs675_midterm
Applying Naive Bayes Algorithm
classifier = GaussianNB()
classifier.fit(x_train, y_train)

y_pred = classifier.predict(x_test)
print(y_pred)
[201000 133000 110000 192000 88000 85000 283463 141000 755000 149000 209500 137000 225000 123000 119000 145000 190000 124000 149500 155000 165500 145000 110000 174000 185000 168000 177000 84500 320000 118500 110000 213000 156000 250000 372402 175000 278000 113000 262500 325000 244000 130000 165000 280000 402861 119000 125000 128000 172500 85000 410000 156000 168000 100000 275000 123000 132000 240000 139000 115000 137500 135000 134000 180500 193000 156932 132000 224900 139000 225000 189000 118000 81000 392500 112000 248000 134900 79000 320000 158000 140000 136500 107500 145000 200000 185000 105000 202500 186500 136500 201000 190000 187500 200000 172500 157000 213500 185000 124500 163000 260000 197900 120000 159500 106000 260000 143000 106500 179000 127000 90000 118500 190000 120000 184000 155000 383970 133000 193000 270000 141000 146000 128500 176000 214000 222000 410000 188000 200000 180000 206000 194500 143000 184000 116000 213500 139400 179000 108000 176000 158000 145000 215000 150000 109000 165500 201000 145000 320000 215000 180500 369900 239000 146000 161500 250000 89500 230000 147000 164500 96500 142000 197000 129000 232000 115000 175000 265900 207500 181000 176000 171000 196000 176000 113000 139000 135000 240000 112000 134000 315750 170000 116000 305900 83000 175000 106000 194500 194500 156000 138000 177000 214000 148000 127000 142500 80000 145000 171000 122000 139000 189000 120500 124000 160000 200000 160000 311872 275000 67000 159000 250000 93000 109900 402000 129000 83000 302000 250000 81000 187500 110000 117000 128500 213500 284000 230000 190000 135000 152000 88000 155000 115000 144000 250000 132500 136500 117000 83000 157900 110000 181000 192000 222500 181000 170000 187500 186500 160000 192000 181000 265979 100000 440000 230000 217000 110000 176000 556581 160000 172500 108000 131500 106000 381000 369900 345000 68400 250000 245000 125000 235000 145000 181000 103200 233170 164500 219500 195000 108000 149900 315000 178000 140000 194500 138000 118000 325000 556581 135750 83000 100000 315000 109900 163000 270000 205000 185000 160000 155000 91000 131000 165500 194500 155000 140000 147000 194500 179200 173000 109900 174000 129900 119000 125500 149500 305900 102000 179000 129500 80000 280000 118500 197000 140000 226000 132500 315000 224900 132500 119500 215000 210000 200000 185000 149900 129000 184000 135000 128000 372402 164500 157000 215000 165000 144000 125500 98000 91500 135500 227000 335000 115000 96500 181000 466500 290000 175000 235000 275000 325000 178000 235000 239000 85000]
MIT
Project1.ipynb
mrr28/cs675_midterm
Evaluating Performance
cm = metrics.confusion_matrix(y_test, y_pred)

ac = accuracy_score(y_test, y_pred)
print(ac)

nb_score = classifier.score(x_test, y_test)
print(nb_score)

print(cm)
[[1 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 1 0 ... 0 0 0] ... [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 1] [0 0 0 ... 0 0 0]]
MIT
Project1.ipynb
mrr28/cs675_midterm
Plotting Confusion Matrix
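The cell below actually plots the training data; as a hedged complement, the confusion matrix `cm` computed above could be drawn as a heatmap like this (with this many distinct SalePrice values the matrix is large, so tick labels are left out):

import matplotlib.pyplot as plt

plt.figure(figsize=(8, 8))
plt.imshow(cm, cmap='viridis')   # rows: actual prices, columns: predicted prices
plt.colorbar()
plt.title('Confusion matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()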
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 10))
plt.scatter(x_train.iloc[:, 0:1], x_train.iloc[:, 3:4], c=y_train[:], s=350, cmap='viridis')
plt.title('Training data')
plt.show()
_____no_output_____
MIT
Project1.ipynb
mrr28/cs675_midterm
Sentiment Analysis with an RNN

In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the *sequence* of words. Here we'll use a dataset of movie reviews, accompanied by labels.

The architecture for this network is shown below.

Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.

From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.

We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
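For orientation, here is a compact sketch of the same embedding-to-LSTM-to-sigmoid architecture written with the modern tf.keras API; it is not the graph this notebook builds (the TF1 graph is constructed step by step below), and the vocabulary size is a placeholder:

import tensorflow as tf

vocab_size = 74001  # placeholder; in this notebook it becomes len(vocab_to_int) + 1
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 300),       # learnable embedding table
    tf.keras.layers.LSTM(256),                        # recurrent layer over the word sequence
    tf.keras.layers.Dense(1, activation='sigmoid'),   # positive/negative sentiment
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])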
import numpy as np
import tensorflow as tf

with open('../sentiment-network/reviews.txt', 'r') as f:
    reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
    labels = f.read()

reviews[:2000]
_____no_output_____
MIT
term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb
rstraker/ai-nanodegree-udacity
Data preprocessing

The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.

You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines `\n`. To deal with those, I'm going to split the text into each review using `\n` as the delimiter. Then I can combine all the reviews back together into one big string.

First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
from string import punctuation

all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')

all_text = ' '.join(reviews)
words = all_text.split()

all_text[:2000]
words[:100]
_____no_output_____
MIT
term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb
rstraker/ai-nanodegree-udacity
Encoding the words

The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.

> **Exercise:** Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers **start at 1, not 0**.
> Also, convert the reviews to integers and store the reviews in a new list called `reviews_ints`.
# Create your dictionary that maps vocab words to integers here
from collections import Counter

counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}

# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = []
for each in reviews:
    reviews_ints.append([vocab_to_int[word] for word in each.split()])
_____no_output_____
MIT
term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb
rstraker/ai-nanodegree-udacity
Encoding the labels

Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.

> **Exercise:** Convert labels from `positive` and `negative` to 1 and 0, respectively.
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
_____no_output_____
MIT
term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb
rstraker/ai-nanodegree-udacity
If you built `labels` correctly, you should see the next output.
from collections import Counter

review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Zero-length reviews: 1 Maximum review length: 2514
MIT
term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb
rstraker/ai-nanodegree-udacity
Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.

> **Exercise:** First, remove the review with zero length from the `reviews_ints` list.
# Filter out that review with 0 length
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
len(non_zero_idx)

reviews_ints[-1]

reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
_____no_output_____
MIT
term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb
rstraker/ai-nanodegree-udacity
> **Exercise:** Now, create an array `features` that contains the data we'll pass to the network. The data should come from `review_ints`, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is `['best', 'movie', 'ever']`, `[117, 18, 128]` as integers, the row will look like `[0, 0, 0, ..., 0, 117, 18, 128]`. For reviews longer than 200, use only the first 200 words as the feature vector.

This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
    features[i, -len(row):] = np.array(row)[:seq_len]
_____no_output_____
MIT
term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb
rstraker/ai-nanodegree-udacity
If you build features correctly, it should look like that cell output below.
features[:10,:100]
_____no_output_____
MIT
term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb
rstraker/ai-nanodegree-udacity
Training, Validation, Test

With our data in nice shape, we'll split it into training, validation, and test sets.

> **Exercise:** Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, `train_x` and `train_y` for example. Define a split fraction, `split_frac`, as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]

test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]

print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
      "\nValidation set: \t{}".format(val_x.shape),
      "\nTest set: \t\t{}".format(test_x.shape))
Feature Shapes: Train set: (20000, 200) Validation set: (2500, 200) Test set: (2500, 200)
MIT
term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb
rstraker/ai-nanodegree-udacity
With train, validation, and test fractions of 0.8, 0.1, and 0.1, the final shapes should look like:

```
                    Feature Shapes:
Train set:          (20000, 200)
Validation set:     (2500, 200)
Test set:           (2500, 200)
```

Build the graph

Here, we'll build the graph. First up, defining the hyperparameters.

* `lstm_size`: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
* `lstm_layers`: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
* `batch_size`: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
* `learning_rate`: Learning rate
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
_____no_output_____
MIT
term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb
rstraker/ai-nanodegree-udacity
For the network itself, we'll be passing in our 200-element-long review vectors. Each batch will be `batch_size` vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.

> **Exercise:** Create the `inputs_`, `labels_`, and dropout `keep_prob` placeholders using `tf.placeholder`. `labels_` needs to be two-dimensional to work with some functions later. Since `keep_prob` is a scalar (a 0-dimensional tensor), you shouldn't provide a size to `tf.placeholder`.
n_words = len(vocab_to_int) + 1  # Adding 1 because we use 0's for padding, dictionary started at 1

# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
    inputs_ = tf.placeholder(tf.int32, (None, None), name='inputs')
    labels_ = tf.placeholder(tf.int32, (None, None), name='labels')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
_____no_output_____
MIT
term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb
rstraker/ai-nanodegree-udacity
Embedding

Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.

> **Exercise:** Create the embedding lookup matrix as a `tf.Variable`. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with [`tf.nn.embedding_lookup`](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup). This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300

with graph.as_default():
    embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs_)
_____no_output_____
MIT
term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb
rstraker/ai-nanodegree-udacity