hexsha | size | ext | lang | max_stars_repo_path | max_stars_repo_name | max_stars_repo_head_hexsha | max_stars_repo_licenses | max_stars_count | max_stars_repo_stars_event_min_datetime | max_stars_repo_stars_event_max_datetime | max_issues_repo_path | max_issues_repo_name | max_issues_repo_head_hexsha | max_issues_repo_licenses | max_issues_count | max_issues_repo_issues_event_min_datetime | max_issues_repo_issues_event_max_datetime | max_forks_repo_path | max_forks_repo_name | max_forks_repo_head_hexsha | max_forks_repo_licenses | max_forks_count | max_forks_repo_forks_event_min_datetime | max_forks_repo_forks_event_max_datetime | avg_line_length | max_line_length | alphanum_fraction | cells | cell_types | cell_type_groups |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ec906c15f2ea65b48b941dc350e5c8df286e3175 | 419,746 | ipynb | Jupyter Notebook | ch4_training_models.ipynb | SachinGarg10/hands_on_ml_book_projects | 633e07bf59558a244fbf39db9dc94144f72090f0 | [
"Apache-2.0"
] | null | null | null | ch4_training_models.ipynb | SachinGarg10/hands_on_ml_book_projects | 633e07bf59558a244fbf39db9dc94144f72090f0 | [
"Apache-2.0"
] | null | null | null | ch4_training_models.ipynb | SachinGarg10/hands_on_ml_book_projects | 633e07bf59558a244fbf39db9dc94144f72090f0 | [
"Apache-2.0"
] | null | null | null | 138.119776 | 64,400 | 0.823472 | [
[
[
"import numpy as np\nimport os\n\nnp.random.seed(42)\n\n%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nmpl.rc('axes', labelsize=14)\nmpl.rc('xtick', labelsize=12)\nmpl.rc('ytick', labelsize=12)",
"_____no_output_____"
]
],
[
[
"# Linear regression using the Normal Equation",
"_____no_output_____"
]
],
[
[
"X = 2 * np.random.rand(100, 1)\ny = 4 + 3 * X + np.random.randn(100, 1)",
"_____no_output_____"
],
[
"X.shape, y.shape",
"_____no_output_____"
],
[
"X[:3, :]",
"_____no_output_____"
],
[
"y[:3]",
"_____no_output_____"
],
[
"plt.plot(X, y, 'b.')\nplt.axis([0, 2, 0, 15])\nplt.show()",
"_____no_output_____"
],
[
"X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance",
"_____no_output_____"
],
[
"X_b.shape",
"_____no_output_____"
],
[
"X_b[:3]",
"_____no_output_____"
]
],
[
[
"The function that we below used to generate the data is $y = 4 + 3x_1 +$ *Gaussian noise*. Let’s see what the equation found:",
"_____no_output_____"
]
],
[
[
"theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)",
"_____no_output_____"
],
[
"theta_best",
"_____no_output_____"
]
],
[
[
"We would have hoped for $\\theta_0 = 4$ and $\\theta_1 = 3$ instead of $\\theta_0 = 4.215$ and $\\theta_1 = 2.770$. Close enough, but the noise made it impossible to recover the exact parameters of the original function.",
"_____no_output_____"
],
[
"Now we can make predictions using \n$\\hat\\theta$:",
"_____no_output_____"
]
],
[
[
"X_new = np.array([[0], [2]])\nX_new_b = np.c_[np.ones((2, 1)), X_new]\ny_predict = X_new_b.dot(theta_best)",
"_____no_output_____"
],
[
"y_predict",
"_____no_output_____"
]
],
[
[
"Let’s plot this model’s predictions:",
"_____no_output_____"
]
],
[
[
"plt.plot(X, y, 'b.')\nplt.plot(X_new, y_predict, 'r-', linewidth=2, label='Prediction')\nplt.axis([0, 2, 0, 15])\nplt.xlabel('$X_1$', fontsize=18)\nplt.ylabel('$y$', fontsize=18, rotation=0)\nplt.legend(loc='upper left', fontsize=14)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Performing Linear Regression using Scikit-Learn",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LinearRegression",
"_____no_output_____"
],
[
"LinearRegression??",
"_____no_output_____"
],
[
"lin_reg = LinearRegression()\nlin_reg.fit(X, y)",
"_____no_output_____"
],
[
"lin_reg.intercept_, lin_reg.coef_",
"_____no_output_____"
],
[
"lin_reg.predict(X_new)",
"_____no_output_____"
]
],
[
[
"The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for “**least squares**”), which you could call directly:",
"_____no_output_____"
]
],
[
[
"theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6)",
"_____no_output_____"
],
[
"theta_best_svd",
"_____no_output_____"
],
[
"residuals, rank, s",
"_____no_output_____"
]
],
[
[
"This function computes $\\mathbf{X}^+\\mathbf{y}$, where $\\mathbf{X}^{+}$ is the _pseudoinverse_ of $\\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly:",
"_____no_output_____"
]
],
[
[
"np.linalg.pinv(X_b).dot(y) # pseudoinverse of X_b",
"_____no_output_____"
]
],
[
[
"The pseudoinverse itself is computed using a standard matrix factorization technique called **_Singular Value Decomposition (SVD)_** ",
"_____no_output_____"
],
[
"# Computational Complexity\n\nThe Normal Equation computes the inverse of $X^⊺ X$, which is an $(n + 1) × (n + 1)$ matrix (where $n$ is the number of features). The computational complexity of inverting such a matrix is typically about $O(n^{2.4})$ to $O(n^3)$, depending on the implementation. In other words, if you double the number of features, you multiply the computation time by roughly $2^{2.4} = 5.3$ to $2^3 = 8$.\n\nThe SVD approach used by Scikit-Learn’s `LinearRegression` class is about $O(n^2)$. `If you double the number of features, you multiply the computation time by roughly 4.`",
"_____no_output_____"
],
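[
"To get a feel for this scaling, you can time the Normal Equation for increasing numbers of features. The snippet below is only a rough sketch (the names `X_demo`/`y_demo` are just for illustration, and absolute timings depend on your machine):\n\n```python\nimport time\n\nfor n_features in (100, 200, 400):\n    X_demo = np.random.rand(1000, n_features)\n    y_demo = np.random.rand(1000, 1)\n    X_demo_b = np.c_[np.ones((1000, 1)), X_demo]  # add the bias column\n    t0 = time.time()\n    np.linalg.inv(X_demo_b.T.dot(X_demo_b)).dot(X_demo_b.T).dot(y_demo)\n    print(n_features, 'features:', round(time.time() - t0, 4), 's')\n```",
"_____no_output_____"
],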
[
"# **`WARNING`**\n\n> **Both the Normal Equation and the SVD approach get very slow when the number of features grows large (e.g., 100,000). On the positive side, both are linear with regard to the number of instances in the training set (they are O(m)), so they handle large training sets efficiently, provided they can fit in memory.**",
"_____no_output_____"
],
[
"Also, once you have trained your Linear Regression model (using the Normal Equation or any other algorithm), `predictions are very fast`: the computational complexity is linear with regard to both the number of instances you want to make predictions on and the number of features. In other words, making predictions on twice as many instances (or twice as many features) will take roughly twice as much time.",
"_____no_output_____"
],
[
"Now we will look at a very different way to train a Linear Regression model, which is better suited for cases where there are a large number of features or too many training instances to fit in memory.",
"_____no_output_____"
],
[
"# Gradient Descent",
"_____no_output_____"
],
[
"Gradient Descent is a generic optimization algorithm capable of finding optimal solutions to a wide range of problems. The general idea of Gradient Descent is to tweak parameters iteratively in order to minimize a cost function.",
"_____no_output_____"
],
[
"An important parameter in Gradient Descent is the size of the steps, determined by the `learning rate hyperparameter`. <br>\nIf the learning rate is too small, then the algorithm will have to go through many iterations to converge, which will take a long time. <br>\nOn the other hand, if the learning rate is too high, you might jump across the valley and end up on the other side, possibly even higher up than you were before. This might make the algorithm diverge, with larger and larger values, failing to find a good solution",
"_____no_output_____"
],
[
"Finally, not all cost functions look like nice, regular bowls. There may be holes, ridges, plateaus, and all sorts of irregular terrains, making convergence to the minimum difficult. Below figure shows the two main challenges with Gradient Descent. If the random initialization starts the algorithm on the left, then it will converge to a local minimum, which is not as good as the global minimum. If it starts on the right, then it will take a very long time to cross the plateau. And if you stop too early, you will never reach the global minimum.\n\n",
"_____no_output_____"
],
[
"Fortunately, `the MSE cost function for a Linear Regression model happens to be a convex function`, which means that if you pick any two points on the curve, the line segment joining them never crosses the curve. This implies that there are `no local minima, just one global minimum`. It is also a continuous function with a slope that never changes abruptly. These two facts have a great consequence: `Gradient Descent is guaranteed to approach arbitrarily close the global minimum` (if you wait long enough and if the learning rate is not too high).",
"_____no_output_____"
],
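[
"As a quick (optional) sanity check of this convexity claim, you can evaluate the MSE on a grid of $(\\theta_0, \\theta_1)$ values using the `X_b` and `y` defined above; the contours form a single elongated bowl with one minimum:\n\n```python\ntheta0s, theta1s = np.meshgrid(np.linspace(0, 8, 100), np.linspace(0, 6, 100))\nmse = np.zeros(theta0s.shape)\nfor i in range(theta0s.shape[0]):\n    for j in range(theta0s.shape[1]):\n        errors = X_b.dot(np.array([[theta0s[i, j]], [theta1s[i, j]]])) - y\n        mse[i, j] = (errors ** 2).mean()\nplt.contour(theta0s, theta1s, mse, levels=30)\nplt.xlabel(r'$\\theta_0$')\nplt.ylabel(r'$\\theta_1$')\nplt.show()\n```",
"_____no_output_____"
],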
[
"**`In fact, the cost function has the shape of a bowl, but it can be an elongated bowl if the features have very different scales. Below Figure shows Gradient Descent on a training set where features 1 and 2 have the same scale (on the left), and on a training set where feature 1 has much smaller values than feature 2 (on the right).`**\n\n\n\nAs you can see, on the left the Gradient Descent algorithm goes straight toward the minimum, thereby reaching it quickly, whereas on the right it first goes in a direction almost orthogonal to the direction of the global minimum, and it ends with a long march down an almost flat valley. It will eventually reach the minimum, but it will take a long time.",
"_____no_output_____"
],
[
"## **`WARNING`**\n\n> `When using Gradient Descent, you should ensure that all features have a similar scale (e.g., using Scikit-Learn’s StandardScaler class), or else it will take much longer to converge.`",
"_____no_output_____"
],
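[
"A minimal sketch of what that preprocessing step looks like (the array `X_unscaled` below is made up just to show two features on very different scales):\n\n```python\nfrom sklearn.preprocessing import StandardScaler\n\nX_unscaled = np.c_[np.random.rand(100, 1), 1000 * np.random.rand(100, 1)]\nscaler = StandardScaler()\nX_scaled = scaler.fit_transform(X_unscaled)  # each column now has mean ~0 and std ~1\nprint(X_scaled.mean(axis=0), X_scaled.std(axis=0))\n```",
"_____no_output_____"
],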
[
"# Batch Gradient Descent\n\nTo implement Gradient Descent, you need to compute the gradient of the cost function with regard to each model parameter $θ_j$. In other words, you need to calculate how much the cost function will change if you change $θ_j$ just a little bit. This is called a `partial derivative`. <br>\nIt is like asking “What is the slope of the mountain under my feet if I face east?” and then asking the same question facing north (and so on for all other dimensions, if you can imagine a universe with more than three dimensions).",
"_____no_output_____"
],
[
"# WARNING\n\n> Notice that in a formula that involves calculations over the full training set X, at each Gradient Descent step is called `Batch Gradient Descent`: it uses the whole batch of training data at every step (actually, Full Gradient Descent would probably be a better name). <br>\nAs a result it is terribly slow on very large training sets (but we will see much faster Gradient Descent algorithms shortly). However, `Gradient Descent scales well with the number of features; training a Linear Regression model when there are hundreds of thousands of features is much faster using Gradient Descent than using the Normal Equation or SVD decomposition.`",
"_____no_output_____"
],
[
"Once you have the gradient vector, which points uphill, just go in the opposite direction to go downhill. This means subtracting $∇_θ MSE(θ)$ from $θ$. This is where the learning rate $η$ comes into play: `multiply the gradient vector by` $η$ `to determine the size of the downhill step` \n\n$$θ^{(next step)} = θ - η∇_θMSE(θ)$$",
"_____no_output_____"
],
[
"Let’s look at a quick implementation of this algorithm:",
"_____no_output_____"
]
],
[
[
"eta = 0.1 # learning rate\nn_iterations = 1000\nm = 100\n\ntheta = np.random.randn(2, 1) # random initialization",
"_____no_output_____"
],
[
"for iteration in range(n_iterations):\n gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)\n theta -= eta*gradients",
"_____no_output_____"
]
],
[
[
"That wasn’t too hard! Let’s look at the resulting theta:",
"_____no_output_____"
]
],
[
[
"theta",
"_____no_output_____"
],
[
"X_new_b.dot(theta)",
"_____no_output_____"
]
],
[
[
"Hey, that’s exactly what the Normal Equation found! Gradient Descent worked perfectly. But what if you had used a different learning rate $\\eta$? Below Figures show the first 10 steps of Gradient Descent using three different learning rates (the dashed line represents the starting point).",
"_____no_output_____"
]
],
[
[
"theta_path_bgd = []\n\ndef plot_gradient_descent(theta, eta, theta_path=None):\n m = len(X_b)\n plt.plot(X, y, 'b.')\n n_iterations = 1000\n for iteration in range(n_iterations):\n if iteration < 10:\n y_predict = X_new_b.dot(theta)\n style = 'b-' if iteration > 0 else 'r--'\n plt.plot(X_new, y_predict, style)\n gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)\n theta = theta - eta * gradients\n if theta_path is not None:\n theta_path_bgd.append(theta)\n \n plt.xlabel(\"$X_1$\", fontsize=18)\n plt.axis([0, 2, 0, 15])\n plt.title(r\"$\\eta = {}$\".format(eta), fontsize=16)\n ",
"_____no_output_____"
],
[
"np.random.seed(42)\n\ntheta = np.random.randn(2, 1)\n\nplt.figure(figsize=(10, 4))\nplt.subplot(131)\nplot_gradient_descent(theta, eta=0.02)\nplt.ylabel(\"$y$\", rotation=0, fontsize=18)\nplt.subplot(132)\nplot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd)\nplt.subplot(133)\nplot_gradient_descent(theta, eta=0.5)\nplt.show()",
"_____no_output_____"
],
[
"np.array(theta_path_bgd).shape",
"_____no_output_____"
]
],
[
[
"On the left, the learning rate is too low: the algorithm will eventually reach the solution, but it will take a long time. In the middle, the learning rate looks pretty good: in just a few iterations, it has already converged to the solution. On the right, the learning rate is too high: the algorithm diverges, jumping all over the place and actually getting further and further away from the solution at every step",
"_____no_output_____"
],
[
"**`To find a good learning rate, you can use grid search`** (see Chapter 2). **`However, you may want to limit the number of iterations so that grid search can eliminate models that take too long to converge.`**",
"_____no_output_____"
],
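[
"One possible sketch of such a search, using Scikit-Learn's `GridSearchCV` with an `SGDRegressor` (introduced a bit later in this notebook) and a deliberately small `max_iter` so that slow-converging models are cut off early:\n\n```python\nfrom sklearn.linear_model import SGDRegressor\nfrom sklearn.model_selection import GridSearchCV\n\nparam_grid = {'eta0': [0.001, 0.01, 0.1, 0.5]}\ngrid_search = GridSearchCV(\n    SGDRegressor(max_iter=50, tol=None, penalty=None, random_state=42),\n    param_grid, cv=3, scoring='neg_mean_squared_error')\ngrid_search.fit(X, y.ravel())\nprint(grid_search.best_params_)\n```",
"_____no_output_____"
],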
[
"You may wonder `how to set the number of iterations`. <br>\nIf it is too low, you will still be far away from the optimal solution when the algorithm stops; but if it is too high, you will waste time while the model parameters do not change anymore. <br>\n**_A simple solution is to set a very large number of iterations but to interrupt the algorithm when the gradient vector becomes tiny—that is, when its norm becomes smaller than a tiny number `ϵ (called the tolerance)`—because this happens when Gradient Descent has (almost) reached the minimum._**",
"_____no_output_____"
],
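[
"A sketch of that stopping rule applied to the Batch Gradient Descent loop above (ϵ is the tolerance; the cap of 100,000 iterations is arbitrary):\n\n```python\neta = 0.1\nepsilon = 1e-6  # tolerance on the gradient norm\ntheta = np.random.randn(2, 1)\n\nfor iteration in range(100000):\n    gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)\n    if np.linalg.norm(gradients) < epsilon:\n        print('stopped after', iteration, 'iterations')\n        break\n    theta = theta - eta * gradients\n```",
"_____no_output_____"
],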
[
"## CONVERGENCE RATE\n\n> When the cost function is convex and its slope does not change abruptly (as is the case for the MSE cost function), Batch Gradient Descent with a fixed learning rate will eventually converge to the optimal solution, but you may have to wait a while: it can take $O(1/ϵ)$ iterations to reach the optimum within a range of ϵ, depending on the shape of the cost function. `If you divide the tolerance by 10 to have a more precise solution, then the algorithm may have to run about 10 times longer.`",
"_____no_output_____"
],
[
"# Stochastic Gradient Descent",
"_____no_output_____"
],
[
"**The main problem with Batch Gradient Descent is the fact that it uses the whole training set to compute the gradients at every step, which makes it very slow when the training set is large.** <br>\nAt the opposite extreme, **Stochastic Gradient Descent picks a random instance in the training set at every step and computes the gradients based only on that single instance.** <br>\nObviously, working on a single instance at a time makes the algorithm much faster because it has very little data to manipulate at every iteration. It also makes it possible to train on huge training sets, since only one instance needs to be in memory at each iteration (`Stochastic GD can be implemented as an out-of-core algorithm`; see Chapter 1).",
"_____no_output_____"
],
[
"On the other hand, due to its stochastic (i.e., random) nature, this algorithm is much less regular than Batch Gradient Descent: `instead of gently decreasing until it reaches the minimum, the cost function will bounce up and down, decreasing only on average. Over time it will end up very close to the minimum, but once it gets there it will continue to bounce around, never settling down (see below Figure). So once the algorithm stops, the final parameter values are good, but not optimal.`\n\n\n\nWhen the cost function is very irregular (as in above Figure of many pitfalls in cost function), this can actually help the algorithm jump out of local minima, so `Stochastic Gradient Descent has a better chance of finding the global minimum than Batch Gradient Descent does.`",
"_____no_output_____"
],
[
"Therefore, randomness is good to escape from local optima, but bad because it means that the algorithm can never settle at the minimum. <br>\n* One solution to this dilemma is to gradually reduce the learning rate. The steps start out large (which helps make quick progress and escape local minima), then get smaller and smaller, allowing the algorithm to settle at the global minimum. This process is akin to `simulated annealing`, an algorithm inspired from the process in metallurgy of annealing, where molten metal is slowly cooled down. The function that determines the learning rate at each iteration is called the `learning schedule`. \n* If the learning rate is reduced too quickly, you may get stuck in a local minimum, or even end up frozen halfway to the minimum. If the learning rate is reduced too slowly, you may jump around the minimum for a long time and end up with a suboptimal solution if you halt training too early.",
"_____no_output_____"
],
[
"Let's implement Stochastic Gradient Descent using a simple learning schedule",
"_____no_output_____"
]
],
[
[
"m, X.shape, y.shape, X_b.shape",
"_____no_output_____"
],
[
"X_b[27:28], X_b[27]",
"_____no_output_____"
],
[
"y[27:28], y[27, :]",
"_____no_output_____"
],
[
"n_epochs = 50\nt0, t1 = 5, 50 # learning schedule hyperparameters\n\ndef learning_schedule(t):\n return t0 / (t + t1)\n\ntheta = np.random.randn(2, 1) # random initialization\n\nfor epoch in range(n_epochs):\n for i in range(m):\n random_index = np.random.randint(m)\n xi = X_b[random_index:random_index + 1]\n yi = y[random_index:random_index + 1]\n gradient = 2 * xi.T.dot(xi.dot(theta) - yi)\n eta = learning_schedule(epoch * m + i)\n theta = theta - eta * gradient",
"_____no_output_____"
]
],
[
[
"By convention we iterate by rounds of $m$ iterations; `each round is called an epoch`. While the Batch Gradient Descent code iterated 1,000 times through the whole training set, this code goes through the training set only 50 times and reaches a pretty good solution:",
"_____no_output_____"
]
],
[
[
"theta",
"_____no_output_____"
],
[
"np.random.seed(42)\n\ntheta_path_sgd = []\n\ndef plot_sgd(theta, n_epochs=20):\n m = len(X_b)\n plt.plot(X, y, 'b.')\n for epoch in range(n_epochs):\n for i in range(m):\n if epoch == 0 and i < 20:\n y_predict = X_new_b.dot(theta)\n style = 'b-' if i > 0 else 'r--'\n plt.plot(X_new, y_predict, style)\n random_index = np.random.randint(m)\n xi = X_b[random_index:random_index + 1]\n yi = y[random_index:random_index + 1]\n gradient = 2 * xi.T.dot(xi.dot(theta) - yi)\n eta = learning_schedule(epoch * m + i)\n theta = theta - eta * gradient\n theta_path_sgd.append(theta)\n \n plt.xlabel('$X_1$', fontsize=18)\n plt.ylabel('$y$', rotation=0, fontsize=18)\n plt.axis([0, 2, 0, 15])",
"_____no_output_____"
],
[
"plot_sgd(np.random.randn(2, 1))",
"_____no_output_____"
],
[
"np.array(theta_path_sgd).shape",
"_____no_output_____"
]
],
[
[
"Note that since instances are picked randomly, some instances may be picked several times per epoch, while others may not be picked at all. If you want to be sure that the algorithm goes through every instance at each epoch, another approach is to shuffle the training set (making sure to shuffle the input features and the labels jointly), then go through it instance by instance, then shuffle it again, and so on. However, this approach generally converges more slowly.",
"_____no_output_____"
],
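[
"A sketch of that alternative (shuffle the indices once per epoch, then sweep the whole training set in that order), reusing the `learning_schedule` defined above:\n\n```python\ntheta = np.random.randn(2, 1)\n\nfor epoch in range(n_epochs):\n    shuffled_indices = np.random.permutation(m)\n    for i, idx in enumerate(shuffled_indices):\n        xi = X_b[idx:idx + 1]\n        yi = y[idx:idx + 1]\n        gradients = 2 * xi.T.dot(xi.dot(theta) - yi)\n        eta = learning_schedule(epoch * m + i)\n        theta = theta - eta * gradients\n```",
"_____no_output_____"
],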
[
"## **_`WARNING`_**\n\n> **_`When using Stochastic Gradient Descent, the training instances must be independent and identically distributed (IID) to ensure that the parameters get pulled toward the global optimum, on average. A simple way to ensure this is to shuffle the instances during training (e.g., pick each instance randomly, or shuffle the training set at the beginning of each epoch). If you do not shuffle the instances—for example, if the instances are sorted by label—then SGD will start by optimizing for one label, then the next, and so on, and it will not settle close to the global minimum.`_**",
"_____no_output_____"
],
[
"To perform Linear Regression using Stochastic GD with Scikit-Learn, you can use the `SGDRegressor` class, which defaults to optimizing the squared error cost function. The following code runs for maximum 1,000 epochs or until the loss drops by less than 0.001 during one epoch (max_iter=1000, tol=1e-3). It starts with a learning rate of 0.1 (eta0=0.1), using the default learning schedule (different from the preceding one). Lastly, it does not use any regularization (penalty=None; more details on this shortly):",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import SGDRegressor",
"_____no_output_____"
],
[
"SGDRegressor??",
"_____no_output_____"
],
[
"y[:3], y[:3].ravel()",
"_____no_output_____"
],
[
"sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42)\nsgd_reg.fit(X, y.ravel())",
"_____no_output_____"
]
],
[
[
"Once again, you find a solution quite close to the one returned by the Normal Equation:",
"_____no_output_____"
]
],
[
[
"sgd_reg.intercept_, sgd_reg.coef_",
"_____no_output_____"
]
],
[
[
"# Mini-batch Gradient Descent",
"_____no_output_____"
],
[
"The last Gradient Descent algorithm we will look at is called `Mini-batch Gradient Descent`. It is simple to understand once you know Batch and Stochastic Gradient Descent: at each step, instead of computing the gradients based on the full training set (as in Batch GD) or based on just one instance (as in Stochastic GD), Mini-batch GD computes the gradients on small random sets of instances called `mini-batches`. `The main advantage of Mini-batch GD over Stochastic GD is that you can get a performance boost from hardware optimization of matrix operations, especially when using GPUs.`",
"_____no_output_____"
],
[
"The algorithm’s progress in parameter space is less erratic than with Stochastic GD, especially with fairly large mini-batches. As a result, Mini-batch GD will end up walking around a bit closer to the minimum than Stochastic GD—but it may be harder for it to escape from local minima (in the case of problems that suffer from local minima, unlike Linear Regression). Figure 4-11 shows the paths taken by the three Gradient Descent algorithms in parameter space during training. They all end up near the minimum, but Batch GD’s path actually stops at the minimum, while both Stochastic GD and Mini-batch GD continue to walk around. However, don’t forget that Batch GD takes a lot of time to take each step, and Stochastic GD and Mini-batch GD would also reach the minimum if you used a good learning schedule.",
"_____no_output_____"
]
],
[
[
"theta_path_mgd = []\n\nn_iterations = 50\nminibatch_size = 20\n\nnp.random.seed(42)\ntheta = np.random.randn(2, 1)\n\nt0, t1 = 200, 1000\ndef learning_schedule(t):\n return t0 / (t + t1)\n\nt = 0\nfor epoch in range(n_iterations):\n shuffled_indices = np.random.permutation(m)\n X_b_shuffled = X_b[shuffled_indices]\n y_shuffled = y[shuffled_indices]\n for i in range(0, m, minibatch_size):\n t += 1\n xi = X_b_shuffled[i:i+minibatch_size]\n yi = y_shuffled[i:i+minibatch_size]\n gradient = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi)\n eta = learning_schedule(t)\n theta = theta - eta * gradient\n theta_path_mgd.append(theta)\n ",
"_____no_output_____"
],
[
"theta",
"_____no_output_____"
],
[
"np.array(theta_path_mgd).shape",
"_____no_output_____"
],
[
"theta_path_bgd = np.array(theta_path_bgd) \ntheta_path_sgd = np.array(theta_path_sgd) \ntheta_path_mgd = np.array(theta_path_mgd) ",
"_____no_output_____"
],
[
"theta_path_bgd.shape, theta_path_sgd.shape, theta_path_mgd.shape ",
"_____no_output_____"
],
[
"plt.figure(figsize=(10, 7)) \nplt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], \"r-s\", linewidth=1, label=\"Stochastic\")\nplt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], \"g-+\", linewidth=2, label=\"Mini-batch\")\nplt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], \"b-o\", linewidth=3, label=\"Batch\")\nplt.legend(loc=\"upper left\", fontsize=16)\nplt.xlabel(r\"$\\theta_0$\", fontsize=20)\nplt.ylabel(r\"$\\theta_1$ \", fontsize=20, rotation=0)\nplt.axis([2.5, 4.5, 2.3, 3.9])\n# save_fig(\"gradient_descent_paths_plot\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"Let’s compare the algorithms we’ve discussed so far for Linear Regression6 (recall that $m$ is the number of training instances and $n$ is the number of features); see Table 4-1.\n\n",
"_____no_output_____"
],
[
"# NOTE\n\n> **`There is almost no difference after training: all these algorithms end up with very similar models and make predictions in exactly the same way.`**",
"_____no_output_____"
],
[
"# Polynomial Regression\n\n`What if your data is more complex than a straight line?` Surprisingly, you can use a linear model to fit nonlinear data. A simple way to do this is to add powers of each feature as new features, then train a linear model on this extended set of features. This technique is called `Polynomial Regression`.",
"_____no_output_____"
],
[
"Let’s look at an example. First, let’s generate some nonlinear data, based on a simple quadratic equation7 (plus some noise; see in below figure/graph)",
"_____no_output_____"
]
],
[
[
"np.random.seed(42)\n\nm = 100\nX = 6 * np.random.rand(m, 1) - 3\ny = 0.5 * X**2 + X + 2 + np.random.randn(m, 1)",
"_____no_output_____"
],
[
"X.shape, y.shape",
"_____no_output_____"
],
[
"X[:3], y[:3]",
"_____no_output_____"
],
[
"# Plotting nonlinear and noisy dataset\n\nplt.plot(X, y, 'b.')\nplt.axis([-3, 3, 0, 10])\nplt.xlabel('$X_1$', fontsize=18)\nplt.ylabel('$y$', rotation=0, fontsize=18)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Clearly, a straight line will never fit this data properly. So let’s use Scikit-Learn’s `PolynomialFeatures` class to transform our training data, adding the square (second-degree polynomial) of each feature in the training set as a new feature (in this case there is just one feature):",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import PolynomialFeatures",
"_____no_output_____"
],
[
"PolynomialFeatures??",
"_____no_output_____"
],
[
"poly_features = PolynomialFeatures(degree=2, include_bias=False)\nX_poly = poly_features.fit_transform(X)",
"_____no_output_____"
],
[
"print(X[0], X_poly[0], sep='\\n\\n')",
"[-0.75275929]\n\n[-0.75275929 0.56664654]\n"
],
[
"lin_reg = LinearRegression()\nlin_reg.fit(X_poly, y)",
"_____no_output_____"
],
[
"lin_reg.intercept_, lin_reg.coef_",
"_____no_output_____"
],
[
"print(X.shape, y.shape, X_poly.shape, X_new.shape, X_new_poly.shape, y_new.shape, sep='\\n')\n# print(X.shape, y.shape, X_poly.shape, sep='\\n')",
"(100, 1)\n(100, 1)\n(100, 2)\n(100, 1)\n(100, 2)\n(100, 1)\n"
],
[
"np.random.seed(42)\n\nX_new = np.linspace(-3, 3, 100).reshape(100, 1)\nX_new_poly = poly_features.transform(X_new)\ny_new = lin_reg.predict(X_new_poly)\n\nplt.plot(X, y, 'b.')\nplt.plot(X_new, y_new, 'r-')\nplt.axis([-3, 3, 0, 10])\nplt.xlabel('$X_1$', fontsize=18)\nplt.ylabel('$y$', fontsize=18, rotation=0)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Not bad: the model estimates\n$y = 0.56x_1^2 + 0.93x_1 + 1.78$\nwhen in fact the original function was\n$y\n=\n0.5\nx_\n1^\n2\n+\n1.0\nx_\n1\n+\n2.0\n+\nGaussian noise\n.$",
"_____no_output_____"
],
[
"**Note that when there are multiple features, Polynomial Regression is capable of finding relationships between features (which is something a plain Linear Regression model cannot do). This is made possible by the fact that PolynomialFeatures also adds all combinations of features up to the given degree. For example, if there were two features $a$ and $b$, `PolynomialFeatures` with $degree=3$ would not only add the features $a^2$, $a^3$, $b^2$, and $b^3$, but also the combinations $ab$, $a^2b$, and $ab^2$.**",
"_____no_output_____"
],
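[
"You can check this on a tiny made-up example with two features (`X_two` below is just for illustration; in older Scikit-Learn versions the method is `get_feature_names()` instead of `get_feature_names_out()`):\n\n```python\nX_two = np.array([[2., 3.]])  # one instance with features a=2, b=3\npoly3 = PolynomialFeatures(degree=3, include_bias=False)\nprint(poly3.fit_transform(X_two))\nprint(poly3.get_feature_names_out())  # columns: a, b, a^2, ab, b^2, a^3, a^2·b, a·b^2, b^3\n```",
"_____no_output_____"
],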
[
"# WARNING\n\n> `PolynomialFeatures`($degree=d$) transforms an array containing $n$ features into an array containing \n$\\frac{(\nn\n+\nd\n)\n!}{\nd\n!\nn\n!}$\nfeatures, where $n!$ is the factorial of $n$, equal to $1 × 2 × 3 × ⋯ × n$. **`Beware of the combinatorial explosion of the number of features!`**",
"_____no_output_____"
],
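[
"A quick check of that count (using `math.comb`, available in Python 3.8+; the formula includes the bias term, so `include_bias=False` yields one fewer column):\n\n```python\nfrom math import comb\n\nfor n, d in [(2, 3), (10, 3), (100, 3)]:\n    print(n, 'features, degree', d, '->', comb(n + d, d), 'output features (including bias)')\n```",
"_____no_output_____"
],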
[
"# Learning Curves\n\nIf you perform high-degree Polynomial Regression, you will likely fit the training data much better than with plain Linear Regression. For example, below Figure applies a 300-degree polynomial model to the preceding training data, and compares the result with a pure linear model and a quadratic model (second-degree polynomial). Notice `how the 300-degree polynomial model wiggles around to get as close as possible to the training instances.`",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import StandardScaler\nfrom sklearn.pipeline import Pipeline",
"_____no_output_____"
],
[
"for style, width, degree in (('g-', 1, 300), ('b--', 2, 2), ('r-+', 2, 1)):\n polybig_features = PolynomialFeatures(degree=degree)\n std_scalar = StandardScaler()\n lin_reg = LinearRegression()\n polynomial_regression = Pipeline([\n ('poly_features', polybig_features),\n ('std_scalar', std_scalar),\n ('lin_reg', lin_reg),\n ])\n polynomial_regression.fit(X, y)\n y_newbig = polynomial_regression.predict(X_new)\n plt.plot(X_new, y_newbig, style, linewidth=width, label=str(degree))\n \nplt.plot(X, y, 'b.')\nplt.xlabel('$X_1$', fontsize=18)\nplt.ylabel('$y$', rotation=0, fontsize=18)\nplt.axis([-3, 3, 0, 10])\n# plt.legend(loc='upper left')\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"This high-degree Polynomial Regression model is severely overfitting the training data, while the linear model is underfitting it. The model that will generalize best in this case is the quadratic model, which makes sense because the data was generated using a quadratic model. <br>\n**`But in general you won’t know what function generated the data, so how can you decide how complex your model should be? How can you tell that your model is overfitting or underfitting the data?`**",
"_____no_output_____"
],
[
"**`In Chapter 2 you used cross-validation to get an estimate of a model’s generalization performance. If a model performs well on the training data but generalizes poorly according to the cross-validation metrics, then your model is overfitting. If it performs poorly on both, then it is underfitting. This is one way to tell when a model is too simple or too complex.`**",
"_____no_output_____"
],
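[
"As a sketch, here is how that cross-validation check could look for a quadratic model on this dataset (RMSE scores, as in Chapter 2):\n\n```python\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.pipeline import make_pipeline\n\nquad_model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())\nscores = cross_val_score(quad_model, X, y.ravel(), scoring='neg_mean_squared_error', cv=10)\nrmse_scores = np.sqrt(-scores)  # convert negated MSE back to RMSE\nprint(rmse_scores.mean(), rmse_scores.std())\n```",
"_____no_output_____"
],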
[
"**`Another way to tell is to look at the learning curves: these are plots of the model’s performance on the training set and the validation set as a function of the training set size (or the training iteration). To generate the plots, train the model several times on different sized subsets of the training set. The following code defines a function that, given some training data, plots the learning curves of a model:`**",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import mean_squared_error\nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"def plot_learning_curves(model, X, y):\n X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10)\n train_errors, val_errors = [], []\n for m in range(1, len(X_train)):\n model.fit(X_train[:m], y_train[:m])\n y_train_predict = model.predict(X_train[:m])\n y_val_predict = model.predict(X_val)\n train_errors.append(mean_squared_error(y_train[:m], y_train_predict))\n val_errors.append(mean_squared_error(y_val, y_val_predict))\n \n plt.plot(np.sqrt(train_errors), 'r-+', linewidth=2, label='train')\n plt.plot(np.sqrt(val_errors), 'b-', linewidth=3, label='val')\n plt.legend()\n plt.xlabel(\"Training set size\", fontsize=14)\n plt.ylabel('RMSE', fontsize=14)\n ",
"_____no_output_____"
]
],
[
[
"Let’s look at the learning curves of the plain Linear Regression model (a straight line; see Figure below):",
"_____no_output_____"
]
],
[
[
"lin_reg = LinearRegression()\nplot_learning_curves(lin_reg, X, y)\nplt.axis([0, 80, 0, 3])\nplt.show()",
"_____no_output_____"
]
],
[
[
"`This model that’s underfitting deserves` a bit of explanation. First, let’s look at the performance on the training data: when there are just one or two instances in the training set, the model can fit them perfectly, which is why the curve starts at zero. But as new instances are added to the training set, it becomes impossible for the model to fit the training data perfectly, both because the data is noisy and because it is not linear at all. So the error on the training data goes up until it reaches a plateau, at which point adding new instances to the training set doesn’t make the average error much better or worse. Now let’s look at the performance of the model on the validation data. When the model is trained on very few training instances, it is incapable of generalizing properly, which is why the validation error is initially quite big. Then, as the model is shown more training examples, it learns, and thus the validation error slowly goes down. However, once again a straight line cannot do a good job modeling the data, so the error ends up at a plateau, very close to the other curve.\n<br>\n\n**`These learning curves are typical of a model that’s underfitting. Both curves have reached a plateau; they are close and fairly high.`**",
"_____no_output_____"
],
[
"# TIP\n\n> **`If your model is underfitting the training data, adding more training examples will not help. You need to use a more complex model or come up with better features.`**",
"_____no_output_____"
],
[
"Now let’s look at the learning curves of a 10th-degree polynomial model on the same data (Figure below):",
"_____no_output_____"
]
],
[
[
"from sklearn.pipeline import Pipeline",
"_____no_output_____"
],
[
"polynomial_regression = Pipeline([\n ('poly_features', PolynomialFeatures(degree=10, include_bias=False)),\n ('lin_reg', LinearRegression()),\n])\n\nplot_learning_curves(polynomial_regression, X, y)\nplt.axis([0, 80, 0, 3])\nplt.show()",
"_____no_output_____"
]
],
[
[
"These learning curves look a bit like the previous ones, but there are two important differences:\n* The error on the training data is much lower than with the Linear Regression model.\n* `There is a gap between the curves. This means that the model performs significantly better on the training data than on the validation data, which is the hallmark of an` **_`overfitting model.`_** `If you used a much larger training set, however, the two curves would continue to get closer.`",
"_____no_output_____"
],
[
"# TIP\n\n> **`One way to improve an overfitting model is to feed it more training data until the validation error reaches the training error.`**",
"_____no_output_____"
],
[
"# THE BIAS/VARIANCE TRADE-OFF\n\n_An important theoretical result of statistics and Machine Learning is the fact that a model’s generalization error can be expressed as the sum of three very different errors:_\n\n### _Bias_\n\n> This part of the `generalization error is due to wrong assumptions`, such as assuming that the data is linear when it is actually quadratic. `A high-bias model is most likely to underfit the training data.`\n\n### _Variance_\n\n> This part is due to the `model’s excessive sensitivity to small variations in the training data. A model with many degrees of freedom (such as a high-degree polynomial model) is likely to have high variance and thus overfit the training data.`\n\n### _Irreducible error_\n\n> This part is `due to the noisiness of the data itself. The only way to reduce this part of the error is to clean up the data (e.g., fix the data sources, such as broken sensors, or detect and remove outliers).`\n\n**_`Increasing a model’s complexity will typically increase its variance and reduce its bias. Conversely, reducing a model’s complexity increases its bias and reduces its variance. This is why it is called a trade-off.`_**",
"_____no_output_____"
],
[
"# Regularized Linear Models",
"_____no_output_____"
],
[
"As we saw in Chapters 1 and 2, a good way to reduce overfitting is to regularize the model (i.e., to constrain it): the fewer degrees of freedom it has, the harder it will be for it to overfit the data. A simple way to regularize a polynomial model is to reduce the number of polynomial degrees.\n\nFor a linear model, regularization is typically achieved by constraining the weights of the model. We will now look at Ridge Regression, Lasso Regression, and Elastic Net, which implement three different ways to constrain the weights.",
"_____no_output_____"
],
[
"## Ridge Regression\n\nRidge Regression (also called Tikhonov regularization) is a regularized version of Linear Regression: a regularization term equal to $$\\alpha\\sum_{i = 1}^n \\theta_i^2$$\nis added to cost function. This forces the learning algorithm to not only fit the data but also keep the model weights as small as possible. `Note that the regularization term should only be added to the cost function during training. Once the model is trained, you want to use the unregularized performance measure to evaluate the model's performance.`",
"_____no_output_____"
],
[
"# Note\n\n> It is quite common for the cost function used during training to be different from the performance measure used for testing. Apart from regularization, **another reason they might be different is that a good trianing cost function should have optimizaiton-friendly derivatives, while the performance measure used for testing should be as close as possible to the final objective.** `For example, classifiers are often trained using a cost function such as the log loss (discussed in a moment) but evaluated using precision/recall.`",
"_____no_output_____"
],
[
"The hyperparameter **$\\alpha$ controls how much you want to regularize the model.** If $\\alpha = 0$, then Ridge Regression is just Linear Regression. If $\\alpha$ is very large, then all weights end up very close to zero and the result is a flat line going through the data’s mean. Equation below presents the Ridge Regression cost function.\n\nEquation: Ridge Regression cost function\n$$J(\\theta) = MSE(\\theta) + \\alpha\\frac{1}{2}\\sum_{i = 1}^{n}\\theta_i^2$$\n\nNote that the bias term $\\theta_0$ is not regularized (the sum starts at $i = 1$, not $0$). If we define $\\mathbf{w}$ as the vector of feature weights ($\\theta_1$ to $\\theta_n$), then the regularization term is equal to $\\frac{1}{2}(\\parallel \\mathbf{w} \\parallel_2)^2$, where $\\parallel \\mathbf{w} \\parallel_2$ represents the $l_2$ norm of the weight vector. For Gradient Descent, just add $\\alpha\\mathbf{w}$ to the MSE gradient vector",
"_____no_output_____"
],
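[
"As with plain Linear Regression, there is a closed-form solution: $\\hat\\theta = (\\mathbf{X}^T\\mathbf{X} + \\alpha\\mathbf{A})^{-1}\\mathbf{X}^T\\mathbf{y}$, where $\\mathbf{A}$ is the $(n+1)\\times(n+1)$ identity matrix with a 0 in the top-left cell (the bias term is not regularized). A sketch on some made-up data (`X_r`, `y_r` are illustrative names):\n\n```python\nnp.random.seed(42)\nX_r = 3 * np.random.rand(20, 1)\ny_r = 1 + 0.5 * X_r + np.random.randn(20, 1) / 1.5\nX_r_b = np.c_[np.ones((20, 1)), X_r]  # add the bias column\n\nalpha = 1.0\nA = np.identity(2)\nA[0, 0] = 0  # do not regularize the bias term\ntheta_ridge = np.linalg.inv(X_r_b.T.dot(X_r_b) + alpha * A).dot(X_r_b.T).dot(y_r)\nprint(theta_ridge)\n```",
"_____no_output_____"
],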
[
"# Warning\n\n> **It is important to scale the data** (e.g., using a `StandardScaler`) before performing Ridge Regression, as it is **sensitive to the scare of the input features**. This is true of most regularized models.",
"_____no_output_____"
],
[
"Figure below shows several Ridge models trained on some linear data using different $\\alpha$ values. <br>\n* On the left, plain Ridge models are used, leading to linear predictions. <br>\n* On the right, the data is first expanded using `PolynomialFeatures(degree=10)`, then it is scaled using a `StandardScaler`, and finally the Ridge models are applied to the resulting features: this is Polynomial Regression with Ridge regularization. <br>\n\nNote how increasing $\\alpha$ leads to flatter (i.e., less extreme, more reasonable) predictions, thus reducing the model’s variance but increasing its bias.",
"_____no_output_____"
]
],
[
[
"np.random.seed(42)\n\nm = 20\nX = 3 * np.random.rand(m, 1)\ny = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5\nX_new = np.linspace(0, 3, 100).reshape(100, 1)",
"_____no_output_____"
],
[
"X.shape, y.shape, X_new.shape",
"_____no_output_____"
],
[
"from sklearn.linear_model import Ridge",
"_____no_output_____"
],
[
"Ridge??",
"_____no_output_____"
],
[
"ridge_reg = Ridge(alpha=1, solver='cholesky', random_state=42)\nridge_reg.fit(X, y)\nridge_reg.predict([[1.5]])",
"_____no_output_____"
],
[
"ridge_reg.intercept_, ridge_reg.coef_",
"_____no_output_____"
],
[
"ridge_reg = Ridge(alpha=1, solver='sag', random_state=42)\nridge_reg.fit(X, y)\nridge_reg.predict([[1.5]])",
"_____no_output_____"
],
[
"from sklearn.linear_model import Ridge\n\ndef plot_model(model_class, polynomial, alphas, **model_kwargs):\n for alpha, style in zip(alphas, ('b-', 'g--', 'r:')):\n model = model_class(alpha=alpha, **model_kwargs) if alpha > 0 else LinearRegression()\n if polynomial:\n model = Pipeline([\n ('poly_features', PolynomialFeatures(degree=10, include_bias=False)),\n ('std_scaler', StandardScaler()),\n ('regul_reg', model),\n ])\n model.fit(X, y)\n y_new_regul = model.predict(X_new)\n lw = 2 if alpha > 0 else 1\n plt.plot(X_new, y_new_regul, style, linewidht=lw, label='$\\alpha = {}$'.format(alpha))\n plt.plt(X, y, 'b.')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec906d0516606b84cf1eae613b61f15468adc389 | 180,921 | ipynb | Jupyter Notebook | notebooks/simulations3.ipynb | caganze/WISPS | 81b91f8b49c7345ab68b7c4eb480716985e8905c | [
"MIT"
] | null | null | null | notebooks/simulations3.ipynb | caganze/WISPS | 81b91f8b49c7345ab68b7c4eb480716985e8905c | [
"MIT"
] | null | null | null | notebooks/simulations3.ipynb | caganze/WISPS | 81b91f8b49c7345ab68b7c4eb480716985e8905c | [
"MIT"
] | null | null | null | 171.164617 | 128,172 | 0.887559 | [
[
[
"#imports\nimport splat\nimport wisps\nimport astropy.units as u\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport glob\nimport seaborn as sns\n\nimport splat.photometry as sphot\nimport splat.core as spl1\nimport splat.empirical as spe\nimport splat.simulate as spsim\nimport matplotlib as mpl\nfrom tqdm import tqdm\n\n\nfrom astropy import stats as astrostats\n\n%matplotlib inline",
"Adding 2404 sources from /Users/caganze/research/splat//resources/Spectra/Public/SPEX-PRISM/ to spectral database\nAdding 145 sources from /Users/caganze/research/splat//resources/Spectra/Public/LRIS-RED/ to spectral database\nAdding 89 sources from /Users/caganze/research/splat//resources/Spectra/Public/MAGE/ to spectral database\n"
],
[
"#constants \ngrid=np.sort(np.random.uniform(1000, 4000,1000))\n\n#best_dict={'2MASS J': {\\\n# 'spt': [16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39], \\\n# 'values': [10.36,10.77,11.15,11.46,11.76,12.03,12.32,12.77,13.51,13.69,14.18,14.94,14.90,14.46,14.56,15.25,14.54,14.26,13.89,14.94,15.53,16.78,17.18,17.75],\\\n# 'rms': [0.30,0.30,0.42,0.34,0.18,0.15,0.21,0.24,0.28,0.25,0.60,0.20,0.13,0.71,0.5,0.12,0.06,0.16,0.36,0.12,0.27,0.76,0.51,0.5]},\n# '2MASS H': {\\\n# 'spt': [16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39], \\\n# 'values': [9.76,10.14,10.47,10.74,11.00,11.23,11.41,11.82,12.45,12.63,13.19,13.82,13.77,13.39,13.62,14.39,13.73,13.67,13.57,14.76,15.48,16.70,17.09,17.51],\\\n# 'rms': [0.30,0.31,0.43,0.35,0.23,0.21,0.25,0.29,0.3,0.30,0.62,0.31,0.20,0.73,0.5,0.18,0.15,0.24,0.40,0.24,0.37,0.78,0.5,0.5]}}",
"_____no_output_____"
],
[
"#functions\ndef flux_calibrate_spectrum(row):\n try:\n #calibrate using absolute magnidtude\n sp=splat.getSpectrum(filename=row.DATA_FILE)[0]\n spt=splat.typeToNum(row.SPEX_TYPE)\n #use optical types for early dwarffs\n if (np.isnan(spt) | (spt <=15)):\n spt=splat.typeToNum(row.OPT_TYPE)\n #no need to flux calibrate, reject high uncertainty in classification types\n #absmag=row.J_2MASS-5*(np.log10(row.DISTANCE)-1)\n #sp.fluxCalibrate('2MASS J', absmag)\n return [spt, sp]\n except :\n return []\n \n\ndef make_mamajek_fit(spt):\n \n js=mamjk.M_J.apply(float).values\n jminush=mamjk['J-H'].apply(float).values\n hs=js-jminush\n \n spts=mamjk.SpT.apply(wisps.make_spt_number).apply(float).values\n \n hsortedindex=np.argsort(hs)\n jsortedindex=np.argsort(js)\n \n hval=np.interp(spt, spts[hsortedindex], hs[hsortedindex])\n jval=np.interp(spt, spts[jsortedindex], js[jsortedindex])\n \n return ((jval, 0.4), (hval, 0.4))\n\n\ndef absolute_mag_best(spt, flt):\n #\n mags=wisps.best_dict[flt]\n spts=np.array(mags['spt'])\n if (spt < spts.min()) | (spt> spts.max()):\n return np.nan\n else:\n vals=np.array(mags['values'])\n rms=np.array(mags['rms'])\n\n sortedindex=np.argsort(vals)\n\n\n val=np.interp(spt, spts[sortedindex], vals[sortedindex])\n rmsv=np.interp(spt, spts[sortedindex], rms[sortedindex])\n \n vals=np.random.normal(val, rmsv, 1000)\n return vals.mean(), vals.std()\n \n\ndef get_abs_mag(spt):\n \n spt=wisps.make_spt_number(spt)\n \n if spt < 37:\n (j, junc), (h, hunc)= make_mamajek_fit(spt)\n \n if (spt >= 37):\n h=wisps.absolute_mag_kirkpatrick(spt, '2MASS H')\n (j, junc), (_, _)= make_mamajek_fit(spt)\n hunc=0.7\n corr0=splat.photometry.vegaToAB('2MASS J')\n corr1=splat.photometry.vegaToAB('2MASS H')\n return [[j+corr0, junc], [h+corr1, hunc]]\n \ndef schn_flux_calibrate(row):\n sp=row.spectra.splat_spectrum\n spt=splat.typeToNum(row.Spec)\n sp.fluxCalibrate('MKO J',float(row.J_MKO))\n return [spt, sp]\n\ndef get_colors(sp, flt):\n #measuring filtermags in for two filters and comparing that to target filters\n #remember to include euclid filters\n #using splat filtermag\n mag, mag_unc = splat.filterMag(sp, flt, ab=True)\n #calculate the mag of the standard in J and H\n \n magj, mag_uncj = splat.filterMag(sp,'2MASS J', ab=True)\n magh, mag_unch = splat.filterMag(sp,'2MASS H', ab=True)\n #calculate the offset between HST filters and 2mass filters but add the uncertainty\n \n offsetj=magj-mag\n offseth=magh-mag\n \n unc1=(mag_unc**2+mag_uncj**2)**0.5\n unc2=(mag_unc**2+mag_unch**2)**0.5\n \n #offsetj=np.random.normal(offsetj, unc1)\n #offseth=np.random.normal(offseth, unc2)\n return [[offsetj, offseth], [unc1, unc2]]\n\n\ndef get_abs_hst_mag(color, mag0):\n return mag0-color\n\n\ndef k_clip_fit(x, y, sigma_y, sigma = 5, n=6):\n \n '''Fit a polynomial to y vs. x, and k-sigma clip until convergence'''\n \n not_clipped = np.ones_like(y).astype(bool)\n n_remove = 1\n \n #use median sigma\n #median_sigma= np.nanmedian(sigma_y)\n \n while n_remove > 0:\n\n best_fit = np.poly1d(np.polyfit(x[not_clipped], y[not_clipped], n))\n \n norm_res = (np.abs(y - best_fit(x)))/(sigma_y)\n remove = np.logical_and(norm_res >= sigma, not_clipped == 1)\n n_remove = sum(remove)\n not_clipped[remove] = 0 \n \n return not_clipped\n\ndef fit_with_nsigma_clipping(x, y, y_unc, n, sigma=3.):\n not_clipped = k_clip_fit(x, y, y_unc, sigma = sigma)\n return not_clipped, np.poly1d(np.polyfit(x[not_clipped], y[not_clipped], n))",
"_____no_output_____"
],
[
"\n#load spectra, ignore binaries, objects with high uncertainty in mag and objects without parallaxes\nsplat_db=splat.searchLibrary(vlm=True, giant=False, young=False, binary=False)\nsplat_db['SHORTNAME']=splat_db.DESIGNATION.apply(lambda x: splat.designationToShortName)\n#sml=splat_db[~ ((splat_db.H_2MASS_E > 0.1) | (splat_db.J_2MASS_E > 0.1) | (splat_db.MEDIAN_SNR <20) )]\nsml=splat_db[~ ((splat_db.H_2MASS_E > 0.3) | (splat_db.J_2MASS_E > 0.3) |\n (splat_db.SPEX_TYPE.apply(splat.typeToNum) <15))]\n\n#sds=sml[(sml.METALLICITY_CLASS=='sd') | (sml.METALLICITY_CLASS=='esd') ]\nsml=sml[~((sml.METALLICITY_CLASS=='sd') | (sml.METALLICITY_CLASS=='esd') \\\n | (sml.MEDIAN_SNR <20))]",
"_____no_output_____"
],
[
"mdwarfs=sml[ (sml.SPEX_TYPE.apply(splat.typeToNum) <20)]\nldwarfs=sml[ (sml.SPEX_TYPE.apply(splat.typeToNum).between(20, 30))]\ntdwarfs=sml[ (sml.SPEX_TYPE.apply(splat.typeToNum).between(30, 40))]\n\n#tighter_constraints on m dwarfs \nmdwarfs=mdwarfs[(~mdwarfs.PARALLAX.isna()) & (mdwarfs.MEDIAN_SNR >100)]\nldwarfs=ldwarfs[ (ldwarfs.MEDIAN_SNR >70)]\n\ndef choose_ten(df):\n if len(df) >10:\n return df.sort_values('MEDIAN_SNR', ascending=False)[:10]\n else:\n return df\nls=ldwarfs.groupby('SPEX_TYPE').apply(choose_ten).reset_index(drop=True)#.groupby('SPEX_TYPE').count()",
"_____no_output_____"
],
[
"#get y dwarfs\ndef get_shortname(n):\n return splat.designationToShortName(n).replace('J', 'WISE')\nschn='/Users/caganze/research/wisps/data/schneider/*.txt'\nschntb=pd.read_csv('/Users/caganze/research/wisps/data/schneider2015.txt', \n delimiter=' ').drop(columns='Unnamed: 14')\nschntb['shortname']=schntb.Name.apply(get_shortname)\nspectra_schn=[]\nfrom astropy.io import ascii\nfor f in glob.glob(schn):\n d=ascii.read(f).to_pandas()\n shortname=(f.split('/')[-1]).split('.txt')[0]\n s=splat.Spectrum(wave=d.col1, \n flux=d.col2,\n noise=d.col3, \n name=shortname)\n #measure snr \n mask= np.logical_and(d.col1>1.0, d.col1<2.4)\n snr= (np.nanmedian(d.col2[mask]/d.col3[mask]))\n spectra_schn.append([s, snr])",
"_____no_output_____"
],
[
"#schn_merged=(schn_merged[schn_merged.snr1>10]).reset_index(drop=True)\nsmlf=pd.concat([mdwarfs, ls, tdwarfs]).reset_index(drop=True)",
"_____no_output_____"
],
[
"def make_spt_number(spt):\n ##make a spt a number\n if isinstance(spt, str):\n return splat.typeToNum(spt)\n else:\n return spt",
"_____no_output_____"
],
[
"def get_file(x):\n try:\n return splat.getSpectrum(filename=x)[0]\n except:\n return ",
"_____no_output_____"
],
[
"%%capture\ntempls=smlf.DATA_FILE.apply(lambda x: get_file(x))",
"_____no_output_____"
],
[
"schntb['spectra']=[x[0] for x in spectra_schn]\n\nschntb['snr']=[x[1] for x in spectra_schn]\n\nschntb=schntb[schntb.snr>=2.].reset_index(drop=True)\n\nall_spectra=np.concatenate([templs,schntb.spectra.values ])",
"_____no_output_____"
],
[
"spts=np.concatenate([smlf.SPEX_TYPE.apply(make_spt_number).values,\n schntb.Spec.apply(make_spt_number).values,\n ])\n\n#remove nones\nnones= np.array(all_spectra)==None\nall_spectra=all_spectra[~nones]\nspts=spts[~nones]\nassert len(spts) == len(all_spectra)\n#assert len(spts) == len(all_spectra)",
"_____no_output_____"
],
[
"from astropy.io import ascii\nmamjk=ascii.read('/users/caganze/research/wisps/data/mamajek_relations.txt').to_pandas().replace('None', np.nan)",
"_____no_output_____"
],
[
"#combined calibrated spctra\n#combcal=np.append(calbr, calbrschn)\n#specs=np.array([x for x in pd.DataFrame(combcal).values if x])\nspecs= list(zip(spts, all_spectra))",
"_____no_output_____"
],
[
"get_colors(all_spectra[-1], 'WFC3_F110W')",
"_____no_output_____"
],
[
"import pickle\noutput = open(wisps.OUTPUT_FILES+'/validated_spectra.pkl', 'wb')\npickle.dump(specs, output)\noutput.close()\n",
"_____no_output_____"
],
[
"#specs\n\n#compute colors for different filters\ncolors=[]\nuncolors=[]\nfltrswfc3= ['WFC3_{}'.format(k) for k in ['F110W', 'F140W', 'F160W']]\nfltrseucl=['EUCLID_J', 'EUCLID_H']\n\nfltrs=np.append(fltrswfc3, fltrseucl)\nprint (fltrs)\nfor pair in tqdm(specs):\n c={}\n uncclrs={}\n for flt in fltrs:\n x=pair[1]\n sptx=pair[0]\n color, uncc=get_colors(x, flt)\n c.update({flt: color})\n uncclrs.update({flt:uncc})\n uncolors.append(uncclrs)\n colors.append(c)",
"\r 0%| | 0/336 [00:00<?, ?it/s]"
],
[
"assert len(spts) ==len(colors)",
"_____no_output_____"
],
[
"sp_grid= spts\n#sp_grid=sp_grid0[~nans]",
"_____no_output_____"
],
[
"colors_df=pd.DataFrame(colors)#[~nans]\nuncolors_df=pd.DataFrame(uncolors)#[~nans]",
"_____no_output_____"
],
[
"colors_df['spt']=sp_grid\nuncolors_df['spt']=sp_grid",
"_____no_output_____"
],
[
"colors_polynomials={}\nfor k in colors_df.columns:\n if k != 'spt':\n clrs=np.vstack(colors_df[k]).astype(float)\n uncs=np.vstack(uncolors_df[k]).astype(float)\n \n mask0, pc0=fit_with_nsigma_clipping( sp_grid,clrs[:,0], uncs[:,0],6, sigma=5.)\n mask1, pc1=fit_with_nsigma_clipping( sp_grid,clrs[:,1], uncs[:,1],6, sigma=5.)\n \n x0, y0, yunc0= sp_grid[mask0], clrs[:,0][mask0], uncs[:,0][mask0]\n x1, y1, yunc1= sp_grid[mask1], clrs[:,1][mask1], uncs[:,1][mask1]\n \n\n\n colors_polynomials.update({k+'_J': {'pol': pc0, 'mask':mask0, \n 'color':clrs[:,0], 'unc': uncs[:,0], \n 'scatter': 5.*np.abs(pc0(x0)- y0).mean() }, \n k+'_H': {'pol': pc1, 'mask':mask1, 'color': clrs[:,1] , 'unc': uncs[:,1] ,\n 'scatter': 5.*np.abs(pc1(x1)- y1).mean() }})",
"_____no_output_____"
],
[
"two_mass_values=np.array([ get_abs_mag(x) for x in sp_grid])",
"_____no_output_____"
],
[
"plt.plot(sp_grid, two_mass_values[:, 0][:, 0], '.')\nplt.plot(sp_grid, two_mass_values[:, 1][:, 0], '.')",
"_____no_output_____"
],
[
"polynomial_relations={}\n\nfor k in colors_polynomials.keys():\n \n if k.endswith('J'): #use j-offset for j offset for h\n #take the median centered around the uncertainty \n two_mass_to_use=two_mass_values[:, 0][:,0]\n two_mass_uncer= two_mass_values[:,0][:,1]\n \n else:\n two_mass_to_use=two_mass_values[:, 1][:,0]\n two_mass_uncer= two_mass_values[:,1][:,1]\n \n mask= np.logical_and.reduce([(colors_polynomials[k])['mask'], \n ~np.isnan((colors_polynomials[k])['color']),\n ~np.isnan((colors_polynomials[k])['unc']), \n ~np.isnan(two_mass_to_use)])\n \n #add values and propagate total uncertainty\n total_uncer=(two_mass_uncer**2+ (colors_polynomials[k])['unc']**2)**0.5\n \n vals0= np.random.normal(two_mass_to_use+ (colors_polynomials[k])['color'], total_uncer , \n size=( 1000, len(mask)))\n \n vals=vals0.mean(axis=0)\n uncs=vals0.std(axis=0)\n \n #only fit masked area \n x=sp_grid[mask]\n y=vals[mask]\n yunc=total_uncer[mask]\n\n\n maskn, p=fit_with_nsigma_clipping(x,y,yunc,6, sigma=5.)\n\n\n polynomial_relations.update({k:{'x': x, 'y': y, 'pol': p, 'yunc': yunc, 'mask':maskn,\n 'scatter': 5*(abs(p(x[maskn])-y[maskn])).mean()}})",
"_____no_output_____"
],
[
"wisps.kirkpa2019pol['scatter']",
"_____no_output_____"
],
[
"RMS_BEST={'J', np.array((wisps.best_dict['2MASS J']['rms'])).mean()**2 + 0.4**2, \n 'H', np.array((wisps.best_dict['2MASS H']['rms'])).mean()**2 + 0.4**2}",
"_____no_output_____"
],
[
"RMS_DAVY=wisps.kirkpa2019pol['scatter']",
"_____no_output_____"
],
[
"polynomial_relations.keys()",
"_____no_output_____"
],
[
"final_pol_keys=['WFC3_F110W_J', 'WFC3_F140W_J', 'WFC3_F160W_H']",
"_____no_output_____"
],
[
"colors_polynomials[k].keys()",
"_____no_output_____"
],
[
"#visualize \nfig, (ax, ax1)=plt.subplots(ncols=3, figsize=(12, 8), nrows=2, sharey=False)\n\nfor idx, k in zip(range(0, 10), final_pol_keys):\n \n pc=colors_polynomials[k]['pol']\n p=polynomial_relations[k]['pol']\n \n masked=colors_polynomials[k]['mask']\n maskedpol=polynomial_relations[k]['mask']\n scpol=polynomial_relations[k]['scatter']\n scolor=colors_polynomials[k]['scatter']\n \n print (scpol)\n ax[idx].plot(np.linspace(15, 42), pc(np.linspace(15, 42)), c='#001f3f', linewidth=3)\n ax1[idx].plot(np.linspace(15, 42), p(np.linspace(15, 42)), c='#001f3f', linewidth=3)\n \n ax[idx].fill_between(np.linspace(15, 42), pc(np.linspace(15, 42))+scolor, pc(np.linspace(15, 42))-scolor, alpha=0.5 )\n \n ax1[idx].fill_between(np.linspace(15, 42), p(np.linspace(15, 42))+scpol, p(np.linspace(15, 42))-scpol, alpha=0.5 )\n \n ax[idx].errorbar(sp_grid[mask], (colors_polynomials[k]['color'])[mask], yerr=(colors_polynomials[k]['unc'])[mask], fmt='o', mec='#111111')\n \n ax[idx].errorbar(sp_grid[~mask], (colors_polynomials[k]['color'])[~mask], yerr= (colors_polynomials[k]['unc'])[~mask], fmt='x', mec='#111111')\n \n \n ax1[idx].errorbar(polynomial_relations[k]['x'][~maskedpol], polynomial_relations[k]['y'][~maskedpol],yerr=polynomial_relations[k]['yunc'][~maskedpol], fmt='x', mec='#111111')\n ax1[idx].errorbar(polynomial_relations[k]['x'][maskedpol], polynomial_relations[k]['y'][maskedpol], yerr= polynomial_relations[k]['yunc'][maskedpol], fmt='o', mec='#111111')\n \n \n #ax[idx].set_xlim([15, 42])\n #ax1[idx].set_xlim([15, 42])\n \n ax[idx].minorticks_on()\n ax1[idx].minorticks_on()\n \n \n ax[idx].set_xticks([15, 20, 25, 30, 35, 40])\n ax[idx].set_xticklabels(['M5', 'L0', 'L5', 'T0', 'T5', 'Y0'])\n \n ax1[idx].set_xticks([15, 20, 25, 30, 35, 40])\n ax1[idx].set_xticklabels(['M5', 'L0', 'L5', 'T0', 'T5', 'Y0'])\n \n ax[idx].set_xlabel('Spectral Type')\n ax1[idx].set_xlabel('Spectral Type')\n\n \n \n\n \n#ax[0].set_ylim([-0.75, 0.0])\n#ax[1].set_ylim([-.25, 0.5])\n#ax[2].set_ylim([-.75, 0.25])\n\n#ax1[0].set_ylim([8, 27])\n#ax1[1].set_ylim([8, 27])\n#ax1[2].set_ylim([7, 27])\n\n#ax1[0].set_ylim([8, 27])\n\nfor a in ax1:\n a.invert_yaxis()\n\nax[0].set_ylabel('2MASS J - WFC3 F110W')\nax[1].set_ylabel('2MASS J - WFC3 F140W')\nax[2].set_ylabel('2MASS H - WFC3 F160W')\n\nax1[0].set_ylabel(r'$M_\\mathrm{F110W} $ (AB)')\nax1[1].set_ylabel(r'$M_\\mathrm{F140W} $ (AB)')\nax1[2].set_ylabel(r'$M_\\mathrm{F160W} $ (AB)')\n\n\nplt.tight_layout()\nplt.savefig(wisps.OUTPUT_FIGURES+'/abs_mag_relations.pdf', bbox_inches='tight')",
"0.3222664721637294\n0.3685143906206988\n0.3965506236229219\n"
],
[
"polynomial_relations.keys()",
"_____no_output_____"
],
[
"len(maskedpol)",
"_____no_output_____"
],
[
"rels={'abs_mags':{'F110W': (polynomial_relations['WFC3_F110W_J']['pol'], polynomial_relations['WFC3_F110W_J']['scatter'] ),\n 'F140W': (polynomial_relations['WFC3_F140W_J']['pol'], polynomial_relations['WFC3_F140W_J']['scatter'] ),\n 'F160W': (polynomial_relations['WFC3_F160W_H']['pol'], polynomial_relations['WFC3_F160W_H']['scatter'] ),\n 'EUCLID_J': (polynomial_relations['EUCLID_J_J']['pol'], polynomial_relations['EUCLID_J_J']['scatter'] ),\n 'EUCLID_H': (polynomial_relations['EUCLID_H_H']['pol'], polynomial_relations['EUCLID_H_H']['scatter'] )},\n \n 'colors':{'j_f110': (colors_polynomials['WFC3_F110W_J']['pol'], colors_polynomials['WFC3_F110W_J']['scatter'] ),\n 'j_f140': (colors_polynomials['WFC3_F140W_J']['pol'], colors_polynomials['WFC3_F140W_J']['scatter'] ),\n 'j_f160': (colors_polynomials['WFC3_F160W_J']['pol'], colors_polynomials['WFC3_F160W_J']['scatter'] ),\n 'h_f110': (colors_polynomials['WFC3_F110W_H']['pol'], colors_polynomials['WFC3_F110W_H']['scatter'] ),\n 'h_f140': (colors_polynomials['WFC3_F140W_H']['pol'], colors_polynomials['WFC3_F140W_H']['scatter'] ),\n 'h_f160': (colors_polynomials['WFC3_F160W_H']['pol'], colors_polynomials['WFC3_F160W_H']['scatter'] )\n },\n \n 'snr':wisps.POLYNOMIAL_RELATIONS['snr']}\n",
"_____no_output_____"
],
[
"rels0=wisps.POLYNOMIAL_RELATIONS\nrels0.update({'abs_mags': rels['abs_mags'], \n 'colors': rels['colors']})",
"_____no_output_____"
],
[
"import pickle\noutput = open(wisps.OUTPUT_FILES+'/polynomial_relations.pkl', 'wb')\npickle.dump(rels0, output)\noutput.close()\n\ndef interpolated_templates(s):\n try:\n s.normalize()\n #s.toInstrument('WFC3-G141')\n wv= s.wave.value\n fl= s.flux.value\n fl[fl < 0.0]=np.nan\n #s.toInstrument('WFC3-G141')\n return interpolate.interp1d(wv, fl,\n bounds_error=False,fill_value=0.)\n except:\n return ",
"_____no_output_____"
],
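[
"# Quick check of the relations saved above (illustrative sketch, not part of the original analysis).\n# It assumes the 'abs_mags' entries keep the (fit, scatter) tuples built in this notebook,\n# with the fit callable on the numeric spectral-type grid (15=M5 ... 40=Y0) used above.\nwith open(wisps.OUTPUT_FILES+'/polynomial_relations.pkl', 'rb') as pkl_file:\n    saved_rels = pickle.load(pkl_file)\n\npol_f140w, scatter_f140w = saved_rels['abs_mags']['F140W']\nspt = 30  # T0 on the numeric spectral-type grid\nprint('M_F140W(T0) = {:.2f} +/- {:.2f} (AB)'.format(pol_f140w(spt), scatter_f140w))",
"_____no_output_____"
],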
[
"from scipy import interpolate\ndf= pd.DataFrame()\ndf['spt']=spts\ndf['name']=[x.name for x in all_spectra]\ndf['spectra']=all_spectra\ndf['interp']=df.spectra.apply(interpolated_templates)",
"_____no_output_____"
],
[
"df=df[~df.interp.isna()]",
"_____no_output_____"
],
[
"plt.plot(df.spt, '.')",
"_____no_output_____"
],
[
"#d",
"_____no_output_____"
],
[
"import pickle\noutput = open(wisps.OUTPUT_FILES+'/validated_templates.pkl', 'wb')\npickle.dump(df, output)\noutput.close()\n",
"_____no_output_____"
],
[
"\n#splat.filterMag?",
"_____no_output_____"
],
[
"#",
"_____no_output_____"
],
[
"#",
"_____no_output_____"
]
],
[
[
"## ",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ec90799b7a7ea4dd24162e975bb4a9fbd909d8ac | 13,932 | ipynb | Jupyter Notebook | tutorials/xomx_hla.ipynb | perrin-isir/xomx-tutorials | ccf78f6abe226516749aff3f76459ffaae4d4147 | [
"BSD-3-Clause"
] | null | null | null | tutorials/xomx_hla.ipynb | perrin-isir/xomx-tutorials | ccf78f6abe226516749aff3f76459ffaae4d4147 | [
"BSD-3-Clause"
] | null | null | null | tutorials/xomx_hla.ipynb | perrin-isir/xomx-tutorials | ccf78f6abe226516749aff3f76459ffaae4d4147 | [
"BSD-3-Clause"
] | null | null | null | 28.785124 | 336 | 0.583118 | [
[
[
"<a href=\"https://colab.research.google.com/github/perrin-isir/xomx-tutorials/blob/main/tutorials/xomx_hla.ipynb\"> <img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open in Google Colaboratory\"></a>\n<a id=\"raw-url\" href=\"https://raw.githubusercontent.com/perrin-isir/xomx-tutorials/main/tutorials/xomx_hla.ipynb\" download> <img align=\"left\" src=\"https://img.shields.io/badge/Github-Download%20(Right%20click%20%2B%20Save%20link%20as...)-blue\" alt=\"Download (Right click + Save link as)\" title=\"Download Notebook\"></a>",
"_____no_output_____"
],
[
"# *xomx tutorial:* **tissue prediction based on HLA-presented peptides**",
"_____no_output_____"
]
],
[
[
"# imports:\nimport os\nimport joblib\nfrom IPython.display import clear_output\ntry:\n import xomx\nexcept ImportError:\n !pip install git+https://github.com/perrin-isir/xomx.git\n clear_output()\n import xomx\ntry:\n import scanpy as sc\nexcept ImportError:\n !pip install scanpy\n clear_output()\n import scanpy as sc\ntry:\n import mhcflurry\nexcept ImportError:\n !pip install mhcflurry\n !mhcflurry-downloads fetch models_class1_presentation\n clear_output()\n import mhcflurry\ntry:\n import trimap\nexcept ImportError:\n !pip install trimap\n clear_output()\n import trimap\nimport numpy as np\nimport pandas as pd\nimport umap",
"_____no_output_____"
],
[
"save_dir = os.path.join(os.path.expanduser(\"~\"), \"results\", \"xomx-tutorials\", \"xomx_hla\") # the default directory in which results are stored\nos.makedirs(save_dir, exist_ok=True)",
"_____no_output_____"
]
],
[
[
"The HLA Ligand Atlas is a resource of natural HLA ligands presented on benign tissues. \nWe first gather in a dict (`dfs`) 4 pandas dataframes from the HLA Ligand Atlas: \n- `dfs[\"peptides\"]`: the list of peptide sequences with their id,\n- `dfs[\"donors\"]`: the list of donors and their alleles,\n- `dfs[\"sample_hits\"]`: for all the peptide sequences, the donors and tissues in which they have been found, and their HLA class,\n- `dfs[\"aggregated\"]`: one row per peptide sequence, with the HLA class of the peptide, and the list of donor alleles and tissues associated with the peptide. ",
"_____no_output_____"
]
],
[
[
"base_url = \"http://hla-ligand-atlas.org/rel/2020.12/\"\nfilenames = [\"peptides\", \"donors\", \"sample_hits\", \"aggregated\"]\ndfs = {}\nfor nm in filenames:\n if not os.path.isfile(os.path.join(save_dir, nm + \".joblib\")):\n dfs[nm] = pd.read_csv(base_url + nm + \".tsv.gz\", sep=\"\\t\")\n joblib.dump(dfs[nm], os.path.join(save_dir, nm + \".joblib\"))\n else:\n dfs[nm] = joblib.load(os.path.join(save_dir, nm + \".joblib\"))",
"_____no_output_____"
]
],
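[
[
"# Quick orientation (added sketch): list the columns of each table loaded above.\n# The exact column sets depend on the 2020.12 HLA Ligand Atlas release being downloaded.\nfor nm in filenames:\n    print(nm, ':', list(dfs[nm].columns))",
"_____no_output_____"
]
],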
[
[
"We compute the set of all alleles present in the database:",
"_____no_output_____"
]
],
[
[
"alleles_ = sorted(list(set(np.concatenate([allele.split(\",\") for allele in dfs[\"aggregated\"].donor_alleles]))))",
"_____no_output_____"
]
],
[
[
"In this list, the alleles start with one of the 3 prefixes \"n/\", \"w/\" and \"s/\", which characterize binding predictions of peptides: \n- \"n/\": predicted non-binder donor allele\n- \"w/\": predicted weak binder donor allele\n- \"s/\": predicted strong binder donor allele\n\nFor example, the peptide with id 22 has been found in donors with the following alleles:",
"_____no_output_____"
]
],
[
[
"list(dfs[\"aggregated\"][dfs[\"aggregated\"].peptide_sequence_id == 22].donor_alleles)",
"_____no_output_____"
]
],
[
[
"The peptide is predicted to be a non-binder for all of these alleles, except for DRB5\\*01:01, for which it is predicted to be a strong binder. \nHere is the list of alleles without the prefixes:",
"_____no_output_____"
]
],
[
[
"alleles = sorted(list(set([al[2:] for al in alleles_])))",
"_____no_output_____"
]
],
[
[
"We now filter the data to keep only peptides that are predicted to be weak or strong binders for the allele A\\*02:01:",
"_____no_output_____"
]
],
[
[
"allele_filtered_df = dfs[\"aggregated\"][dfs[\"aggregated\"].donor_alleles.apply(lambda x: (\"w/A*02:01\" in x) or (\"s/A*02:01\" in x))]",
"_____no_output_____"
]
],
[
[
"Here is the set of tissues in the database:",
"_____no_output_____"
]
],
[
[
"tissues = set(np.concatenate([tissue.split(\",\") for tissue in dfs[\"aggregated\"].tissues]))\ntissues",
"_____no_output_____"
]
],
[
[
"We select two of them, for example \"Thymus\" and \"Liver\", and filter the data to keep only the peptides that have been found in either of these tissues:",
"_____no_output_____"
]
],
[
[
"tissue_1 = \"Lung\"\ntissue_2 = \"Liver\"\ntissue_filtered_df = allele_filtered_df[allele_filtered_df.tissues.apply(lambda x: tissue_1 in x or tissue_2 in x)]\nprint(f\"{len(tissue_filtered_df)} peptides\")",
"_____no_output_____"
],
[
"max_length_peptide = tissue_filtered_df.peptide_sequence.apply(len).max()\nxd = sc.AnnData(shape=(tissue_filtered_df.shape[0], max_length_peptide * len(xomx.tl.aminoacids)))\nxd.obs_names = np.array(tissue_filtered_df.peptide_sequence)\nxd.X = np.empty((xd.n_obs, xd.n_vars))\nfor i in range(xd.n_obs):\n xd.X[i, :] = xomx.tl.onehot(xd.obs_names[i], max_length_peptide)\nxd.obs['labels'] = np.array(tissue_filtered_df.tissues.apply(lambda x: (tissue_1 if tissue_1 in x else \"\") + (tissue_2 if tissue_2 in x else \"\")))\nxd.uns['all_labels'] = xomx.tl.all_labels(xd.obs['labels'])\nxd.uns['obs_indices_per_label'] = xomx.tl.indices_per_label(xd.obs['labels'])",
"_____no_output_____"
],
[
"rng = np.random.RandomState(0)\nxomx.pl.plot_2d_embedding(xd, trimap.TRIMAP())",
"_____no_output_____"
],
[
"trimap.TRIMAP().transform()",
"_____no_output_____"
],
[
"xomx.tl.train_and_test_indices(xd, \"obs_indices_per_label\", test_train_ratio=0.25, rng=rng)\nclassifier = {}\nclassifier[tissue_1] = xomx.fs.RFEExtraTrees(\n xd,\n tissue_1,\n n_estimators=450,\n random_state=rng,\n)",
"_____no_output_____"
],
[
"classifier[tissue_1].init()",
"_____no_output_____"
],
[
"classifier[tissue_1].plot()",
"_____no_output_____"
],
[
"xomx.tl.matthews_coef(classifier[tissue_1].confusion_matrix)",
"_____no_output_____"
]
],
[
[
"Remark: the MCC score obtained is close to 0.5, which is definitely better than random predictions (MCC ~ 0), however for other choices of alleles and tissues, we frequently obtain an MCC score close to 0, showing that the classifier is not able to generalize at all. \nThe problem of tissue prediction based on HLA-presented peptides is hard, but there may be specific cases for which it is possible.",
"_____no_output_____"
]
],
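[
[
"# Hedged illustration (added; not from the original analysis): the Matthews correlation\n# coefficient spelled out for a 2x2 confusion matrix laid out as [[TN, FP], [FN, TP]].\n# It is only meant to show why MCC ~ 0 means chance-level predictions while MCC ~ 0.5 is\n# genuinely informative; the actual score above comes from xomx.tl.matthews_coef.\ndef mcc_from_confusion(cm):\n    tn, fp = cm[0]\n    fn, tp = cm[1]\n    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5\n    return (tp * tn - fp * fn) / denom if denom else 0.0\n\nprint(mcc_from_confusion([[50, 50], [50, 50]]))  # chance-level -> 0.0\nprint(mcc_from_confusion([[90, 10], [20, 80]]))  # reasonably good -> ~0.7",
"_____no_output_____"
]
],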
[
[
"predictor = mhcflurry.Class1PresentationPredictor.load()",
"_____no_output_____"
],
[
"results1 = predictor.predict([\"NLVPMVATV\", \"RANDMPEPTIDE\"], [\"A*02:01\", \"A*01:01\", \"A*03:01\"])",
"_____no_output_____"
],
[
"results1",
"_____no_output_____"
],
[
"dfs[\"sample_hits\"]",
"_____no_output_____"
],
[
"donor_sample_hits = dfs[\"sample_hits\"][dfs[\"sample_hits\"].donor == \"AUT01-DN02\"]",
"_____no_output_____"
],
[
"dfs[\"donors\"][dfs[\"donors\"].donor == \"AUT01-DN02\"]",
"_____no_output_____"
],
[
"hla_class_filtered_df = donor_sample_hits[donor_sample_hits.hla_class == \"HLA-I\"]",
"_____no_output_____"
],
[
"donor_filtered_df = dfs[\"aggregated\"].take(np.array(sorted(list(set(hla_class_filtered_df.peptide_sequence_id)))) - 1)",
"_____no_output_____"
],
[
"donor_filtered_df",
"_____no_output_____"
],
[
"donor_filtered_max_length_peptide = donor_filtered_df.peptide_sequence.apply(len).max()\ndonor_filtered_xd = sc.AnnData(shape=(donor_filtered_df.shape[0], donor_filtered_max_length_peptide * len(xomx.tl.aminoacids)))\ndonor_filtered_xd.obs_names = np.array(donor_filtered_df.peptide_sequence)\ndonor_filtered_xd.X = np.empty((donor_filtered_xd.n_obs, donor_filtered_xd.n_vars))\nfor i in range(xd.n_obs):\n donor_filtered_xd.X[i, :] = xomx.tl.onehot(donor_filtered_xd.obs_names[i], donor_filtered_max_length_peptide)\n# donor_filtered_xd.obs['labels'] = np.array(tissue_filtered_df.tissues.apply(lambda x: (tissue_1 if tissue_1 in x else \"\") + (tissue_2 if tissue_2 in x else \"\")))\n# donor_filtered_xd.uns['all_labels'] = xomx.tl.all_labels(donor_filtered_xd.obs['labels'])\n# donor_filtered_xd.uns['obs_indices_per_label'] = xomx.tl.indices_per_label(donor_filtered_xd.obs['labels'])",
"_____no_output_____"
],
[
"len(list(donor_filtered_xd.obs_names))",
"_____no_output_____"
],
[
"results1 = predictor.predict(list(donor_filtered_xd.obs_names), [alleles[21]])",
"_____no_output_____"
],
[
"results1[results1.presentation_score > 0.9]",
"_____no_output_____"
],
[
"results = {}\nfor al in alleles[:51]:\n results[al] = predictor.predict(list(donor_filtered_xd.obs_names), [al])",
"_____no_output_____"
],
[
"for i in range(51):\n print(f\"{i} ({alleles[i]}): {len(results[alleles[i]][results[alleles[i]].presentation_score > 0.98])}\")",
"_____no_output_____"
],
[
"predictor.predict(list(donor_filtered_xd.obs_names), [alleles[0]])",
"_____no_output_____"
],
[
"xomx.pl.plot_2d_embedding(donor_filtered_xd, trimap.TRIMAP(distance=\"manhattan\"))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec908b52d6c896b10225f5b3b3033a67f6737418 | 3,982 | ipynb | Jupyter Notebook | prototyping/auto-segmentation/sb/01-attempt-dataset-streaming-from-zenodo/28-automate-zenodo-upload.ipynb | dg1an3/pymedphys | bdca9c783aae8b5e1f231e6cb0bc69895e4b9329 | [
"Apache-2.0"
] | 2 | 2020-02-04T03:21:20.000Z | 2020-04-11T14:17:53.000Z | prototyping/auto-segmentation/sb/01-attempt-dataset-streaming-from-zenodo/28-automate-zenodo-upload.ipynb | SimonBiggs/pymedphys | 83f02eac6549ac155c6963e0a8d1f9284359b652 | [
"Apache-2.0"
] | null | null | null | prototyping/auto-segmentation/sb/01-attempt-dataset-streaming-from-zenodo/28-automate-zenodo-upload.ipynb | SimonBiggs/pymedphys | 83f02eac6549ac155c6963e0a8d1f9284359b652 | [
"Apache-2.0"
] | null | null | null | 24.133333 | 94 | 0.565043 | [
[
[
"import pathlib",
"_____no_output_____"
],
[
"# Makes it so any changes in pymedphys is automatically\n# propagated into the notebook without needing a kernel reset.\nfrom IPython.lib.deepreload import reload\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"from pymedphys._data import zenodo, upload\nfrom pymedphys.labs.autosegmentation import indexing, filtering",
"_____no_output_____"
],
[
"data_path_root = pathlib.Path.home().joinpath('.data/dicom-ct-and-structures')\n\nname_mappings_path = data_path_root.joinpath('name_mappings.json')\nnames_map = filtering.load_names_mapping(name_mappings_path)",
"_____no_output_____"
],
[
"(\n ct_image_paths,\n structure_set_paths,\n ct_uid_to_structure_uid,\n structure_uid_to_ct_uids,\n) = indexing.get_uid_cache(data_path_root)",
"_____no_output_____"
],
[
"(\n structure_names_by_ct_uid,\n structure_names_by_structure_set_uid,\n) = indexing.get_cached_structure_names_by_uids(\n data_path_root, structure_set_paths, names_map\n)",
"_____no_output_____"
],
[
"stucture_uids = structure_uid_to_ct_uids.keys()",
"_____no_output_____"
],
[
"prep_for_zenodo_path = pathlib.Path().home().joinpath('Documents', 'prep-for-zenodo')\nzip_paths = list(prep_for_zenodo_path.glob('*'))\n\nlen(zip_paths)",
"_____no_output_____"
],
[
"def get_path(uid, zip_paths):\n filtered_path = [path for path in zip_paths if uid in path.name]\n assert len(filtered_path) == 1\n path = filtered_path[0]\n \n return path",
"_____no_output_____"
],
[
"author = 'PyMedPhys Contributors'\nuse_sandbox = True\n\nfor structure_uid in list(stucture_uids)[10::]:\n title = f'auto-segmentation-{structure_uid}'\n \n filepaths = [get_path(structure_uid, zip_paths)]\n \n ct_uids = structure_uid_to_ct_uids[structure_uid]\n for ct_uid in ct_uids:\n filepaths.append(get_path(ct_uid, zip_paths))\n \n print(len(set(filepaths)))\n print(len(filepaths))\n assert len(set(filepaths)) == len(filepaths)\n \n upload.upload_files_to_zenodo(filepaths, title, author, use_sandbox=use_sandbox)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec908b5c4b0699561d773e348a9a03e7a35ffdbe | 59,346 | ipynb | Jupyter Notebook | site/en-snapshot/tfx/tutorials/tfx/components.ipynb | wanggdnju/docs-l10n | 4775692c820ce24babcaf2f29f6130195f7ff509 | [
"Apache-2.0"
] | 1 | 2021-12-14T09:14:16.000Z | 2021-12-14T09:14:16.000Z | site/en-snapshot/tfx/tutorials/tfx/components.ipynb | wanggdnju/docs-l10n | 4775692c820ce24babcaf2f29f6130195f7ff509 | [
"Apache-2.0"
] | null | null | null | site/en-snapshot/tfx/tutorials/tfx/components.ipynb | wanggdnju/docs-l10n | 4775692c820ce24babcaf2f29f6130195f7ff509 | [
"Apache-2.0"
] | null | null | null | 37.920767 | 538 | 0.565396 | [
[
[
"##### Copyright 2021 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# TFX Estimator Component Tutorial\n\n***A Component-by-Component Introduction to TensorFlow Extended (TFX)***",
"_____no_output_____"
],
[
"Note: We recommend running this tutorial in a Colab notebook, with no setup required! Just click \"Run in Google Colab\".\n\n<div class=\"devsite-table-wrapper\"><table class=\"tfo-notebook-buttons\" align=\"left\">\n<td><a target=\"_blank\" href=\"https://www.tensorflow.org/tfx/tutorials/tfx/components\">\n<img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a></td>\n<td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/components.ipynb\">\n<img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Run in Google Colab</a></td>\n<td><a target=\"_blank\" href=\"https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/components.ipynb\">\n<img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">View source on GitHub</a></td>\n<td><a target=\"_blank\" href=\"https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/components.ipynb\">\n<img width=32px src=\"https://www.tensorflow.org/images/download_logo_32px.png\">Download notebook</a></td>\n</table></div>",
"_____no_output_____"
],
[
"This Colab-based tutorial will interactively walk through each built-in component of TensorFlow Extended (TFX).\n\nIt covers every step in an end-to-end machine learning pipeline, from data ingestion to pushing a model to serving.\n\nWhen you're done, the contents of this notebook can be automatically exported as TFX pipeline source code, which you can orchestrate with Apache Airflow and Apache Beam.\n\nNote: This notebook and its associated APIs are **experimental** and are\nin active development. Major changes in functionality, behavior, and\npresentation are expected.",
"_____no_output_____"
],
[
"## Background\nThis notebook demonstrates how to use TFX in a Jupyter/Colab environment. Here, we walk through the Chicago Taxi example in an interactive notebook.\n\nWorking in an interactive notebook is a useful way to become familiar with the structure of a TFX pipeline. It's also useful when doing development of your own pipelines as a lightweight development environment, but you should be aware that there are differences in the way interactive notebooks are orchestrated, and how they access metadata artifacts.\n\n### Orchestration\n\nIn a production deployment of TFX, you will use an orchestrator such as Apache Airflow, Kubeflow Pipelines, or Apache Beam to orchestrate a pre-defined pipeline graph of TFX components. In an interactive notebook, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells.\n\n### Metadata\n\nIn a production deployment of TFX, you will access metadata through the ML Metadata (MLMD) API. MLMD stores metadata properties in a database such as MySQL or SQLite, and stores the metadata payloads in a persistent store such as on your filesystem. In an interactive notebook, both properties and payloads are stored in an ephemeral SQLite database in the `/tmp` directory on the Jupyter notebook or Colab server.",
"_____no_output_____"
],
[
"## Setup\nFirst, we install and import the necessary packages, set up paths, and download data.",
"_____no_output_____"
],
[
"### Upgrade Pip\n\nTo avoid upgrading Pip in a system when running locally, check to make sure that we're running in Colab. Local systems can of course be upgraded separately.",
"_____no_output_____"
]
],
[
[
"try:\n import colab\n !pip install --upgrade pip\nexcept:\n pass",
"_____no_output_____"
]
],
[
[
"### Install TFX\n\n**Note: In Google Colab, because of package updates, the first time you run this cell you must restart the runtime (Runtime > Restart runtime ...).**",
"_____no_output_____"
]
],
[
[
"!pip install -q -U tfx",
"_____no_output_____"
]
],
[
[
"## Did you restart the runtime?\n\nIf you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages.",
"_____no_output_____"
],
[
"### Import packages\nWe import necessary packages, including standard TFX component classes.",
"_____no_output_____"
]
],
[
[
"import os\nimport pprint\nimport tempfile\nimport urllib\n\nimport absl\nimport tensorflow as tf\nimport tensorflow_model_analysis as tfma\ntf.get_logger().propagate = False\npp = pprint.PrettyPrinter()\n\nimport tfx\nfrom tfx.components import CsvExampleGen\nfrom tfx.components import Evaluator\nfrom tfx.components import ExampleValidator\nfrom tfx.components import Pusher\nfrom tfx.components import ResolverNode\nfrom tfx.components import SchemaGen\nfrom tfx.components import StatisticsGen\nfrom tfx.components import Trainer\nfrom tfx.components import Transform\nfrom tfx.dsl.experimental import latest_blessed_model_resolver\nfrom tfx.orchestration import metadata\nfrom tfx.orchestration import pipeline\nfrom tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext\nfrom tfx.proto import pusher_pb2\nfrom tfx.proto import trainer_pb2\nfrom tfx.proto.evaluator_pb2 import SingleSlicingSpec\nfrom tfx.types import Channel\nfrom tfx.types.standard_artifacts import Model\nfrom tfx.types.standard_artifacts import ModelBlessing\n\n%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip",
"_____no_output_____"
]
],
[
[
"Let's check the library versions.",
"_____no_output_____"
]
],
[
[
"print('TensorFlow version: {}'.format(tf.__version__))\nprint('TFX version: {}'.format(tfx.__version__))",
"_____no_output_____"
]
],
[
[
"### Set up pipeline paths",
"_____no_output_____"
]
],
[
[
"# This is the root directory for your TFX pip package installation.\n_tfx_root = tfx.__path__[0]\n\n# This is the directory containing the TFX Chicago Taxi Pipeline example.\n_taxi_root = os.path.join(_tfx_root, 'examples/chicago_taxi_pipeline')\n\n# This is the path where your model will be pushed for serving.\n_serving_model_dir = os.path.join(\n tempfile.mkdtemp(), 'serving_model/taxi_simple')\n\n# Set up logging.\nabsl.logging.set_verbosity(absl.logging.INFO)",
"_____no_output_____"
]
],
[
[
"### Download example data\nWe download the example dataset for use in our TFX pipeline.\n\nThe dataset we're using is the [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew) released by the City of Chicago. The columns in this dataset are:\n\n<table>\n<tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr>\n<tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr>\n<tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr>\n<tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr>\n<tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr>\n<tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr>\n</table>\n\nWith this dataset, we will build a model that predicts the `tips` of a trip.",
"_____no_output_____"
]
],
[
[
"_data_root = tempfile.mkdtemp(prefix='tfx-data')\nDATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv'\n_data_filepath = os.path.join(_data_root, \"data.csv\")\nurllib.request.urlretrieve(DATA_PATH, _data_filepath)",
"_____no_output_____"
]
],
[
[
"Take a quick look at the CSV file.",
"_____no_output_____"
]
],
[
[
"!head {_data_filepath}",
"_____no_output_____"
]
],
[
[
"*Disclaimer: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one’s own risk.*",
"_____no_output_____"
],
[
"### Create the InteractiveContext\nLast, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.",
"_____no_output_____"
]
],
[
[
"# Here, we create an InteractiveContext using default parameters. This will\n# use a temporary directory with an ephemeral ML Metadata database instance.\n# To use your own pipeline root or database, the optional properties\n# `pipeline_root` and `metadata_connection_config` may be passed to\n# InteractiveContext. Calls to InteractiveContext are no-ops outside of the\n# notebook.\ncontext = InteractiveContext()",
"_____no_output_____"
]
],
[
[
"## Run TFX components interactively\nIn the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts.",
"_____no_output_____"
],
[
"### ExampleGen\n\nThe `ExampleGen` component is usually at the start of a TFX pipeline. It will:\n\n1. Split data into training and evaluation sets (by default, 2/3 training + 1/3 eval)\n2. Convert data into the `tf.Example` format\n3. Copy data into the `_tfx_root` directory for other components to access\n\n`ExampleGen` takes as input the path to your data source. In our case, this is the `_data_root` path that contains the downloaded CSV.\n\nNote: In this notebook, we can instantiate components one-by-one and run them with `InteractiveContext.run()`. By contrast, in a production setting, we would specify all the components upfront in a `Pipeline` to pass to the orchestrator (see the [Building a TFX Pipeline Guide](https://www.tensorflow.org/tfx/guide/build_tfx_pipeline)).",
"_____no_output_____"
]
],
[
[
"example_gen = CsvExampleGen(input_base=_data_root)\ncontext.run(example_gen)",
"_____no_output_____"
]
],
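[
[
"# Optional sketch (added for illustration; not executed as part of this pipeline):\n# recent TFX releases let you override the default 2/3 train / 1/3 eval split with an\n# `output_config`. Treat the exact proto fields below as an assumption to check against\n# your installed TFX version before relying on them.\nfrom tfx.proto import example_gen_pb2\n\ncustom_output_config = example_gen_pb2.Output(\n    split_config=example_gen_pb2.SplitConfig(splits=[\n        example_gen_pb2.SplitConfig.Split(name='train', hash_buckets=4),\n        example_gen_pb2.SplitConfig.Split(name='eval', hash_buckets=1),\n    ]))\n# CsvExampleGen(input_base=_data_root, output_config=custom_output_config) would then\n# produce an 80/20 split; this notebook keeps the default component defined above.",
"_____no_output_____"
]
],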
[
[
"Let's examine the output artifacts of `ExampleGen`. This component produces two artifacts, training examples and evaluation examples:",
"_____no_output_____"
]
],
[
[
"artifact = example_gen.outputs['examples'].get()[0]\nprint(artifact.split_names, artifact.uri)",
"_____no_output_____"
]
],
[
[
"We can also take a look at the first three training examples:",
"_____no_output_____"
]
],
[
[
"# Get the URI of the output artifact representing the training examples, which is a directory\ntrain_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'Split-train')\n\n# Get the list of files in this directory (all compressed TFRecord files)\ntfrecord_filenames = [os.path.join(train_uri, name)\n for name in os.listdir(train_uri)]\n\n# Create a `TFRecordDataset` to read these files\ndataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type=\"GZIP\")\n\n# Iterate over the first 3 records and decode them.\nfor tfrecord in dataset.take(3):\n serialized_example = tfrecord.numpy()\n example = tf.train.Example()\n example.ParseFromString(serialized_example)\n pp.pprint(example)",
"_____no_output_____"
]
],
[
[
"Now that `ExampleGen` has finished ingesting the data, the next step is data analysis.",
"_____no_output_____"
],
[
"### StatisticsGen\nThe `StatisticsGen` component computes statistics over your dataset for data analysis, as well as for use in downstream components. It uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.\n\n`StatisticsGen` takes as input the dataset we just ingested using `ExampleGen`.",
"_____no_output_____"
]
],
[
[
"statistics_gen = StatisticsGen(\n examples=example_gen.outputs['examples'])\ncontext.run(statistics_gen)",
"_____no_output_____"
]
],
[
[
"After `StatisticsGen` finishes running, we can visualize the outputted statistics. Try playing with the different plots!",
"_____no_output_____"
]
],
[
[
"context.show(statistics_gen.outputs['statistics'])",
"_____no_output_____"
]
],
[
[
"### SchemaGen\n\nThe `SchemaGen` component generates a schema based on your data statistics. (A schema defines the expected bounds, types, and properties of the features in your dataset.) It also uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.\n\n`SchemaGen` will take as input the statistics that we generated with `StatisticsGen`, looking at the training split by default.",
"_____no_output_____"
]
],
[
[
"schema_gen = SchemaGen(\n statistics=statistics_gen.outputs['statistics'],\n infer_feature_shape=False)\ncontext.run(schema_gen)",
"_____no_output_____"
]
],
[
[
"After `SchemaGen` finishes running, we can visualize the generated schema as a table.",
"_____no_output_____"
]
],
[
[
"context.show(schema_gen.outputs['schema'])",
"_____no_output_____"
]
],
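[
[
"# Added sketch: peek at the generated schema programmatically. This assumes the SchemaGen\n# artifact directory contains a text-format `schema.pbtxt`, which is where recent TFX\n# versions write it; adjust the filename if your version differs.\nimport tensorflow_data_validation as tfdv\n\nschema_path = os.path.join(schema_gen.outputs['schema'].get()[0].uri, 'schema.pbtxt')\nschema_proto = tfdv.load_schema_text(schema_path)\nprint([f.name for f in schema_proto.feature])",
"_____no_output_____"
]
],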
[
[
"Each feature in your dataset shows up as a row in the schema table, alongside its properties. The schema also captures all the values that a categorical feature takes on, denoted as its domain.\n\nTo learn more about schemas, see [the SchemaGen documentation](https://www.tensorflow.org/tfx/guide/schemagen).",
"_____no_output_____"
],
[
"### ExampleValidator\nThe `ExampleValidator` component detects anomalies in your data, based on the expectations defined by the schema. It also uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.\n\n`ExampleValidator` will take as input the statistics from `StatisticsGen`, and the schema from `SchemaGen`.",
"_____no_output_____"
]
],
[
[
"example_validator = ExampleValidator(\n statistics=statistics_gen.outputs['statistics'],\n schema=schema_gen.outputs['schema'])\ncontext.run(example_validator)",
"_____no_output_____"
]
],
[
[
"After `ExampleValidator` finishes running, we can visualize the anomalies as a table.",
"_____no_output_____"
]
],
[
[
"context.show(example_validator.outputs['anomalies'])",
"_____no_output_____"
]
],
[
[
"In the anomalies table, we can see that there are no anomalies. This is what we'd expect, since this the first dataset that we've analyzed and the schema is tailored to it. You should review this schema -- anything unexpected means an anomaly in the data. Once reviewed, the schema can be used to guard future data, and anomalies produced here can be used to debug model performance, understand how your data evolves over time, and identify data errors.",
"_____no_output_____"
],
[
"### Transform\nThe `Transform` component performs feature engineering for both training and serving. It uses the [TensorFlow Transform](https://www.tensorflow.org/tfx/transform/get_started) library.\n\n`Transform` will take as input the data from `ExampleGen`, the schema from `SchemaGen`, as well as a module that contains user-defined Transform code.\n\nLet's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, [see the tutorial](https://www.tensorflow.org/tfx/tutorials/transform/simple)). First, we define a few constants for feature engineering:\n\nNote: The `%%writefile` cell magic will save the contents of the cell as a `.py` file on disk. This allows the `Transform` component to load your code as a module.\n",
"_____no_output_____"
]
],
[
[
"_taxi_constants_module_file = 'taxi_constants.py'",
"_____no_output_____"
],
[
"%%writefile {_taxi_constants_module_file}\n\n# Categorical features are assumed to each have a maximum value in the dataset.\nMAX_CATEGORICAL_FEATURE_VALUES = [24, 31, 12]\n\nCATEGORICAL_FEATURE_KEYS = [\n 'trip_start_hour', 'trip_start_day', 'trip_start_month',\n 'pickup_census_tract', 'dropoff_census_tract', 'pickup_community_area',\n 'dropoff_community_area'\n]\n\nDENSE_FLOAT_FEATURE_KEYS = ['trip_miles', 'fare', 'trip_seconds']\n\n# Number of buckets used by tf.transform for encoding each feature.\nFEATURE_BUCKET_COUNT = 10\n\nBUCKET_FEATURE_KEYS = [\n 'pickup_latitude', 'pickup_longitude', 'dropoff_latitude',\n 'dropoff_longitude'\n]\n\n# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform\nVOCAB_SIZE = 1000\n\n# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.\nOOV_SIZE = 10\n\nVOCAB_FEATURE_KEYS = [\n 'payment_type',\n 'company',\n]\n\n# Keys\nLABEL_KEY = 'tips'\nFARE_KEY = 'fare'\n\ndef transformed_name(key):\n return key + '_xf'",
"_____no_output_____"
]
],
[
[
"Next, we write a `preprocessing_fn` that takes in raw data as input, and returns transformed features that our model can train on:",
"_____no_output_____"
]
],
[
[
"_taxi_transform_module_file = 'taxi_transform.py'",
"_____no_output_____"
],
[
"%%writefile {_taxi_transform_module_file}\n\nimport tensorflow as tf\nimport tensorflow_transform as tft\n\nimport taxi_constants\n\n_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS\n_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS\n_VOCAB_SIZE = taxi_constants.VOCAB_SIZE\n_OOV_SIZE = taxi_constants.OOV_SIZE\n_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT\n_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS\n_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS\n_FARE_KEY = taxi_constants.FARE_KEY\n_LABEL_KEY = taxi_constants.LABEL_KEY\n_transformed_name = taxi_constants.transformed_name\n\n\ndef preprocessing_fn(inputs):\n \"\"\"tf.transform's callback function for preprocessing inputs.\n Args:\n inputs: map from feature keys to raw not-yet-transformed features.\n Returns:\n Map from string feature key to transformed feature operations.\n \"\"\"\n outputs = {}\n for key in _DENSE_FLOAT_FEATURE_KEYS:\n # Preserve this feature as a dense float, setting nan's to the mean.\n outputs[_transformed_name(key)] = tft.scale_to_z_score(\n _fill_in_missing(inputs[key]))\n\n for key in _VOCAB_FEATURE_KEYS:\n # Build a vocabulary for this feature.\n outputs[_transformed_name(key)] = tft.compute_and_apply_vocabulary(\n _fill_in_missing(inputs[key]),\n top_k=_VOCAB_SIZE,\n num_oov_buckets=_OOV_SIZE)\n\n for key in _BUCKET_FEATURE_KEYS:\n outputs[_transformed_name(key)] = tft.bucketize(\n _fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT)\n\n for key in _CATEGORICAL_FEATURE_KEYS:\n outputs[_transformed_name(key)] = _fill_in_missing(inputs[key])\n\n # Was this passenger a big tipper?\n taxi_fare = _fill_in_missing(inputs[_FARE_KEY])\n tips = _fill_in_missing(inputs[_LABEL_KEY])\n outputs[_transformed_name(_LABEL_KEY)] = tf.where(\n tf.math.is_nan(taxi_fare),\n tf.cast(tf.zeros_like(taxi_fare), tf.int64),\n # Test if the tip was > 20% of the fare.\n tf.cast(\n tf.greater(tips, tf.multiply(taxi_fare, tf.constant(0.2))), tf.int64))\n\n return outputs\n\n\ndef _fill_in_missing(x):\n \"\"\"Replace missing values in a SparseTensor.\n Fills in missing values of `x` with '' or 0, and converts to a dense tensor.\n Args:\n x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1\n in the second dimension.\n Returns:\n A rank 1 tensor where missing values of `x` have been filled in.\n \"\"\"\n if not isinstance(x, tf.sparse.SparseTensor):\n return x\n\n default_value = '' if x.dtype == tf.string else 0\n return tf.squeeze(\n tf.sparse.to_dense(\n tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),\n default_value),\n axis=1)",
"_____no_output_____"
]
],
[
[
"Now, we pass in this feature engineering code to the `Transform` component and run it to transform your data.",
"_____no_output_____"
]
],
[
[
"transform = Transform(\n examples=example_gen.outputs['examples'],\n schema=schema_gen.outputs['schema'],\n module_file=os.path.abspath(_taxi_transform_module_file))\ncontext.run(transform)",
"_____no_output_____"
]
],
[
[
"Let's examine the output artifacts of `Transform`. This component produces two types of outputs:\n\n* `transform_graph` is the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).\n* `transformed_examples` represents the preprocessed training and evaluation data.",
"_____no_output_____"
]
],
[
[
"transform.outputs",
"_____no_output_____"
]
],
[
[
"Take a peek at the `transform_graph` artifact. It points to a directory containing three subdirectories.",
"_____no_output_____"
]
],
[
[
"train_uri = transform.outputs['transform_graph'].get()[0].uri\nos.listdir(train_uri)",
"_____no_output_____"
]
],
[
[
"The `transformed_metadata` subdirectory contains the schema of the preprocessed data. The `transform_fn` subdirectory contains the actual preprocessing graph. The `metadata` subdirectory contains the schema of the original data.\n\nWe can also take a look at the first three transformed examples:",
"_____no_output_____"
]
],
[
[
"# Get the URI of the output artifact representing the transformed examples, which is a directory\ntrain_uri = os.path.join(transform.outputs['transformed_examples'].get()[0].uri, 'Split-train')\n\n# Get the list of files in this directory (all compressed TFRecord files)\ntfrecord_filenames = [os.path.join(train_uri, name)\n for name in os.listdir(train_uri)]\n\n# Create a `TFRecordDataset` to read these files\ndataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type=\"GZIP\")\n\n# Iterate over the first 3 records and decode them.\nfor tfrecord in dataset.take(3):\n serialized_example = tfrecord.numpy()\n example = tf.train.Example()\n example.ParseFromString(serialized_example)\n pp.pprint(example)",
"_____no_output_____"
]
],
[
[
"After the `Transform` component has transformed your data into features, and the next step is to train a model.",
"_____no_output_____"
],
[
"### Trainer\nThe `Trainer` component will train a model that you define in TensorFlow (either using the Estimator API or the Keras API with [`model_to_estimator`](https://www.tensorflow.org/api_docs/python/tf/keras/estimator/model_to_estimator)).\n\n`Trainer` takes as input the schema from `SchemaGen`, the transformed data and graph from `Transform`, training parameters, as well as a module that contains user-defined model code.\n\nLet's see an example of user-defined model code below (for an introduction to the TensorFlow Estimator APIs, [see the tutorial](https://www.tensorflow.org/tutorials/estimator/premade)):",
"_____no_output_____"
]
],
[
[
"_taxi_trainer_module_file = 'taxi_trainer.py'",
"_____no_output_____"
],
[
"%%writefile {_taxi_trainer_module_file}\n\nimport tensorflow as tf\nimport tensorflow_model_analysis as tfma\nimport tensorflow_transform as tft\nfrom tensorflow_transform.tf_metadata import schema_utils\nfrom tfx_bsl.tfxio import dataset_options\n\nimport taxi_constants\n\n_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS\n_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS\n_VOCAB_SIZE = taxi_constants.VOCAB_SIZE\n_OOV_SIZE = taxi_constants.OOV_SIZE\n_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT\n_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS\n_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS\n_MAX_CATEGORICAL_FEATURE_VALUES = taxi_constants.MAX_CATEGORICAL_FEATURE_VALUES\n_LABEL_KEY = taxi_constants.LABEL_KEY\n_transformed_name = taxi_constants.transformed_name\n\n\ndef _transformed_names(keys):\n return [_transformed_name(key) for key in keys]\n\n\n# Tf.Transform considers these features as \"raw\"\ndef _get_raw_feature_spec(schema):\n return schema_utils.schema_as_feature_spec(schema).feature_spec\n\n\ndef _build_estimator(config, hidden_units=None, warm_start_from=None):\n \"\"\"Build an estimator for predicting the tipping behavior of taxi riders.\n Args:\n config: tf.estimator.RunConfig defining the runtime environment for the\n estimator (including model_dir).\n hidden_units: [int], the layer sizes of the DNN (input layer first)\n warm_start_from: Optional directory to warm start from.\n Returns:\n A dict of the following:\n - estimator: The estimator that will be used for training and eval.\n - train_spec: Spec for training.\n - eval_spec: Spec for eval.\n - eval_input_receiver_fn: Input function for eval.\n \"\"\"\n real_valued_columns = [\n tf.feature_column.numeric_column(key, shape=())\n for key in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)\n ]\n categorical_columns = [\n tf.feature_column.categorical_column_with_identity(\n key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)\n for key in _transformed_names(_VOCAB_FEATURE_KEYS)\n ]\n categorical_columns += [\n tf.feature_column.categorical_column_with_identity(\n key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)\n for key in _transformed_names(_BUCKET_FEATURE_KEYS)\n ]\n categorical_columns += [\n tf.feature_column.categorical_column_with_identity( # pylint: disable=g-complex-comprehension\n key,\n num_buckets=num_buckets,\n default_value=0) for key, num_buckets in zip(\n _transformed_names(_CATEGORICAL_FEATURE_KEYS),\n _MAX_CATEGORICAL_FEATURE_VALUES)\n ]\n return tf.estimator.DNNLinearCombinedClassifier(\n config=config,\n linear_feature_columns=categorical_columns,\n dnn_feature_columns=real_valued_columns,\n dnn_hidden_units=hidden_units or [100, 70, 50, 25],\n warm_start_from=warm_start_from)\n\n\ndef _example_serving_receiver_fn(tf_transform_graph, schema):\n \"\"\"Build the serving in inputs.\n Args:\n tf_transform_graph: A TFTransformOutput.\n schema: the schema of the input data.\n Returns:\n Tensorflow graph which parses examples, applying tf-transform to them.\n \"\"\"\n raw_feature_spec = _get_raw_feature_spec(schema)\n raw_feature_spec.pop(_LABEL_KEY)\n\n raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(\n raw_feature_spec, default_batch_size=None)\n serving_input_receiver = raw_input_fn()\n\n transformed_features = tf_transform_graph.transform_raw_features(\n serving_input_receiver.features)\n\n return tf.estimator.export.ServingInputReceiver(\n transformed_features, 
serving_input_receiver.receiver_tensors)\n\n\ndef _eval_input_receiver_fn(tf_transform_graph, schema):\n \"\"\"Build everything needed for the tf-model-analysis to run the model.\n Args:\n tf_transform_graph: A TFTransformOutput.\n schema: the schema of the input data.\n Returns:\n EvalInputReceiver function, which contains:\n - Tensorflow graph which parses raw untransformed features, applies the\n tf-transform preprocessing operators.\n - Set of raw, untransformed features.\n - Label against which predictions will be compared.\n \"\"\"\n # Notice that the inputs are raw features, not transformed features here.\n raw_feature_spec = _get_raw_feature_spec(schema)\n\n serialized_tf_example = tf.compat.v1.placeholder(\n dtype=tf.string, shape=[None], name='input_example_tensor')\n\n # Add a parse_example operator to the tensorflow graph, which will parse\n # raw, untransformed, tf examples.\n features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)\n\n # Now that we have our raw examples, process them through the tf-transform\n # function computed during the preprocessing step.\n transformed_features = tf_transform_graph.transform_raw_features(\n features)\n\n # The key name MUST be 'examples'.\n receiver_tensors = {'examples': serialized_tf_example}\n\n # NOTE: Model is driven by transformed features (since training works on the\n # materialized output of TFT, but slicing will happen on raw features.\n features.update(transformed_features)\n\n return tfma.export.EvalInputReceiver(\n features=features,\n receiver_tensors=receiver_tensors,\n labels=transformed_features[_transformed_name(_LABEL_KEY)])\n\n\ndef _input_fn(file_pattern, data_accessor, tf_transform_output, batch_size=200):\n \"\"\"Generates features and label for tuning/training.\n\n Args:\n file_pattern: List of paths or patterns of input tfrecord files.\n data_accessor: DataAccessor for converting input to RecordBatch.\n tf_transform_output: A TFTransformOutput.\n batch_size: representing the number of consecutive elements of returned\n dataset to combine in a single batch\n\n Returns:\n A dataset that contains (features, indices) tuple where features is a\n dictionary of Tensors, and indices is a single Tensor of label indices.\n \"\"\"\n return data_accessor.tf_dataset_factory(\n file_pattern,\n dataset_options.TensorFlowDatasetOptions(\n batch_size=batch_size, label_key=_transformed_name(_LABEL_KEY)),\n tf_transform_output.transformed_metadata.schema)\n\n\n# TFX will call this function\ndef trainer_fn(trainer_fn_args, schema):\n \"\"\"Build the estimator using the high level API.\n Args:\n trainer_fn_args: Holds args used to train the model as name/value pairs.\n schema: Holds the schema of the training examples.\n Returns:\n A dict of the following:\n - estimator: The estimator that will be used for training and eval.\n - train_spec: Spec for training.\n - eval_spec: Spec for eval.\n - eval_input_receiver_fn: Input function for eval.\n \"\"\"\n # Number of nodes in the first layer of the DNN\n first_dnn_layer_size = 100\n num_dnn_layers = 4\n dnn_decay_factor = 0.7\n\n train_batch_size = 40\n eval_batch_size = 40\n\n tf_transform_graph = tft.TFTransformOutput(trainer_fn_args.transform_output)\n\n train_input_fn = lambda: _input_fn( # pylint: disable=g-long-lambda\n trainer_fn_args.train_files,\n trainer_fn_args.data_accessor,\n tf_transform_graph,\n batch_size=train_batch_size)\n\n eval_input_fn = lambda: _input_fn( # pylint: disable=g-long-lambda\n trainer_fn_args.eval_files,\n trainer_fn_args.data_accessor,\n 
tf_transform_graph,\n batch_size=eval_batch_size)\n\n train_spec = tf.estimator.TrainSpec( # pylint: disable=g-long-lambda\n train_input_fn,\n max_steps=trainer_fn_args.train_steps)\n\n serving_receiver_fn = lambda: _example_serving_receiver_fn( # pylint: disable=g-long-lambda\n tf_transform_graph, schema)\n\n exporter = tf.estimator.FinalExporter('chicago-taxi', serving_receiver_fn)\n eval_spec = tf.estimator.EvalSpec(\n eval_input_fn,\n steps=trainer_fn_args.eval_steps,\n exporters=[exporter],\n name='chicago-taxi-eval')\n\n run_config = tf.estimator.RunConfig(\n save_checkpoints_steps=999, keep_checkpoint_max=1)\n\n run_config = run_config.replace(model_dir=trainer_fn_args.serving_model_dir)\n\n estimator = _build_estimator(\n # Construct layers sizes with exponetial decay\n hidden_units=[\n max(2, int(first_dnn_layer_size * dnn_decay_factor**i))\n for i in range(num_dnn_layers)\n ],\n config=run_config,\n warm_start_from=trainer_fn_args.base_model)\n\n # Create an input receiver for TFMA processing\n receiver_fn = lambda: _eval_input_receiver_fn( # pylint: disable=g-long-lambda\n tf_transform_graph, schema)\n\n return {\n 'estimator': estimator,\n 'train_spec': train_spec,\n 'eval_spec': eval_spec,\n 'eval_input_receiver_fn': receiver_fn\n }",
"_____no_output_____"
]
],
[
[
"Now, we pass in this model code to the `Trainer` component and run it to train the model.",
"_____no_output_____"
]
],
[
[
"trainer = Trainer(\n module_file=os.path.abspath(_taxi_trainer_module_file),\n transformed_examples=transform.outputs['transformed_examples'],\n schema=schema_gen.outputs['schema'],\n transform_graph=transform.outputs['transform_graph'],\n train_args=trainer_pb2.TrainArgs(num_steps=10000),\n eval_args=trainer_pb2.EvalArgs(num_steps=5000))\ncontext.run(trainer)",
"_____no_output_____"
]
],
[
[
"#### Analyze Training with TensorBoard\nOptionally, we can connect TensorBoard to the Trainer to analyze our model's training curves.",
"_____no_output_____"
]
],
[
[
"# Get the URI of the output artifact representing the training logs, which is a directory\nmodel_run_dir = trainer.outputs['model_run'].get()[0].uri\n\n%load_ext tensorboard\n%tensorboard --logdir {model_run_dir}",
"_____no_output_____"
]
],
[
[
"### Evaluator\nThe `Evaluator` component computes model performance metrics over the evaluation set. It uses the [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/model_analysis/get_started) library. The `Evaluator` can also optionally validate that a newly trained model is better than the previous model. This is useful in a production pipeline setting where you may automatically train and validate a model every day. In this notebook, we only train one model, so the `Evaluator` automatically will label the model as \"good\". \n\n`Evaluator` will take as input the data from `ExampleGen`, the trained model from `Trainer`, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values (e.g. how does your model perform on taxi trips that start at 8am versus 8pm?). See an example of this configuration below:",
"_____no_output_____"
]
],
[
[
"eval_config = tfma.EvalConfig(\n model_specs=[\n # Using signature 'eval' implies the use of an EvalSavedModel. To use\n # a serving model remove the signature to defaults to 'serving_default'\n # and add a label_key.\n tfma.ModelSpec(signature_name='eval')\n ],\n metrics_specs=[\n tfma.MetricsSpec(\n # The metrics added here are in addition to those saved with the\n # model (assuming either a keras model or EvalSavedModel is used).\n # Any metrics added into the saved model (for example using\n # model.compile(..., metrics=[...]), etc) will be computed\n # automatically.\n metrics=[\n tfma.MetricConfig(class_name='ExampleCount')\n ],\n # To add validation thresholds for metrics saved with the model,\n # add them keyed by metric name to the thresholds map.\n thresholds = {\n 'accuracy': tfma.MetricThreshold(\n value_threshold=tfma.GenericValueThreshold(\n lower_bound={'value': 0.5}),\n # Change threshold will be ignored if there is no\n # baseline model resolved from MLMD (first run).\n change_threshold=tfma.GenericChangeThreshold(\n direction=tfma.MetricDirection.HIGHER_IS_BETTER,\n absolute={'value': -1e-10}))\n }\n )\n ],\n slicing_specs=[\n # An empty slice spec means the overall slice, i.e. the whole dataset.\n tfma.SlicingSpec(),\n # Data can be sliced along a feature column. In this case, data is\n # sliced along feature column trip_start_hour.\n tfma.SlicingSpec(feature_keys=['trip_start_hour'])\n ])",
"_____no_output_____"
]
],
[
[
"Next, we give this configuration to `Evaluator` and run it.",
"_____no_output_____"
]
],
[
[
"# Use TFMA to compute a evaluation statistics over features of a model and\n# validate them against a baseline.\n\n# The model resolver is only required if performing model validation in addition\n# to evaluation. In this case we validate against the latest blessed model. If\n# no model has been blessed before (as in this case) the evaluator will make our\n# candidate the first blessed model.\nmodel_resolver = ResolverNode(\n instance_name='latest_blessed_model_resolver',\n resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,\n model=Channel(type=Model),\n model_blessing=Channel(type=ModelBlessing))\ncontext.run(model_resolver)\n\nevaluator = Evaluator(\n examples=example_gen.outputs['examples'],\n model=trainer.outputs['model'],\n #baseline_model=model_resolver.outputs['model'],\n eval_config=eval_config)\ncontext.run(evaluator)",
"_____no_output_____"
]
],
[
[
"Now let's examine the output artifacts of `Evaluator`. ",
"_____no_output_____"
]
],
[
[
"evaluator.outputs",
"_____no_output_____"
]
],
[
[
"Using the `evaluation` output we can show the default visualization of global metrics on the entire evaluation set.",
"_____no_output_____"
]
],
[
[
"context.show(evaluator.outputs['evaluation'])",
"_____no_output_____"
]
],
[
[
"To see the visualization for sliced evaluation metrics, we can directly call the TensorFlow Model Analysis library.",
"_____no_output_____"
]
],
[
[
"import tensorflow_model_analysis as tfma\n\n# Get the TFMA output result path and load the result.\nPATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri\ntfma_result = tfma.load_eval_result(PATH_TO_RESULT)\n\n# Show data sliced along feature column trip_start_hour.\ntfma.view.render_slicing_metrics(\n tfma_result, slicing_column='trip_start_hour')",
"_____no_output_____"
]
],
[
[
"This visualization shows the same metrics, but computed at every feature value of `trip_start_hour` instead of on the entire evaluation set.\n\nTensorFlow Model Analysis supports many other visualizations, such as Fairness Indicators and plotting a time series of model performance. To learn more, see [the tutorial](https://www.tensorflow.org/tfx/tutorials/model_analysis/tfma_basic).",
"_____no_output_____"
],
[
"Since we added thresholds to our config, validation output is also available. The precence of a `blessing` artifact indicates that our model passed validation. Since this is the first validation being performed the candidate is automatically blessed.",
"_____no_output_____"
]
],
[
[
"blessing_uri = evaluator.outputs.blessing.get()[0].uri\n!ls -l {blessing_uri}",
"_____no_output_____"
]
],
[
[
"Now can also verify the success by loading the validation result record:",
"_____no_output_____"
]
],
[
[
"PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri\nprint(tfma.load_validation_result(PATH_TO_RESULT))",
"_____no_output_____"
]
],
[
[
"### Pusher\nThe `Pusher` component is usually at the end of a TFX pipeline. It checks whether a model has passed validation, and if so, exports the model to `_serving_model_dir`.",
"_____no_output_____"
]
],
[
[
"pusher = Pusher(\n model=trainer.outputs['model'],\n model_blessing=evaluator.outputs['blessing'],\n push_destination=pusher_pb2.PushDestination(\n filesystem=pusher_pb2.PushDestination.Filesystem(\n base_directory=_serving_model_dir)))\ncontext.run(pusher)",
"_____no_output_____"
]
],
[
[
"Let's examine the output artifacts of `Pusher`. ",
"_____no_output_____"
]
],
[
[
"pusher.outputs",
"_____no_output_____"
]
],
[
[
"In particular, the Pusher will export your model in the SavedModel format, which looks like this:",
"_____no_output_____"
]
],
[
[
"push_uri = pusher.outputs['pushed_model'].get()[0].uri\nmodel = tf.saved_model.load(push_uri)\n\nfor item in model.signatures.items():\n pp.pprint(item)",
"_____no_output_____"
]
],
[
[
"We're finished our tour of built-in TFX components!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ec90ab2167dd8d1e2cae56b92708400a0a68cfd6 | 7,110 | ipynb | Jupyter Notebook | aws_sagemaker_studio/sagemaker_neo_compilation_jobs/pytorch_torchvision/pytorch_torchvision_neo_studio.ipynb | Amirosimani/amazon-sagemaker-examples | bc35e7a9da9e2258e77f98098254c2a8e308041a | [
"Apache-2.0"
] | 2,610 | 2020-10-01T14:14:53.000Z | 2022-03-31T18:02:31.000Z | aws_sagemaker_studio/sagemaker_neo_compilation_jobs/pytorch_torchvision/pytorch_torchvision_neo_studio.ipynb | Amirosimani/amazon-sagemaker-examples | bc35e7a9da9e2258e77f98098254c2a8e308041a | [
"Apache-2.0"
] | 1,959 | 2020-09-30T20:22:42.000Z | 2022-03-31T23:58:37.000Z | aws_sagemaker_studio/sagemaker_neo_compilation_jobs/pytorch_torchvision/pytorch_torchvision_neo_studio.ipynb | Amirosimani/amazon-sagemaker-examples | bc35e7a9da9e2258e77f98098254c2a8e308041a | [
"Apache-2.0"
] | 2,052 | 2020-09-30T22:11:46.000Z | 2022-03-31T23:02:51.000Z | 25.392857 | 338 | 0.564838 | [
[
[
"# Deploying pre-trained PyTorch vision models with Amazon SageMaker Neo",
"_____no_output_____"
],
[
"Neo is a capability of Amazon SageMaker that enables you to compile machine learning models to optimize them for our choice of hardward targets. Currently, Neo supports pre-trained PyTorch models from [TorchVision](https://pytorch.org/docs/stable/torchvision/models.html). General support for other PyTorch models is forthcoming.\n\nMake sure you selected Python 3 (Data Science) kernel.",
"_____no_output_____"
]
],
[
[
"%cd /root/amazon-sagemaker-examples/aws_sagemaker_studio/sagemaker_neo_compilation_jobs/pytorch_torchvision",
"_____no_output_____"
],
[
"import sys\n\n!{sys.executable} -m pip install torch==1.6.0 torchvision==0.7.0\n!{sys.executable} -m pip install --upgrade sagemaker",
"_____no_output_____"
]
],
[
[
"## Import ResNet18 from TorchVision",
"_____no_output_____"
],
[
"We'll import [ResNet18](https://arxiv.org/abs/1512.03385) model from TorchVision and create a model artifact `model.tar.gz`.",
"_____no_output_____"
]
],
[
[
"import sagemaker\nimport torch\nimport torchvision.models as models\nimport tarfile\n\nresnet18 = models.resnet18(pretrained=True)\ninput_shape = [1, 3, 224, 224]\ntrace = torch.jit.trace(resnet18.float().eval(), torch.zeros(input_shape).float())\ntrace.save(\"model.pth\")\n\nwith tarfile.open(\"model.tar.gz\", \"w:gz\") as f:\n f.add(\"model.pth\")",
"_____no_output_____"
]
],
[
[
"### Upload the model archive to S3",
"_____no_output_____"
]
],
[
[
"import boto3\nimport sagemaker\nimport time\nfrom sagemaker.utils import name_from_base\n\nrole = sagemaker.get_execution_role()\nsess = sagemaker.Session()\nregion = sess.boto_region_name\nbucket = sess.default_bucket()\n\ncompilation_job_name = name_from_base(\"TorchVision-ResNet18-Neo\")\nprefix = compilation_job_name + \"/model\"\n\nmodel_path = sess.upload_data(path=\"model.tar.gz\", key_prefix=prefix)\n\ndata_shape = '{\"input0\":[1,3,224,224]}'\ntarget_device = \"ml_c5\"\nframework = \"PYTORCH\"\nframework_version = \"1.6\"\ncompiled_model_path = \"s3://{}/{}/output\".format(bucket, compilation_job_name)",
"_____no_output_____"
]
],
[
[
"## Invoke Neo Compilation API",
"_____no_output_____"
],
[
"### Create a PyTorch SageMaker model",
"_____no_output_____"
]
],
[
[
"from sagemaker.pytorch.model import PyTorchModel\nfrom sagemaker.predictor import Predictor\n\nsagemaker_model = PyTorchModel(\n model_data=model_path,\n predictor_cls=Predictor,\n framework_version=framework_version,\n role=role,\n sagemaker_session=sess,\n entry_point=\"resnet18.py\",\n source_dir=\"code\",\n py_version=\"py3\",\n env={\"MMS_DEFAULT_RESPONSE_TIMEOUT\": \"500\"},\n)",
"_____no_output_____"
]
],
[
[
"### Use Neo compiler to compile the model",
"_____no_output_____"
]
],
[
[
"compiled_model = sagemaker_model.compile(\n target_instance_family=target_device,\n input_shape=data_shape,\n job_name=compilation_job_name,\n role=role,\n framework=framework.lower(),\n framework_version=framework_version,\n output_path=compiled_model_path,\n)",
"_____no_output_____"
]
],
[
[
"## Deploy the model",
"_____no_output_____"
]
],
[
[
"predictor = compiled_model.deploy(initial_instance_count=1, instance_type=\"ml.c5.9xlarge\")",
"_____no_output_____"
]
],
[
[
"## Send requests",
"_____no_output_____"
],
[
"Let's try to send a cat picture.\n\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport json\n\nwith open(\"cat.jpg\", \"rb\") as f:\n payload = f.read()\n payload = bytearray(payload)\n\nresponse = predictor.predict(payload)\nresult = json.loads(response.decode())\nprint(\"Most likely class: {}\".format(np.argmax(result)))",
"_____no_output_____"
],
[
"# Load names for ImageNet classes\nobject_categories = {}\nwith open(\"imagenet1000_clsidx_to_labels.txt\", \"r\") as f:\n for line in f:\n key, val = line.strip().split(\":\")\n object_categories[key] = val\nprint(\n \"Result: label - \"\n + object_categories[str(np.argmax(result))]\n + \" probability - \"\n + str(np.amax(result))\n)",
"_____no_output_____"
]
],
[
[
"## Delete the Endpoint\nHaving an endpoint running will incur some costs. Therefore as a clean-up job, we should delete the endpoint.",
"_____no_output_____"
]
],
[
[
"sess.delete_endpoint(predictor.endpoint_name)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec90ca06a893e793c4752efd138e905d5d6ed675 | 671 | ipynb | Jupyter Notebook | content/second-law/second-law.ipynb | msb002/computational-thermo | 9302288217a36e0ce29e320688a3f574921909a5 | [
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | content/second-law/second-law.ipynb | msb002/computational-thermo | 9302288217a36e0ce29e320688a3f574921909a5 | [
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | content/second-law/second-law.ipynb | msb002/computational-thermo | 9302288217a36e0ce29e320688a3f574921909a5 | [
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | 16.775 | 38 | 0.52161 | [
[
[
"# Second Law of Thermodynamics\n",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
ec90ffc01fe45fc74ee2f7e403639ac678d69ea1 | 675,000 | ipynb | Jupyter Notebook | project5/project5 updated.ipynb | wiggs555/cse7324project | 6bc6e51ccbdbf0b80abbb0e7f0a64ae150831abd | [
"Unlicense"
] | null | null | null | project5/project5 updated.ipynb | wiggs555/cse7324project | 6bc6e51ccbdbf0b80abbb0e7f0a64ae150831abd | [
"Unlicense"
] | null | null | null | project5/project5 updated.ipynb | wiggs555/cse7324project | 6bc6e51ccbdbf0b80abbb0e7f0a64ae150831abd | [
"Unlicense"
] | 1 | 2019-02-05T07:45:51.000Z | 2019-02-05T07:45:51.000Z | 120.213713 | 97,532 | 0.756921 | [
[
[
"# dependencies\nimport pandas as pd\nimport numpy as np\nimport missingno as msno \nimport matplotlib.pyplot as plt\nimport re\nfrom sklearn.model_selection import train_test_split\n\nfrom textwrap import wrap\nfrom sklearn.preprocessing import StandardScaler\nimport warnings\nwarnings.filterwarnings(\"ignore\")\nimport math\n%matplotlib inline",
"_____no_output_____"
],
[
"# import data\nshelter_outcomes = pd.read_csv(\"C:/Users/sulem/OneDrive/Desktop/machin learnign/Project3/aac_shelter_outcomes.csv\")\n# filter animal type for just cats\n#cats = shelter_outcomes[shelter_outcomes['animal_type'] == 'Cat']\ncats = shelter_outcomes\n#print(cats.head())\n\n# remove age_upon_outcome and recalculate to standard units (days)\nage = cats.loc[:,['datetime', 'date_of_birth']]\n# convert to datetime\nage.loc[:,'datetime'] = pd.to_datetime(age['datetime'])\nage.loc[:,'date_of_birth'] = pd.to_datetime(age['date_of_birth'])\n# calculate cat age in days\ncats.loc[:,'age'] = (age.loc[:,'datetime'] - age.loc[:,'date_of_birth']).dt.days\n# get dob info\ncats['dob_month'] = age.loc[:, 'date_of_birth'].dt.month\ncats['dob_day'] = age.loc[:, 'date_of_birth'].dt.day\ncats['dob_dayofweek'] = age.loc[:, 'date_of_birth'].dt.dayofweek\n# get month from datetime\ncats['month'] = age.loc[:,'datetime'].dt.month\n# get day of month\ncats['day'] = age.loc[:,'datetime'].dt.day\n# get day of week\ncats['dayofweek'] = age.loc[:, 'datetime'].dt.dayofweek\n# get hour of day\ncats['hour'] = age.loc[:, 'datetime'].dt.hour\n# get quarter\ncats['quarter'] = age.loc[:, 'datetime'].dt.quarter\n\n# clean up breed attribute\n# get breed attribute for processing\n# convert to lowercase, remove mix and strip whitespace\n# remove space in 'medium hair' to match 'longhair' and 'shorthair'\n# split on either space or '/'\nbreed = cats.loc[:, 'breed'].str.lower().str.replace('mix', '').str.replace('medium hair', 'mediumhair').str.strip().str.split('/', expand=True)\ncats['breed'] = breed[0]\ncats['breed1'] = breed[1]\n\n# clean up color attribute\n# convert to lowercase\n# strip spaces\n# split on '/'\ncolor = cats.loc[:, 'color'].str.lower().str.strip().str.split('/', expand=True)\ncats['color'] = color[0]\ncats['color1'] = color[1]\n\n# clean up sex_upon_outcome\nsex = cats['sex_upon_outcome'].str.lower().str.strip().str.split(' ', expand=True)\nsex[0].replace('spayed', True, inplace=True)\nsex[0].replace('neutered', True, inplace=True)\nsex[0].replace('intact', False, inplace=True)\nsex[1].replace(np.nan, 'unknown', inplace=True)\ncats['spayed_neutered'] = sex[0]\ncats['sex'] = sex[1]\n\n# add in domesticated attribute\ncats['domestic'] = np.where(cats['breed'].str.contains('domestic'), 1, 0)\n\n# combine outcome and outcome subtype into a single attribute\ncats['outcome_subtype'] = cats['outcome_subtype'].str.lower().str.replace(' ', '-').fillna('unknown')\ncats['outcome_type'] = cats['outcome_type'].str.lower().str.replace(' ', '-').fillna('unknown')\ncats['outcome'] = cats['outcome_type'] + '_' + cats['outcome_subtype']\n\n# drop unnecessary columns\ncats.drop(columns=['animal_id', 'name', 'age_upon_outcome', 'date_of_birth', 'datetime', 'monthyear', 'sex_upon_outcome', 'outcome_subtype', 'outcome_type'], inplace=True)\n#print(cats['outcome'].value_counts())\n\ncats.head()\n",
"_____no_output_____"
],
[
"print(\"Default datatypes of shelter cat outcomes:\\n\")\nprint(cats.dtypes)\n\nprint(\"\\nBelow is a description of the attributes in the cats dataframe:\\n\")",
"Default datatypes of shelter cat outcomes:\n\nanimal_type object\nbreed object\ncolor object\nage int64\ndob_month int64\ndob_day int64\ndob_dayofweek int64\nmonth int64\nday int64\ndayofweek int64\nhour int64\nquarter int64\nbreed1 object\ncolor1 object\nspayed_neutered object\nsex object\ndomestic int32\noutcome object\ndtype: object\n\nBelow is a description of the attributes in the cats dataframe:\n\n"
],
[
"print('Below is a listing of the target classes and their distributions:')\ncats['outcome'].value_counts()",
"Below is a listing of the target classes and their distributions:\n"
],
[
"msno.matrix(cats )",
"_____no_output_____"
],
[
"cats.drop(columns=['breed1'], inplace=True)\n# Breed, Color, Color1, Spayed_Netured and Sex attributes need to be one hot encoded\ncats_ohe = pd.get_dummies(cats, columns=['breed', 'color', 'color1', 'spayed_neutered', 'sex','animal_type'])\ncats_ohe.head()\nout_t={'relocate_unknown':0,'euthanasia_court/investigation':0,'euthanasia_behavior':0,'euthanasia_suffering' : 0, 'died_in-kennel' : 0, 'return-to-owner_unknown' : 0, 'transfer_partner' : 0, 'euthanasia_at-vet' : 0, 'adoption_foster' : 1, 'died_in-foster' : 0, 'transfer_scrp' : 0, 'euthanasia_medical' : 0, 'transfer_snr' : 0, 'died_enroute' : 0, 'rto-adopt_unknown' : 1, 'missing_in-foster' : 0, 'adoption_offsite' : 1, 'adoption_unknown' :1,'euthanasia_rabies-risk' : 0, 'unknown_unknown' : 0, 'adoption_barn' : 0, 'died_unknown' : 0, 'died_in-surgery' : 0, 'euthanasia_aggressive' : 0, 'euthanasia_unknown' : 0, 'missing_unknown' : 0, 'missing_in-kennel' : 0, 'missing_possible-theft' : 0, 'died_at-vet' : 0, 'disposal_unknown' : 0, 'euthanasia_underage' : 0, 'transfer_barn' : 0}\n#output is converted from string to catogries 0 to 5 represent each output\n# separate outcome from data\noutcome = cats_ohe['outcome']\ncats_ohe.drop(columns=['outcome'])\n\nprint(cats_ohe.head())\n\n# split the data\nX_train, X_test, y_train, y_test = train_test_split(cats_ohe, outcome, test_size=0.2, random_state=0)\nX_train.drop(columns=['outcome'], inplace=True)\nX_test.drop(columns=['outcome'], inplace=True)\ny_train = np.asarray([out_t[item] for item in y_train])\ny_test = np.asarray([out_t[item] for item in y_test])\n#print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)\n",
" age dob_month dob_day dob_dayofweek month day dayofweek hour \\\n0 15 7 7 0 7 22 1 16 \n1 366 11 6 1 11 7 3 11 \n2 429 3 31 6 6 3 1 14 \n3 3300 6 2 3 6 15 6 15 \n4 181 1 7 1 7 7 0 14 \n\n quarter domestic ... spayed_neutered_True \\\n0 3 1 ... 0 \n1 4 0 ... 1 \n2 2 0 ... 1 \n3 2 0 ... 1 \n4 3 0 ... 0 \n\n spayed_neutered_unknown sex_female sex_male sex_unknown \\\n0 0 0 1 0 \n1 0 1 0 0 \n2 0 0 1 0 \n3 0 0 1 0 \n4 1 0 0 1 \n\n animal_type_Bird animal_type_Cat animal_type_Dog animal_type_Livestock \\\n0 0 1 0 0 \n1 0 0 1 0 \n2 0 0 1 0 \n3 0 0 1 0 \n4 0 0 0 0 \n\n animal_type_Other \n0 0 \n1 0 \n2 0 \n3 0 \n4 1 \n\n[5 rows x 474 columns]\n"
],
[
"from sklearn import metrics as mt\nfrom sklearn.preprocessing import OneHotEncoder\nimport keras\n# from keras.models import Sequential\nfrom keras.layers import Dense, Activation, Input\nfrom keras.layers import Embedding, Flatten, Concatenate\nfrom keras.models import Model\nkeras.__version__",
"Using TensorFlow backend.\n"
],
[
"x_train_ar=X_train.values\ny_target_ar=np.asarray(y_train)\nx_test_ar=X_test.values\ny_test_ar=np.asarray(y_test)\nx_train_ar = StandardScaler().fit(x_train_ar).transform(x_train_ar)\n\nprint(x_train_ar.shape)\nprint(y_target_ar.shape)\nunique, counts = np.unique(y_target_ar, return_counts=True)\nnp.asarray((unique, counts))",
"(62604, 473)\n(62604,)\n"
],
[
"\nfor i in range(78256):\n sex[0][i]=str(sex[0][i])\n",
"_____no_output_____"
],
[
"cats=cats.drop(columns=['color1'])\ncats['spayed_neutered'] = sex[0]\ncats=cats.dropna() \ncats=cats.drop(columns=['outcome'])\ncats",
"_____no_output_____"
],
[
"from sklearn.preprocessing import LabelEncoder\nfrom sklearn.preprocessing import StandardScaler\ncategorical_headers = ['animal_type','breed','color',\n 'spayed_neutered','sex']\nnumeric_headers = [\"age\", \"dob_month\", \"dob_day\",\"dob_dayofweek\",\"month\",\"day\",\"dayofweek\",\"hour\",\"quarter\"]\nencoders = dict() \nfor col in categorical_headers:\n cats[col] = cats[col].str.strip()\n\n encoders[col] = LabelEncoder() # save the encoder\n cats[col+'_int'] = encoders[col].fit_transform(cats[col])\nfor col in numeric_headers:\n cats[col] = cats[col].astype(np.float)\n \n \n ss = StandardScaler()\n cats[col] = ss.fit_transform(cats[col].values.reshape(-1, 1))\n",
"_____no_output_____"
],
[
"from sklearn.model_selection import StratifiedShuffleSplit\nX_train, X_test, y_train, y_test=train_test_split(cats, outcome, test_size=0.2)\n\nprint(X_train.shape)\nprint(X_test.shape)\n",
"(62604, 20)\n(15652, 20)\n"
],
[
"\nohe = OneHotEncoder()\nX_train_ohe = ohe.fit_transform(X_train[categorical_headers].values)\nX_test_ohe = ohe.fit_transform(X_test[categorical_headers].values)\n\nprint(X_test_ohe.shape)\nprint(X_train_ohe.shape)",
"(15652, 319)\n(62604, 402)\n"
],
[
"y_train = np.asarray([out_t[item] for item in y_train])\ny_test = np.asarray([out_t[item] for item in y_test])\n\n",
"_____no_output_____"
],
[
"# let's start as simply as possible, without any feature preprocessing\ncategorical_headers_ints = [x+'_int' for x in categorical_headers]\n\n# we will forego one-hot encoding right now and instead just scale all inputs\n# this is just to get an example running in Keras (don't ever do this)\nfeature_columns = categorical_headers_ints+numeric_headers\nX_train_ar = ss.fit_transform(X_train[feature_columns].values).astype(np.float32)\nX_test_ar = ss.transform(X_test[feature_columns].values).astype(np.float32)\n\ny_train_ar = np.asarray(y_train)\ny_test_ar = np.asarray(y_test)\n\nprint(feature_columns)",
"['animal_type_int', 'breed_int', 'color_int', 'spayed_neutered_int', 'sex_int', 'age', 'dob_month', 'dob_day', 'dob_dayofweek', 'month', 'day', 'dayofweek', 'hour', 'quarter']\n"
],
[
"# create sparse input branch for ohe\nfrom keras.layers import concatenate\nfrom keras import backend as K\n\n\n\ndef recall_m(y_true, y_pred):\n true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))\n possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))\n recall = true_positives / (possible_positives + K.epsilon())\n return recall\n\ndef precision_m(y_true, y_pred):\n true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))\n predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))\n precision = true_positives / (predicted_positives + K.epsilon())\n return precision\n\ndef f1_m(y_true, y_pred):\n precision = precision_m(y_true, y_pred)\n recall = recall_m(y_true, y_pred)\n return 2*((precision*recall)/(precision+recall+K.epsilon()))\ninputsSparse = Input(shape=(X_train_ohe.shape[1],),sparse=True, name='X_ohe')\nxSparse = Dense(units=100, activation='relu', name='ohe_1')(inputsSparse)\nxSparse1 = Dense(units=50, activation='relu', name='ohe_2')(xSparse)\n# create dense input branch for numeric\ninputsDense = Input(shape=(X_train_ar.shape[1],),sparse=False, name='X_Numeric')\nxDense = Dense(units=100, activation='relu',name='num_1')(inputsDense)\nxDense1 = Dense(units=50, activation='relu',name='num_2')(xDense)\nx = concatenate([xSparse1, xDense1], name='concat')\npredictions = Dense(1,activation='sigmoid', name='combined')(x)\n\n# This creates a model that includes\n# the Input layer and Dense layers\nmodel = Model(inputs=[inputsSparse,inputsDense], outputs=predictions)\n\nmodel.summary()",
"WARNING:tensorflow:From C:\\Users\\sulem\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\nX_ohe (InputLayer) (None, 402) 0 \n__________________________________________________________________________________________________\nX_Numeric (InputLayer) (None, 14) 0 \n__________________________________________________________________________________________________\nohe_1 (Dense) (None, 100) 40300 X_ohe[0][0] \n__________________________________________________________________________________________________\nnum_1 (Dense) (None, 100) 1500 X_Numeric[0][0] \n__________________________________________________________________________________________________\nohe_2 (Dense) (None, 50) 5050 ohe_1[0][0] \n__________________________________________________________________________________________________\nnum_2 (Dense) (None, 50) 5050 num_1[0][0] \n__________________________________________________________________________________________________\nconcat (Concatenate) (None, 100) 0 ohe_2[0][0] \n num_2[0][0] \n__________________________________________________________________________________________________\ncombined (Dense) (None, 1) 101 concat[0][0] \n==================================================================================================\nTotal params: 52,001\nTrainable params: 52,001\nNon-trainable params: 0\n__________________________________________________________________________________________________\n"
],
[
"model.compile(optimizer='sgd',\n loss='mean_squared_error',\n metrics=['acc', f1_m])\n\nmodel.fit([ X_train_ohe, X_train_ar ], # inputs for each branch are a list\n y_train, \n epochs=20, \n batch_size=50, \n verbose=0)\n",
"WARNING:tensorflow:From C:\\Users\\sulem\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\tensorflow\\python\\ops\\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.cast instead.\n"
],
[
"yhat = model.predict([X_train_ohe,\n X_train_ar]) # each branch has an input\n\nyhat = np.round(yhat)\nprint(mt.confusion_matrix(y_train_ar,yhat),mt.accuracy_score(y_train_ar,yhat))\n",
"[[28433 7597]\n [ 5777 20797]] 0.7863714778608396\n"
],
[
"recall = model.evaluate([ X_train_ohe, X_train_ar ], y_train_ar, verbose=0)\nrecall",
"_____no_output_____"
],
[
"\n# we need to create separate sequential models for each embedding\nembed_branches = []\nX_ints_train = [] # keep track of inputs for each branch\nX_ints_test = []# keep track of inputs for each branch\nall_inputs = [] # this is what we will give to keras.Model inputs\nall_branch_outputs = [] # this is where we will keep track of output of each branch\n\nfor col in categorical_headers_ints:\n X_ints_train.append( X_train[col].values )\n X_ints_test.append( X_test[col].values )\n \n # get the number of categories\n N = max(X_ints_train[-1]+1) # same as the max(df_train[col])\n \n # create embedding branch from the number of categories\n inputs = Input(shape=(1,),dtype='int32', name=col)\n all_inputs.append( inputs ) # keep track of created inputs\n x = Embedding(input_dim=N, \n output_dim=int(np.sqrt(N)), \n input_length=1, name=col+'_embed')(inputs)\n x = Flatten()(x)\n all_branch_outputs.append(x) \n \n# also get a dense branch of the numeric features\nall_inputs.append(Input(shape=(X_train_ar.shape[1],),sparse=False, name='numeric'))\nx = Dense(units=100, activation='relu',name='numeric_1')(all_inputs[-1])\nall_branch_outputs.append( Dense(units=50,activation='relu', name='numeric_2')(x) )\n\n# merge the branches together\nfinal_branch = concatenate(all_branch_outputs, name='concat_1')\nfinal_branch = Dense(units=1,activation='sigmoid', name='combined')(final_branch)\n\nmodel = Model(inputs=all_inputs, outputs=final_branch)\nmodel.summary()",
"__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\nanimal_type_int (InputLayer) (None, 1) 0 \n__________________________________________________________________________________________________\nbreed_int (InputLayer) (None, 1) 0 \n__________________________________________________________________________________________________\ncolor_int (InputLayer) (None, 1) 0 \n__________________________________________________________________________________________________\nspayed_neutered_int (InputLayer (None, 1) 0 \n__________________________________________________________________________________________________\nsex_int (InputLayer) (None, 1) 0 \n__________________________________________________________________________________________________\nnumeric (InputLayer) (None, 14) 0 \n__________________________________________________________________________________________________\nanimal_type_int_embed (Embeddin (None, 1, 2) 10 animal_type_int[0][0] \n__________________________________________________________________________________________________\nbreed_int_embed (Embedding) (None, 1, 18) 6156 breed_int[0][0] \n__________________________________________________________________________________________________\ncolor_int_embed (Embedding) (None, 1, 7) 406 color_int[0][0] \n__________________________________________________________________________________________________\nspayed_neutered_int_embed (Embe (None, 1, 2) 8 spayed_neutered_int[0][0] \n__________________________________________________________________________________________________\nsex_int_embed (Embedding) (None, 1, 1) 3 sex_int[0][0] \n__________________________________________________________________________________________________\nnumeric_1 (Dense) (None, 100) 1500 numeric[0][0] \n__________________________________________________________________________________________________\nflatten_18 (Flatten) (None, 2) 0 animal_type_int_embed[0][0] \n__________________________________________________________________________________________________\nflatten_19 (Flatten) (None, 18) 0 breed_int_embed[0][0] \n__________________________________________________________________________________________________\nflatten_20 (Flatten) (None, 7) 0 color_int_embed[0][0] \n__________________________________________________________________________________________________\nflatten_21 (Flatten) (None, 2) 0 spayed_neutered_int_embed[0][0] \n__________________________________________________________________________________________________\nflatten_22 (Flatten) (None, 1) 0 sex_int_embed[0][0] \n__________________________________________________________________________________________________\nnumeric_2 (Dense) (None, 50) 5050 numeric_1[0][0] \n__________________________________________________________________________________________________\nconcat_1 (Concatenate) (None, 80) 0 flatten_18[0][0] \n flatten_19[0][0] \n flatten_20[0][0] \n flatten_21[0][0] \n flatten_22[0][0] \n numeric_2[0][0] \n__________________________________________________________________________________________________\ncombined (Dense) (None, 1) 81 concat_1[0][0] \n==================================================================================================\nTotal params: 13,214\nTrainable params: 13,214\nNon-trainable params: 
0\n__________________________________________________________________________________________________\n"
],
[
"model.compile(optimizer='sgd',\n loss='mean_squared_error',\n metrics=['accuracy'])\n\nmodel.fit(X_ints_train + [X_train_ar], # create a list of inputs for embeddings\n y_train, epochs=20, batch_size=32, verbose=1)",
"Epoch 1/20\n62604/62604 [==============================] - 5s 84us/step - loss: 0.2080 - acc: 0.6802\nEpoch 2/20\n62604/62604 [==============================] - 4s 69us/step - loss: 0.1692 - acc: 0.7576\nEpoch 3/20\n62604/62604 [==============================] - 5s 73us/step - loss: 0.1563 - acc: 0.7687\nEpoch 4/20\n62604/62604 [==============================] - 6s 89us/step - loss: 0.1527 - acc: 0.7728\nEpoch 5/20\n62604/62604 [==============================] - 5s 76us/step - loss: 0.1510 - acc: 0.7751\nEpoch 6/20\n62604/62604 [==============================] - 5s 80us/step - loss: 0.1498 - acc: 0.7771\nEpoch 7/20\n62604/62604 [==============================] - 5s 81us/step - loss: 0.1488 - acc: 0.7790\nEpoch 8/20\n62604/62604 [==============================] - 5s 81us/step - loss: 0.1480 - acc: 0.7809\nEpoch 9/20\n62604/62604 [==============================] - 5s 86us/step - loss: 0.1472 - acc: 0.7820\nEpoch 10/20\n62604/62604 [==============================] - 5s 84us/step - loss: 0.1466 - acc: 0.7834\nEpoch 11/20\n62604/62604 [==============================] - 5s 85us/step - loss: 0.1460 - acc: 0.7856\nEpoch 12/20\n62604/62604 [==============================] - 6s 91us/step - loss: 0.1455 - acc: 0.7860\nEpoch 13/20\n62604/62604 [==============================] - 5s 87us/step - loss: 0.1450 - acc: 0.7871\nEpoch 14/20\n62604/62604 [==============================] - 5s 86us/step - loss: 0.1446 - acc: 0.7884\nEpoch 15/20\n62604/62604 [==============================] - 6s 91us/step - loss: 0.1442 - acc: 0.7889\nEpoch 16/20\n62604/62604 [==============================] - 6s 88us/step - loss: 0.1438 - acc: 0.7902\nEpoch 17/20\n62604/62604 [==============================] - 6s 90us/step - loss: 0.1435 - acc: 0.7903\nEpoch 18/20\n62604/62604 [==============================] - 6s 94us/step - loss: 0.1431 - acc: 0.7909\nEpoch 19/20\n62604/62604 [==============================] - 6s 91us/step - loss: 0.1428 - acc: 0.7916\nEpoch 20/20\n62604/62604 [==============================] - 6s 94us/step - loss: 0.1425 - acc: 0.7923\n"
],
[
"yhat = np.round(model.predict(X_ints_test + [X_test_ar]))\nprint(mt.confusion_matrix(y_test,yhat),mt.accuracy_score(y_test,yhat))",
"[[7203 1796]\n [1503 5150]] 0.7892282136468183\n"
]
],
[
[
"## Model cross_columns1",
"_____no_output_____"
]
],
[
[
"# 'workclass','education','marital_status',\n# 'occupation','relationship','race',\n# 'sex','country'\ndef recall_m(y_true, y_pred):\n true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))\n possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))\n recall = true_positives / (possible_positives + K.epsilon())\n return recall\n\ndef precision_m(y_true, y_pred):\n true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))\n predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))\n precision = true_positives / (predicted_positives + K.epsilon())\n return precision\n\ndef f1_m(y_true, y_pred):\n precision = precision_m(y_true, y_pred)\n recall = recall_m(y_true, y_pred)\n return 2*((precision*recall)/(precision+recall+K.epsilon()))\n\ncross_columns = [['breed','animal_type'],\n ['color', 'sex'],\n ['spayed_neutered', 'sex'] ]\n\n#'workclass','education','marital_status','occupation','relationship','race','sex','country'\n\n# we need to create separate lists for each branch\nembed_branches = []\nX_ints_train = []\nX_ints_test = []\nall_inputs = []\nall_wide_branch_outputs = []\n\nfor cols in cross_columns:\n # encode crossed columns as ints for the embedding\n enc = LabelEncoder()\n \n # create crossed labels\n X_crossed_train = X_train[cols].apply(lambda x: '_'.join(x), axis=1)\n X_crossed_test = X_test[cols].apply(lambda x: '_'.join(x), axis=1)\n \n enc.fit(np.hstack((X_crossed_train.values, X_crossed_test.values)))\n X_crossed_train = enc.transform(X_crossed_train)\n X_crossed_test = enc.transform(X_crossed_test)\n X_ints_train.append( X_crossed_train )\n X_ints_test.append( X_crossed_test )\n \n # get the number of categories\n N = max(X_ints_train[-1]+1) # same as the max(df_train[col])\n \n # create embedding branch from the number of categories\n inputs = Input(shape=(1,),dtype='int32', name = '_'.join(cols))\n all_inputs.append(inputs)\n x = Embedding(input_dim=N, \n output_dim=int(np.sqrt(N)), \n input_length=1, name = '_'.join(cols)+'_embed')(inputs)\n x = Flatten()(x)\n all_wide_branch_outputs.append(x)\n \n# merge the branches together\nwide_branch = concatenate(all_wide_branch_outputs, name='wide_concat')\nwide_branch = Dense(units=1,activation='sigmoid',name='wide_combined')(wide_branch)\n\n# reset this input branch\nall_deep_branch_outputs = []\n# add in the embeddings\nfor col in categorical_headers_ints:\n # encode as ints for the embedding\n X_ints_train.append( X_train[col].values )\n X_ints_test.append( X_test[col].values )\n \n # get the number of categories\n N = max(X_ints_train[-1]+1) # same as the max(df_train[col])\n \n # create embedding branch from the number of categories\n inputs = Input(shape=(1,),dtype='int32', name=col)\n all_inputs.append(inputs)\n x = Embedding(input_dim=N, \n output_dim=int(np.sqrt(N)), \n input_length=1, name=col+'_embed')(inputs)\n x = Flatten()(x)\n all_deep_branch_outputs.append(x)\n \n# also get a dense branch of the numeric features\nall_inputs.append(Input(shape=(X_train_ar.shape[1],),\n sparse=False,\n name='numeric_data'))\n\nx = Dense(units=20, activation='relu',name='numeric_1')(all_inputs[-1])\nall_deep_branch_outputs.append( x )\n\n# merge the deep branches together\ndeep_branch = concatenate(all_deep_branch_outputs,name='concat_embeds')\ndeep_branch = Dense(units=100,activation='relu', name='deep1')(deep_branch)\ndeep_branch = Dense(units=50,activation='relu', name='deep2')(deep_branch)\ndeep_branch = Dense(units=25,activation='relu', name='deep3')(deep_branch)\n \nfinal_branch = concatenate([wide_branch, 
deep_branch],name='concat_deep_wide')\nfinal_branch = Dense(units=1,activation='sigmoid',name='combined')(final_branch)\n\nmodel = Model(inputs=all_inputs, outputs=final_branch)\n\n",
"_____no_output_____"
],
[
"from IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot\n\n# you will need to install pydot properly on your machine to get this running\nSVG(model_to_dot(model).create(prog='dot', format='svg'))",
"_____no_output_____"
],
[
"#%%time\n\nmodel.compile(optimizer='adagrad',\n loss='mean_squared_error',\n metrics=['acc', f1_m])\n\n# lets also add the history variable to see how we are doing\n# and lets add a validation set to keep track of our progress\nhistory = model.fit(X_ints_train+ [X_train_ar],\n y_train, \n epochs=30, \n batch_size=50, \n verbose=1, \n validation_data = (X_ints_test + [X_test_ar], y_test))",
"Train on 62604 samples, validate on 15652 samples\nEpoch 1/30\n62604/62604 [==============================] - 5s 76us/step - loss: 0.1525 - acc: 0.7713 - f1_m: 0.7327 - val_loss: 0.1476 - val_acc: 0.7789 - val_f1_m: 0.7468\nEpoch 2/30\n62604/62604 [==============================] - 3s 49us/step - loss: 0.1446 - acc: 0.7845 - f1_m: 0.7442 - val_loss: 0.1455 - val_acc: 0.7802 - val_f1_m: 0.7506\nEpoch 3/30\n62604/62604 [==============================] - 3s 50us/step - loss: 0.1420 - acc: 0.7901 - f1_m: 0.7503 - val_loss: 0.1434 - val_acc: 0.7843 - val_f1_m: 0.7442\nEpoch 4/30\n62604/62604 [==============================] - 3s 52us/step - loss: 0.1403 - acc: 0.7929 - f1_m: 0.7532 - val_loss: 0.1445 - val_acc: 0.7832 - val_f1_m: 0.7291\nEpoch 5/30\n62604/62604 [==============================] - 3s 53us/step - loss: 0.1388 - acc: 0.7953 - f1_m: 0.7570 - val_loss: 0.1419 - val_acc: 0.7887 - val_f1_m: 0.7520\nEpoch 6/30\n62604/62604 [==============================] - 4s 57us/step - loss: 0.1377 - acc: 0.7977 - f1_m: 0.7590 - val_loss: 0.1437 - val_acc: 0.7830 - val_f1_m: 0.7267\nEpoch 7/30\n62604/62604 [==============================] - 3s 54us/step - loss: 0.1368 - acc: 0.7985 - f1_m: 0.7596 - val_loss: 0.1415 - val_acc: 0.7880 - val_f1_m: 0.7455\nEpoch 8/30\n62604/62604 [==============================] - 3s 55us/step - loss: 0.1357 - acc: 0.8012 - f1_m: 0.7635 - val_loss: 0.1405 - val_acc: 0.7923 - val_f1_m: 0.7563\nEpoch 9/30\n62604/62604 [==============================] - 3s 55us/step - loss: 0.1351 - acc: 0.8015 - f1_m: 0.7636 - val_loss: 0.1414 - val_acc: 0.7887 - val_f1_m: 0.7378\nEpoch 10/30\n62604/62604 [==============================] - 4s 56us/step - loss: 0.1343 - acc: 0.8038 - f1_m: 0.7660 - val_loss: 0.1440 - val_acc: 0.7835 - val_f1_m: 0.7254\nEpoch 11/30\n62604/62604 [==============================] - 4s 57us/step - loss: 0.1338 - acc: 0.8037 - f1_m: 0.7657 - val_loss: 0.1398 - val_acc: 0.7920 - val_f1_m: 0.7568\nEpoch 12/30\n62604/62604 [==============================] - 3s 56us/step - loss: 0.1331 - acc: 0.8068 - f1_m: 0.7698 - val_loss: 0.1407 - val_acc: 0.7893 - val_f1_m: 0.7408\nEpoch 13/30\n62604/62604 [==============================] - 3s 55us/step - loss: 0.1326 - acc: 0.8071 - f1_m: 0.7694 - val_loss: 0.1394 - val_acc: 0.7930 - val_f1_m: 0.7523\nEpoch 14/30\n62604/62604 [==============================] - 3s 56us/step - loss: 0.1321 - acc: 0.8080 - f1_m: 0.7701 - val_loss: 0.1392 - val_acc: 0.7918 - val_f1_m: 0.7542\nEpoch 15/30\n62604/62604 [==============================] - 4s 62us/step - loss: 0.1316 - acc: 0.8080 - f1_m: 0.7714 - val_loss: 0.1392 - val_acc: 0.7937 - val_f1_m: 0.7509\nEpoch 16/30\n62604/62604 [==============================] - 4s 62us/step - loss: 0.1311 - acc: 0.8100 - f1_m: 0.7733 - val_loss: 0.1394 - val_acc: 0.7927 - val_f1_m: 0.7609\nEpoch 17/30\n62604/62604 [==============================] - 4s 63us/step - loss: 0.1307 - acc: 0.8104 - f1_m: 0.7740 - val_loss: 0.1395 - val_acc: 0.7931 - val_f1_m: 0.7632\nEpoch 18/30\n62604/62604 [==============================] - 4s 62us/step - loss: 0.1303 - acc: 0.8108 - f1_m: 0.7745 - val_loss: 0.1390 - val_acc: 0.7933 - val_f1_m: 0.7517\nEpoch 19/30\n62604/62604 [==============================] - 4s 67us/step - loss: 0.1298 - acc: 0.8115 - f1_m: 0.7751 - val_loss: 0.1391 - val_acc: 0.7935 - val_f1_m: 0.7491\nEpoch 20/30\n62604/62604 [==============================] - 4s 61us/step - loss: 0.1295 - acc: 0.8120 - f1_m: 0.7756 - val_loss: 0.1389 - val_acc: 0.7951 - val_f1_m: 0.7542\nEpoch 21/30\n62604/62604 
[==============================] - 4s 67us/step - loss: 0.1290 - acc: 0.8137 - f1_m: 0.7779 - val_loss: 0.1391 - val_acc: 0.7939 - val_f1_m: 0.7496\nEpoch 22/30\n62604/62604 [==============================] - 4s 68us/step - loss: 0.1287 - acc: 0.8143 - f1_m: 0.7784 - val_loss: 0.1390 - val_acc: 0.7949 - val_f1_m: 0.7502\nEpoch 23/30\n62604/62604 [==============================] - 4s 71us/step - loss: 0.1283 - acc: 0.8148 - f1_m: 0.7796 - val_loss: 0.1385 - val_acc: 0.7946 - val_f1_m: 0.7588\nEpoch 24/30\n62604/62604 [==============================] - 5s 73us/step - loss: 0.1279 - acc: 0.8156 - f1_m: 0.7798 - val_loss: 0.1386 - val_acc: 0.7954 - val_f1_m: 0.7617\nEpoch 25/30\n62604/62604 [==============================] - 5s 76us/step - loss: 0.1276 - acc: 0.8161 - f1_m: 0.7810 - val_loss: 0.1388 - val_acc: 0.7941 - val_f1_m: 0.7528\nEpoch 26/30\n62604/62604 [==============================] - 5s 72us/step - loss: 0.1273 - acc: 0.8165 - f1_m: 0.7814 - val_loss: 0.1383 - val_acc: 0.7957 - val_f1_m: 0.7598\nEpoch 27/30\n62604/62604 [==============================] - 4s 72us/step - loss: 0.1269 - acc: 0.8176 - f1_m: 0.7829 - val_loss: 0.1384 - val_acc: 0.7949 - val_f1_m: 0.7546\nEpoch 28/30\n62604/62604 [==============================] - 4s 71us/step - loss: 0.1266 - acc: 0.8176 - f1_m: 0.7821 - val_loss: 0.1398 - val_acc: 0.7944 - val_f1_m: 0.7461\nEpoch 29/30\n62604/62604 [==============================] - 4s 69us/step - loss: 0.1263 - acc: 0.8190 - f1_m: 0.7842 - val_loss: 0.1386 - val_acc: 0.7961 - val_f1_m: 0.7633\nEpoch 30/30\n62604/62604 [==============================] - 4s 68us/step - loss: 0.1260 - acc: 0.8190 - f1_m: 0.7837 - val_loss: 0.1386 - val_acc: 0.7952 - val_f1_m: 0.7633\n"
],
[
"yhat = np.round(model.predict(X_ints_test + [X_test_ar]))\nprint(mt.confusion_matrix(y_test,yhat), mt.accuracy_score(y_test,yhat))",
"[[7180 1785]\n [1420 5267]] 0.7952338359315103\n"
]
],
[
[
"## Model cross_columns2",
"_____no_output_____"
]
],
[
[
"# 'workclass','education','marital_status',\n# 'occupation','relationship','race',\n# 'sex','country'\n\ncross_columns = [['breed','sex'],\n ['color', 'spayed_neutered']]\n\n#'workclass','education','marital_status','occupation','relationship','race','sex','country'\n\n# we need to create separate lists for each branch\nembed_branches = []\nX_ints_train = []\nX_ints_test = []\nall_inputs = []\nall_wide_branch_outputs = []\n\nfor cols in cross_columns:\n # encode crossed columns as ints for the embedding\n enc = LabelEncoder()\n \n # create crossed labels\n X_crossed_train = X_train[cols].apply(lambda x: '_'.join(x), axis=1)\n X_crossed_test = X_test[cols].apply(lambda x: '_'.join(x), axis=1)\n \n enc.fit(np.hstack((X_crossed_train.values, X_crossed_test.values)))\n X_crossed_train = enc.transform(X_crossed_train)\n X_crossed_test = enc.transform(X_crossed_test)\n X_ints_train.append( X_crossed_train )\n X_ints_test.append( X_crossed_test )\n \n # get the number of categories\n N = max(X_ints_train[-1]+1) # same as the max(df_train[col])\n \n # create embedding branch from the number of categories\n inputs = Input(shape=(1,),dtype='int32', name = '_'.join(cols))\n all_inputs.append(inputs)\n x = Embedding(input_dim=N, \n output_dim=int(np.sqrt(N)), \n input_length=1, name = '_'.join(cols)+'_embed')(inputs)\n x = Flatten()(x)\n all_wide_branch_outputs.append(x)\n \n# merge the branches together\nwide_branch = concatenate(all_wide_branch_outputs, name='wide_concat')\nwide_branch = Dense(units=1,activation='sigmoid',name='wide_combined')(wide_branch)\n\n# reset this input branch\nall_deep_branch_outputs = []\n# add in the embeddings\nfor col in categorical_headers_ints:\n # encode as ints for the embedding\n X_ints_train.append( X_train[col].values )\n X_ints_test.append( X_test[col].values )\n \n # get the number of categories\n N = max(X_ints_train[-1]+1) # same as the max(df_train[col])\n \n # create embedding branch from the number of categories\n inputs = Input(shape=(1,),dtype='int32', name=col)\n all_inputs.append(inputs)\n x = Embedding(input_dim=N, \n output_dim=int(np.sqrt(N)), \n input_length=1, name=col+'_embed')(inputs)\n x = Flatten()(x)\n all_deep_branch_outputs.append(x)\n \n# also get a dense branch of the numeric features\nall_inputs.append(Input(shape=(X_train_ar.shape[1],),\n sparse=False,\n name='numeric_data'))\n\nx = Dense(units=20, activation='relu',name='numeric_1')(all_inputs[-1])\nall_deep_branch_outputs.append( x )\n\n# merge the deep branches together\ndeep_branch = concatenate(all_deep_branch_outputs,name='concat_embeds')\ndeep_branch = Dense(units=100,activation='relu', name='deep1')(deep_branch)\ndeep_branch = Dense(units=50,activation='relu', name='deep2')(deep_branch)\ndeep_branch = Dense(units=25,activation='relu', name='deep3')(deep_branch)\n \nfinal_branch = concatenate([wide_branch, deep_branch],name='concat_deep_wide')\nfinal_branch = Dense(units=1,activation='sigmoid',name='combined')(final_branch)\n\nmodel1 = Model(inputs=all_inputs, outputs=final_branch)\n\n",
"_____no_output_____"
],
[
"from IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot\n\n# you will need to install pydot properly on your machine to get this running\nSVG(model_to_dot(model1).create(prog='dot', format='svg'))",
"_____no_output_____"
],
[
"%%time\n\nmodel1.compile(optimizer='adagrad',\n loss='mean_squared_error',\n metrics=['acc', f1_m])\n\n# lets also add the history variable to see how we are doing\n# and lets add a validation set to keep track of our progress\nhistory1 = model1.fit(X_ints_train+ [X_train_ar],\n y_train, \n epochs=30, \n batch_size=50, \n verbose=1, \n validation_data = (X_ints_test + [X_test_ar], y_test))",
"Train on 62604 samples, validate on 15652 samples\nEpoch 1/30\n62604/62604 [==============================] - 4s 71us/step - loss: 0.1526 - acc: 0.7718 - f1_m: 0.7304 - val_loss: 0.1473 - val_acc: 0.7800 - val_f1_m: 0.7376\nEpoch 2/30\n62604/62604 [==============================] - 3s 52us/step - loss: 0.1445 - acc: 0.7866 - f1_m: 0.7465 - val_loss: 0.1452 - val_acc: 0.7837 - val_f1_m: 0.7415\nEpoch 3/30\n62604/62604 [==============================] - 3s 51us/step - loss: 0.1419 - acc: 0.7910 - f1_m: 0.7503 - val_loss: 0.1434 - val_acc: 0.7864 - val_f1_m: 0.7545\nEpoch 4/30\n62604/62604 [==============================] - 3s 52us/step - loss: 0.1402 - acc: 0.7942 - f1_m: 0.7537 - val_loss: 0.1424 - val_acc: 0.7865 - val_f1_m: 0.7413\nEpoch 5/30\n62604/62604 [==============================] - 3s 56us/step - loss: 0.1389 - acc: 0.7960 - f1_m: 0.7554 - val_loss: 0.1414 - val_acc: 0.7885 - val_f1_m: 0.7512\nEpoch 6/30\n62604/62604 [==============================] - 4s 62us/step - loss: 0.1379 - acc: 0.7986 - f1_m: 0.7589 - val_loss: 0.1418 - val_acc: 0.7878 - val_f1_m: 0.7388\nEpoch 7/30\n62604/62604 [==============================] - 4s 64us/step - loss: 0.1371 - acc: 0.7996 - f1_m: 0.7595 - val_loss: 0.1407 - val_acc: 0.7882 - val_f1_m: 0.7501\nEpoch 8/30\n62604/62604 [==============================] - 4s 58us/step - loss: 0.1364 - acc: 0.7999 - f1_m: 0.7604 - val_loss: 0.1407 - val_acc: 0.7899 - val_f1_m: 0.7442\nEpoch 9/30\n62604/62604 [==============================] - 4s 57us/step - loss: 0.1356 - acc: 0.8013 - f1_m: 0.7616 - val_loss: 0.1401 - val_acc: 0.7902 - val_f1_m: 0.7495\nEpoch 10/30\n62604/62604 [==============================] - 4s 57us/step - loss: 0.1351 - acc: 0.8021 - f1_m: 0.7620 - val_loss: 0.1399 - val_acc: 0.7918 - val_f1_m: 0.7490\nEpoch 11/30\n62604/62604 [==============================] - 4s 62us/step - loss: 0.1345 - acc: 0.8029 - f1_m: 0.7636 - val_loss: 0.1403 - val_acc: 0.7896 - val_f1_m: 0.7579\nEpoch 12/30\n62604/62604 [==============================] - 4s 58us/step - loss: 0.1340 - acc: 0.8037 - f1_m: 0.7645 - val_loss: 0.1407 - val_acc: 0.7916 - val_f1_m: 0.7405\nEpoch 13/30\n62604/62604 [==============================] - 4s 65us/step - loss: 0.1335 - acc: 0.8044 - f1_m: 0.7657 - val_loss: 0.1400 - val_acc: 0.7916 - val_f1_m: 0.7451\nEpoch 14/30\n62604/62604 [==============================] - 4s 59us/step - loss: 0.1330 - acc: 0.8050 - f1_m: 0.7662 - val_loss: 0.1394 - val_acc: 0.7916 - val_f1_m: 0.7583\nEpoch 15/30\n62604/62604 [==============================] - 4s 65us/step - loss: 0.1326 - acc: 0.8059 - f1_m: 0.7664 - val_loss: 0.1392 - val_acc: 0.7917 - val_f1_m: 0.7561\nEpoch 16/30\n62604/62604 [==============================] - 4s 65us/step - loss: 0.1322 - acc: 0.8065 - f1_m: 0.7673 - val_loss: 0.1390 - val_acc: 0.7909 - val_f1_m: 0.7504\nEpoch 17/30\n62604/62604 [==============================] - 4s 61us/step - loss: 0.1318 - acc: 0.8080 - f1_m: 0.7698 - val_loss: 0.1388 - val_acc: 0.7928 - val_f1_m: 0.7553\nEpoch 18/30\n62604/62604 [==============================] - 4s 65us/step - loss: 0.1313 - acc: 0.8084 - f1_m: 0.7705 - val_loss: 0.1392 - val_acc: 0.7907 - val_f1_m: 0.7473\nEpoch 19/30\n62604/62604 [==============================] - 4s 65us/step - loss: 0.1311 - acc: 0.8084 - f1_m: 0.7704 - val_loss: 0.1385 - val_acc: 0.7938 - val_f1_m: 0.7540\nEpoch 20/30\n62604/62604 [==============================] - 4s 64us/step - loss: 0.1306 - acc: 0.8097 - f1_m: 0.7720 - val_loss: 0.1393 - val_acc: 0.7906 - val_f1_m: 0.7445\nEpoch 21/30\n62604/62604 
[==============================] - 4s 66us/step - loss: 0.1303 - acc: 0.8099 - f1_m: 0.7722 - val_loss: 0.1385 - val_acc: 0.7918 - val_f1_m: 0.7528\nEpoch 22/30\n62604/62604 [==============================] - 4s 67us/step - loss: 0.1300 - acc: 0.8104 - f1_m: 0.7720 - val_loss: 0.1382 - val_acc: 0.7937 - val_f1_m: 0.7562\nEpoch 23/30\n62604/62604 [==============================] - 5s 74us/step - loss: 0.1297 - acc: 0.8112 - f1_m: 0.7738 - val_loss: 0.1388 - val_acc: 0.7914 - val_f1_m: 0.7596\nEpoch 24/30\n62604/62604 [==============================] - 5s 77us/step - loss: 0.1294 - acc: 0.8124 - f1_m: 0.7751 - val_loss: 0.1388 - val_acc: 0.7931 - val_f1_m: 0.7621\nEpoch 25/30\n62604/62604 [==============================] - 5s 78us/step - loss: 0.1290 - acc: 0.8129 - f1_m: 0.7757 - val_loss: 0.1385 - val_acc: 0.7930 - val_f1_m: 0.7593\nEpoch 26/30\n62604/62604 [==============================] - 5s 81us/step - loss: 0.1287 - acc: 0.8133 - f1_m: 0.7769 - val_loss: 0.1392 - val_acc: 0.7908 - val_f1_m: 0.7425\nEpoch 27/30\n62604/62604 [==============================] - 5s 79us/step - loss: 0.1284 - acc: 0.8132 - f1_m: 0.7761 - val_loss: 0.1383 - val_acc: 0.7941 - val_f1_m: 0.7587\nEpoch 28/30\n62604/62604 [==============================] - 5s 78us/step - loss: 0.1281 - acc: 0.8141 - f1_m: 0.7773 - val_loss: 0.1381 - val_acc: 0.7940 - val_f1_m: 0.7578\nEpoch 29/30\n62604/62604 [==============================] - 5s 79us/step - loss: 0.1278 - acc: 0.8145 - f1_m: 0.7774 - val_loss: 0.1380 - val_acc: 0.7946 - val_f1_m: 0.7575\nEpoch 30/30\n62604/62604 [==============================] - 5s 81us/step - loss: 0.1276 - acc: 0.8148 - f1_m: 0.7781 - val_loss: 0.1382 - val_acc: 0.7957 - val_f1_m: 0.7545\nWall time: 2min 3s\n"
],
[
"from matplotlib import pyplot as plt\n\n%matplotlib inline\n\nplt.figure(figsize=(15,11))\nplt.subplot(2,2,2)\nplt.ylabel('MSE Training acc and val_acc')\nplt.xlabel('epochs Model 2')\nplt.plot(history1.history['f1_m'])\nplt.plot(history1.history['val_f1_m'])\n\n\nplt.subplot(2,2,4)\nplt.plot(history1.history['loss'])\nplt.ylabel('MSE Training Loss')\nplt.xlabel('epochs Model 2')\nplt.plot(history1.history['val_loss'])\nplt.xlabel('epochs Model 2')\n\n\n\nplt.subplot(2,2,1)\nplt.ylabel('MSE Training acc and val_acc')\nplt.xlabel('epochs Model 1')\nplt.plot(history.history['f1_m'])\nplt.plot(history.history['val_f1_m'])\n\n\nplt.subplot(2,2,3)\nplt.plot(history.history['loss'])\nplt.ylabel('MSE Training Loss')\nplt.xlabel('epochs Model 1')\nplt.plot(history.history['val_loss'])\nplt.xlabel('epochs Model 1')",
"_____no_output_____"
]
],
[
[
"Model 1 corsing \n A-breed ,animal_type\n B-sex ,color\n C-sex, spayed_neutered\n\nmodel 2 is crossing :\n A-breed,sex\n B-color', spayed_neutered\n\n\nshown above the accuracy and the valdation for both model1 and model2 i want to argue that model 1 is slightly better than model2 because of the following reasons :\n\n1- the elements model 1 is crossing are highly corrlated, females always have more colors , breed is highly colrated with animal type so it provide a better genrliztion \n\n2- because of the first point , model 1 resulted in a better over all accurace and valdation accuracy than model 2\n\n3- best accracy for model1 is 0.7781 and validation is 0.7545\n",
"_____no_output_____"
],
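[
"To back the comparison above with numbers, the next cell is a minimal sketch that reads the best validation scores out of the two Keras `History` objects (`history` for model 1 and `history1` for model 2); it assumes both training cells above have been run in the current session.",
"_____no_output_____"
],
[
"# Minimal comparison sketch: best validation accuracy and F1 seen by each crossed-column model\nfor name, h in [('model 1', history), ('model 2', history1)]:\n    print('{}: best val_acc = {:.4f}, best val_f1 = {:.4f}'.format(\n        name, max(h.history['val_acc']), max(h.history['val_f1_m'])))",
"_____no_output_____"
],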
[
"### Model A 5 layars",
"_____no_output_____"
]
],
[
[
"# 'workclass','education','marital_status',\n# 'occupation','relationship','race',\n# 'sex','country'\ncross_columns = [['breed','animal_type'],\n ['color', 'sex'],\n ['spayed_neutered', 'sex'] ]\n\n\n#'workclass','education','marital_status','occupation','relationship','race','sex','country'\n\n# we need to create separate lists for each branch\nembed_branches = []\nX_ints_train = []\nX_ints_test = []\nall_inputs = []\nall_wide_branch_outputs = []\n\nfor cols in cross_columns:\n # encode crossed columns as ints for the embedding\n enc = LabelEncoder()\n \n # create crossed labels\n X_crossed_train = X_train[cols].apply(lambda x: '_'.join(x), axis=1)\n X_crossed_test = X_test[cols].apply(lambda x: '_'.join(x), axis=1)\n \n enc.fit(np.hstack((X_crossed_train.values, X_crossed_test.values)))\n X_crossed_train = enc.transform(X_crossed_train)\n X_crossed_test = enc.transform(X_crossed_test)\n X_ints_train.append( X_crossed_train )\n X_ints_test.append( X_crossed_test )\n \n # get the number of categories\n N = max(X_ints_train[-1]+1) # same as the max(df_train[col])\n \n # create embedding branch from the number of categories\n inputs = Input(shape=(1,),dtype='int32', name = '_'.join(cols))\n all_inputs.append(inputs)\n x = Embedding(input_dim=N, \n output_dim=int(np.sqrt(N)), \n input_length=1, name = '_'.join(cols)+'_embed')(inputs)\n x = Flatten()(x)\n all_wide_branch_outputs.append(x)\n \n# merge the branches together\nwide_branch = concatenate(all_wide_branch_outputs, name='wide_concat')\nwide_branch = Dense(units=1,activation='sigmoid',name='wide_combined')(wide_branch)\n\n# reset this input branch\nall_deep_branch_outputs = []\n# add in the embeddings\nfor col in categorical_headers_ints:\n # encode as ints for the embedding\n X_ints_train.append( X_train[col].values )\n X_ints_test.append( X_test[col].values )\n \n # get the number of categories\n N = max(X_ints_train[-1]+1) # same as the max(df_train[col])\n \n # create embedding branch from the number of categories\n inputs = Input(shape=(1,),dtype='int32', name=col)\n all_inputs.append(inputs)\n x = Embedding(input_dim=N, \n output_dim=int(np.sqrt(N)), \n input_length=1, name=col+'_embed')(inputs)\n x = Flatten()(x)\n all_deep_branch_outputs.append(x)\n \n# also get a dense branch of the numeric features\nall_inputs.append(Input(shape=(X_train_ar.shape[1],),\n sparse=False,\n name='numeric_data'))\n\nx = Dense(units=20, activation='relu',name='numeric_1')(all_inputs[-1])\nall_deep_branch_outputs.append( x )\n\n# merge the deep branches together\ndeep_branch = concatenate(all_deep_branch_outputs,name='concat_embeds')\ndeep_branch = Dense(units=100,activation='relu', name='deep1')(deep_branch)\ndeep_branch = Dense(units=50,activation='relu', name='deep2')(deep_branch)\ndeep_branch = Dense(units=25,activation='relu', name='deep3')(deep_branch)\ndeep_branch = Dense(units=15,activation='relu', name='deep4')(deep_branch) \nfinal_branch = concatenate([wide_branch, deep_branch],name='concat_deep_wide')\nfinal_branch = Dense(units=1,activation='sigmoid',name='combined')(final_branch)\n\nmodelA = Model(inputs=all_inputs, outputs=final_branch)",
"_____no_output_____"
],
[
"%%time\n\nmodelA.compile(optimizer='adagrad',\n loss='mean_squared_error',\n metrics=['acc', f1_m])\n\n# lets also add the history variable to see how we are doing\n# and lets add a validation set to keep track of our progress\nhistoryA = modelA.fit(X_ints_train+ [X_train_ar],\n y_train, \n epochs=30, \n batch_size=50, \n verbose=1, \n validation_data = (X_ints_test + [X_test_ar], y_test))",
"Train on 62604 samples, validate on 15652 samples\nEpoch 1/30\n62604/62604 [==============================] - 4s 63us/step - loss: 0.1534 - acc: 0.7702 - f1_m: 0.7311 - val_loss: 0.1506 - val_acc: 0.7751 - val_f1_m: 0.7518\nEpoch 2/30\n62604/62604 [==============================] - 3s 44us/step - loss: 0.1452 - acc: 0.7847 - f1_m: 0.7453 - val_loss: 0.1475 - val_acc: 0.7823 - val_f1_m: 0.7545\nEpoch 3/30\n62604/62604 [==============================] - 3s 45us/step - loss: 0.1425 - acc: 0.7905 - f1_m: 0.7518 - val_loss: 0.1445 - val_acc: 0.7842 - val_f1_m: 0.7364\nEpoch 4/30\n62604/62604 [==============================] - 3s 50us/step - loss: 0.1410 - acc: 0.7925 - f1_m: 0.7532 - val_loss: 0.1434 - val_acc: 0.7879 - val_f1_m: 0.7497\nEpoch 5/30\n62604/62604 [==============================] - 3s 52us/step - loss: 0.1398 - acc: 0.7947 - f1_m: 0.7571 - val_loss: 0.1424 - val_acc: 0.7892 - val_f1_m: 0.7449\nEpoch 6/30\n62604/62604 [==============================] - 3s 56us/step - loss: 0.1386 - acc: 0.7968 - f1_m: 0.7591 - val_loss: 0.1422 - val_acc: 0.7920 - val_f1_m: 0.7581\nEpoch 7/30\n62604/62604 [==============================] - 3s 52us/step - loss: 0.1377 - acc: 0.7987 - f1_m: 0.7612 - val_loss: 0.1420 - val_acc: 0.7883 - val_f1_m: 0.7396\nEpoch 8/30\n62604/62604 [==============================] - 3s 51us/step - loss: 0.1369 - acc: 0.7997 - f1_m: 0.7615 - val_loss: 0.1416 - val_acc: 0.7919 - val_f1_m: 0.7593\nEpoch 9/30\n62604/62604 [==============================] - 3s 53us/step - loss: 0.1361 - acc: 0.8014 - f1_m: 0.7639 - val_loss: 0.1426 - val_acc: 0.7903 - val_f1_m: 0.7616\nEpoch 10/30\n62604/62604 [==============================] - 3s 52us/step - loss: 0.1355 - acc: 0.8018 - f1_m: 0.7642 - val_loss: 0.1425 - val_acc: 0.7885 - val_f1_m: 0.7621\nEpoch 11/30\n62604/62604 [==============================] - 4s 58us/step - loss: 0.1349 - acc: 0.8036 - f1_m: 0.7666 - val_loss: 0.1402 - val_acc: 0.7929 - val_f1_m: 0.7546\nEpoch 12/30\n62604/62604 [==============================] - 4s 58us/step - loss: 0.1343 - acc: 0.8039 - f1_m: 0.7668 - val_loss: 0.1401 - val_acc: 0.7927 - val_f1_m: 0.7575\nEpoch 13/30\n62604/62604 [==============================] - 4s 62us/step - loss: 0.1338 - acc: 0.8054 - f1_m: 0.7689 - val_loss: 0.1400 - val_acc: 0.7936 - val_f1_m: 0.7550\nEpoch 14/30\n62604/62604 [==============================] - 4s 67us/step - loss: 0.1332 - acc: 0.8054 - f1_m: 0.7679 - val_loss: 0.1398 - val_acc: 0.7931 - val_f1_m: 0.7543\nEpoch 15/30\n62604/62604 [==============================] - 3s 55us/step - loss: 0.1328 - acc: 0.8067 - f1_m: 0.7702 - val_loss: 0.1394 - val_acc: 0.7933 - val_f1_m: 0.7550\nEpoch 16/30\n62604/62604 [==============================] - 4s 59us/step - loss: 0.1322 - acc: 0.8073 - f1_m: 0.7704 - val_loss: 0.1397 - val_acc: 0.7941 - val_f1_m: 0.7585\nEpoch 17/30\n62604/62604 [==============================] - 4s 58us/step - loss: 0.1318 - acc: 0.8085 - f1_m: 0.7722 - val_loss: 0.1400 - val_acc: 0.7933 - val_f1_m: 0.7439\nEpoch 18/30\n62604/62604 [==============================] - 4s 58us/step - loss: 0.1313 - acc: 0.8089 - f1_m: 0.7723 - val_loss: 0.1401 - val_acc: 0.7941 - val_f1_m: 0.7609\nEpoch 19/30\n62604/62604 [==============================] - 4s 57us/step - loss: 0.1309 - acc: 0.8095 - f1_m: 0.7736 - val_loss: 0.1394 - val_acc: 0.7945 - val_f1_m: 0.7504\nEpoch 20/30\n62604/62604 [==============================] - 3s 55us/step - loss: 0.1305 - acc: 0.8102 - f1_m: 0.7742 - val_loss: 0.1392 - val_acc: 0.7950 - val_f1_m: 0.7504\nEpoch 21/30\n62604/62604 
[==============================] - 3s 56us/step - loss: 0.1300 - acc: 0.8108 - f1_m: 0.7742 - val_loss: 0.1389 - val_acc: 0.7950 - val_f1_m: 0.7551\nEpoch 22/30\n62604/62604 [==============================] - 4s 58us/step - loss: 0.1297 - acc: 0.8120 - f1_m: 0.7763 - val_loss: 0.1395 - val_acc: 0.7938 - val_f1_m: 0.7613\nEpoch 23/30\n62604/62604 [==============================] - 3s 54us/step - loss: 0.1293 - acc: 0.8128 - f1_m: 0.7769 - val_loss: 0.1391 - val_acc: 0.7950 - val_f1_m: 0.7575\nEpoch 24/30\n62604/62604 [==============================] - 3s 55us/step - loss: 0.1289 - acc: 0.8129 - f1_m: 0.7773 - val_loss: 0.1399 - val_acc: 0.7915 - val_f1_m: 0.7611\nEpoch 25/30\n62604/62604 [==============================] - 3s 53us/step - loss: 0.1285 - acc: 0.8141 - f1_m: 0.7782 - val_loss: 0.1390 - val_acc: 0.7963 - val_f1_m: 0.7588\nEpoch 26/30\n62604/62604 [==============================] - 3s 53us/step - loss: 0.1281 - acc: 0.8148 - f1_m: 0.7790 - val_loss: 0.1388 - val_acc: 0.7949 - val_f1_m: 0.7508\nEpoch 27/30\n62604/62604 [==============================] - 3s 54us/step - loss: 0.1278 - acc: 0.8156 - f1_m: 0.7802 - val_loss: 0.1393 - val_acc: 0.7948 - val_f1_m: 0.7623\nEpoch 28/30\n62604/62604 [==============================] - 3s 54us/step - loss: 0.1274 - acc: 0.8158 - f1_m: 0.7807 - val_loss: 0.1388 - val_acc: 0.7950 - val_f1_m: 0.7557\nEpoch 29/30\n62604/62604 [==============================] - 3s 55us/step - loss: 0.1270 - acc: 0.8167 - f1_m: 0.7818 - val_loss: 0.1388 - val_acc: 0.7955 - val_f1_m: 0.7549\nEpoch 30/30\n62604/62604 [==============================] - 3s 52us/step - loss: 0.1267 - acc: 0.8177 - f1_m: 0.7823 - val_loss: 0.1392 - val_acc: 0.7937 - val_f1_m: 0.7461\nWall time: 1min 44s\n"
]
],
[
[
"### Model B 7 layers",
"_____no_output_____"
]
],
[
[
"# 'workclass','education','marital_status',\n# 'occupation','relationship','race',\n# 'sex','country'\n\ncross_columns = [['breed','animal_type'],\n ['color', 'sex'],\n ['spayed_neutered', 'sex'] ]\n\n\n\n\n# we need to create separate lists for each branch\nembed_branches = []\nX_ints_train = []\nX_ints_test = []\nall_inputs = []\nall_wide_branch_outputs = []\n\nfor cols in cross_columns:\n # encode crossed columns as ints for the embedding\n enc = LabelEncoder()\n \n # create crossed labels\n X_crossed_train = X_train[cols].apply(lambda x: '_'.join(x), axis=1)\n X_crossed_test = X_test[cols].apply(lambda x: '_'.join(x), axis=1)\n \n enc.fit(np.hstack((X_crossed_train.values, X_crossed_test.values)))\n X_crossed_train = enc.transform(X_crossed_train)\n X_crossed_test = enc.transform(X_crossed_test)\n X_ints_train.append( X_crossed_train )\n X_ints_test.append( X_crossed_test )\n \n # get the number of categories\n N = max(X_ints_train[-1]+1) # same as the max(df_train[col])\n \n # create embedding branch from the number of categories\n inputs = Input(shape=(1,),dtype='int32', name = '_'.join(cols))\n all_inputs.append(inputs)\n x = Embedding(input_dim=N, \n output_dim=int(np.sqrt(N)), \n input_length=1, name = '_'.join(cols)+'_embed')(inputs)\n x = Flatten()(x)\n all_wide_branch_outputs.append(x)\n \n# merge the branches together\nwide_branch = concatenate(all_wide_branch_outputs, name='wide_concat')\n\nwide_branch = Dense(units=1,activation='sigmoid',name='wide_combined')(wide_branch)\n\n# reset this input branch\nall_deep_branch_outputs = []\n# add in the embeddings\nfor col in categorical_headers_ints:\n # encode as ints for the embedding\n X_ints_train.append( X_train[col].values )\n X_ints_test.append( X_test[col].values )\n \n # get the number of categories\n N = max(X_ints_train[-1]+1) # same as the max(df_train[col])\n \n # create embedding branch from the number of categories\n inputs = Input(shape=(1,),dtype='int32', name=col)\n all_inputs.append(inputs)\n x = Embedding(input_dim=N, \n output_dim=int(np.sqrt(N)), \n input_length=1, name=col+'_embed')(inputs)\n x = Flatten()(x)\n all_deep_branch_outputs.append(x)\n \n# also get a dense branch of the numeric features\nall_inputs.append(Input(shape=(X_train_ar.shape[1],),\n sparse=False,\n name='numeric_data'))\n\nx = Dense(units=20, activation='relu',name='numeric_1')(all_inputs[-1])\nall_deep_branch_outputs.append( x )\n\n# merge the deep branches together\ndeep_branch = concatenate(all_deep_branch_outputs,name='concat_embeds')\ndeep_branch = Dense(units=200,activation='relu', name='deep1')(deep_branch)\ndeep_branch = Dense(units=100,activation='relu', name='deep2')(deep_branch)\ndeep_branch = Dense(units=50,activation='relu', name='deep3')(deep_branch)\ndeep_branch = Dense(units=25,activation='relu', name='deep4')(deep_branch) \ndeep_branch = Dense(units=15,activation='relu', name='deep5')(deep_branch)\ndeep_branch = Dense(units=10,activation='relu', name='deep6')(deep_branch)\n\nfinal_branch = concatenate([wide_branch, deep_branch],name='concat_deep_wide')\nfinal_branch = Dense(units=1,activation='sigmoid',name='combined')(final_branch)\n\nmodelB = Model(inputs=all_inputs, outputs=final_branch)\n\n",
"_____no_output_____"
],
[
"%%time\n\nmodelB.compile(optimizer='adagrad',\n loss='mean_squared_error',\n metrics=['acc', f1_m])\n\n# lets also add the history variable to see how we are doing\n# and lets add a validation set to keep track of our progress\nhistoryB = modelB.fit(X_ints_train+ [X_train_ar],\n y_train, \n epochs=30, \n batch_size=50, \n verbose=1, \n validation_data = (X_ints_test + [X_test_ar], y_test))",
"Train on 62604 samples, validate on 15652 samples\nEpoch 1/30\n62604/62604 [==============================] - 5s 73us/step - loss: 0.1528 - acc: 0.7713 - f1_m: 0.7337 - val_loss: 0.1464 - val_acc: 0.7824 - val_f1_m: 0.7482\nEpoch 2/30\n62604/62604 [==============================] - 5s 83us/step - loss: 0.1454 - acc: 0.7861 - f1_m: 0.7491 - val_loss: 0.1466 - val_acc: 0.7857 - val_f1_m: 0.7313\nEpoch 3/30\n62604/62604 [==============================] - 5s 75us/step - loss: 0.1430 - acc: 0.7906 - f1_m: 0.7531 - val_loss: 0.1426 - val_acc: 0.7912 - val_f1_m: 0.7512\nEpoch 4/30\n62604/62604 [==============================] - 4s 58us/step - loss: 0.1410 - acc: 0.7947 - f1_m: 0.7555 - val_loss: 0.1420 - val_acc: 0.7916 - val_f1_m: 0.7489\nEpoch 5/30\n62604/62604 [==============================] - 4s 63us/step - loss: 0.1395 - acc: 0.7972 - f1_m: 0.7587 - val_loss: 0.1414 - val_acc: 0.7907 - val_f1_m: 0.7499\nEpoch 6/30\n62604/62604 [==============================] - 4s 61us/step - loss: 0.1380 - acc: 0.7989 - f1_m: 0.7599 - val_loss: 0.1404 - val_acc: 0.7938 - val_f1_m: 0.7507\nEpoch 7/30\n62604/62604 [==============================] - 4s 65us/step - loss: 0.1368 - acc: 0.8017 - f1_m: 0.7632 - val_loss: 0.1401 - val_acc: 0.7935 - val_f1_m: 0.7501\nEpoch 8/30\n62604/62604 [==============================] - 4s 58us/step - loss: 0.1356 - acc: 0.8039 - f1_m: 0.7649 - val_loss: 0.1412 - val_acc: 0.7919 - val_f1_m: 0.7566\nEpoch 9/30\n62604/62604 [==============================] - 4s 58us/step - loss: 0.1346 - acc: 0.8059 - f1_m: 0.7670 - val_loss: 0.1404 - val_acc: 0.7932 - val_f1_m: 0.7543\nEpoch 10/30\n62604/62604 [==============================] - 4s 60us/step - loss: 0.1336 - acc: 0.8076 - f1_m: 0.7690 - val_loss: 0.1421 - val_acc: 0.7881 - val_f1_m: 0.7580\nEpoch 11/30\n62604/62604 [==============================] - 4s 59us/step - loss: 0.1328 - acc: 0.8089 - f1_m: 0.7701 - val_loss: 0.1406 - val_acc: 0.7913 - val_f1_m: 0.7598\nEpoch 12/30\n62604/62604 [==============================] - 4s 59us/step - loss: 0.1319 - acc: 0.8105 - f1_m: 0.7722 - val_loss: 0.1419 - val_acc: 0.7898 - val_f1_m: 0.7407\nEpoch 13/30\n62604/62604 [==============================] - 4s 59us/step - loss: 0.1311 - acc: 0.8124 - f1_m: 0.7749 - val_loss: 0.1399 - val_acc: 0.7950 - val_f1_m: 0.7504\nEpoch 14/30\n62604/62604 [==============================] - 4s 59us/step - loss: 0.1302 - acc: 0.8136 - f1_m: 0.7761 - val_loss: 0.1419 - val_acc: 0.7896 - val_f1_m: 0.7562\nEpoch 15/30\n62604/62604 [==============================] - 4s 61us/step - loss: 0.1295 - acc: 0.8155 - f1_m: 0.7782 - val_loss: 0.1402 - val_acc: 0.7938 - val_f1_m: 0.7483\nEpoch 16/30\n62604/62604 [==============================] - 4s 59us/step - loss: 0.1286 - acc: 0.8169 - f1_m: 0.7793 - val_loss: 0.1402 - val_acc: 0.7927 - val_f1_m: 0.7505\nEpoch 17/30\n62604/62604 [==============================] - 4s 68us/step - loss: 0.1279 - acc: 0.8183 - f1_m: 0.7802 - val_loss: 0.1413 - val_acc: 0.7923 - val_f1_m: 0.7420\nEpoch 18/30\n62604/62604 [==============================] - 4s 66us/step - loss: 0.1271 - acc: 0.8201 - f1_m: 0.7827 - val_loss: 0.1416 - val_acc: 0.7929 - val_f1_m: 0.7458\nEpoch 19/30\n62604/62604 [==============================] - 4s 65us/step - loss: 0.1264 - acc: 0.8215 - f1_m: 0.7853 - val_loss: 0.1429 - val_acc: 0.7889 - val_f1_m: 0.7352\nEpoch 20/30\n62604/62604 [==============================] - 4s 59us/step - loss: 0.1256 - acc: 0.8238 - f1_m: 0.7873 - val_loss: 0.1415 - val_acc: 0.7911 - val_f1_m: 0.7475\nEpoch 21/30\n62604/62604 
[==============================] - 4s 60us/step - loss: 0.1248 - acc: 0.8252 - f1_m: 0.7886 - val_loss: 0.1420 - val_acc: 0.7915 - val_f1_m: 0.7441\nEpoch 22/30\n62604/62604 [==============================] - 4s 60us/step - loss: 0.1240 - acc: 0.8259 - f1_m: 0.7903 - val_loss: 0.1428 - val_acc: 0.7897 - val_f1_m: 0.7369\nEpoch 23/30\n62604/62604 [==============================] - 4s 60us/step - loss: 0.1230 - acc: 0.8288 - f1_m: 0.7933 - val_loss: 0.1425 - val_acc: 0.7902 - val_f1_m: 0.7432\nEpoch 24/30\n62604/62604 [==============================] - 4s 61us/step - loss: 0.1224 - acc: 0.8293 - f1_m: 0.7939 - val_loss: 0.1428 - val_acc: 0.7899 - val_f1_m: 0.7445\nEpoch 25/30\n62604/62604 [==============================] - 4s 60us/step - loss: 0.1215 - acc: 0.8315 - f1_m: 0.7969 - val_loss: 0.1433 - val_acc: 0.7903 - val_f1_m: 0.7384\nEpoch 26/30\n62604/62604 [==============================] - 4s 61us/step - loss: 0.1207 - acc: 0.8328 - f1_m: 0.7974 - val_loss: 0.1445 - val_acc: 0.7899 - val_f1_m: 0.7574\nEpoch 27/30\n62604/62604 [==============================] - 4s 63us/step - loss: 0.1199 - acc: 0.8347 - f1_m: 0.7996 - val_loss: 0.1443 - val_acc: 0.7887 - val_f1_m: 0.7481\nEpoch 28/30\n62604/62604 [==============================] - 4s 60us/step - loss: 0.1190 - acc: 0.8363 - f1_m: 0.8015 - val_loss: 0.1488 - val_acc: 0.7816 - val_f1_m: 0.7551\nEpoch 29/30\n62604/62604 [==============================] - 5s 81us/step - loss: 0.1182 - acc: 0.8376 - f1_m: 0.8033 - val_loss: 0.1468 - val_acc: 0.7867 - val_f1_m: 0.7313\nEpoch 30/30\n62604/62604 [==============================] - 4s 71us/step - loss: 0.1173 - acc: 0.8397 - f1_m: 0.8057 - val_loss: 0.1450 - val_acc: 0.7892 - val_f1_m: 0.7453\nWall time: 2min\n"
],
[
"from matplotlib import pyplot as plt\n\n%matplotlib inline\n\nplt.figure(figsize=(15,11))\nplt.subplot(2,2,1)\nplt.ylabel('MSE Training acc and val_acc')\nplt.xlabel('epochs Model A')\nplt.plot(historyA.history['f1_m'])\n\nplt.plot(historyA.history['val_f1_m'])\n\n\nplt.subplot(2,2,3)\nplt.plot(historyA.history['loss'])\nplt.ylabel('MSE Training Loss and val_loss')\nplt.plot(historyA.history['val_loss'])\nplt.xlabel('epochs Model A')\n\n\nplt.subplot(2,2,2)\nplt.ylabel('MSE Training acc and val_acc')\nplt.xlabel('epochs Model B')\nplt.plot(historyB.history['f1_m'])\nplt.plot(historyB.history['val_f1_m'])\n\n\nplt.subplot(2,2,4)\nplt.plot(historyB.history['loss'])\nplt.ylabel('MSE Training Loss')\nplt.plot(historyB.history['val_loss'])\nplt.xlabel('epochs Model B')",
"_____no_output_____"
]
],
[
[
"model A and model B has diffrent number of layers :\n\n1- Model A has 5 layers \n\n2- model B has 7 layers, \n\n\nmodel B seems to have a better accuracy that reached as high as 81 % but on the other hand the valdation loss got higher and higher and resulted in a loss valdation accuracy and the reasone behind that is an over fit accured because of the high number of layers ,\n\nso i would say Modle B provided a better F1 accurace with a minmum valdation loss for itration bellow 15 epcos thus model B is better than model A only if we reduced the number of epocs to 15 which in turns increase the time effecance \n\n",
"_____no_output_____"
],
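[
"# Hedged sketch, not part of the original notebook: the comparison above suggests stopping\n# Model B at roughly 15 epochs. Keras provides an EarlyStopping callback that watches the\n# validation loss and stops training automatically; patience=3 and restore_best_weights=True\n# are assumed settings, not values tuned on this data.\nfrom keras.callbacks import EarlyStopping\n\nearly_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)\n\n# Example usage (same arguments as the Model B fit call above):\n# modelB.fit(X_ints_train + [X_train_ar], y_train,\n#            epochs=30, batch_size=50, verbose=1,\n#            validation_data=(X_ints_test + [X_test_ar], y_test),\n#            callbacks=[early_stop])",
"_____no_output_____"
],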
[
"### MLP",
"_____no_output_____"
]
],
[
[
"inputs = Input(shape=(X_train_ar.shape[1],))\nx = Dense(units=100, activation='relu')(inputs)\nx = Dense(units=50, activation='relu')(x)\nx = Dense(units=25, activation='relu')(x)\nx = Dense(units=15, activation='relu')(x)\nx = Dense(units=10, activation='relu')(x)\npredictions = Dense(1,activation='sigmoid')(x)\nmodelMLP1 = Model(inputs=inputs, outputs=predictions)\n",
"_____no_output_____"
],
[
"modelMLP1.compile(optimizer='sgd',\n loss='mean_squared_error',\n metrics=['acc', f1_m])\n\nmodelMLP1.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) (None, 14) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 100) 1500 \n_________________________________________________________________\ndense_2 (Dense) (None, 50) 5050 \n_________________________________________________________________\ndense_3 (Dense) (None, 25) 1275 \n_________________________________________________________________\ndense_4 (Dense) (None, 15) 390 \n_________________________________________________________________\ndense_5 (Dense) (None, 10) 160 \n_________________________________________________________________\ndense_6 (Dense) (None, 1) 11 \n=================================================================\nTotal params: 8,386\nTrainable params: 8,386\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"historyMLP1 = modelMLP1.fit(X_train_ar,\n y_train, \n epochs=15, \n batch_size=50, \n verbose=1, \n validation_data = (X_test_ar, y_test))",
"Train on 62604 samples, validate on 15652 samples\nEpoch 1/15\n62604/62604 [==============================] - 15s 233us/step - loss: 0.2368 - acc: 0.5729 - f1_m: 0.0299 - val_loss: 0.2264 - val_acc: 0.5786 - val_f1_m: 0.0052\nEpoch 2/15\n62604/62604 [==============================] - 7s 104us/step - loss: 0.2131 - acc: 0.6643 - f1_m: 0.3930 - val_loss: 0.1960 - val_acc: 0.7285 - val_f1_m: 0.6287\nEpoch 3/15\n62604/62604 [==============================] - 6s 98us/step - loss: 0.1773 - acc: 0.7516 - f1_m: 0.6897 - val_loss: 0.1629 - val_acc: 0.7642 - val_f1_m: 0.7214\nEpoch 4/15\n62604/62604 [==============================] - 7s 107us/step - loss: 0.1577 - acc: 0.7700 - f1_m: 0.7357 - val_loss: 0.1546 - val_acc: 0.7696 - val_f1_m: 0.7336\nEpoch 5/15\n62604/62604 [==============================] - 7s 112us/step - loss: 0.1526 - acc: 0.7751 - f1_m: 0.7429 - val_loss: 0.1518 - val_acc: 0.7727 - val_f1_m: 0.7447\nEpoch 6/15\n62604/62604 [==============================] - 8s 130us/step - loss: 0.1501 - acc: 0.7789 - f1_m: 0.7468 - val_loss: 0.1503 - val_acc: 0.7744 - val_f1_m: 0.7475\nEpoch 7/15\n62604/62604 [==============================] - 11s 183us/step - loss: 0.1483 - acc: 0.7825 - f1_m: 0.7491 - val_loss: 0.1485 - val_acc: 0.7782 - val_f1_m: 0.7446\nEpoch 8/15\n62604/62604 [==============================] - 9s 142us/step - loss: 0.1470 - acc: 0.7850 - f1_m: 0.7495 - val_loss: 0.1476 - val_acc: 0.7779 - val_f1_m: 0.7417\nEpoch 9/15\n62604/62604 [==============================] - 9s 144us/step - loss: 0.1461 - acc: 0.7866 - f1_m: 0.7513 - val_loss: 0.1469 - val_acc: 0.7808 - val_f1_m: 0.7441\nEpoch 10/15\n62604/62604 [==============================] - 11s 169us/step - loss: 0.1453 - acc: 0.7880 - f1_m: 0.7514 - val_loss: 0.1468 - val_acc: 0.7806 - val_f1_m: 0.7466\nEpoch 11/15\n62604/62604 [==============================] - 9s 147us/step - loss: 0.1446 - acc: 0.7901 - f1_m: 0.7540 - val_loss: 0.1463 - val_acc: 0.7818 - val_f1_m: 0.7355\nEpoch 12/15\n62604/62604 [==============================] - 10s 161us/step - loss: 0.1441 - acc: 0.7911 - f1_m: 0.7544 - val_loss: 0.1458 - val_acc: 0.7830 - val_f1_m: 0.7472\nEpoch 13/15\n62604/62604 [==============================] - 9s 144us/step - loss: 0.1436 - acc: 0.7922 - f1_m: 0.7545 - val_loss: 0.1467 - val_acc: 0.7816 - val_f1_m: 0.7531\nEpoch 14/15\n62604/62604 [==============================] - 10s 160us/step - loss: 0.1431 - acc: 0.7915 - f1_m: 0.7535 - val_loss: 0.1451 - val_acc: 0.7846 - val_f1_m: 0.7495\nEpoch 15/15\n62604/62604 [==============================] - 10s 155us/step - loss: 0.1427 - acc: 0.7927 - f1_m: 0.7554 - val_loss: 0.1446 - val_acc: 0.7849 - val_f1_m: 0.7451\n"
],
[
"%%time\n\nmodelB.compile(optimizer='adagrad',\n loss='mean_squared_error',\n metrics=['acc', f1_m])\n\n# lets also add the history variable to see how we are doing\n# and lets add a validation set to keep track of our progress\nhistoryB = modelB.fit(X_ints_train+ [X_train_ar],\n y_train, \n epochs=15, \n batch_size=50, \n verbose=1, \n validation_data = (X_ints_test + [X_test_ar], y_test))",
"Train on 62604 samples, validate on 15652 samples\nEpoch 1/15\n62604/62604 [==============================] - 6s 89us/step - loss: 0.1525 - acc: 0.7716 - f1_m: 0.7288 - val_loss: 0.1476 - val_acc: 0.7801 - val_f1_m: 0.7392\nEpoch 2/15\n62604/62604 [==============================] - 4s 62us/step - loss: 0.1452 - acc: 0.7836 - f1_m: 0.7414 - val_loss: 0.1450 - val_acc: 0.7845 - val_f1_m: 0.7469\nEpoch 3/15\n62604/62604 [==============================] - 4s 67us/step - loss: 0.1425 - acc: 0.7885 - f1_m: 0.7486 - val_loss: 0.1438 - val_acc: 0.7855 - val_f1_m: 0.7522\nEpoch 4/15\n62604/62604 [==============================] - 4s 66us/step - loss: 0.1406 - acc: 0.7928 - f1_m: 0.7552 - val_loss: 0.1438 - val_acc: 0.7858 - val_f1_m: 0.7416\nEpoch 5/15\n62604/62604 [==============================] - 4s 68us/step - loss: 0.1391 - acc: 0.7951 - f1_m: 0.7571 - val_loss: 0.1435 - val_acc: 0.7844 - val_f1_m: 0.7576\nEpoch 6/15\n62604/62604 [==============================] - 4s 69us/step - loss: 0.1376 - acc: 0.7975 - f1_m: 0.7616 - val_loss: 0.1421 - val_acc: 0.7888 - val_f1_m: 0.7565\nEpoch 7/15\n62604/62604 [==============================] - 4s 72us/step - loss: 0.1363 - acc: 0.7993 - f1_m: 0.7622 - val_loss: 0.1420 - val_acc: 0.7926 - val_f1_m: 0.7513\nEpoch 8/15\n62604/62604 [==============================] - 4s 70us/step - loss: 0.1352 - acc: 0.8012 - f1_m: 0.7649 - val_loss: 0.1410 - val_acc: 0.7914 - val_f1_m: 0.7561\nEpoch 9/15\n62604/62604 [==============================] - 4s 71us/step - loss: 0.1340 - acc: 0.8030 - f1_m: 0.7665 - val_loss: 0.1417 - val_acc: 0.7926 - val_f1_m: 0.7583\nEpoch 10/15\n62604/62604 [==============================] - 5s 75us/step - loss: 0.1330 - acc: 0.8047 - f1_m: 0.7677 - val_loss: 0.1412 - val_acc: 0.7916 - val_f1_m: 0.7484\nEpoch 11/15\n62604/62604 [==============================] - 4s 71us/step - loss: 0.1320 - acc: 0.8071 - f1_m: 0.7706 - val_loss: 0.1408 - val_acc: 0.7931 - val_f1_m: 0.7610\nEpoch 12/15\n62604/62604 [==============================] - 4s 71us/step - loss: 0.1310 - acc: 0.8084 - f1_m: 0.7716 - val_loss: 0.1401 - val_acc: 0.7931 - val_f1_m: 0.7564\nEpoch 13/15\n62604/62604 [==============================] - 5s 76us/step - loss: 0.1301 - acc: 0.8097 - f1_m: 0.7737 - val_loss: 0.1415 - val_acc: 0.7901 - val_f1_m: 0.7431\nEpoch 14/15\n62604/62604 [==============================] - 5s 78us/step - loss: 0.1291 - acc: 0.8119 - f1_m: 0.7752 - val_loss: 0.1404 - val_acc: 0.7934 - val_f1_m: 0.7600\nEpoch 15/15\n62604/62604 [==============================] - 5s 76us/step - loss: 0.1281 - acc: 0.8130 - f1_m: 0.7766 - val_loss: 0.1407 - val_acc: 0.7922 - val_f1_m: 0.7520\nWall time: 1min 8s\n"
],
[
"from sklearn import metrics as mt\nyhat_proba = modelMLP1.predict(X_test_ar)\nyhatMLP = np.round(yhat_proba)\nprint(mt.confusion_matrix(y_test,yhatMLP),mt.accuracy_score(y_test,yhatMLP))\n\n\nfrom sklearn import metrics as mt\nyhat_proba1 = modelB.predict(X_ints_test+ [X_test_ar])\nyhatB = np.round(yhat_proba1)\nprint(mt.confusion_matrix(y_test,yhatB),mt.accuracy_score(y_test,yhatB))",
"[[7240 1810]\n [1557 5045]] 0.7848837209302325\n[[7442 1608]\n [1691 4911]] 0.7892282136468183\n"
],
[
"plt.figure(figsize=(15,11))\nplt.subplot(2,2,1)\nplt.ylabel('MSE Training acc and val_acc')\nplt.xlabel('epochs Model B')\nplt.plot(historyB.history['f1_m'])\n\nplt.plot(historyB.history['val_f1_m'])\n\n\nplt.subplot(2,2,3)\nplt.plot(historyB.history['loss'])\nplt.ylabel('MSE Training Loss and val_loss')\nplt.plot(historyB.history['val_loss'])\nplt.xlabel('epochs Model B')\n\n\nplt.subplot(2,2,2)\nplt.ylabel('MSE Training acc and val_acc')\nplt.xlabel('epochs MLP')\nplt.plot(historyMLP1.history['f1_m'])\n\nplt.plot(historyMLP1.history['val_f1_m'])\n\n\nplt.subplot(2,2,4)\nplt.plot(historyMLP1.history['loss'])\nplt.ylabel('MSE Training Loss and val_loss')\nplt.plot(historyMLP1.history['val_loss'])\nplt.xlabel('epochs MLP')",
"_____no_output_____"
],
[
"from sklearn import metrics\nfpr, tpr, thresholds = metrics.roc_curve(y_test, yhatMLP)\nAUCMLP=metrics.roc_auc_score(y_test, yhatMLP)\nAUCB=metrics.roc_auc_score(y_test, yhatB)\nplt.figure(figsize=(15,5))\nplt.subplot(1,2,1)\nplt.plot(fpr, tpr,label= ['area under the curve =',AUCMLP])\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.0])\nplt.rcParams['font.size'] = 12\nplt.title('ROC curve for MLP classifier')\nplt.xlabel('False Positive Rate (1 - Specificity)')\nplt.legend()\nplt.ylabel('True Positive Rate (Sensitivity)')\nplt.grid(True)\nfrom sklearn import metrics\nfpr, tpr, thresholds = metrics.roc_curve(y_test, yhatB)\nplt.subplot(1,2,2)\nplt.plot(fpr, tpr,label= ['area under the curve =',AUCB])\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.0])\nplt.rcParams['font.size'] = 12\nplt.title('ROC curve for Deep and wide classifier')\nplt.xlabel('False Positive Rate (1 - Specificity)')\nplt.ylabel('True Positive Rate (Sensitivity)')\nplt.legend()\nplt.grid(True)",
"_____no_output_____"
]
],
[
[
"the plots above compares MLP with our best Deep and wide model we found the fellowing observation :\n\n1- f1_ score for our Deep model is higher than the MLP\n\n2- the f1 valdation accuracy for deep is slightly better on avrage than the MLP\n\n3- the big difrance between both our deep model and the MLP, that there is big gab between the valdation accurace and the accuracy , in my opinion since deep network is better at genrlizing and wide network is better at memoriztion so : \n A- the deep and wide model did so good with the test data so its vary genral to it\n B- on the other hand the deep network had same results for both the training nd the test data which means that it better predicted our data set using only memoriztion \n \n i would argue that for this data set an Modle B did slightly better that the MLP as seen from the area under the curve value on the plots",
"_____no_output_____"
]
],
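[
[
"# Hedged sketch, not part of the original notebook: the ROC curves above are computed from\n# np.round(...) hard labels. sklearn's roc_auc_score also accepts the raw predicted\n# probabilities (yhat_proba and yhat_proba1 computed above), which usually gives a more\n# informative AUC for comparing the two classifiers.\nfrom sklearn import metrics as mt\n\nauc_mlp_proba = mt.roc_auc_score(y_test, yhat_proba)\nauc_b_proba = mt.roc_auc_score(y_test, yhat_proba1)\nprint('AUC from probabilities - MLP:', auc_mlp_proba)\nprint('AUC from probabilities - deep and wide:', auc_b_proba)",
"_____no_output_____"
]
],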
[
[
"from IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot\n\n# you will need to install pydot properly on your machine to get this running\nSVG(model_to_dot(modelB).create(prog='dot', format='svg'))",
"_____no_output_____"
],
[
"modelB.layers[0].get_weights()",
"_____no_output_____"
],
[
"for layer in modelB.layers: \n print(layer.get_config(), layer.get_weights())",
"{'batch_input_shape': (None, 1), 'dtype': 'int32', 'sparse': False, 'name': 'animal_type_int'} []\n{'batch_input_shape': (None, 1), 'dtype': 'int32', 'sparse': False, 'name': 'breed_int'} []\n{'batch_input_shape': (None, 1), 'dtype': 'int32', 'sparse': False, 'name': 'color_int'} []\n{'batch_input_shape': (None, 1), 'dtype': 'int32', 'sparse': False, 'name': 'spayed_neutered_int'} []\n{'batch_input_shape': (None, 1), 'dtype': 'int32', 'sparse': False, 'name': 'sex_int'} []\n{'name': 'animal_type_int_embed', 'trainable': True, 'batch_input_shape': (None, 1), 'dtype': 'float32', 'input_dim': 5, 'output_dim': 2, 'embeddings_initializer': {'class_name': 'RandomUniform', 'config': {'minval': -0.05, 'maxval': 0.05, 'seed': None}}, 'embeddings_regularizer': None, 'activity_regularizer': None, 'embeddings_constraint': None, 'mask_zero': False, 'input_length': 1} [array([[ 0.03084215, 0.07599948],\n [ 0.14617625, -0.03274135],\n [-0.11578513, -0.10719546],\n [ 0.00597006, -0.07511391],\n [ 0.12348299, 0.21717638]], dtype=float32)]\n{'name': 'breed_int_embed', 'trainable': True, 'batch_input_shape': (None, 1), 'dtype': 'float32', 'input_dim': 342, 'output_dim': 18, 'embeddings_initializer': {'class_name': 'RandomUniform', 'config': {'minval': -0.05, 'maxval': 0.05, 'seed': None}}, 'embeddings_regularizer': None, 'activity_regularizer': None, 'embeddings_constraint': None, 'mask_zero': False, 'input_length': 1} [array([[ 0.01163727, 0.05487053, 0.04674885, ..., -0.06290507,\n -0.01575211, 0.0538074 ],\n [-0.00341248, 0.01402617, 0.09990135, ..., 0.05006947,\n 0.12619399, -0.01184185],\n [ 0.13702019, -0.0205039 , -0.0466413 , ..., 0.07211873,\n 0.10621613, 0.03107849],\n ...,\n [-0.01798792, 0.00105948, 0.08460083, ..., 0.04293109,\n 0.04503689, 0.04105986],\n [-0.02671807, 0.052118 , -0.06154805, ..., 0.01333308,\n -0.06433485, -0.03117717],\n [ 0.0320892 , -0.07387385, 0.17230523, ..., -0.00626857,\n -0.07553861, 0.047876 ]], dtype=float32)]\n{'name': 'color_int_embed', 'trainable': True, 'batch_input_shape': (None, 1), 'dtype': 'float32', 'input_dim': 58, 'output_dim': 7, 'embeddings_initializer': {'class_name': 'RandomUniform', 'config': {'minval': -0.05, 'maxval': 0.05, 'seed': None}}, 'embeddings_regularizer': None, 'activity_regularizer': None, 'embeddings_constraint': None, 'mask_zero': False, 'input_length': 1} [array([[ 0.06060344, -0.0088663 , -0.0195372 , 0.08178973, 0.03152143,\n -0.07323933, -0.00832451],\n [-0.05086805, 0.19421004, -0.01909706, -0.02486734, 0.01973855,\n 0.03680499, -0.0304053 ],\n [-0.03324337, -0.05104075, -0.10808155, 0.02707266, 0.05707175,\n -0.00804392, 0.02374756],\n [ 0.02206821, 0.02876697, 0.02270751, -0.08140329, 0.01581065,\n -0.11456341, -0.07547621],\n [ 0.12350775, -0.0105349 , -0.11509724, -0.10315765, -0.04121017,\n -0.15387692, -0.11052182],\n [-0.12250163, -0.16959685, -0.05479298, -0.05152716, -0.0999243 ,\n 0.11543954, 0.06999865],\n [ 0.08907446, -0.08961114, 0.12596083, -0.11027682, 0.03059072,\n -0.09590236, -0.06193407],\n [ 0.0371504 , 0.00765064, -0.01670152, -0.04137389, 0.09094033,\n -0.05005562, 0.04736777],\n [ 0.13964668, 0.05686889, 0.00505258, -0.07924058, 0.10377341,\n -0.08001835, -0.17388047],\n [-0.06460548, 0.13706483, 0.13300814, 0.1402728 , -0.12152544,\n -0.03562415, 0.03065058],\n [-0.10718735, 0.16537702, 0.09736547, 0.12142899, 0.09645008,\n 0.175841 , 0.10031156],\n [-0.04366989, 0.05028155, -0.08123844, 0.08830626, -0.11677796,\n -0.00572657, -0.06382494],\n [ 0.13199876, 0.05746133, 0.01657481, -0.08599467, 
-0.03642538,\n 0.03411081, -0.01928883],\n [ 0.0449161 , -0.07197475, -0.00877823, -0.14239317, -0.01525195,\n 0.04715756, 0.01239085],\n [-0.01524753, -0.14905003, -0.0679155 , -0.03876454, 0.024215 ,\n 0.04261173, -0.02505232],\n [-0.08163016, 0.05738027, -0.05811404, -0.04434256, 0.03067863,\n 0.07034615, 0.04514282],\n [ 0.05178378, -0.20060918, 0.21404874, -0.08203548, -0.05450596,\n 0.01257385, 0.02151691],\n [ 0.09482145, -0.14190835, 0.04579961, -0.02044739, 0.03191136,\n -0.05599282, -0.09179935],\n [ 0.07501842, -0.01594252, 0.08071829, 0.07886873, 0.08602004,\n 0.0310751 , 0.03240583],\n [ 0.0085766 , 0.02087902, 0.12861824, 0.00362527, 0.08122192,\n 0.05388121, 0.04592258],\n [-0.06927463, 0.1445249 , 0.00280489, 0.09262036, -0.1376761 ,\n -0.0378453 , -0.00625657],\n [ 0.03212183, -0.00661294, 0.09022876, 0.02585861, -0.06079128,\n 0.01446123, -0.05431906],\n [-0.09336826, -0.03356498, 0.19880432, 0.01340129, 0.01406379,\n 0.0702197 , 0.08795124],\n [-0.03156585, -0.05858542, 0.0206171 , 0.02469392, -0.02738148,\n 0.09331005, -0.04055242],\n [-0.08779901, -0.07551677, 0.0116257 , 0.07006744, -0.10275756,\n 0.07135359, -0.00263525],\n [-0.00629829, 0.13882434, 0.08590774, 0.01402083, -0.00795571,\n 0.09432704, 0.07450465],\n [ 0.00787956, 0.13487798, -0.02245967, -0.0411248 , 0.09630624,\n 0.02637842, -0.07706872],\n [-0.0405859 , 0.03649453, -0.00126323, 0.02005717, 0.02464639,\n 0.03456759, 0.06991173],\n [ 0.12771352, 0.02048657, -0.1180312 , -0.12055424, 0.01765107,\n -0.02016109, -0.05584916],\n [-0.11908214, -0.09270992, 0.21431585, 0.03025258, 0.08839365,\n 0.11293564, 0.16138111],\n [-0.1643571 , -0.06186528, 0.12238342, 0.06436978, 0.1708079 ,\n 0.08842017, 0.12122795],\n [-0.11754657, -0.12835838, 0.12402612, 0.14833832, -0.10301262,\n 0.04338726, 0.09410538],\n [-0.03096149, -0.05872471, 0.03261814, 0.07794613, -0.00235981,\n -0.02221892, 0.08796272],\n [-0.02009054, -0.12517163, -0.01603428, 0.01011967, -0.08584817,\n 0.0224216 , 0.13456991],\n [-0.00174314, 0.03546394, 0.07965291, 0.11799306, 0.02775041,\n 0.13983871, 0.02283063],\n [-0.03614927, 0.09764021, 0.00877537, -0.03190264, -0.00415659,\n -0.00777268, -0.07491966],\n [ 0.06973174, -0.15425359, 0.00026743, -0.05164612, -0.0855748 ,\n -0.09482759, 0.04592597],\n [-0.26134938, -0.04979877, 0.12839667, 0.06384686, -0.10151124,\n 0.19817759, 0.23044352],\n [-0.07422084, 0.0474165 , 0.07884527, 0.06771977, 0.12453945,\n 0.01477885, 0.02542068],\n [-0.08960683, -0.11060932, 0.08646996, 0.12526932, -0.0161246 ,\n 0.11313467, 0.1407291 ],\n [-0.0688414 , -0.06146583, 0.01616996, 0.05493104, -0.03788362,\n 0.07949371, 0.04722675],\n [-0.05351316, 0.01581701, -0.10683493, -0.0123761 , -0.00054187,\n 0.0661741 , 0.09332174],\n [ 0.0819662 , 0.17017707, -0.07092008, 0.01699905, 0.19195867,\n 0.12852156, 0.05980189],\n [ 0.02640343, -0.05357154, 0.06931859, -0.14692594, -0.0442812 ,\n -0.06661265, -0.112087 ],\n [ 0.00536818, -0.10368208, -0.03929893, -0.09600814, -0.00997531,\n -0.02456849, 0.02282895],\n [ 0.02890953, -0.03050763, 0.07087316, 0.06625295, -0.14258914,\n -0.02619195, -0.16012914],\n [-0.01788726, 0.0472657 , -0.03561039, 0.05508963, -0.11365283,\n 0.00588385, 0.07099444],\n [-0.03867033, -0.09483144, 0.02507092, 0.09120845, -0.07748684,\n -0.04006028, 0.07106655],\n [-0.12006317, 0.14056028, 0.07393636, -0.00242366, -0.01569519,\n 0.09039673, 0.0831922 ],\n [-0.01572421, 0.0494923 , 0.05913397, 0.11353546, 0.06484634,\n 0.02176904, -0.01494277],\n [ 0.01968202, 0.05366064, -0.12566356, -0.05754266, 
-0.05843135,\n -0.0174397 , 0.09548631],\n [ 0.00244224, -0.02865642, 0.00593238, 0.0504504 , 0.05271602,\n 0.03712216, -0.01091785],\n [-0.00682016, 0.08987752, -0.09194119, 0.03333924, 0.149728 ,\n 0.12180183, 0.02477739],\n [-0.01984347, 0.04184692, 0.0160354 , -0.01439852, 0.10884934,\n -0.06490041, 0.00252378],\n [-0.0197259 , 0.01707143, 0.08521917, 0.09336528, -0.08877822,\n -0.07139982, 0.09035515],\n [-0.02090304, 0.05408398, 0.0132482 , 0.05077755, -0.0684072 ,\n 0.0003418 , -0.0185528 ],\n [-0.1015705 , -0.02408786, 0.02239074, 0.16550581, 0.16758803,\n 0.03743689, 0.04857086],\n [ 0.08664908, -0.08270772, -0.08429676, -0.06435005, -0.23502341,\n -0.14538558, -0.04858314]], dtype=float32)]\n{'name': 'spayed_neutered_int_embed', 'trainable': True, 'batch_input_shape': (None, 1), 'dtype': 'float32', 'input_dim': 4, 'output_dim': 2, 'embeddings_initializer': {'class_name': 'RandomUniform', 'config': {'minval': -0.05, 'maxval': 0.05, 'seed': None}}, 'embeddings_regularizer': None, 'activity_regularizer': None, 'embeddings_constraint': None, 'mask_zero': False, 'input_length': 1} [array([[ 0.22568303, -0.24030262],\n [-0.0499761 , 0.15075395],\n [ 0.04715545, -0.04498946],\n [ 0.24141689, -0.18722083]], dtype=float32)]\n{'name': 'sex_int_embed', 'trainable': True, 'batch_input_shape': (None, 1), 'dtype': 'float32', 'input_dim': 3, 'output_dim': 1, 'embeddings_initializer': {'class_name': 'RandomUniform', 'config': {'minval': -0.05, 'maxval': 0.05, 'seed': None}}, 'embeddings_regularizer': None, 'activity_regularizer': None, 'embeddings_constraint': None, 'mask_zero': False, 'input_length': 1} [array([[-0.05266905],\n [ 0.11210247],\n [-0.26020366]], dtype=float32)]\n{'batch_input_shape': (None, 14), 'dtype': 'float32', 'sparse': False, 'name': 'numeric_data'} []\n{'name': 'flatten_4', 'trainable': True, 'data_format': 'channels_last'} []\n{'name': 'flatten_5', 'trainable': True, 'data_format': 'channels_last'} []\n{'name': 'flatten_6', 'trainable': True, 'data_format': 'channels_last'} []\n{'name': 'flatten_7', 'trainable': True, 'data_format': 'channels_last'} []\n{'name': 'flatten_8', 'trainable': True, 'data_format': 'channels_last'} []\n"
],
[
"np.unique(cats.color)",
"_____no_output_____"
],
[
"np.unique(cats.animal_type)",
"_____no_output_____"
],
[
"modelB.get_weights()[0].T",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport numpy as np\nfig = plt.figure()\nax = fig.add_subplot(1, 1, 1)\nax.scatter(modelB.get_weights()[0].T[0],modelB.get_weights()[0].T[1])\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec91036519210cc7d2c896841b702403b5445ae5 | 40,949 | ipynb | Jupyter Notebook | Untitled.ipynb | timothydmorton/gaia-explore | 873a1fe7af78504c9bd661a428a19bc92f6e6f77 | [
"MIT"
] | null | null | null | Untitled.ipynb | timothydmorton/gaia-explore | 873a1fe7af78504c9bd661a428a19bc92f6e6f77 | [
"MIT"
] | null | null | null | Untitled.ipynb | timothydmorton/gaia-explore | 873a1fe7af78504c9bd661a428a19bc92f6e6f77 | [
"MIT"
] | null | null | null | 30.581777 | 455 | 0.441256 | [
[
[
"from __future__ import print_function, division\nimport os\nGAIADIR = os.path.expanduser('~/gaia')",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndf = pd.read_csv(os.path.join(GAIADIR, 'TgasSource_000-000-001.csv.gz'))\nfor i in range(2,16):\n filename = os.path.join(GAIADIR, 'TgasSource_000-000-0{:02.0f}.csv.gz'.format(i))\n #print(filename)\n df = df.append(pd.read_csv(filename))\ndf.to_hdf(os.path.join(GAIADIR, 'TgasSource.h5'), 'df')",
"_____no_output_____"
],
[
"hdu = fits.open('data/tycho2.fits')\n\ndf_tycho = pd.DataFrame(hdu[1].data)\ntycho2_id = ['{}-{}-{}'.format(t1,t2,t3) for t1, t2, t3 in zip(df_tycho.TYC1, df_tycho.TYC2, df_tycho.TYC3)]\ndf_tycho.index = tycho2_id\ndf_tycho.to_hdf('data/tycho2.h5', 'df')",
"_____no_output_____"
]
],
[
[
"df_gaia = pd.read_hdf(os.path.join(GAIADIR, 'TgasSource.h5'), 'df')\ndf_tycho = pd.read_hdf('data/tycho2.h5')",
"_____no_output_____"
],
[
"df_k2 = pd.read_csv('data/k2candidates.csv', comment='#')\n\nok = df_k2.ra.notnull()\n\ndf_k2 = df_k2[ok]\nc = SkyCoord(df_k2.ra, df_k2.dec, unit='deg')",
"_____no_output_____"
]
],
[
[
"#This takes awhile.\ncatalog = SkyCoord(df_gaia.ra, df_gaia.dec, unit='deg')\n\nimport cPickle as pickle\npickle.dump(catalog, open('gaia_coords.pkl', 'wb'))",
"_____no_output_____"
]
],
[
[
"catalog = pickle.load(open('gaia_coords.pkl', 'rb'))",
"_____no_output_____"
],
[
"idx, d2d, d3d = c.match_to_catalog_sky(catalog)",
"_____no_output_____"
],
[
"from astropy import units as u\nclose = d2d < 10*u.arcsec\nidx[close]\nnp.where(close)",
"_____no_output_____"
],
[
"df_k2.iloc[655][['epic_name','ra','dec']]",
"_____no_output_____"
],
[
"get_gaia('EPIC 212782836')[['ra','dec']]",
"/Users/tdm/anaconda/lib/python2.7/site-packages/ipykernel/__main__.py:6: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n/Users/tdm/anaconda/lib/python2.7/site-packages/ipykernel/__main__.py:7: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n/Users/tdm/anaconda/lib/python2.7/site-packages/ipykernel/__main__.py:8: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n"
],
[
"epic_ids = [df_k2.epic_name]\n",
"_____no_output_____"
],
[
"epic_ids = df_k2.iloc[close].epic_name.unique()",
"_____no_output_____"
],
[
"len(epic_ids)",
"_____no_output_____"
],
[
"# Connect EPIC ID with GAIA record, with epic info\nepic_cols = ['epic_name','ra','dec'] + [c for c in df_k2.columns if c.startswith('st_')]\ndef get_ix(epic_id):\n i_k2 = np.where(df_k2.epic_name==epic_id)[0][0]\n i_gaia = idx[i_k2]\n return i_k2, i_gaia ",
"_____no_output_____"
],
[
"inds = [get_ix(e) for e in epic_ids]",
"_____no_output_____"
],
[
"k2_inds, gaia_inds = np.array(inds)[:,0], np.array(inds)[:,1]",
"_____no_output_____"
],
[
"k2_match = df_k2.iloc[k2_inds].rename(columns={'ra':'epic_ra', 'dec':'epic_dec'})\ngaia_match = df_gaia.iloc[gaia_inds].copy()",
"_____no_output_____"
],
[
"k2_match.index = gaia_match.index",
"_____no_output_____"
],
[
"df_match = pd.concat([k2_match, gaia_match], axis=1)",
"_____no_output_____"
],
[
"c_gaia = SkyCoord(df_match.ra, df_match.dec, unit='deg')\nc_epic = SkyCoord(df_match.epic_ra, df_match.epic_dec, unit='deg')",
"_____no_output_____"
],
[
"df_match.loc[:,'separation'] = c_gaia.separation(c_epic).arcsec",
"_____no_output_____"
],
[
"for c in df_match.columns:\n print(c)",
"rowid\nepic_name\ntm_name\nepic_candname\npl_name\nk2c_refdisp\nk2c_reflink\nk2c_disp\nk2c_note\nk2_campaign\nk2c_recentflag\nra_str\nepic_ra\ndec_str\nepic_dec\npl_orbper\npl_orbpererr1\npl_orbpererr2\npl_orbperlim\npl_tranmid\npl_tranmiderr1\npl_tranmiderr2\npl_tranmidlim\npl_trandep\npl_trandeperr1\npl_trandeperr2\npl_trandeplim\npl_trandur\npl_trandurerr1\npl_trandurerr2\npl_trandurlim\npl_imppar\npl_impparerr1\npl_impparerr2\npl_impparlim\npl_orbincl\npl_orbinclerr1\npl_orbinclerr2\npl_orbincllim\npl_ratdor\npl_ratdorerr1\npl_ratdorerr2\npl_ratdorlim\npl_ratror\npl_ratrorerr1\npl_ratrorerr2\npl_ratrorlim\npl_rade\npl_radeerr1\npl_radeerr2\npl_radelim\npl_radj\npl_radjerr1\npl_radjerr2\npl_radjlim\npl_eqt\npl_eqterr1\npl_eqterr2\npl_eqtlim\npl_fppprob\npl_fppproblim\nst_plx\nst_plxerr1\nst_plxerr2\nst_plxlim\nst_dist\nst_disterr1\nst_disterr2\nst_distlim\nst_teff\nst_tefferr1\nst_tefferr2\nst_tefflim\nst_logg\nst_loggerr1\nst_loggerr2\nst_logglim\nst_metfe\nst_metfeerr1\nst_metfeerr2\nst_metfelim\nst_metratio\nst_rad\nst_raderr1\nst_raderr2\nst_radlim\nst_vsini\nst_vsinierr1\nst_vsinierr2\nst_vsinilim\nst_kep\nst_keperr\nst_keplim\nst_bj\nst_bjerr\nst_bjlim\nst_vj\nst_vjerr\nst_vjlim\nst_us\nst_userr\nst_uslim\nst_gs\nst_gserr\nst_gslim\nst_rs\nst_rserr\nst_rslim\nst_is\nst_iserr\nst_islim\nst_zs\nst_zserr\nst_zslim\nst_j2\nst_j2err\nst_j2lim\nst_h2\nst_h2err\nst_h2lim\nst_k2\nst_k2err\nst_k2lim\nst_wise1\nst_wise1err\nst_wise1lim\nst_wise2\nst_wise2err\nst_wise2lim\nst_wise3\nst_wise3err\nst_wise3lim\nst_wise4\nst_wise4err\nst_wise4lim\nst_bmvj\nst_bmvjerr\nst_bmvjlim\nst_jmh2\nst_jmh2err\nst_jmh2lim\nst_hmk2\nst_hmk2err\nst_hmk2lim\nst_jmk2\nst_jmk2err\nst_jmk2lim\nhip\ntycho2_id\nsolution_id\nsource_id\nrandom_index\nref_epoch\nra\nra_error\ndec\ndec_error\nparallax\nparallax_error\npmra\npmra_error\npmdec\npmdec_error\nra_dec_corr\nra_parallax_corr\nra_pmra_corr\nra_pmdec_corr\ndec_parallax_corr\ndec_pmra_corr\ndec_pmdec_corr\nparallax_pmra_corr\nparallax_pmdec_corr\npmra_pmdec_corr\nastrometric_n_obs_al\nastrometric_n_obs_ac\nastrometric_n_good_obs_al\nastrometric_n_good_obs_ac\nastrometric_n_bad_obs_al\nastrometric_n_bad_obs_ac\nastrometric_delta_q\nastrometric_excess_noise\nastrometric_excess_noise_sig\nastrometric_primary_flag\nastrometric_relegation_factor\nastrometric_weight_al\nastrometric_weight_ac\nastrometric_priors_used\nmatched_observations\nduplicated_source\nscan_direction_strength_k1\nscan_direction_strength_k2\nscan_direction_strength_k3\nscan_direction_strength_k4\nscan_direction_mean_k1\nscan_direction_mean_k2\nscan_direction_mean_k3\nscan_direction_mean_k4\nphot_g_n_obs\nphot_g_mean_flux\nphot_g_mean_flux_error\nphot_g_mean_mag\nphot_variable_flag\nl\nb\necl_lon\necl_lat\nseparation\n"
],
[
"df_match[['epic_name', 'parallax', 'parallax_error', 'separation', 'pmra', 'pmdec']]",
"_____no_output_____"
],
[
"df_match.to_hdf('data/merged_matches.h5', 'df')",
"/Users/tdm/anaconda/lib/python2.7/site-packages/pandas/core/generic.py:1101: PerformanceWarning: \nyour performance may suffer as PyTables will pickle object types that it cannot\nmap directly to c-types [inferred_type->mixed,key->block3_values] [items->['epic_name', 'tm_name', 'epic_candname', 'pl_name', 'k2c_refdisp', 'k2c_reflink', 'k2c_disp', 'k2c_note', 'ra_str', 'dec_str', 'st_metratio', 'tycho2_id', 'phot_variable_flag']]\n\n return pytables.to_hdf(path_or_buf, key, self, **kwargs)\n"
],
[
"def get_catalog_photometry(epic_name, min_unc=0.03):\n directory = os.path.join('data',epic_name)\n mags = {}\n tm = pd.read_csv(os.path.join(directory, 'II_246_out.csv'))\n tm.sort_values(by='Jmag', inplace=True)\n for b in ['J', 'H', 'K']:\n unc = max(min_unc, tm['e_{}mag'.format(b)][0])\n mags[b] = (tm['{}mag'.format(b)][0], unc)\n \n if False:\n try:\n sdss = pd.read_csv(os.path.join(directory, 'V_139_sdss9.csv'))\n sdss.sort_values(by='rpmag', inplace=True)\n for b in ['u', 'g', 'r', 'i', 'z']:\n unc = max(min_unc, sdss['e_{}pmag'.format(b)][0])\n mags[b] = (sdss['{}pmag'.format(b)][0], unc)\n except IOError:\n pass\n \n wise = pd.read_csv(os.path.join(directory, 'II_328_allwise.csv'))\n wise.sort_values(by='W1mag', inplace=True)\n for b in ['W1', 'W2', 'W3']:\n unc = max(min_unc, wise['e_{}mag'.format(b)][0])\n mags[b] = (wise['{}mag'.format(b)][0], unc)\n \n return mags\n\nfrom configobj import ConfigObj\n\nimport shutil\n\ndef write_ini(epic_name):\n epic_id = int(epic_name[4:])\n directory = os.path.join('starmodels',str(epic_id))\n if not os.path.exists(directory):\n os.makedirs(directory)\n\n ini_file = os.path.join(directory, 'star.ini')\n if os.path.exists(ini_file):\n os.remove(ini_file)\n c = ConfigObj(ini_file)\n \n mags = get_catalog_photometry(epic_name)\n for b, m in mags.items():\n c[b] = m\n \n s = df_match[df_match.epic_name==epic_name]\n c['ra'], c['dec'] = s.ra.iloc[0], s.dec.iloc[0]\n c['parallax'] = s.parallax.iloc[0], s.parallax_error.iloc[0]\n \n c['name'] = epic_name\n c.write()",
"_____no_output_____"
],
[
"[write_ini(n) for n in df_match.epic_name];",
"_____no_output_____"
],
[
"get_catalog_photometry(df_match.epic_name.iloc[0])",
"_____no_output_____"
]
]
] | [
"code",
"raw",
"code",
"raw",
"code"
] | [
[
"code"
],
[
"raw",
"raw"
],
[
"code",
"code"
],
[
"raw"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9106842ebd9f241bf15edd2a7fc481f50a0ddc | 4,108 | ipynb | Jupyter Notebook | hw5/genome_association.ipynb | ppham27/stat570 | 859832aed3ae172abc8b6fbcd2221eb552291a00 | [
"MIT"
] | 2 | 2019-04-22T11:05:54.000Z | 2022-02-25T22:46:11.000Z | hw5/genome_association.ipynb | ppham27/stat570 | 859832aed3ae172abc8b6fbcd2221eb552291a00 | [
"MIT"
] | null | null | null | hw5/genome_association.ipynb | ppham27/stat570 | 859832aed3ae172abc8b6fbcd2221eb552291a00 | [
"MIT"
] | null | null | null | 21.066667 | 112 | 0.490993 | [
[
[
"import numpy as np\nfrom scipy import stats\n\nnp.set_printoptions(suppress=True)",
"_____no_output_____"
],
[
"def compute_prior_variance(l, u, density):\n l = np.log(l)\n u = np.log(u)\n mid = l + (u - l)/2\n delta = u - mid\n return np.square(delta/stats.norm.isf((1 - density)/2))\n \nW = compute_prior_variance(2/3, 3/2, 0.95)\nV1 = compute_prior_variance(1.16, 1.37, 0.95)\nV2 = compute_prior_variance(1.09, 1.23, 0.95)\nPI1 = 1/5000\ntheta1 = np.log(1.27)\ntheta2 = np.log(1.15)\nV1, V2, W",
"_____no_output_____"
],
[
"mean, variance = theta1*W/(V1 + W), V1*W/(V1 + W)\nprint(mean, variance)\nprint(stats.norm.interval(0.95, mean, np.sqrt(variance)))",
"0.22936061370234512 0.001728989425928838\n(0.1478631187646404, 0.3108581086400498)\n"
],
[
"K = 1/np.sqrt(1 - W/(V1+W))*np.exp(-np.square(theta1/np.sqrt(V1))*W/(V1+W)/2)\nK",
"_____no_output_____"
],
[
"PI1/(K*(1-PI1) + PI1)",
"_____no_output_____"
],
[
"mean, variance = (theta1*V2*W + theta2*V1*W)/(V1*V2 + V1*W + V2*W), V1*V2*W/(V1*V2 + V1*W + V2*W)\nprint(mean, variance)\nprint(stats.norm.interval(0.95, mean, np.sqrt(variance)))",
"0.17154013788761782 0.0006132252097299771\n(0.12300479621797233, 0.2200754795572633)\n"
],
[
"PRECISION = np.array([[1/V1, 0], [0, 1/V2]]) + 1/(V1*V2+V1*W + V2*W)*np.array([[V2+W, -W], [-W, V1+W]])\nTHETA = np.array([theta1, theta2])\nK = np.sqrt((V1*V2 + V1*W + V2*W)/(V1*V2))*np.exp(-THETA.dot(PRECISION).dot(THETA)/2)\nK",
"_____no_output_____"
],
[
"PI1/(K*(1-PI1) + PI1)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec911e947316ed57ee40916ca8668afb5db196ef | 198,653 | ipynb | Jupyter Notebook | web3.ipynb | kiview/ethereum-network-analysis | 7a11ba1eb1427ece41b88c6fbeacd9c717f6918f | [
"MIT"
] | null | null | null | web3.ipynb | kiview/ethereum-network-analysis | 7a11ba1eb1427ece41b88c6fbeacd9c717f6918f | [
"MIT"
] | 4 | 2021-06-08T22:29:44.000Z | 2022-01-13T03:23:38.000Z | web3.ipynb | kiview/ethereum-network-analysis | 7a11ba1eb1427ece41b88c6fbeacd9c717f6918f | [
"MIT"
] | null | null | null | 541.288828 | 95,192 | 0.757139 | [
[
[
"from web3 import Web3\nfrom ipywidgets import IntProgress\nfrom IPython.display import display\nimport pandas as pd\nimport teneto\nfrom teneto import TemporalNetwork\nimport matplotlib.pyplot as plt\nfrom tqdm.notebook import trange",
"_____no_output_____"
],
[
"w3 = Web3(Web3.IPCProvider('./ipc/jsonrpc.ipc'))\nw3.isConnected()",
"_____no_output_____"
],
[
"start_block = 6000000\nend_block = 6110050\n\nminers = []\nfor idx in trange(start_block, end_block): # trange gives us a neat progress bar\n b = w3.eth.getBlock(idx, full_transactions=True)\n miners.append(b.miner)",
"_____no_output_____"
],
[
"df = pd.DataFrame(miners, columns=['miners'])\nblock_per_miner = df['miners'].value_counts()\nblock_per_miner",
"_____no_output_____"
],
[
"print(df['miners'].unique().size)\nprint(df.loc[df['miners'] == '0x841C25A1b2bA723591c14636Dc13E4deeb65A79b'].tail())",
"23\n miners\n63051 0x841C25A1b2bA723591c14636Dc13E4deeb65A79b\n63073 0x841C25A1b2bA723591c14636Dc13E4deeb65A79b\n63095 0x841C25A1b2bA723591c14636Dc13E4deeb65A79b\n63117 0x841C25A1b2bA723591c14636Dc13E4deeb65A79b\n63139 0x841C25A1b2bA723591c14636Dc13E4deeb65A79b\n"
],
[
"block_per_miner.plot(kind='bar')",
"_____no_output_____"
]
],
[
[
"Get a number of transactions and try to identiy suspicious interactions with the account backing the Faucet app.",
"_____no_output_____"
]
],
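[
[
"# Hedged sketch, not part of the original notebook: when flagging suspicious interactions it\n# can help to look at the ether amounts as well, not only the transaction counts. tx['value']\n# is denominated in wei; Web3.fromWei converts it to ether. The faucet address and the block\n# number are the ones used in the analysis below.\nfaucet_acc = '0xaB59A1ea1aC9af9F77518b9B4AD80942adE35088'\n\nb = w3.eth.getBlock(6051191, full_transactions=True)\nfor tx in b.transactions:\n    if tx['from'] == faucet_acc or tx['to'] == faucet_acc:\n        print(tx['from'], '->', tx['to'], Web3.fromWei(tx['value'], 'ether'), 'ETH')",
"_____no_output_____"
]
],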
[
[
"start_block = 6000000\nend_block = 6300000\n\naddresses = set()\ntransactions = []\nfor idx in trange(start_block, end_block): # trange gives us a neat progress bar\n b = w3.eth.getBlock(idx, full_transactions=True)\n for tx in b.transactions:\n addresses.add(tx['to'])\n addresses.add(tx['from'])\n transactions.append([b.timestamp, idx, tx['to'], tx['from']])",
"_____no_output_____"
],
[
"df = pd.DataFrame(transactions, columns=['timestamp', 'block', 'to', 'from'])\ndf",
"_____no_output_____"
],
[
"df['from'].value_counts()",
"_____no_output_____"
],
[
"df['to'].value_counts()",
"_____no_output_____"
],
[
"faucet_acc = '0xaB59A1ea1aC9af9F77518b9B4AD80942adE35088'\ncerticy_sc = '0xE5a9654C7e190701016EBf18206020bf16D8Beab'\ndf2 = df.loc[(df['from'] == faucet_acc) & (df['to'] != certicy_sc)]\n\n\ndf2['to'].value_counts()",
"_____no_output_____"
],
[
"exploit_acc = '0x8730584dCDd4550F335e1ccfb32Fa80252B9b02C'\n\nexploit_df = df.loc[(df['to'] == exploit_acc)]\n\n(exploit_df['block'] - 6051191) / 5 / 60",
"_____no_output_____"
]
],
[
[
"Collect SC interactions in transactions.",
"_____no_output_____"
]
],
[
[
"\n\nstart_block = 6000000\nend_block = 6001100\n\naddresses = set()\nsmart_contracts = []\nfor idx in trange(start_block, end_block): \n b = w3.eth.getBlock(idx, full_transactions=True)\n for tx in b.transactions:\n code = w3.eth.getCode(tx['to']);\n if code:\n smart_contracts.append(tx['to'])\n\nsmart_contracts",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec914dbfe6938ba369044340d1af8734ee6ff4e9 | 1,273 | ipynb | Jupyter Notebook | helloworld_20200009.ipynb | Temple2001/helloworld | ec4d6efc1291108b24526339d439eff3f1b66b68 | [
"MIT"
] | null | null | null | helloworld_20200009.ipynb | Temple2001/helloworld | ec4d6efc1291108b24526339d439eff3f1b66b68 | [
"MIT"
] | null | null | null | helloworld_20200009.ipynb | Temple2001/helloworld | ec4d6efc1291108b24526339d439eff3f1b66b68 | [
"MIT"
] | null | null | null | 24.480769 | 237 | 0.494108 | [
[
[
"<a href=\"https://colab.research.google.com/github/Temple2001/helloworld/blob/main/helloworld_20200009.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"print('helloworld', 20200009)",
"helloworld 20200009\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
ec914ead14cb258356ad78d9872d969734b13682 | 566 | ipynb | Jupyter Notebook | HW10.ipynb | bro278911/Leetcode | 10033bd633f6638aa3bbb1a4b8c6a0fbd6677195 | [
"MIT"
] | 1 | 2019-07-05T10:32:58.000Z | 2019-07-05T10:32:58.000Z | HW10.ipynb | bro278911/LeetCode | 10033bd633f6638aa3bbb1a4b8c6a0fbd6677195 | [
"MIT"
] | null | null | null | HW10.ipynb | bro278911/LeetCode | 10033bd633f6638aa3bbb1a4b8c6a0fbd6677195 | [
"MIT"
] | null | null | null | 566 | 566 | 0.575972 | [
[
[
"class Solution(object):\n def defangIPaddr(self, address):\n \"\"\"\n :type address: str\n :rtype: str\n \"\"\"\n\n return address.replace(\".\",\"[.]\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
ec916d170358b07ac10a28e25f970311e67859ec | 23,753 | ipynb | Jupyter Notebook | notebooks/examples/K-Nearest.ipynb | ohadravid/ml-tutorial | 5b196a80290ca443c079cf0a32dd38d149a9ef34 | [
"MIT"
] | null | null | null | notebooks/examples/K-Nearest.ipynb | ohadravid/ml-tutorial | 5b196a80290ca443c079cf0a32dd38d149a9ef34 | [
"MIT"
] | null | null | null | notebooks/examples/K-Nearest.ipynb | ohadravid/ml-tutorial | 5b196a80290ca443c079cf0a32dd38d149a9ef34 | [
"MIT"
] | null | null | null | 107.968182 | 10,072 | 0.875174 | [
[
[
"## Imports & Dataset",
"_____no_output_____"
]
],
[
[
"from numpy import *\nimport matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"dataset = array([[104, 3],[100,2],[81,1],[10,101],[5, 99],[2, 98]])",
"_____no_output_____"
],
[
"labels = ['Romance', 'Romance', 'Romance', 'Action', 'Action', 'Action']",
"_____no_output_____"
]
],
[
[
"## Helper Functions",
"_____no_output_____"
]
],
[
[
"def plot_dataset(dataset):\n fig = plt.figure()\n ax = fig.add_subplot(111)\n ax.set_ylabel('Number of Kisses')\n ax.set_xlabel('Number of Kicks')\n ax.scatter(dataset[:,0], dataset[:,1])\n plt.show()",
"_____no_output_____"
],
[
"def plot_with_inX(dataset, inX):\n new_dataset = list(dataset[:])\n new_dataset.append(inX)\n plot_dataset(array(new_dataset))",
"_____no_output_____"
],
[
"def calc_closest_points(dataset, inX):\n dataset_size = dataset.shape[0]\n diff_mat = tile(inX, (dataset_size, 1)) - dataset\n sq_diffmat = diff_mat**2\n summed = sq_diffmat.sum(axis=1)\n distances = summed**0.5\n return distances.argsort()",
"_____no_output_____"
],
[
"def classify(labels, closest_points, k):\n closest_labels=[labels[i] for i in closest_points[:k]]\n return max(set(closest_labels), key=closest_labels.count)",
"_____no_output_____"
]
],
[
[
"## Code",
"_____no_output_____"
]
],
[
[
"plot_dataset(dataset)",
"_____no_output_____"
],
[
"plot_with_inX(dataset, [30,130])",
"_____no_output_____"
],
[
"calc_closest_points(dataset, [30, 130])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ec91719b21993b44c1d734a29e672a23064a0847 | 846,136 | ipynb | Jupyter Notebook | tp-1/TP 1.ipynb | riadh26/Image-Processing-Course | 3a9c452b9173345a247dfb8ce7e0443a41064e61 | [
"MIT"
] | null | null | null | tp-1/TP 1.ipynb | riadh26/Image-Processing-Course | 3a9c452b9173345a247dfb8ce7e0443a41064e61 | [
"MIT"
] | null | null | null | tp-1/TP 1.ipynb | riadh26/Image-Processing-Course | 3a9c452b9173345a247dfb8ce7e0443a41064e61 | [
"MIT"
] | null | null | null | 1,587.497186 | 182,754 | 0.962501 | [
[
[
"# Exercice 1\n## Reading, showing and saving images",
"_____no_output_____"
]
],
[
[
"import cv2\nimport numpy as np\nfrom PIL import Image, ImageOps\nfrom matplotlib import pyplot as plt",
"_____no_output_____"
],
[
"# Reading the image\nlena = Image.open('../resources/lena.png')",
"_____no_output_____"
],
[
"# Image format, size and mode\nprint(f'Image Format: {lena.format}')\nprint(f'Image Size: {lena.size}')\nprint(f'Image Mode: {lena.mode}')",
"Image Format: PNG\nImage Size: (512, 512)\nImage Mode: RGB\n"
],
[
"# Image dimensions\nprint(f'Dimensions :')\nprint(f'width: {lena.height}')\nprint(f'height: {lena.width}')",
"Dimensions :\nwidth: 512\nheight: 512\n"
],
[
"# Showing the image \nplt.imshow(lena)",
"_____no_output_____"
],
[
"# Value of pixel [20, 32] \n\n# Using Pillow\nr, g, b = lena.getpixel((20, 32))\nprint('Valeur du pixel [20, 32]: ')\nprint(f'R: {r} \\nG: {g} \\nB: {b}')\n\n# Using Numpy\nr, g, b = np.array(lena)[32, 20]",
"Valeur du pixel [20, 32]: \nR: 171 \nG: 86 \nB: 57\n"
],
[
"# Number of pixels\npixels = lena.width * lena.height\nprint(f'Lena image has {pixels} pixels')",
"Lena image has 262144 pixels\n"
],
[
"# Resizing the image to 300*200\nlena_resized = lena.resize((300, 200))\nlena_resized.save('lena_resized.png')\nplt.imshow(lena_resized)",
"_____no_output_____"
],
[
"# Converting the image to greyscale\nlena_grey = lena.convert('L')\nlena_grey.save('lena_greyscale.png')\nplt.imshow(lena_grey, cmap='gray')",
"_____no_output_____"
],
[
"# Min and Max intensity and intensity mean \n\n# Using Pillow\nmin_intensity, max_intensity = lena_grey.getextrema()\n\nmean = 0\nfor i in range(lena_grey.width): \n for j in range(lena_grey.height):\n mean += lena_grey.getpixel((i, j))\n\nmean /= lena_grey.width * lena_grey.height\n\n# Using Numpy\nlena_array = np.array(lena_grey)\nprint(f'Min intensity: {lena_array.min()}')\nprint(f'Max intensity: {lena_array.max()}')\nprint(f'Mean: {round(lena_array.mean(), 2)}')\n",
"Min intensity: 10\nMax intensity: 241\nMean: 95.66\n"
],
[
"# Binary image\nlena_grey = cv2.imread('../resources/lena.png', cv2.IMREAD_GRAYSCALE)\nret, thresh = cv2.threshold(lena_grey, mean, 255, cv2.THRESH_BINARY)\nplt.title(\"Binary from Greyscale\")\nplt.imshow(thresh, cmap='gray')\ncv2.imwrite('lena_binary.png', thresh)",
"_____no_output_____"
]
],
[
[
"# Exercice 2\n## Converting images",
"_____no_output_____"
]
],
[
[
"lena = cv2.imread('../resources/lena.png')",
"_____no_output_____"
],
[
"# Converting to Greyscale, RGB, HSV, LAB\nlena_greyscale = cv2.cvtColor(lena, cv2.COLOR_BGR2GRAY)\nlena_rgb = cv2.cvtColor(lena, cv2.COLOR_BGR2RGB)\nlena_hsv = cv2.cvtColor(lena, cv2.COLOR_BGR2HSV)\nlena_lab = cv2.cvtColor(lena, cv2.COLOR_BGR2LAB)",
"_____no_output_____"
],
[
"# Showing converted images \nfigure, axes = plt.subplots(2, 2)\n\naxes[0][0].set_title('Greyscale')\naxes[0][0].imshow(lena_greyscale, cmap='gray')\n\naxes[0][1].set_title('RGB')\naxes[0][1].imshow(lena_rgb)\n\naxes[1][0].set_title('HSV')\naxes[1][0].imshow(lena_hsv)\n\naxes[1][1].set_title('LAB')\naxes[1][1].imshow(lena_lab)\n\nplt.subplots_adjust(hspace=.5)\nplt.show()\n\n# Saving the figure\nfigure.savefig('converted_images.png', dpi=200)",
"_____no_output_____"
]
],
[
[
"# Exercice 3\n## Geometric transformation",
"_____no_output_____"
]
],
[
[
"# Rotating the images \nlena = Image.open('../resources/lena.png')\n\nlena_rotated_90 = lena.rotate(90.0)\nlena_rotated_90.save('lena_rotated_90.png')\n\nlena_rotated_45 = lena.rotate(45.0)\nlena_rotated_45.save('lena_rotated_45.png')\n\nlena_rotated_m90 = lena.rotate(-90.0)\nlena_rotated_m90.save('lena_rotated_m90.png')",
"_____no_output_____"
],
[
"# Showing rotated images\nfigure, axes = plt.subplots(2, 2)\n\naxes[0][0].set_title('Original')\naxes[0][0].imshow(lena)\n\naxes[0][1].set_title('90° Rotation')\naxes[0][1].imshow(lena_rotated_90)\n\naxes[1][0].set_title('45° Rotation')\naxes[1][0].imshow(lena_rotated_45)\n\naxes[1][1].set_title('-90° Rotation')\naxes[1][1].imshow(lena_rotated_m90)\n\nplt.subplots_adjust(hspace=.5)\nplt.show()",
"_____no_output_____"
],
[
"# Flipping horizontally and vertically\nlena_hflipped = ImageOps.mirror(lena)\nlena_hvflipped = ImageOps.flip(lena_hflipped)\nlena_hvflipped.save('lena_hvflipped.png')\n\nfigure, axes = plt.subplots(1, 2)\n\naxes[0].set_title('Original')\naxes[0].imshow(lena)\n\naxes[1].set_title('Horizontally and Vertically flipped')\naxes[1].imshow(lena_hvflipped)\n\nplt.subplots_adjust(wspace=.5)\nplt.show()",
"_____no_output_____"
],
[
"# Image translation along (100, 200)\nlena = cv2.imread('../resources/lena.png')\ntranslation_matrix = np.float32([[1, 0, 100], [0, 1, 200]])\nlena_translated = cv2.warpAffine(lena, translation_matrix, lena.shape[:2])\ncv2.imwrite('lena_translated.png', lena_translated)\n\nfigure, axes = plt.subplots(1, 2)\n\naxes[0].set_title('Original')\naxes[0].imshow(cv2.cvtColor(lena, cv2.COLOR_BGR2RGB))\n\naxes[1].set_title('Translated (100, 200)')\naxes[1].imshow(cv2.cvtColor(lena_translated, cv2.COLOR_BGR2RGB))\n\nplt.subplots_adjust(wspace=.5)\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ec91756aa9a714fe9fc7ed33bbc8c8811998f2d8 | 9,348 | ipynb | Jupyter Notebook | index.ipynb | nmarincic/gfparser | 1f7abf5678dbf5f71b05fec9469af3c977c2f290 | [
"Apache-2.0"
] | null | null | null | index.ipynb | nmarincic/gfparser | 1f7abf5678dbf5f71b05fec9469af3c977c2f290 | [
"Apache-2.0"
] | null | null | null | index.ipynb | nmarincic/gfparser | 1f7abf5678dbf5f71b05fec9469af3c977c2f290 | [
"Apache-2.0"
] | null | null | null | 36.948617 | 217 | 0.479461 | [
[
[
"#hide\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
]
],
[
[
"# GF Parser \n\n> Pulls useful information from the web, helping you make flashcards for learning the German language.",
"_____no_output_____"
],
[
"Welcome to GF parser, an app to help you make flashcards for learning German faster!",
"_____no_output_____"
],
[
"## Install",
"_____no_output_____"
],
[
"`pip install gfparser`",
"_____no_output_____"
],
[
"## How to use",
"_____no_output_____"
],
[
"First, import the `WikiParser`",
"_____no_output_____"
]
],
[
[
"from gfparser import WikiParser",
"_____no_output_____"
]
],
[
[
"Create the parser object",
"_____no_output_____"
]
],
[
[
"parser = WikiParser()",
"_____no_output_____"
],
[
"parser.parse([\"Hund\", \"arbeiten\", \"schön\"])",
"Downloading words\n0% [███] 100% | ETA: 00:00:00 | Item ID: schön \nTotal time elapsed: 00:00:01\nDownloading words\n0% [█] 100% | ETA: 00:00:00 | Item ID: schönen \nTotal time elapsed: 00:00:00\n"
]
],
[
[
"Iterate over `parser.words` and print the words:",
"_____no_output_____"
]
],
[
[
"for w in parser.words:\n print (w)",
"=======================================================\n= Hund =\n=======================================================\n \n Substantiv \n \n[hʊnt]\n\nder Hund\ndie Hunde\n\n1: Die mustergültige Definition, dass ein Hund ein von Flöhen bewohnter Organismus ist, der bellt, hat Kurt Tucholsky in seinem Traktat über den Hund dem Philosophen Gottfried Wilhelm Leibniz zugeschrieben.\n2: „Schatz, hab ich heute nacht das ganze Rindfleisch gefressen oder der Hund?…Gott! Ich hab'n Kopf wie'n Sieb!! Wir haben ja gar kein' Hund.“\n3: Bei neueren Umfragen bestätigen immerhin ein Viertel der Hundehalter, den Hund ins Bett zu lassen bzw. ihn dorthin auch des Nachts mitzunehmen.\n4: „In Gemeinschaft mit Eseln, Pferden und Hunden sitzen die Handwerker bei ihrer Arbeit auf der Straße.“\n5: „Der Hund gehorcht aufs Wort und geht links neben Elsa.“\n6: Er ist ein krummer Hund.\n7: „…; wenn die Beine müde werden, legst du dich in einem engen, schrägen Schacht, der wie ein zur Hälfte aufgestellter Abzugskanal wirkt, in einen der »Hunde« und läßt dich an das Tageslicht ziehen,…“\n8: „Erst mußte er die schweren Hunde schieben und Obacht geben, daß er sich nicht den Schädel einrammte in dem niedrigen Gang oder überfahren wurde.“\n9: Ich heb' an, und du schiebst den Hund drunter.\n10: Zur Familie der Hunde gehören Arten wie der Rotfuchs und der Wolf (mit der Unterart Haushund).\n\n \n Substantiv/Eigenname \n \n[hʊnt]\n\n\n1: {{Beispiele fehlen|spr=de}}\n\n \n Substantiv/Nachname \n \n[hʊnt]\n\n\n1: Frau Hund ist ein Genie im Verkauf.\n2: Herr Hund wollte uns kein Interview geben.\n3: Die Hunds fliegen heute nach Sri Lanka.\n4: Der Hund trägt nie die Pullover, die die Hund ihm strickt.\n5: Das kann ich dir aber sagen: „Wenn die Frau Hund kommt, geht der Herr Hund.“\n6: Hund kommt und geht.\n7: Hunds kamen, sahen und siegten.\n\n\n=======================================================\n= arbeiten =\n=======================================================\n \n Verb \n \n[ˈaʁbaɪ̯tn̩]\n\narbeite\narbeitete\nhaben gearbeitet\n\n1: Wir arbeiten gemeinsam an einem Wörterbuch.\n2: „Der Vater arbeitete in einer Kleiderfabrik und betrieb zu Hause mit seiner Frau in der Wohnung eine Schneiderei.“\n3: Er arbeitet als Lektor in einem bekannten Verlag.\n4: Was macht das Studium? Ich arbeite daran.\n5: Was macht die Reparatur? Wir arbeiten mit Hochdruck an der Hauptleitung.\n6: Seit der Reparatur arbeitet die Maschine ohne Unterbrechung.\n7: Die Anlage arbeitet wieder vorschriftsmäßig und im Takt.\n8: Viele Schmähungen musste er ertragen, fortan arbeitete es in seinem Herzen.\n9: Die Erlebnisse von gestern Abend arbeiten noch immer in mir, ich kann mich gar nicht auf die Arbeit konzentrieren.\n10: „Was ist der Unterschied zwischen einem Beamten und einem Stück Holz?“ – „Holz arbeitet!“\n11: Vergiss nicht die Fuge am Rand zu lassen, mindestens 2 cm, damit das Holz arbeiten kann!\n12: Ab Montag arbeite ich Teilzeit.\n\n\n=======================================================\n= schön =\n=======================================================\n \n Adjektiv \n \n[ʃøːn]\n\nschön\nschöner\nam schönsten\n\n1: Sie hat schönes Haar. Das Musikstück ist schön.\n2: Sie sang schön, schöner als gewöhnlich, weil die Instrumentalisten ihr so vertraut waren. Am schönsten sang sie, als Viktor am Klavier saß.\n3: „Nik Wallenda, Urenkel eines deutschen Zirkusakrobaten, hat als erster Mensch die Niagarafälle an ihrer schönsten und gefährlichsten Stelle überquert.“\n4: Das hat er aber schön gemacht. Wir hatten schöne Ferientage. 
Es wäre schön, wenn wir uns wieder treffen. Es war schön von ihm, seiner Frau Blumen zu schenken.\n5: Das ist ja eine schöne Geschichte! Oder anders gesagt: Das ist aber wirklich schlimm!\n6: Du bist mir ja ein schöner Freund! Oder anders gesagt: Du bist wahrlich ein schlechter Freund!\n7: Da wird sie ganz schön staunen. Also, da wird sie aber überrascht sein.\n8: Das wird eine schöne Stange Geld kosten. Also, das wird wohl ziemlich teuer werden.\n9: Lass uns doch mal wieder im Kino einen Film ansehen! – Schön, dann komm!\n10: So, jetzt gehen wir schön ins Bett.\n11: Schön aufpassen, wenn du über die Straße gehst!\n\n\n=======================================================\n= schönen =\n=======================================================\n \n Verb \n \n[ˈʃøːnən]\n\nschöne\nschönte\nhaben geschönt\n\n1: Das Ergebnis: Angestellte der Politiker hatten in Hunderten Fällen die Biografien ihrer Arbeitgeber geschönt, Kritik entfernt oder den politischen Gegner verleumdet.\n2: Zudem wurden statistische Angaben in der UdSSR regelmäßig geschönt.\n3: Dieser Wein wurde mit Bentonit geschönt.\n\n\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec918ccc527d2a4b1905364f3b92376ca0a5cd7f | 116,474 | ipynb | Jupyter Notebook | tests/sens_label_prior.ipynb | slanglab/freq-e | d4cc08f95dc519b343a95b0fe39e68bed1e1541c | [
"MIT"
] | 14 | 2019-03-22T17:02:34.000Z | 2020-02-06T01:33:27.000Z | tests/sens_label_prior.ipynb | slanglab/freq-e | d4cc08f95dc519b343a95b0fe39e68bed1e1541c | [
"MIT"
] | 1 | 2019-09-09T15:24:55.000Z | 2019-09-09T15:24:55.000Z | tests/sens_label_prior.ipynb | slanglab/freq-e | d4cc08f95dc519b343a95b0fe39e68bed1e1541c | [
"MIT"
] | 2 | 2019-06-17T04:49:46.000Z | 2020-02-06T01:33:34.000Z | 333.73639 | 16,556 | 0.937093 | [
[
[
"counter-intuitive behavior when all the predictions are one-sided (>0.5, or <0.5) and the label prior is less than PCC ",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport freq_e",
"_____no_output_____"
],
[
"import matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"#set-up: all the test pred probs are <0.5, training prev also <0.5\n#we get weird predictions when the training prev is less than \n#the mean of the test_pred_probs \n# I think this has to do with the ratios p(y=1|x)/pi and p(y=0|x)/(1-pi)\nLABEL_PRIOR = 0.25\ntest_pred_probs = np.array([0.2, 0.3, 0.4])\nlog_odds = freq_e.estimate.calc_log_odds(test_pred_probs)\nlog_post_probs = freq_e.estimate.mll_curve(log_odds, LABEL_PRIOR)\nprint('LABEL_PRIOR=', LABEL_PRIOR)\nprint('PCC=', np.mean(test_pred_probs))\nprint('point=', freq_e.estimate.generative_get_map_est(log_post_probs))\nplt.plot(freq_e.estimate.DEFAULT_THETA_GRID, log_post_probs)\nplt.show()",
"LABEL_PRIOR= 0.25\nPCC= 0.3\npoint= 1.0\n"
],
[
"#if we set the label prior equal to PCC or greater than we're fine \nLABEL_PRIOR = 0.3\nlog_post_probs = freq_e.estimate.mll_curve(log_odds, LABEL_PRIOR)\nprint('LABEL_PRIOR=', LABEL_PRIOR)\nprint('point=', freq_e.estimate.generative_get_map_est(log_post_probs))\nplt.plot(freq_e.estimate.DEFAULT_THETA_GRID, log_post_probs)\nplt.show()\nLABEL_PRIOR = 0.5\nlog_post_probs = freq_e.estimate.mll_curve(log_odds, LABEL_PRIOR)\nprint('LABEL_PRIOR=', LABEL_PRIOR)\nprint('point=', freq_e.estimate.generative_get_map_est(log_post_probs))\nplt.plot(freq_e.estimate.DEFAULT_THETA_GRID, log_post_probs)\nplt.show()\nLABEL_PRIOR = 0.9\nlog_post_probs = freq_e.estimate.mll_curve(log_odds, LABEL_PRIOR)\nprint('LABEL_PRIOR=', LABEL_PRIOR)\nprint('point=', freq_e.estimate.generative_get_map_est(log_post_probs))\nplt.plot(freq_e.estimate.DEFAULT_THETA_GRID, log_post_probs)\nplt.show()",
"LABEL_PRIOR= 0.3\npoint= 0.3\n"
],
[
"#this same phenomena holds even with an extremely large number of docs \nLABEL_PRIOR = 0.25\ntest_pred_probs = np.random.uniform(low=0.1, high=0.5, size=10**6)\nlog_odds = freq_e.estimate.calc_log_odds(test_pred_probs)\nlog_post_probs = freq_e.estimate.mll_curve(log_odds, LABEL_PRIOR)\nprint('LABEL_PRIOR=', LABEL_PRIOR)\nprint('PCC=', np.mean(test_pred_probs))\nprint('point=', freq_e.estimate.generative_get_map_est(log_post_probs))\nplt.plot(freq_e.estimate.DEFAULT_THETA_GRID, log_post_probs)\nplt.show()",
"LABEL_PRIOR= 0.25\nPCC= 0.30017845418213585\npoint= 0.984\n"
]
],
[
[
"for test pred probs that are centered around 0.5, the label prior completely shifts predictions in the opposite way as the label prior. \n\ndoes this \"label prior\" still make sense even when you've trained a classifier with a balanced objective function (takes into account class imbalance?) ",
"_____no_output_____"
]
],
[
[
"LABEL_PRIOR = 0.5\ntest_pred_probs = np.random.uniform(low=0.4, high=0.6, size=10**6)\nlog_odds = freq_e.estimate.calc_log_odds(test_pred_probs)\nlog_post_probs = freq_e.estimate.mll_curve(log_odds, LABEL_PRIOR)\nprint('LABEL_PRIOR=', LABEL_PRIOR)\nprint('PCC=', np.mean(test_pred_probs))\nprint('point=', freq_e.estimate.generative_get_map_est(log_post_probs))\nplt.plot(freq_e.estimate.DEFAULT_THETA_GRID, log_post_probs)\nplt.show()",
"LABEL_PRIOR= 0.5\nPCC= 0.5000157383529716\npoint= 0.501\n"
],
[
"LABEL_PRIOR = 0.2\nlog_odds = freq_e.estimate.calc_log_odds(test_pred_probs)\nlog_post_probs = freq_e.estimate.mll_curve(log_odds, LABEL_PRIOR)\nprint('LABEL_PRIOR=', LABEL_PRIOR)\nprint('PCC=', np.mean(test_pred_probs))\nprint('point=', freq_e.estimate.generative_get_map_est(log_post_probs))\nplt.plot(freq_e.estimate.DEFAULT_THETA_GRID, log_post_probs)\nplt.show()",
"LABEL_PRIOR= 0.2\nPCC= 0.5000329078706477\npoint= 1.0\n"
],
[
"LABEL_PRIOR = 0.8\nlog_odds = freq_e.estimate.calc_log_odds(test_pred_probs)\nlog_post_probs = freq_e.estimate.mll_curve(log_odds, LABEL_PRIOR)\nprint('LABEL_PRIOR=', LABEL_PRIOR)\nprint('PCC=', np.mean(test_pred_probs))\nprint('point=', freq_e.estimate.generative_get_map_est(log_post_probs))\nplt.plot(freq_e.estimate.DEFAULT_THETA_GRID, log_post_probs)\nplt.show()",
"LABEL_PRIOR= 0.8\nPCC= 0.5000329078706477\npoint= 0.0\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ec918d9b2fac5ebffa12da037592c687c7d6773b | 18,901 | ipynb | Jupyter Notebook | pytorch_grad_cam_layer_vis.ipynb | zelkourban/BP | e15d69fdf52bf58bc479e4242b16c1c510ba9057 | [
"MIT"
] | null | null | null | pytorch_grad_cam_layer_vis.ipynb | zelkourban/BP | e15d69fdf52bf58bc479e4242b16c1c510ba9057 | [
"MIT"
] | null | null | null | pytorch_grad_cam_layer_vis.ipynb | zelkourban/BP | e15d69fdf52bf58bc479e4242b16c1c510ba9057 | [
"MIT"
] | null | null | null | 39.377083 | 107 | 0.478123 | [
[
[
"## Vrstvova vizualizácia",
"_____no_output_____"
]
],
[
[
"import os\nimport numpy as np\nimport copy\nfrom PIL import Image\nimport matplotlib.cm as mpl_color_map\nimport torch\nfrom torch.autograd import Variable\nfrom torchvision import models\nfrom torch.optim import Adam\nimport cv2\nfrom tqdm import tqdm",
"_____no_output_____"
],
[
"device = torch.device(\"cuda:0\")",
"_____no_output_____"
],
[
"def format_np_output(np_arr):\n \"\"\"\n This is a (kind of) bandaid fix to streamline saving procedure.\n It converts all the outputs to the same format which is 3xWxH\n with using sucecssive if clauses.\n Args:\n im_as_arr (Numpy array): Matrix of shape 1xWxH or WxH or 3xWxH\n \"\"\"\n # Phase/Case 1: The np arr only has 2 dimensions\n # Result: Add a dimension at the beginning\n if len(np_arr.shape) == 2:\n np_arr = np.expand_dims(np_arr, axis=0)\n # Phase/Case 2: Np arr has only 1 channel (assuming first dim is channel)\n # Result: Repeat first channel and convert 1xWxH to 3xWxH\n if np_arr.shape[0] == 1:\n np_arr = np.repeat(np_arr, 3, axis=0)\n # Phase/Case 3: Np arr is of shape 3xWxH\n # Result: Convert it to WxHx3 in order to make it saveable by PIL\n if np_arr.shape[0] == 3:\n np_arr = np_arr.transpose(1, 2, 0)\n # Phase/Case 4: NP arr is normalized between 0-1\n # Result: Multiply with 255 and change type to make it saveable by PIL\n if np.max(np_arr) <= 1:\n np_arr = (np_arr*255).astype(np.uint8)\n return np_arr\n\ndef save_image(im, path):\n \"\"\"\n Saves a numpy matrix or PIL image as an image\n Args:\n im_as_arr (Numpy array): Matrix of shape DxWxH\n path (str): Path to the image\n \"\"\"\n if isinstance(im, (np.ndarray, np.generic)):\n im = format_np_output(im)\n im = Image.fromarray(im)\n \n im.save(path)\n\ndef preprocess_image(pil_im, resize_im=True):\n \"\"\"\n Processes image for CNNs\n Args:\n PIL_img (PIL_img): Image to process\n resize_im (bool): Resize to 224 or not\n returns:\n im_as_var (torch variable): Variable that contains processed float tensor\n \"\"\"\n # mean and std list for channels (Imagenet)\n mean = [0.485, 0.456, 0.406]\n std = [0.229, 0.224, 0.225]\n # Resize image\n if resize_im:\n pil_im.thumbnail((224, 224))\n im_as_arr = np.float32(pil_im)\n im_as_arr = im_as_arr.transpose(2, 0, 1) # Convert array to D,W,H\n # Normalize the channels\n for channel, _ in enumerate(im_as_arr):\n im_as_arr[channel] /= 255\n im_as_arr[channel] -= mean[channel]\n im_as_arr[channel] /= std[channel]\n # Convert to float tensor\n im_as_ten = torch.from_numpy(im_as_arr).float()\n # Add one more channel to the beginning. Tensor shape = 1,3,224,224\n im_as_ten.unsqueeze_(0)\n # Convert to Pytorch variable\n im_as_var = Variable(im_as_ten, requires_grad=True)\n return im_as_var\n\ndef recreate_image(im_as_var):\n \"\"\"\n Recreates images from a torch variable, sort of reverse preprocessing\n Args:\n im_as_var (torch variable): Image to recreate\n returns:\n recreated_im (numpy arr): Recreated image in array\n \"\"\"\n reverse_mean = [-0.485, -0.456, -0.406]\n reverse_std = [1/0.229, 1/0.224, 1/0.225]\n recreated_im = copy.copy(im_as_var.data.numpy()[0])\n for c in range(3):\n recreated_im[c] /= reverse_std[c]\n recreated_im[c] -= reverse_mean[c]\n recreated_im[recreated_im > 1] = 1\n recreated_im[recreated_im < 0] = 0\n recreated_im = np.round(recreated_im * 255)\n\n recreated_im = np.uint8(recreated_im).transpose(1, 2, 0)\n return recreated_im\n",
"_____no_output_____"
],
[
"class CNNLayerVisualization():\n \"\"\"\n Produces an image that minimizes the loss of a convolution\n operation for a specific layer and filter\n \"\"\"\n def __init__(self, model, selected_layer, selected_filter):\n self.model = model\n self.model.eval()\n self.selected_layer = selected_layer\n self.selected_filter = selected_filter\n self.conv_output = 0\n # Create the folder to export images if not exists\n if not os.path.exists('../generated'):\n os.makedirs('../generated')\n\n def hook_layer(self):\n def hook_function(module, grad_in, grad_out):\n # Gets the conv output of the selected filter (from selected layer)\n self.conv_output = grad_out[0, self.selected_filter]\n # Hook the selected layer\n self.model[self.selected_layer].register_forward_hook(hook_function)\n\n def visualise_layer_with_hooks(self):\n # Hook the selected layer\n self.hook_layer()\n # Generate a random image\n random_image = np.uint8(np.random.uniform(150, 180, (56, 56, 3)))\n # Process image and return variable\n processed_image = preprocess_image(random_image, False)\n # Define optimizer for the image\n optimizer = Adam([processed_image], lr=0.1, weight_decay=1e-6)\n for i in range(1, 31):\n optimizer.zero_grad()\n # Assign create image to a variable to move forward in the model\n x = processed_image\n for index, layer in enumerate(self.model):\n # Forward pass layer by layer\n # x is not used after this point because it is only needed to trigger\n # the forward hook function\n x = layer(x)\n # Only need to forward until the selected layer is reached\n if index == self.selected_layer:\n # (forward hook function triggered)\n break\n # Loss function is the mean of the output of the selected layer/filter\n # We try to minimize the mean of the output of that specific filter\n loss = -torch.mean(self.conv_output)\n print('Iteration:', str(i), 'Loss:', \"{0:.2f}\".format(loss.data.numpy()))\n # Backward\n loss.backward()\n # Update image\n optimizer.step()\n # Recreate image\n self.created_image = recreate_image(processed_image)\n # Save image\n if i % 5 == 0:\n print(\"saving image at iter \", i)\n im_path = '/content/' + str(self.selected_layer) + \\\n '_f' + str(self.selected_filter) + '_iter' + str(i) + '_with_hooks.jpg'\n save_image(self.created_image, im_path)\n\n def visualise_layer_without_hooks(self,scale=1):\n # Process image and return variable\n # Generate a random image\n sz = 256\n random_image = np.uint8(np.random.uniform(150, 180, (sz, sz, 3)))\n # Process image and return variable\n processed_image = preprocess_image(random_image, False)\n # Define optimizer for the image\n for i in range(scale):\n processed_image = preprocess_image(random_image, False)\n optimizer = Adam([processed_image], lr=0.1, weight_decay=1e-6)\n for i in range(1, 20):\n \n optimizer.zero_grad()\n # Assign create image to a variable to move forward in the model\n x = processed_image\n for index, layer in enumerate(self.model):\n # Forward pass layer by layer\n x = layer(x)\n if index == self.selected_layer:\n # Only need to forward until the selected layer is reached\n # Now, x is the output of the selected layer\n break\n # Here, we get the specific filter from the output of the convolution operation\n # x is a tensor of shape 1x512x28x28.(For layer 17)\n # So there are 512 unique filter outputs\n # Following line selects a filter from 512 filters so self.conv_output will become\n # a tensor of shape 28x28\n self.conv_output = x[0, self.selected_filter]\n # Loss function is the mean of the output of the selected 
layer/filter\n # We try to minimize the mean of the output of that specific filter\n loss = -torch.mean(self.conv_output)\n ##print('Iteration:', str(i), 'Loss:', \"{0:.2f}\".format(loss.data.numpy()))\n # Backward\n loss.backward()\n # Update image\n optimizer.step()\n # Recreate image\n #random_image = val_tfms.denorm(img_var.data.cpu().numpy()[0].transpose(1,2,0))\n sz = int(1.2 * sz) # calculate new image size\n random_image = cv2.resize(random_image, (sz, sz), interpolation = cv2.INTER_CUBIC)\n #random_image = cv2.blur(random_image,(5,5))\n self.created_image = recreate_image(processed_image)\n # Save image\n \n print(\"saving image\")\n im_path = '/content/generated' + str(self.selected_layer) + \\\n '_f' + str(self.selected_filter) + '_iter' + str(i) + '_features_new.jpg'\n save_image(self.created_image, im_path)\n",
"_____no_output_____"
],
[
"pretrained_model = models.vgg16(pretrained=True).features",
"_____no_output_____"
],
[
"cnn_layer = 5\nfilter_pos = 1\nfor i in tqdm(range(5)):\n layer_vis = CNNLayerVisualization(pretrained_model, cnn_layer, filter_pos)\n layer_vis.visualise_layer_without_hooks(scale=2)\n cnn_layer+=2\n#layer_vis.visualise_layer_with_hooks()\n",
"_____no_output_____"
],
[
"!sudo apt install imagemagick\n!convert -append /content/generated/*.jpg -append /content/layers.jpg",
"_____no_output_____"
]
],
[
[
"## GRAD-CAM",
"_____no_output_____"
]
],
[
[
"!git clone https://github.com/kazuto1011/grad-cam-pytorch.git\n!sudo apt install imagemagick",
"_____no_output_____"
],
[
"!rm /content/grad-cam-pytorch/results/*",
"_____no_output_____"
],
[
"#%cd grad-cam-pytorch/\nimage = \"photo.jpg\" #@param {type:\"string\"}\nmodel = \"vgg16\" #@param [\"vgg16\",\"alexnet\"]\nlayer = 'features' #@param [\"features\", \"avgpool\", \"classifier\"]\n!python main.py demo1 -i /content/$image -a vgg16 -t $layer --cuda\n!convert -append /content/grad-cam-pytorch/results/*.png +append /content/visualization.png",
"_____no_output_____"
]
],
[
[
"### Web kamera",
"_____no_output_____"
]
],
[
[
"from IPython.display import display, Javascript\nfrom google.colab.output import eval_js\nfrom base64 import b64decode\n\ndef take_photo(filename='photo.jpg', quality=0.8):\n js = Javascript('''\n async function takePhoto(quality) {\n const div = document.createElement('div');\n const capture = document.createElement('button');\n capture.textContent = 'Capture';\n div.appendChild(capture);\n\n const video = document.createElement('video');\n video.style.display = 'block';\n const stream = await navigator.mediaDevices.getUserMedia({video: true});\n\n document.body.appendChild(div);\n div.appendChild(video);\n video.srcObject = stream;\n await video.play();\n\n // Resize the output to fit the video element.\n google.colab.output.setIframeHeight(document.documentElement.scrollHeight, true);\n\n // Wait for Capture to be clicked.\n await new Promise((resolve) => capture.onclick = resolve);\n\n const canvas = document.createElement('canvas');\n canvas.width = video.videoWidth;\n canvas.height = video.videoHeight;\n canvas.getContext('2d').drawImage(video, 0, 0);\n stream.getVideoTracks()[0].stop();\n div.remove();\n return canvas.toDataURL('image/jpeg', quality);\n }\n ''')\n display(js)\n data = eval_js('takePhoto({})'.format(quality))\n binary = b64decode(data.split(',')[1])\n with open(filename, 'wb') as f:\n f.write(binary)\n return filename",
"_____no_output_____"
],
[
"from IPython.display import Image\ntry:\n filename = take_photo()\n print('Saved to {}'.format(filename))\n \n # Show the image which was just taken.\n display(Image(filename))\nexcept Exception as err:\n # Errors will be thrown if the user does not have a webcam or if they do not\n # grant the page permission to access it.\n print(str(err))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec919c88eba77229879645ec6e1fe4f8428ee5bd | 591,880 | ipynb | Jupyter Notebook | Model/6M_336H_ST.ipynb | Muiiya/research | f68e5d3cb881f66f808889d5b512df3569b9e4df | [
"MIT"
] | 1 | 2021-07-31T01:29:29.000Z | 2021-07-31T01:29:29.000Z | Model/6M_336H_ST.ipynb | Muiiya/research | f68e5d3cb881f66f808889d5b512df3569b9e4df | [
"MIT"
] | null | null | null | Model/6M_336H_ST.ipynb | Muiiya/research | f68e5d3cb881f66f808889d5b512df3569b9e4df | [
"MIT"
] | 1 | 2021-10-31T22:12:39.000Z | 2021-10-31T22:12:39.000Z | 591,880 | 591,880 | 0.888962 | [
[
[
"#Transformer",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount('/content/drive') ",
"Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"
],
[
"# informer, ARIMA, Prophet, LSTMa와는 다른 형식의 CSV를 사용한다.(Version2)\n\n!pip install pandas\n\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n\ndf = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/Data/삼성전자_6M_ST_Version2.csv', encoding='cp949')\ndf.head()",
"Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (1.1.5)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas) (2018.9)\nRequirement already satisfied: numpy>=1.15.4 in /usr/local/lib/python3.7/dist-packages (from pandas) (1.19.5)\nRequirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas) (2.8.2)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas) (1.15.0)\n"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1 entries, 0 to 0\nColumns: 4322 entries, 날짜 to 2021-07-31 0:00\ndtypes: float64(4321), object(1)\nmemory usage: 33.9+ KB\n"
],
[
"data_start_date = df.columns[1]\ndata_end_date = df.columns[-1]\nprint('Data ranges from %s to %s' % (data_start_date, data_end_date))",
"Data ranges from 2021-02-01 0:00 to 2021-07-31 0:00\n"
]
],
[
[
"### Train and Validation Series Partioning\n",
"_____no_output_____"
]
],
[
[
"######################## CHECK #########################\n# 기준시간이 hour이므로, 7일 예측한다면 7*24로 설정한다.\n\n\nfrom datetime import timedelta\n\npred_steps = 24*14+23\npred_length=timedelta(hours = pred_steps)\n\nfirst_day = pd.to_datetime(data_start_date)\nlast_day = pd.to_datetime(data_end_date)\n\nval_pred_start = last_day - pred_length + timedelta(1)\nval_pred_end = last_day\nprint(val_pred_start, val_pred_end)\n\ntrain_pred_start = val_pred_start - pred_length\ntrain_pred_end = val_pred_start - timedelta(days=1)\nprint(train_pred_start, train_pred_end)\n",
"2021-07-17 01:00:00 2021-07-31 00:00:00\n2021-07-02 02:00:00 2021-07-16 01:00:00\n"
],
[
"enc_length = train_pred_start - first_day\nprint(enc_length)\n\ntrain_enc_start = first_day\ntrain_enc_end = train_enc_start + enc_length - timedelta(1)\n\nval_enc_start = train_enc_start + pred_length\nval_enc_end = val_enc_start + enc_length - timedelta(1)\nprint(train_enc_start, train_enc_end)\nprint(val_enc_start, val_enc_end)",
"151 days 02:00:00\n2021-02-01 00:00:00 2021-07-01 02:00:00\n2021-02-15 23:00:00 2021-07-16 01:00:00\n"
],
[
"# 최종적으로 Val prediction 구간을 예측하게 된다.\n\nprint('Train encoding:', train_enc_start, '-', train_enc_end)\nprint('Train prediction:', train_pred_start, '-', train_pred_end, '\\n')\nprint('Val encoding:', val_enc_start, '-', val_enc_end)\nprint('Val prediction:', val_pred_start, '-', val_pred_end)\n\nprint('\\nEncoding interval:', enc_length.days)\nprint('Prediction interval:', pred_length.days)",
"Train encoding: 2021-02-01 00:00:00 - 2021-07-01 02:00:00\nTrain prediction: 2021-07-02 02:00:00 - 2021-07-16 01:00:00 \n\nVal encoding: 2021-02-15 23:00:00 - 2021-07-16 01:00:00\nVal prediction: 2021-07-17 01:00:00 - 2021-07-31 00:00:00\n\nEncoding interval: 151\nPrediction interval: 14\n"
]
],
[
[
"## Data Formatting",
"_____no_output_____"
]
],
[
[
"#np.log 1p 해준다.\n\ndate_to_index = pd.Series(index=pd.Index([pd.to_datetime(c) for c in df.columns[1:]]),\n data=[i for i in range(len(df.columns[1:]))])\n\nseries_array = df[df.columns[1:]].values.astype(np.float32)\nprint(series_array)\n\ndef get_time_block_series(series_array, date_to_index, start_date, end_date):\n inds = date_to_index[start_date:end_date]\n return series_array[:,inds]\n\ndef transform_series_encode(series_array):\n series_array = np.nan_to_num(series_array) # filling NaN with 0\n series_mean = series_array.mean(axis=1).reshape(-1,1)\n series_array = series_array - series_mean\n series_array = series_array.reshape((series_array.shape[0],series_array.shape[1], 1))\n\n return series_array, series_mean\n\ndef transform_series_decode(series_array, encode_series_mean):\n series_array = np.nan_to_num(series_array) # filling NaN with 0\n series_array = series_array - encode_series_mean\n series_array = series_array.reshape((series_array.shape[0],series_array.shape[1], 1)) \n \n return series_array",
"[[ 0.7290401 0.7290401 0.7290401 ... -0.12706481 -0.12706481\n -0.12706481]]\n"
],
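[
"# A minimal added sanity check (sketch): exercise the helpers above on a tiny toy array.\n# It assumes the intent is per-series mean-centring, and shows how a prediction could be\n# mapped back to the original scale by adding the stored series mean again.\ntoy = np.array([[1.0, 2.0, 3.0, np.nan]], dtype=np.float32)\ntoy_centred, toy_mean = transform_series_encode(toy)\nprint('centred shape:', toy_centred.shape)            # (1, 4, 1)\nprint('stored mean  :', toy_mean)                     # mean after NaN is replaced with 0\nprint('back to original scale:', toy_centred[..., 0] + toy_mean)",
"_____no_output_____"
],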
[
"# sample of series from train_enc_start to train_enc_end \nencoder_input_data = get_time_block_series(series_array, date_to_index, \n train_enc_start, train_enc_end)\n\n\nencoder_input_data, encode_series_mean = transform_series_encode(encoder_input_data)\n\n\n# sample of series from train_pred_start to train_pred_end \ndecoder_target_data = get_time_block_series(series_array, date_to_index, \n train_pred_start, train_pred_end)\n\ndecoder_target_data = transform_series_decode(decoder_target_data, encode_series_mean)\n\n\nencoder_input_val_data = get_time_block_series(series_array, date_to_index, val_enc_start, val_enc_end)\nencoder_input_val_data, encode_series_mean = transform_series_encode(encoder_input_val_data)\n\ndecoder_target_val_data = get_time_block_series(series_array, date_to_index, val_pred_start, val_pred_end)\ndecoder_target_val_data = transform_series_decode(decoder_target_val_data, encode_series_mean)\n\n#for d in encoder_input_data:\n# print(d.shape)\n\n#train_dataset = tf.data.Dataset.from_tensor_slices((encoder_input_data, decoder_target_data))\n#train_dataset = train_dataset.batch(54)\n\n#for d in train_dataset:\n# #print(f'features:{features_tensor} target:{target_tensor}')\n# print(\"-----\")\n# print(d)",
"_____no_output_____"
]
],
[
[
"### Transformer model",
"_____no_output_____"
]
],
[
[
"!pip install tensorflow_datasets\n\nimport tensorflow_datasets as tfds\nimport tensorflow as tf\n\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt",
"Requirement already satisfied: tensorflow_datasets in /usr/local/lib/python3.7/dist-packages (4.0.1)\nRequirement already satisfied: dill in /usr/local/lib/python3.7/dist-packages (from tensorflow_datasets) (0.3.4)\nRequirement already satisfied: attrs>=18.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow_datasets) (21.2.0)\nRequirement already satisfied: importlib-resources in /usr/local/lib/python3.7/dist-packages (from tensorflow_datasets) (5.2.2)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from tensorflow_datasets) (1.19.5)\nRequirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow_datasets) (3.17.3)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from tensorflow_datasets) (4.62.2)\nRequirement already satisfied: promise in /usr/local/lib/python3.7/dist-packages (from tensorflow_datasets) (2.3)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from tensorflow_datasets) (1.15.0)\nRequirement already satisfied: termcolor in /usr/local/lib/python3.7/dist-packages (from tensorflow_datasets) (1.1.0)\nRequirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from tensorflow_datasets) (0.16.0)\nRequirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow_datasets) (2.23.0)\nRequirement already satisfied: absl-py in /usr/local/lib/python3.7/dist-packages (from tensorflow_datasets) (0.12.0)\nRequirement already satisfied: dm-tree in /usr/local/lib/python3.7/dist-packages (from tensorflow_datasets) (0.1.6)\nRequirement already satisfied: tensorflow-metadata in /usr/local/lib/python3.7/dist-packages (from tensorflow_datasets) (1.2.0)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->tensorflow_datasets) (2021.5.30)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->tensorflow_datasets) (2.10)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->tensorflow_datasets) (1.24.3)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->tensorflow_datasets) (3.0.4)\nRequirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.7/dist-packages (from importlib-resources->tensorflow_datasets) (3.5.0)\nRequirement already satisfied: googleapis-common-protos<2,>=1.52.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-metadata->tensorflow_datasets) (1.53.0)\n"
],
[
"train_dataset = tf.data.Dataset.from_tensor_slices((encoder_input_data, decoder_target_data))\nval_dataset = tf.data.Dataset.from_tensor_slices((encoder_input_val_data, decoder_target_val_data))",
"_____no_output_____"
],
[
"### position\ndef get_angles(pos, i, d_model):\n angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))\n return pos * angle_rates\n\n\ndef positional_encoding(position, d_model):\n angle_rads = get_angles(np.arange(position)[:, np.newaxis],\n np.arange(d_model)[np.newaxis, :],\n d_model)\n \n # apply sin to even indices in the array; 2i\n sines = np.sin(angle_rads[:, 0::2])\n \n # apply cos to odd indices in the array; 2i+1\n cosines = np.cos(angle_rads[:, 1::2])\n \n pos_encoding = np.concatenate([sines, cosines], axis=-1)\n \n pos_encoding = pos_encoding[np.newaxis, ...]\n \n return tf.cast(pos_encoding, dtype=tf.float32)\n",
"_____no_output_____"
],
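[
"# A small added sanity check (sketch) of positional_encoding, mirroring the test cells\n# used for the other building blocks in this notebook.\npos_encoding = positional_encoding(50, 64)\nprint('positional encoding shape:', pos_encoding.shape)  # (1, 50, 64)\nplt.pcolormesh(pos_encoding[0].numpy(), cmap='RdBu')\nplt.xlabel('Depth')\nplt.ylabel('Position')\nplt.colorbar()\nplt.show()",
"_____no_output_____"
],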
[
"# Masking\ndef create_padding_mask(seq):\n seq = tf.cast(tf.math.equal(seq, 0), tf.float32)\n \n # add extra dimensions so that we can add the padding\n # to the attention logits.\n return seq[:, tf.newaxis, tf.newaxis, :] # (batch_size, 1, 1, seq_len)\n\nx = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])\nprint(create_padding_mask(x))\n\ndef create_look_ahead_mask(size):\n mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)\n return mask # (seq_len, seq_len)\n\nx = tf.random.uniform((1, 4))\ntemp = create_look_ahead_mask(x.shape[1])\nprint(temp)\n",
"tf.Tensor(\n[[[[0. 0. 1. 1. 0.]]]\n\n\n [[[0. 0. 0. 1. 1.]]]\n\n\n [[[1. 1. 1. 0. 0.]]]], shape=(3, 1, 1, 5), dtype=float32)\ntf.Tensor(\n[[0. 1. 1. 1.]\n [0. 0. 1. 1.]\n [0. 0. 0. 1.]\n [0. 0. 0. 0.]], shape=(4, 4), dtype=float32)\n"
],
[
"# Scaled dot product attention\ndef scaled_dot_product_attention(q, k, v, mask):\n \"\"\"Calculate the attention weights.\n q, k, v must have matching leading dimensions.\n The mask has different shapes depending on its type(padding or look ahead) \n but it must be broadcastable for addition.\n \n Args:\n q: query shape == (..., seq_len_q, depth)\n k: key shape == (..., seq_len_k, depth)\n v: value shape == (..., seq_len_v, depth)\n mask: Float tensor with shape broadcastable \n to (..., seq_len_q, seq_len_k). Defaults to None.\n \n Returns:\n output, attention_weights\n \"\"\"\n\n matmul_qk = tf.matmul(q, k, transpose_b=True) # (..., seq_len_q, seq_len_k)\n \n # scale matmul_qk\n dk = tf.cast(tf.shape(k)[-1], tf.float32)\n scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)\n\n # add the mask to the scaled tensor.\n if mask is not None:\n scaled_attention_logits += (mask * -1e9)\n\n # softmax is normalized on the last axis (seq_len_k) so that the scores\n # add up to 1.\n attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1) # (..., seq_len_q, seq_len_k)\n\n output = tf.matmul(attention_weights, v) # (..., seq_len_v, depth)\n\n return output, attention_weights",
"_____no_output_____"
],
[
"# scaled dot product attetion test\ndef print_out(q, k, v):\n temp_out, temp_attn = scaled_dot_product_attention(\n q, k, v, None)\n print ('Attention weights are:')\n print (temp_attn)\n print ('Output is:')\n print (temp_out)\n\nnp.set_printoptions(suppress=True)\n\ntemp_k = tf.constant([[10,0,0],\n [0,10,0],\n [0,0,10],\n [0,0,10]], dtype=tf.float32) # (4, 3)\n\ntemp_v = tf.constant([[ 1,0],\n [ 10,0],\n [ 100,5],\n [1000,6]], dtype=tf.float32) # (4, 3)\n\n# This `query` aligns with the second `key`,\n# so the second `value` is returned.\ntemp_q = tf.constant([[0, 10, 0]], dtype=tf.float32) # (1, 3)\nprint_out(temp_q, temp_k, temp_v)",
"Attention weights are:\ntf.Tensor([[0. 1. 0. 0.]], shape=(1, 4), dtype=float32)\nOutput is:\ntf.Tensor([[10. 0.]], shape=(1, 2), dtype=float32)\n"
],
[
"# Multi Head Attention\n\nclass MultiHeadAttention(tf.keras.layers.Layer):\n def __init__(self, d_model, num_heads):\n super(MultiHeadAttention, self).__init__()\n self.num_heads = num_heads\n self.d_model = d_model\n \n assert d_model % self.num_heads == 0\n \n self.depth = d_model // self.num_heads\n \n self.wq = tf.keras.layers.Dense(d_model)\n self.wk = tf.keras.layers.Dense(d_model)\n self.wv = tf.keras.layers.Dense(d_model)\n \n self.dense = tf.keras.layers.Dense(d_model)\n \n def split_heads(self, x, batch_size):\n x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))\n return tf.transpose(x, perm=[0, 2, 1, 3])\n \n def call(self, v, k, q, mask):\n batch_size = tf.shape(q)[0]\n \n q = self.wq(q)\n k = self.wk(k)\n v = self.wv(v) # (batch_size, seq_len, d_model)\n \n q = self.split_heads(q, batch_size)\n k = self.split_heads(k, batch_size)\n v = self.split_heads(v, batch_size) #(batch_size, num_head, seq_len_v, depth)\n # scaled_attention.shape == (batch_size, num_heads, seq_len_v, depth)\n # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)\n scaled_attention, attention_weights = scaled_dot_product_attention(\n q, k, v, mask)\n \n scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3]) # (batch_size, seq_len_v, num_heads, depth)\n\n concat_attention = tf.reshape(scaled_attention, \n (batch_size, -1, self.d_model)) # (batch_size, seq_len_v, d_model)\n\n output = self.dense(concat_attention) # (batch_size, seq_len_v, d_model)\n \n return output, attention_weights\n ",
"_____no_output_____"
],
[
"# multhead attention test\ntemp_mha = MultiHeadAttention(d_model=512, num_heads=8)\ny = tf.random.uniform((1, 60, 512)) # (batch_size, encoder_sequence, d_model)\nout, attn = temp_mha(y, k=y, q=y, mask=None)\nout.shape, attn.shape\n",
"_____no_output_____"
],
[
"# activation – the activation function of encoder/decoder intermediate layer, relu or gelu (default=relu).\n\n# Point wise feed forward network\ndef point_wise_feed_forward_network(d_model, dff):\n return tf.keras.Sequential([\n tf.keras.layers.Dense(dff, activation='relu'), # (batch_size, seq_len, dff)\n tf.keras.layers.Dense(d_model) # (batch_size, seq_len, d_model)\n ])\n",
"_____no_output_____"
],
[
"# Point wise feed forward network test\nsample_ffn = point_wise_feed_forward_network(512, 2048)\nsample_ffn(tf.random.uniform((64, 50, 512))).shape",
"_____no_output_____"
]
],
[
[
"### Encoder and Decoder",
"_____no_output_____"
]
],
[
[
"# Encoder Layer\nclass EncoderLayer(tf.keras.layers.Layer):\n def __init__(self, d_model, num_heads, dff, rate=0.1):\n super(EncoderLayer, self).__init__()\n \n self.mha = MultiHeadAttention(d_model, num_heads)\n self.ffn = point_wise_feed_forward_network(d_model, dff)\n \n self.layernorm1 = tf.keras.layers.BatchNormalization(epsilon=1e-6)\n self.layernorm2 = tf.keras.layers.BatchNormalization(epsilon=1e-6)\n \n self.dropout1 = tf.keras.layers.Dropout(rate)\n self.dropout2 = tf.keras.layers.Dropout(rate)\n \n def call(self, x, training, mask):\n attn_output, _ = self.mha(x, x, x, mask) # (batch_size, input_seq_len, d_model)\n attn_output = self.dropout1(attn_output, training=training)\n out1 = self.layernorm1(x + attn_output)\n \n ffn_output = self.ffn(out1) # (batch_size, input_seq_len, d_model)\n ffn_output = self.dropout2(ffn_output, training=training)\n out2 = self.layernorm2(out1 + ffn_output) # (batch_size, input_seq_len, d_model)\n \n return out2",
"_____no_output_____"
],
[
"# Encoder Layer Test\nsample_encoder_layer = EncoderLayer(512, 8, 2048)\n\nsample_encoder_layer_output = sample_encoder_layer(\n tf.random.uniform((64, 43, 512)), False, None)\n\nsample_encoder_layer_output.shape # (batch_size, input_seq_len, d_model)\n",
"_____no_output_____"
],
[
"# Decoder Layer\nclass DecoderLayer(tf.keras.layers.Layer):\n def __init__(self, d_model, num_heads, dff, rate=0.1):\n super(DecoderLayer, self).__init__()\n \n self.mha1 = MultiHeadAttention(d_model, num_heads)\n self.mha2 = MultiHeadAttention(d_model, num_heads)\n \n self.ffn = point_wise_feed_forward_network(d_model, dff)\n \n self.layernorm1 = tf.keras.layers.BatchNormalization(epsilon=1e-6)\n self.layernorm2 = tf.keras.layers.BatchNormalization(epsilon=1e-6)\n self.layernorm3 = tf.keras.layers.BatchNormalization(epsilon=1e-6)\n \n self.dropout1 = tf.keras.layers.Dropout(rate)\n self.dropout2 = tf.keras.layers.Dropout(rate)\n self.dropout3 = tf.keras.layers.Dropout(rate)\n \n def call(self, x, enc_output, training,\n look_ahead_mask, padding_mask):\n # enc_output.shape == (batch_size, input_seq_len, d_model)\n attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask)\n attn1 = self.dropout1(attn1, training=training)\n out1 = self.layernorm1(attn1 + x)\n \n attn2, attn_weights_block2 = self.mha2(\n enc_output, enc_output, out1, padding_mask)\n attn2 = self.dropout2(attn2, training=training)\n out2 = self.layernorm2(attn2 + out1)\n \n ffn_output = self.ffn(out2)\n ffn_output = self.dropout3(ffn_output, training=training)\n out3 = self.layernorm3(ffn_output + out2)\n \n return out3, attn_weights_block1, attn_weights_block2\n \n ",
"_____no_output_____"
],
[
"# Decoder layer test\nsample_decoder_layer = DecoderLayer(512, 8, 2048)\n\nsample_decoder_layer_output, _, _ = sample_decoder_layer(\n tf.random.uniform((64, 50, 512)), sample_encoder_layer_output, \n False, None, None)\n\nsample_decoder_layer_output.shape # (batch_size, target_seq_len, d_model)",
"_____no_output_____"
],
[
"# Encoder\n\nclass Encoder(tf.keras.layers.Layer):\n def __init__(self, num_layers, d_model, num_heads, dff, max_len=5000,\n rate=0.1):\n super(Encoder, self).__init__()\n self.d_model = d_model\n self.num_layers = num_layers\n self.embedding = tf.keras.layers.Dense(d_model, use_bias=False)\n self.pos_encoding = positional_encoding(max_len, self.d_model)\n \n self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate) \n for _ in range(num_layers)]\n \n self.dropout = tf.keras.layers.Dropout(rate)\n \n def call(self, x, training, mask):\n seq_len = tf.shape(x)[1]\n \n # adding embedding and position encoding\n x = self.embedding(x)\n # (batch_size, input_seq_len, d_model)\n x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))\n x += self.pos_encoding[:, :seq_len, :]\n \n x = self.dropout(x, training=training)\n \n for i in range(self.num_layers):\n x = self.enc_layers[i](x, training, mask)\n \n return x\n ",
"_____no_output_____"
],
[
"sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8, \n dff=2048)\n\nsample_encoder_output = sample_encoder(tf.random.uniform((64, 62,1)), \n training=False, mask=None)\n\nprint (sample_encoder_output.shape) # (batch_size, input_seq_len, d_model)",
"(64, 62, 512)\n"
],
[
"# Decoder\nclass Decoder(tf.keras.layers.Layer):\n def __init__(self, num_layers, d_model, num_heads, dff, max_len=5000, rate=0.1):\n super(Decoder, self).__init__()\n \n self.d_model = d_model\n self.num_layers = num_layers\n \n self.embedding = tf.keras.layers.Dense(d_model, use_bias=False)\n self.pos_encoding = positional_encoding(max_len, self.d_model)\n \n self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate) \n for _ in range(num_layers)]\n self.dropout = tf.keras.layers.Dropout(rate)\n \n def call(self, x, enc_output, training,\n look_ahead_mask, padding_mask):\n \n seq_len = tf.shape(x)[1]\n attention_weights = {}\n \n x = self.embedding(x)\n x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))\n x += self.pos_encoding[:, :seq_len, :]\n \n x = self.dropout(x, training=training)\n \n for i in range(self.num_layers):\n x, block1, block2 = self.dec_layers[i](x, enc_output, training,\n look_ahead_mask, padding_mask)\n attention_weights['decoder_layer{}_block1'.format(i+1)] = block1\n attention_weights['decoder_layer{}_block2'.format(i+1)] = block2\n \n \n \n return x, attention_weights\n \n",
"_____no_output_____"
],
[
"sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8, \n dff=2048)\n\noutput, attn = sample_decoder(tf.random.uniform((64, 26,3)), \n enc_output=sample_encoder_output, \n training=False, look_ahead_mask=None, \n padding_mask=None)\n\noutput.shape, attn['decoder_layer2_block2'].shape",
"_____no_output_____"
]
],
[
[
"### Transfomer for TS\n",
"_____no_output_____"
]
],
[
[
"class Transformer(tf.keras.Model):\n def __init__(self, num_layers, d_model, num_heads, dff, out_dim, max_len=5000,\n rate=0.1):\n super(Transformer, self).__init__()\n \n self.encoder = Encoder(num_layers, d_model, num_heads, dff,\n max_len, rate)\n self.decoder = Decoder(num_layers, d_model, num_heads, dff,\n max_len, rate)\n \n self.final_layer = tf.keras.layers.Dense(out_dim)\n \n def call(self, inp, tar, training, enc_padding_mask,\n look_ahead_mask, dec_padding_mask):\n enc_output = self.encoder(inp, training, enc_padding_mask)\n \n dec_output, attention_weights = self.decoder(\n tar, enc_output, training, look_ahead_mask, dec_padding_mask)\n final_output = self.final_layer(dec_output)\n \n return final_output, attention_weights\n \n ",
"_____no_output_____"
],
[
"sample_transformer = Transformer(\n num_layers=2, d_model=512, num_heads=8, dff=2048, \n out_dim=1)\n\ntemp_input = tf.random.uniform((64, 62,1))\ntemp_target = tf.random.uniform((64, 23,1))\n\nfn_out, _ = sample_transformer(temp_input, temp_target,training=False, \n enc_padding_mask=None, \n look_ahead_mask=None,\n dec_padding_mask=None)\n\nfn_out.shape",
"_____no_output_____"
],
[
"# Set hyperparameters\n# 트랜스포머 기준으로 바꿔볼까? \n# d_model – the number of expected features in the encoder/decoder inputs (default=512).\n# nhead – the number of heads in the multiheadattention models (default=8).\n# num_encoder_layers – the number of sub-encoder-layers in the encoder & decoder (default=6).\n# num_decoder_layers – the number of sub-decoder-layers in the decoder (default=6).\n# dff(dim_feedforward) – the dimension of the feedforward network model (default=2048).\n# dropout – the dropout value (default=0.1).\n\n\nnum_layers = 1\nd_model = 64\ndff = 256\nnum_heads = 4\n\ndropout_rate = 0.1\ninput_sequence_length = 4320-(24*14+23) # Length of the sequence used by the encoder\ntarget_sequence_length = 24*14+23 # Length of the sequence predicted by the decoder\nbatch_size = 2**11\n\ntrain_dataset = train_dataset.batch(batch_size)\nval_dataset = val_dataset.batch(batch_size)",
"_____no_output_____"
],
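[
"# Added sketch: peek at a single batch to confirm the tensor shapes the encoder and\n# decoder will receive (here there is only one series, so the batch holds one sample).\nfor inp, tar in train_dataset.take(1):\n    print('encoder input batch :', inp.shape)\n    print('decoder target batch:', tar.shape)",
"_____no_output_____"
],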
[
"# Optimizizer\nclass CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):\n def __init__(self, d_model, warmup_steps=4000):\n super(CustomSchedule, self).__init__()\n \n self.d_model = d_model\n self.d_model = tf.cast(self.d_model, tf.float32)\n\n self.warmup_steps = warmup_steps\n \n def __call__(self, step):\n arg1 = tf.math.rsqrt(step)\n arg2 = step * (self.warmup_steps ** -1.5)\n \n return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)",
"_____no_output_____"
],
[
"learning_rate = CustomSchedule(64)\n\noptimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98, \n epsilon=1e-9)",
"_____no_output_____"
],
[
"temp_learning_rate_schedule = CustomSchedule(512)\n\nplt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))\nplt.ylabel(\"Learning Rate\")\nplt.xlabel(\"Train Step\")",
"_____no_output_____"
],
[
"# Loss and metrics\nloss_object = tf.keras.losses.MeanAbsoluteError()",
"_____no_output_____"
],
[
"def loss_function(real, pred):\n mask = tf.math.logical_not(tf.math.equal(real, 0))\n loss_ = loss_object(real, pred)\n\n mask = tf.cast(mask, dtype=loss_.dtype)\n loss_ *= mask\n \n return tf.reduce_mean(loss_)\n",
"_____no_output_____"
],
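[
"# Added toy check (sketch) of loss_function. Because loss_object uses the default\n# reduction, loss_object(real, pred) is already a scalar mean over all entries, so\n# multiplying by the mask and averaging rescales that scalar by the fraction of\n# non-zero targets rather than computing a per-element masked mean.\nreal = tf.constant([[1.0, 0.0], [2.0, 3.0]])\npred = tf.constant([[1.5, 0.0], [2.0, 3.0]])\nprint('unmasked MAE :', loss_object(real, pred).numpy())    # 0.125\nprint('loss_function:', loss_function(real, pred).numpy())  # 0.125 * 3/4 = 0.09375",
"_____no_output_____"
],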
[
"train_loss = tf.keras.metrics.Mean(name='train_loss')\n#train_accuracy = tf.keras.metrics.mean_absolute_error()\n\ntest_loss = tf.keras.metrics.Mean(name='test_loss')",
"_____no_output_____"
],
[
"# Training and checkpoint\ntransformer = Transformer(num_layers, d_model, num_heads, dff,\n out_dim=1, rate=dropout_rate)",
"_____no_output_____"
],
[
"def create_masks(inp, tar):\n inp = inp.reshape()\n # Encoder padding mask\n enc_padding_mask = create_padding_mask(inp)\n \n # Used in the 2nd attention block in the decoder.\n # This padding mask is used to mask the encoder outputs.\n dec_padding_mask = create_padding_mask(inp)\n \n # Used in the 1st attention block in the decoder.\n # It is used to pad and mask future tokens in the input received by \n # the decoder.\n look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])\n dec_target_padding_mask = create_padding_mask(tar)\n combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)\n \n return enc_padding_mask, combined_mask, dec_padding_mask",
"_____no_output_____"
],
[
"# check point\ncheckpoint_path = \"./checkpoints/train\"\n\nckpt = tf.train.Checkpoint(transformer=transformer,\n optimizer=optimizer)\n\nckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)\n\n# if a checkpoint exists, restore the latest checkpoint.\nif ckpt_manager.latest_checkpoint:\n ckpt.restore(ckpt_manager.latest_checkpoint)\n print ('Latest checkpoint restored!!')\n ",
"Latest checkpoint restored!!\n"
],
[
"# EPOCHS\nEPOCHS=100",
"_____no_output_____"
],
[
"@tf.function\ndef train_step(inp, tar):\n last_inp = tf.expand_dims(inp[:,0,:],-1)\n tar_inp = tf.concat([last_inp, tar[:,:-1,:]], axis=1)\n tar_real = tar\n \n #enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)\n #print(enc_padding_mask)\n look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])\n \n with tf.GradientTape() as tape:\n predictions, _ = transformer(inp, tar_inp, \n True, \n None, \n look_ahead_mask, \n None)\n loss = loss_function(tar_real, predictions)\n\n gradients = tape.gradient(loss, transformer.trainable_variables) \n optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))\n \n train_loss(loss)\n #train_accuracy(tar_real, predictions)",
"_____no_output_____"
],
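[
"# Added toy illustration (sketch) of how train_step builds the decoder input via teacher\n# forcing: the decoder sees the encoder value at index 0 followed by the target shifted\n# right by one step, and is trained to predict the full target sequence.\ndummy_inp = tf.reshape(tf.range(6, dtype=tf.float32), (1, 6, 1))      # stand-in encoder input\ndummy_tar = tf.reshape(tf.constant([10.0, 11.0, 12.0]), (1, 3, 1))    # stand-in decoder target\nfirst_val = tf.expand_dims(dummy_inp[:, 0, :], -1)\ndummy_tar_inp = tf.concat([first_val, dummy_tar[:, :-1, :]], axis=1)\nprint('decoder input :', tf.squeeze(dummy_tar_inp).numpy())  # [ 0. 10. 11.]\nprint('decoder target:', tf.squeeze(dummy_tar).numpy())      # [10. 11. 12.]",
"_____no_output_____"
],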
[
"@tf.function\ndef test_step(inp, tar):\n #print(inp)\n #print(tar)\n last_inp = tf.expand_dims(inp[:,0,:],-1)\n #print(last_inp)\n tar_inp = tf.concat([last_inp, tar[:,:-1,:]], axis=1)\n tar_real = tar\n \n look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])\n \n with tf.GradientTape() as tape:\n predictions, _ = transformer(inp, tar_inp, \n False, \n None, \n look_ahead_mask, \n None)\n loss = loss_function(tar_real, predictions)\n\n gradients = tape.gradient(loss, transformer.trainable_variables) \n optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))\n\n test_loss(loss)",
"_____no_output_____"
],
[
"# Val_dataset을 돌려서 Val_prediction 구간을 예측한다\n\nfor epoch in range(EPOCHS):\n start = time.time()\n\n train_loss.reset_states()\n test_loss.reset_states()\n \n # validation:\n for (batch, (inp, tar)) in enumerate(val_dataset):\n #print(inp, tar)\n test_step(inp, tar)\n \n if (epoch + 1) % 5 == 0:\n ckpt_save_path = ckpt_manager.save()\n print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,\n ckpt_save_path))\n \n #print ('Epoch {} Train Loss {:.4f}'.format(epoch + 1, \n #train_loss.result())) \n #train_accuracy.result()))\n print ('Epoch {} Test Loss {:.4f}'.format(epoch + 1, \n test_loss.result())) \n print ('Time taken for 1 epoch: {} secs\\n'.format(time.time() - start))\n \n",
"Epoch 1 Test Loss 0.1398\nTime taken for 1 epoch: 1.842735767364502 secs\n\nEpoch 2 Test Loss 0.1200\nTime taken for 1 epoch: 0.027747154235839844 secs\n\nEpoch 3 Test Loss 0.1001\nTime taken for 1 epoch: 0.027561664581298828 secs\n\nEpoch 4 Test Loss 0.1258\nTime taken for 1 epoch: 0.02731609344482422 secs\n\nSaving checkpoint for epoch 5 at ./checkpoints/train/ckpt-101\nEpoch 5 Test Loss 0.0739\nTime taken for 1 epoch: 0.08694267272949219 secs\n\nEpoch 6 Test Loss 0.1171\nTime taken for 1 epoch: 0.026193857192993164 secs\n\nEpoch 7 Test Loss 0.0483\nTime taken for 1 epoch: 0.026002883911132812 secs\n\nEpoch 8 Test Loss 0.0651\nTime taken for 1 epoch: 0.026217937469482422 secs\n\nEpoch 9 Test Loss 0.0793\nTime taken for 1 epoch: 0.025745868682861328 secs\n\nSaving checkpoint for epoch 10 at ./checkpoints/train/ckpt-102\nEpoch 10 Test Loss 0.0604\nTime taken for 1 epoch: 0.0914757251739502 secs\n\nEpoch 11 Test Loss 0.0862\nTime taken for 1 epoch: 0.026814699172973633 secs\n\nEpoch 12 Test Loss 0.0658\nTime taken for 1 epoch: 0.026105642318725586 secs\n\nEpoch 13 Test Loss 0.0828\nTime taken for 1 epoch: 0.026221513748168945 secs\n\nEpoch 14 Test Loss 0.0628\nTime taken for 1 epoch: 0.026430130004882812 secs\n\nSaving checkpoint for epoch 15 at ./checkpoints/train/ckpt-103\nEpoch 15 Test Loss 0.0600\nTime taken for 1 epoch: 0.08582401275634766 secs\n\nEpoch 16 Test Loss 0.0623\nTime taken for 1 epoch: 0.026145219802856445 secs\n\nEpoch 17 Test Loss 0.0940\nTime taken for 1 epoch: 0.027568340301513672 secs\n\nEpoch 18 Test Loss 0.0790\nTime taken for 1 epoch: 0.025665998458862305 secs\n\nEpoch 19 Test Loss 0.0681\nTime taken for 1 epoch: 0.0259706974029541 secs\n\nSaving checkpoint for epoch 20 at ./checkpoints/train/ckpt-104\nEpoch 20 Test Loss 0.0575\nTime taken for 1 epoch: 0.08983325958251953 secs\n\nEpoch 21 Test Loss 0.0707\nTime taken for 1 epoch: 0.026350736618041992 secs\n\nEpoch 22 Test Loss 0.0359\nTime taken for 1 epoch: 0.026292800903320312 secs\n\nEpoch 23 Test Loss 0.0764\nTime taken for 1 epoch: 0.026610374450683594 secs\n\nEpoch 24 Test Loss 0.0632\nTime taken for 1 epoch: 0.026620864868164062 secs\n\nSaving checkpoint for epoch 25 at ./checkpoints/train/ckpt-105\nEpoch 25 Test Loss 0.0700\nTime taken for 1 epoch: 0.08509159088134766 secs\n\nEpoch 26 Test Loss 0.1103\nTime taken for 1 epoch: 0.02648472785949707 secs\n\nEpoch 27 Test Loss 0.0969\nTime taken for 1 epoch: 0.02586078643798828 secs\n\nEpoch 28 Test Loss 0.0813\nTime taken for 1 epoch: 0.02686285972595215 secs\n\nEpoch 29 Test Loss 0.0811\nTime taken for 1 epoch: 0.025514602661132812 secs\n\nSaving checkpoint for epoch 30 at ./checkpoints/train/ckpt-106\nEpoch 30 Test Loss 0.1238\nTime taken for 1 epoch: 0.08920955657958984 secs\n\nEpoch 31 Test Loss 0.1426\nTime taken for 1 epoch: 0.028003931045532227 secs\n\nEpoch 32 Test Loss 0.0593\nTime taken for 1 epoch: 0.026304006576538086 secs\n\nEpoch 33 Test Loss 0.1248\nTime taken for 1 epoch: 0.025287866592407227 secs\n\nEpoch 34 Test Loss 0.1497\nTime taken for 1 epoch: 0.026267528533935547 secs\n\nSaving checkpoint for epoch 35 at ./checkpoints/train/ckpt-107\nEpoch 35 Test Loss 0.1241\nTime taken for 1 epoch: 0.08585691452026367 secs\n\nEpoch 36 Test Loss 0.1363\nTime taken for 1 epoch: 0.02617192268371582 secs\n\nEpoch 37 Test Loss 0.0587\nTime taken for 1 epoch: 0.026009559631347656 secs\n\nEpoch 38 Test Loss 0.1278\nTime taken for 1 epoch: 0.027210474014282227 secs\n\nEpoch 39 Test Loss 0.1010\nTime taken for 1 epoch: 0.02635931968688965 secs\n\nSaving 
checkpoint for epoch 40 at ./checkpoints/train/ckpt-108\nEpoch 40 Test Loss 0.0911\nTime taken for 1 epoch: 0.09284353256225586 secs\n\nEpoch 41 Test Loss 0.0981\nTime taken for 1 epoch: 0.026525020599365234 secs\n\nEpoch 42 Test Loss 0.0963\nTime taken for 1 epoch: 0.026443958282470703 secs\n\nEpoch 43 Test Loss 0.1360\nTime taken for 1 epoch: 0.027270793914794922 secs\n\nEpoch 44 Test Loss 0.0667\nTime taken for 1 epoch: 0.026276826858520508 secs\n\nSaving checkpoint for epoch 45 at ./checkpoints/train/ckpt-109\nEpoch 45 Test Loss 0.0979\nTime taken for 1 epoch: 0.08502650260925293 secs\n\nEpoch 46 Test Loss 0.0887\nTime taken for 1 epoch: 0.026467561721801758 secs\n\nEpoch 47 Test Loss 0.1192\nTime taken for 1 epoch: 0.0264890193939209 secs\n\nEpoch 48 Test Loss 0.0713\nTime taken for 1 epoch: 0.02620387077331543 secs\n\nEpoch 49 Test Loss 0.1066\nTime taken for 1 epoch: 0.025141477584838867 secs\n\nSaving checkpoint for epoch 50 at ./checkpoints/train/ckpt-110\nEpoch 50 Test Loss 0.0798\nTime taken for 1 epoch: 0.08430886268615723 secs\n\nEpoch 51 Test Loss 0.1125\nTime taken for 1 epoch: 0.025931835174560547 secs\n\nEpoch 52 Test Loss 0.0890\nTime taken for 1 epoch: 0.027563810348510742 secs\n\nEpoch 53 Test Loss 0.0962\nTime taken for 1 epoch: 0.026294469833374023 secs\n\nEpoch 54 Test Loss 0.0710\nTime taken for 1 epoch: 0.026406526565551758 secs\n\nSaving checkpoint for epoch 55 at ./checkpoints/train/ckpt-111\nEpoch 55 Test Loss 0.1339\nTime taken for 1 epoch: 0.09092235565185547 secs\n\nEpoch 56 Test Loss 0.1193\nTime taken for 1 epoch: 0.026818513870239258 secs\n\nEpoch 57 Test Loss 0.0910\nTime taken for 1 epoch: 0.0264589786529541 secs\n\nEpoch 58 Test Loss 0.1176\nTime taken for 1 epoch: 0.02536630630493164 secs\n\nEpoch 59 Test Loss 0.0592\nTime taken for 1 epoch: 0.026142358779907227 secs\n\nSaving checkpoint for epoch 60 at ./checkpoints/train/ckpt-112\nEpoch 60 Test Loss 0.1154\nTime taken for 1 epoch: 0.08688640594482422 secs\n\nEpoch 61 Test Loss 0.0485\nTime taken for 1 epoch: 0.0268404483795166 secs\n\nEpoch 62 Test Loss 0.1069\nTime taken for 1 epoch: 0.026851892471313477 secs\n\nEpoch 63 Test Loss 0.0752\nTime taken for 1 epoch: 0.02724432945251465 secs\n\nEpoch 64 Test Loss 0.0898\nTime taken for 1 epoch: 0.02620100975036621 secs\n\nSaving checkpoint for epoch 65 at ./checkpoints/train/ckpt-113\nEpoch 65 Test Loss 0.0416\nTime taken for 1 epoch: 0.08811569213867188 secs\n\nEpoch 66 Test Loss 0.1140\nTime taken for 1 epoch: 0.026435136795043945 secs\n\nEpoch 67 Test Loss 0.0818\nTime taken for 1 epoch: 0.02679920196533203 secs\n\nEpoch 68 Test Loss 0.0813\nTime taken for 1 epoch: 0.027765989303588867 secs\n\nEpoch 69 Test Loss 0.0476\nTime taken for 1 epoch: 0.026018142700195312 secs\n\nSaving checkpoint for epoch 70 at ./checkpoints/train/ckpt-114\nEpoch 70 Test Loss 0.1262\nTime taken for 1 epoch: 0.09062671661376953 secs\n\nEpoch 71 Test Loss 0.0967\nTime taken for 1 epoch: 0.026581764221191406 secs\n\nEpoch 72 Test Loss 0.0819\nTime taken for 1 epoch: 0.02594447135925293 secs\n\nEpoch 73 Test Loss 0.0565\nTime taken for 1 epoch: 0.02571272850036621 secs\n\nEpoch 74 Test Loss 0.1281\nTime taken for 1 epoch: 0.0264894962310791 secs\n\nSaving checkpoint for epoch 75 at ./checkpoints/train/ckpt-115\nEpoch 75 Test Loss 0.1135\nTime taken for 1 epoch: 0.08496785163879395 secs\n\nEpoch 76 Test Loss 0.1029\nTime taken for 1 epoch: 0.026228666305541992 secs\n\nEpoch 77 Test Loss 0.1367\nTime taken for 1 epoch: 0.0260927677154541 secs\n\nEpoch 78 Test Loss 0.0513\nTime 
taken for 1 epoch: 0.026209354400634766 secs\n\nEpoch 79 Test Loss 0.1352\nTime taken for 1 epoch: 0.026179790496826172 secs\n\nSaving checkpoint for epoch 80 at ./checkpoints/train/ckpt-116\nEpoch 80 Test Loss 0.0994\nTime taken for 1 epoch: 0.09259533882141113 secs\n\nEpoch 81 Test Loss 0.1055\nTime taken for 1 epoch: 0.0271146297454834 secs\n\nEpoch 82 Test Loss 0.1139\nTime taken for 1 epoch: 0.026317119598388672 secs\n\nEpoch 83 Test Loss 0.0637\nTime taken for 1 epoch: 0.02617812156677246 secs\n\nEpoch 84 Test Loss 0.0854\nTime taken for 1 epoch: 0.02613091468811035 secs\n\nSaving checkpoint for epoch 85 at ./checkpoints/train/ckpt-117\nEpoch 85 Test Loss 0.0621\nTime taken for 1 epoch: 0.09361839294433594 secs\n\nEpoch 86 Test Loss 0.0522\nTime taken for 1 epoch: 0.02647686004638672 secs\n\nEpoch 87 Test Loss 0.0887\nTime taken for 1 epoch: 0.0256502628326416 secs\n\nEpoch 88 Test Loss 0.0488\nTime taken for 1 epoch: 0.025939464569091797 secs\n\nEpoch 89 Test Loss 0.0994\nTime taken for 1 epoch: 0.02606058120727539 secs\n\nSaving checkpoint for epoch 90 at ./checkpoints/train/ckpt-118\nEpoch 90 Test Loss 0.0458\nTime taken for 1 epoch: 0.0855109691619873 secs\n\nEpoch 91 Test Loss 0.1189\nTime taken for 1 epoch: 0.02615976333618164 secs\n\nEpoch 92 Test Loss 0.0787\nTime taken for 1 epoch: 0.026378154754638672 secs\n\nEpoch 93 Test Loss 0.1286\nTime taken for 1 epoch: 0.026322126388549805 secs\n\nEpoch 94 Test Loss 0.1309\nTime taken for 1 epoch: 0.026225566864013672 secs\n\nSaving checkpoint for epoch 95 at ./checkpoints/train/ckpt-119\nEpoch 95 Test Loss 0.0624\nTime taken for 1 epoch: 0.0847170352935791 secs\n\nEpoch 96 Test Loss 0.1210\nTime taken for 1 epoch: 0.026300668716430664 secs\n\nEpoch 97 Test Loss 0.0507\nTime taken for 1 epoch: 0.026369094848632812 secs\n\nEpoch 98 Test Loss 0.1306\nTime taken for 1 epoch: 0.02657151222229004 secs\n\nEpoch 99 Test Loss 0.1044\nTime taken for 1 epoch: 0.026262283325195312 secs\n\nSaving checkpoint for epoch 100 at ./checkpoints/train/ckpt-120\nEpoch 100 Test Loss 0.0941\nTime taken for 1 epoch: 0.09162497520446777 secs\n\n"
],
[
"MAX_LENGTH = target_sequence_length\n\ndef evaluate(inp):\n encoder_input = inp\n #print(encoder_input)\n output = tf.expand_dims(encoder_input[:,-1,:],-1)\n #print(output)\n \n for i in range(MAX_LENGTH):\n look_ahead_mask = create_look_ahead_mask(tf.shape(output)[1])\n predictions, attention_weights = transformer(encoder_input, \n output, \n False, \n None, \n look_ahead_mask, \n None)\n \n # select the last word from the seq_len dimension\n predictions = predictions[: ,-1:, :] # (batch_size, 1)\n #print(\"pred:\", predictions) #\n output = tf.concat([output, predictions], axis=1)\n #print(output)\n \n return tf.squeeze(output, axis=0), attention_weights",
"_____no_output_____"
],
[
"def mape(y_pred, y_true):\n return np.mean(np.abs((y_true - y_pred) / y_true)) * 100",
"_____no_output_____"
],
[
"def MAE(y_true, y_pred): \n return np.mean(np.abs((y_true - y_pred)))",
"_____no_output_____"
],
[
"def MSE(y_true, y_pred):\n return np.mean(np.square((y_true - y_pred)))",
"_____no_output_____"
],
[
"from sklearn.metrics import mean_squared_error\nfrom sklearn.metrics import mean_absolute_error",
"_____no_output_____"
],
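[
"# Illustrative sanity check (added for clarity; not part of the original notebook).\n# The hand-rolled mape/MSE/MAE above should agree with sklearn's metrics on a toy array;\n# note that mape takes (y_pred, y_true) while MSE/MAE take (y_true, y_pred).\nimport numpy as np\n\n_y_true = np.array([1.0, 2.0, 3.0, 4.0])\n_y_pred = np.array([1.1, 1.9, 3.2, 3.8])\n\nassert np.isclose(MSE(_y_true, _y_pred), mean_squared_error(_y_true, _y_pred))\nassert np.isclose(MAE(_y_true, _y_pred), mean_absolute_error(_y_true, _y_pred))\nprint(mape(_y_pred, _y_true))  # MAPE in percent",
"_____no_output_____"
],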
[
"encode_series = encoder_input_val_data[0:1,:,:] \n#print(encode_series)\n\npred_series, _ = evaluate(encode_series)\npred_series = np.array([pred_series])\nencode_series = encode_series.reshape(-1,1)\npred_series = pred_series.reshape(-1,1)[1:,:] \ntarget_series = decoder_target_val_data[0,:,:1].reshape(-1,1) \n\nencode_series_tail = np.concatenate([encode_series[-1000:],target_series[:1]])\nx_encode = encode_series_tail.shape[0]\n\nprint(mape(pred_series[:24*14+23-23]+0.02294, target_series+0.02294))\n\nprint(MSE(target_series+0.02294, pred_series[:24*14+23-23]+0.02294))\n\nprint(MAE(target_series+0.02294, pred_series[:24*14+23-23]+0.02294))",
"217.37654209136963\n0.1582443\n0.3236527\n"
],
[
"x_encode",
"_____no_output_____"
],
[
"# 실제와 가격차이가 어떻게 나는지 비교해서 보정한다.\n\nplt.figure(figsize=(20,6)) \n\nplt.plot(range(1,x_encode+1),encode_series_tail+0.02294)\nplt.plot(range(x_encode,x_encode+pred_steps-23),target_series+0.02294,color='orange')\nplt.plot(range(x_encode,x_encode+pred_steps-23),pred_series[:24*14+23-23]+0.02294,color='teal',linestyle='--')\n\nplt.title('Encoder Series Tail of Length %d, Target Series, and Predictions' % 1000)\nplt.legend(['Encoding Series','Target Series','Predictions'])",
"_____no_output_____"
]
],
[
[
"#Prophet",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom fbprophet import Prophet\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"df = pd.read_csv(\"/content/drive/MyDrive/Colab Notebooks/Data/삼성전자_6M_ST_Version1.csv\", encoding='CP949')\ndf = df.drop(df.columns[0], axis=1)\n\ndf.columns = [\"ds\",\"y\"]\ndf[\"ds\"] = pd.to_datetime(df[\"ds\"], dayfirst = True)\n\ndf.head()",
"_____no_output_____"
],
[
"m = Prophet()\nm.fit(df[:-24*14])",
"INFO:numexpr.utils:NumExpr defaulting to 4 threads.\nINFO:fbprophet:Disabling yearly seasonality. Run prophet with yearly_seasonality=True to override this.\n"
],
[
"future = m.make_future_dataframe(freq='H',periods=24*14)\nfuture.tail()",
"_____no_output_____"
],
[
"forecast = m.predict(future)\nforecast[['ds', 'yhat']].tail()",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,5))\n\nplt.plot(df[\"y\"][3320:], label=\"real\")\nplt.plot(range(4320-24*14,4320),forecast['yhat'][-24*14:], label=\"Prophet\")\nplt.plot(range(4320-24*14,4320),pred_series[:24*14+23-23]+0.02294, label=\"Transformer\")\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"#LSTMa",
"_____no_output_____"
]
],
[
[
"import numpy as np \nimport pandas as pd \nimport matplotlib.pyplot as plt\n\nfrom tqdm import trange\nimport random",
"_____no_output_____"
],
[
"data = pd.read_csv(\"/content/drive/MyDrive/Colab Notebooks/Data/삼성전자_6M_ST_Version1.csv\", encoding='CP949')\n\ndata.head()",
"_____no_output_____"
],
[
"from sklearn.preprocessing import MinMaxScaler\nmin_max_scaler = MinMaxScaler()\ndata[\"종가\"] = min_max_scaler.fit_transform(data[\"종가\"].to_numpy().reshape(-1,1))",
"_____no_output_____"
],
[
"train = data[:-24*14]\ntrain = train[\"종가\"].to_numpy()\n\ntest = data[-24*14:]\ntest = test[\"종가\"].to_numpy()",
"_____no_output_____"
],
[
"import torch\nimport torch.nn as nn\nfrom torch import optim\nimport torch.nn.functional as F\n\ndevice = torch.device(\"cuda\", index=0)",
"_____no_output_____"
],
[
"class lstm_encoder(nn.Module):\n def __init__(self, input_size, hidden_size, num_layers = 1):\n super(lstm_encoder, self).__init__()\n self.input_size = input_size\n self.hidden_size = hidden_size\n self.num_layers = num_layers\n\n self.lstm = nn.LSTM(input_size = input_size, hidden_size = hidden_size, num_layers = num_layers, batch_first=True)\n\n def forward(self, x_input):\n lstm_out, self.hidden = self.lstm(x_input)\n return lstm_out, self.hidden",
"_____no_output_____"
],
[
"class lstm_decoder(nn.Module):\n def __init__(self, input_size, hidden_size, num_layers = 1):\n super(lstm_decoder, self).__init__()\n self.input_size = input_size\n self.hidden_size = hidden_size\n self.num_layers = num_layers\n\n self.lstm = nn.LSTM(input_size = input_size, hidden_size = hidden_size,num_layers = num_layers, batch_first=True)\n self.linear = nn.Linear(hidden_size, input_size) \n\n def forward(self, x_input, encoder_hidden_states):\n lstm_out, self.hidden = self.lstm(x_input.unsqueeze(-1), encoder_hidden_states)\n output = self.linear(lstm_out)\n \n return output, self.hidden",
"_____no_output_____"
],
[
"class lstm_encoder_decoder(nn.Module):\n def __init__(self, input_size, hidden_size):\n super(lstm_encoder_decoder, self).__init__()\n\n self.input_size = input_size\n self.hidden_size = hidden_size\n\n self.encoder = lstm_encoder(input_size = input_size, hidden_size = hidden_size)\n self.decoder = lstm_decoder(input_size = input_size, hidden_size = hidden_size)\n\n def forward(self, inputs, targets, target_len, teacher_forcing_ratio):\n batch_size = inputs.shape[0]\n input_size = inputs.shape[2]\n\n outputs = torch.zeros(batch_size, target_len, input_size)\n\n _, hidden = self.encoder(inputs)\n decoder_input = inputs[:,-1, :]\n \n for t in range(target_len): \n out, hidden = self.decoder(decoder_input, hidden)\n out = out.squeeze(1)\n if random.random() < teacher_forcing_ratio:\n decoder_input = targets[:, t, :]\n else:\n decoder_input = out\n outputs[:,t,:] = out\n\n return outputs\n\n def predict(self, inputs, target_len):\n inputs = inputs.unsqueeze(0)\n self.eval()\n batch_size = inputs.shape[0]\n input_size = inputs.shape[2]\n outputs = torch.zeros(batch_size, target_len, input_size)\n _, hidden = self.encoder(inputs)\n decoder_input = inputs[:,-1, :]\n for t in range(target_len): \n out, hidden = self.decoder(decoder_input, hidden)\n out = out.squeeze(1)\n decoder_input = out\n outputs[:,t,:] = out\n return outputs.detach().numpy()[0,:,0]",
"_____no_output_____"
],
[
"from torch.utils.data import DataLoader, Dataset\n\nclass windowDataset(Dataset):\n def __init__(self, y, input_window=80, output_window=20, stride=5):\n #총 데이터의 개수\n L = y.shape[0]\n #stride씩 움직일 때 생기는 총 sample의 개수\n num_samples = (L - input_window - output_window) // stride + 1\n\n #input과 output\n X = np.zeros([input_window, num_samples])\n Y = np.zeros([output_window, num_samples])\n\n for i in np.arange(num_samples):\n start_x = stride*i\n end_x = start_x + input_window\n X[:,i] = y[start_x:end_x]\n\n start_y = stride*i + input_window\n end_y = start_y + output_window\n Y[:,i] = y[start_y:end_y]\n\n X = X.reshape(X.shape[0], X.shape[1], 1).transpose((1,0,2))\n Y = Y.reshape(Y.shape[0], Y.shape[1], 1).transpose((1,0,2))\n self.x = X\n self.y = Y\n \n self.len = len(X)\n def __getitem__(self, i):\n return self.x[i], self.y[i]\n def __len__(self):\n return self.len",
"_____no_output_____"
],
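[
"# Illustrative shape check (added for clarity; not part of the original notebook).\n# windowDataset turns a 1-D series into (input_window, 1) inputs and (output_window, 1)\n# targets using a sliding window with the given stride.\nimport numpy as np\n\n_toy_series = np.arange(100, dtype=float)\n_toy_ds = windowDataset(_toy_series, input_window=24, output_window=12, stride=1)\n_x0, _y0 = _toy_ds[0]\nprint(len(_toy_ds), _x0.shape, _y0.shape)  # expected: 65 (24, 1) (12, 1)",
"_____no_output_____"
],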
[
"iw = 24*28\now = 24*14\n\ntrain_dataset = windowDataset(train, input_window=iw, output_window=ow, stride=1)\ntrain_loader = DataLoader(train_dataset, batch_size=64)\n# y_train_loader = DataLoader(y_train, batch_size=5)",
"_____no_output_____"
],
[
"model = lstm_encoder_decoder(input_size=1, hidden_size=16).to(device)\n# model.train_model(X_train.to(device), y_train.to(device), n_epochs=100, target_len=ow, batch_size=5, training_bprediction=\"mixed_teacher_forcing\", teacher_forcing_ratio=0.6, learning_rate=0.01, dynamic_tf=False)",
"_____no_output_____"
],
[
"#5000으로 할 경우 시간도 오래걸리고 에러도 커서 100으로 줄인다.\n\nlearning_rate=0.01\nepoch = 100\noptimizer = optim.Adam(model.parameters(), lr = learning_rate)\ncriterion = nn.MSELoss()",
"_____no_output_____"
],
[
"from tqdm import tqdm\n\nmodel.train()\nwith tqdm(range(epoch)) as tr:\n for i in tr:\n total_loss = 0.0\n for x,y in train_loader:\n optimizer.zero_grad()\n x = x.to(device).float()\n y = y.to(device).float()\n output = model(x, y, ow, 0.6).to(device)\n loss = criterion(output, y)\n loss.backward()\n optimizer.step()\n total_loss += loss.cpu().item()\n tr.set_postfix(loss=\"{0:.5f}\".format(total_loss/len(train_loader)))",
"100%|██████████| 100/100 [18:04<00:00, 10.84s/it, loss=0.00125]\n"
],
[
"predict = model.predict(torch.tensor(train_dataset[0][0]).to(device).float(), target_len=ow)\nreal = train_dataset[0][1]",
"_____no_output_____"
],
[
"predict = model.predict(torch.tensor(train[-24*14*2:]).reshape(-1,1).to(device).float(), target_len=ow)\nreal = data[\"종가\"].to_numpy()\n\npredict = min_max_scaler.inverse_transform(predict.reshape(-1,1))\nreal = min_max_scaler.inverse_transform(real.reshape(-1,1))",
"_____no_output_____"
],
[
"real.shape",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,5))\nplt.plot(range(3319,4320), real[3320:], label=\"real\")\nplt.plot(range(4320-24*14,4320), predict[-24*14:], label=\"LSTMa\")\nplt.plot(range(4320-24*14,4320),forecast['yhat'][-24*14:], label=\"Prophet\")\nplt.plot(range(4320-24*14,4320),pred_series[:24*14+23-23]+0.02294, label=\"Transformer\")\n\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"#Informer",
"_____no_output_____"
]
],
[
[
"!git clone https://github.com/zhouhaoyi/Informer2020.git",
"Cloning into 'Informer2020'...\nremote: Enumerating objects: 535, done.\u001b[K\nremote: Total 535 (delta 0), reused 0 (delta 0), pack-reused 535\u001b[K\nReceiving objects: 100% (535/535), 6.47 MiB | 23.75 MiB/s, done.\nResolving deltas: 100% (306/306), done.\n"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"
],
[
"import sys\nif not 'Informer2020' in sys.path:\n sys.path += ['Informer2020']",
"_____no_output_____"
],
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.preprocessing import MinMaxScaler\n\nfrom datetime import timedelta\nimport torch\nfrom torch import nn\nfrom torch import optim\nfrom torch.utils.data import DataLoader, Dataset\n\nfrom tqdm import tqdm\nfrom models.model import Informer",
"_____no_output_____"
],
[
"class StandardScaler():\n def __init__(self):\n self.mean = 0.\n self.std = 1.\n \n def fit(self, data):\n self.mean = data.mean(0)\n self.std = data.std(0)\n\n def transform(self, data):\n mean = torch.from_numpy(self.mean).type_as(data).to(data.device) if torch.is_tensor(data) else self.mean\n std = torch.from_numpy(self.std).type_as(data).to(data.device) if torch.is_tensor(data) else self.std\n return (data - mean) / std\n\n def inverse_transform(self, data):\n mean = torch.from_numpy(self.mean).type_as(data).to(data.device) if torch.is_tensor(data) else self.mean\n std = torch.from_numpy(self.std).type_as(data).to(data.device) if torch.is_tensor(data) else self.std\n return (data * std) + mean\n \n\ndef time_features(dates, freq='h'):\n dates['month'] = dates.date.apply(lambda row:row.month,1)\n dates['day'] = dates.date.apply(lambda row:row.day,1)\n dates['weekday'] = dates.date.apply(lambda row:row.weekday(),1)\n dates['hour'] = dates.date.apply(lambda row:row.hour,1)\n dates['minute'] = dates.date.apply(lambda row:row.minute,1)\n dates['minute'] = dates.minute.map(lambda x:x//15)\n freq_map = {\n 'y':[],'m':['month'],'w':['month'],'d':['month','day','weekday'],\n 'b':['month','day','weekday'],'h':['month','day','weekday','hour'],\n 't':['month','day','weekday','hour','minute'],\n }\n return dates[freq_map[freq.lower()]].values\n\ndef _process_one_batch(batch_x, batch_y, batch_x_mark, batch_y_mark):\n batch_x = batch_x.float().to(device)\n batch_y = batch_y.float()\n batch_x_mark = batch_x_mark.float().to(device)\n batch_y_mark = batch_y_mark.float().to(device)\n dec_inp = torch.zeros([batch_y.shape[0], pred_len, batch_y.shape[-1]]).float()\n dec_inp = torch.cat([batch_y[:,:label_len,:], dec_inp], dim=1).float().to(device)\n outputs = model(batch_x, batch_x_mark, dec_inp, batch_y_mark)\n batch_y = batch_y[:,-pred_len:,0:].to(device)\n return outputs, batch_y",
"_____no_output_____"
],
[
"class Dataset_Pred(Dataset):\n def __init__(self, dataframe, size=None, scale=True):\n self.seq_len = size[0]\n self.label_len = size[1]\n self.pred_len = size[2]\n self.dataframe = dataframe\n \n self.scale = scale\n self.__read_data__()\n\n def __read_data__(self):\n self.scaler = StandardScaler()\n df_raw = self.dataframe\n df_raw[\"date\"] = pd.to_datetime(df_raw[\"date\"])\n\n delta = df_raw[\"date\"].iloc[1] - df_raw[\"date\"].iloc[0]\n if delta>=timedelta(hours=1):\n self.freq='h'\n else:\n self.freq='t'\n\n \n\n border1 = 0\n border2 = len(df_raw)\n cols_data = df_raw.columns[1:]\n df_data = df_raw[cols_data]\n\n\n if self.scale:\n self.scaler.fit(df_data.values)\n data = self.scaler.transform(df_data.values)\n else:\n data = df_data.values\n \n tmp_stamp = df_raw[['date']][border1:border2]\n tmp_stamp['date'] = pd.to_datetime(tmp_stamp.date)\n pred_dates = pd.date_range(tmp_stamp.date.values[-1], periods=self.pred_len+1, freq=self.freq)\n \n df_stamp = pd.DataFrame(columns = ['date'])\n df_stamp.date = list(tmp_stamp.date.values) + list(pred_dates[1:])\n data_stamp = time_features(df_stamp, freq=self.freq)\n\n self.data_x = data[border1:border2]\n self.data_y = data[border1:border2]\n self.data_stamp = data_stamp\n \n def __getitem__(self, index):\n s_begin = index\n s_end = s_begin + self.seq_len\n r_begin = s_end - self.label_len\n r_end = r_begin + self.label_len + self.pred_len\n\n seq_x = self.data_x[s_begin:s_end]\n seq_y = self.data_y[r_begin:r_end]\n seq_x_mark = self.data_stamp[s_begin:s_end]\n seq_y_mark = self.data_stamp[r_begin:r_end]\n return seq_x, seq_y, seq_x_mark, seq_y_mark\n\n def __len__(self):\n return len(self.data_x) - self.seq_len- self.pred_len + 1",
"_____no_output_____"
],
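[
"# Illustrative usage sketch (added for clarity; not part of the original notebook).\n# Dataset_Pred expects a dataframe whose first column is 'date'; it standardises the\n# remaining columns and yields (seq_x, seq_y, seq_x_mark, seq_y_mark) windows.\nimport numpy as np\nimport pandas as pd\n\n_toy_df = pd.DataFrame({\n    \"date\": pd.date_range(\"2021-01-01\", periods=200, freq=\"H\"),\n    \"value\": np.sin(np.arange(200) / 10.0),\n})\n_toy_pred_ds = Dataset_Pred(dataframe=_toy_df, scale=True, size=(48, 24, 24))\n_sx, _sy, _sxm, _sym = _toy_pred_ds[0]\nprint(len(_toy_pred_ds), _sx.shape, _sy.shape, _sxm.shape, _sym.shape)\n# expected: 129 (48, 1) (48, 1) (48, 4) (48, 4)",
"_____no_output_____"
],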
[
"data = pd.read_csv(\"/content/drive/MyDrive/Colab Notebooks/Data/삼성전자_6M_ST_Version1.csv\", encoding='CP949')\n\ndata.head()",
"_____no_output_____"
],
[
"data[\"date\"] = data[\"날짜\"]\ndata[\"date\"] = pd.to_datetime(data[\"date\"], dayfirst = True)\ndata[\"value\"] = data[\"종가\"]\n\nmin_max_scaler = MinMaxScaler()\ndata[\"value\"] = min_max_scaler.fit_transform(data[\"value\"].to_numpy().reshape(-1,1)).reshape(-1)\ndata = data[[\"date\", \"value\"]]\n\ndata_train = data.iloc[:-24*14].copy()",
"_____no_output_____"
],
[
"pred_len = 24*14\n\nseq_len = pred_len#인풋 크기\nlabel_len = pred_len#디코더에서 참고할 크기\npred_len = pred_len#예측할 크기\n\nbatch_size = 10\nshuffle_flag = True\nnum_workers = 0\ndrop_last = True\n\n\n\ndataset = Dataset_Pred(dataframe=data_train ,scale=True, size = (seq_len, label_len,pred_len))\ndata_loader = DataLoader(dataset,batch_size=batch_size,shuffle=shuffle_flag,num_workers=num_workers,drop_last=drop_last)",
"_____no_output_____"
],
[
"enc_in = 1\ndec_in = 1\nc_out = 1\ndevice = torch.device(\"cuda:0\")\n\nmodel = Informer(enc_in, dec_in, c_out, seq_len, label_len, pred_len, device = device).to(device)\nlearning_rate = 1e-4\ncriterion = nn.MSELoss()\n\nmodel_optim = optim.Adam(model.parameters(), lr=learning_rate)",
"_____no_output_____"
],
[
"# Informer는 error를 100하는게 시간도 덜 걸리고 에러도 적다.\n\ntrain_epochs = 100\nmodel.train()\nprogress = tqdm(range(train_epochs))\nfor epoch in progress:\n train_loss = []\n for i, (batch_x,batch_y,batch_x_mark,batch_y_mark) in enumerate(data_loader):\n model_optim.zero_grad()\n pred, true = _process_one_batch(batch_x, batch_y, batch_x_mark, batch_y_mark)\n loss = criterion(pred, true)\n train_loss.append(loss.item())\n loss.backward()\n model_optim.step()\n train_loss = np.average(train_loss)\n progress.set_description(\"loss: {:0.6f}\".format(train_loss))",
" 0%| | 0/100 [00:00<?, ?it/s]/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:652: UserWarning:\n\nNamed tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)\n\nloss: 0.043161: 100%|██████████| 100/100 [59:48<00:00, 35.88s/it]\n"
],
[
"import time\nnow = time.time()\nscaler = dataset.scaler\ndf_test = data_train.copy()\ndf_test[\"value\"] = scaler.transform(df_test[\"value\"])\ndf_test[\"date\"] = pd.to_datetime(df_test[\"date\"].values)\n\ndelta = df_test[\"date\"][1] - df_test[\"date\"][0]\nfor i in range(pred_len):\n df_test = df_test.append({\"date\":df_test[\"date\"].iloc[-1]+delta}, ignore_index=True)\ndf_test = df_test.fillna(0)\n\n\ndf_test_x = df_test.iloc[-seq_len-pred_len:-pred_len].copy()\ndf_test_y = df_test.iloc[-label_len-pred_len:].copy()\n\ndf_test_numpy = df_test.to_numpy()[:,1:].astype(\"float\")\ntest_time_x = time_features(df_test_x, freq=dataset.freq) #인풋 타임 스템프\ntest_data_x = df_test_numpy[-seq_len-pred_len:-pred_len] #인풋 데이터\n\n\ntest_time_y = time_features(df_test_y, freq=dataset.freq) #아웃풋 타임스템프\ntest_data_y =df_test_numpy[-label_len-pred_len:]\ntest_data_y[-pred_len:] = np.zeros_like(test_data_y[-pred_len:]) #예측하는 부분을 0으로 채워준다.\n\n\n\ntest_time_x = test_time_x\ntest_time_y = test_time_y\ntest_data_y = test_data_y.astype(np.float64)\ntest_data_x = test_data_x.astype(np.float64)\n\n_test = [(test_data_x,test_data_y,test_time_x,test_time_y)]\n_test_loader = DataLoader(_test,batch_size=1,shuffle=False)\n\npreds = []\n\nwith torch.no_grad():\n for i, (batch_x,batch_y,batch_x_mark,batch_y_mark) in enumerate(_test_loader):\n \n batch_x = batch_x.float().to(device)\n batch_y = batch_y.float().to(device)\n\n batch_x_mark = batch_x_mark.float().to(device)\n batch_y_mark = batch_y_mark.float().to(device)\n\n outputs = model(batch_x, batch_x_mark, batch_y, batch_y_mark)\n preds = outputs.detach().cpu().numpy()\n\npreds = scaler.inverse_transform(preds[0])\n\ndf_test.iloc[-pred_len:, 1:] = preds\nprint(time.time() - now)",
"0.7257692813873291\n"
],
[
"import matplotlib.pyplot as plt\n\nreal = data[\"value\"].to_numpy()\nresult = df_test[\"value\"].iloc[-24*14:].to_numpy()\n\nreal = min_max_scaler.inverse_transform(real.reshape(-1,1)).reshape(-1)\nresult = min_max_scaler.inverse_transform(result.reshape(-1,1)).reshape(-1)\n\nplt.figure(figsize=(20,5))\nplt.plot(range(3319,4320),real[3320:], label=\"real\")\nplt.plot(range(4320-24*14,4320),result, label=\"Informer\")\nplt.plot(range(4320-24*14,4320), predict[-24*14:], label=\"LSTMa\")\nplt.plot(range(4320-24*14,4320),forecast['yhat'][-24*14:], label=\"Prophet\")\nplt.plot(range(4320-24*14,4320),pred_series[:24*14+23-23]+0.02294, label=\"Transformer\")\n\nplt.legend()\nplt.show()\n",
"_____no_output_____"
]
],
[
[
"#ARIMA",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"df = pd.read_csv(\"/content/drive/MyDrive/Colab Notebooks/Data/삼성전자_6M_ST_Version1.csv\", encoding='CP949')\ndf = df.drop(df.columns[0], axis=1)\ndf.columns = [\"ds\",\"y\"]\ndf.head()",
"_____no_output_____"
],
[
"df_train = df.iloc[:-24*14]",
"_____no_output_____"
],
[
"from statsmodels.tsa.seasonal import seasonal_decompose",
"/usr/local/lib/python3.7/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning:\n\npandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n\n"
],
[
"import statsmodels.api as sm\nfig = plt.figure(figsize=(20,8))\nax1 = fig.add_subplot(211)\nfig = sm.graphics.tsa.plot_acf(df_train[\"y\"], lags=20, ax=ax1)\n\nfig = plt.figure(figsize=(20,8))\nax1 = fig.add_subplot(212)\nfig = sm.graphics.tsa.plot_pacf(df_train[\"y\"], lags=20, ax=ax1)",
"_____no_output_____"
],
[
"from statsmodels.tsa.arima_model import ARIMA\nfrom statsmodels.tsa.statespace.sarimax import SARIMAX\nimport itertools\nfrom tqdm import tqdm",
"_____no_output_____"
],
[
"p = range(0,3)\nd = range(1,2)\nq = range(0,6)\nm = 24\n\npdq = list(itertools.product(p,d,q))\nseasonal_pdq = [(x[0],x[1], x[2], m) for x in list(itertools.product(p,d,q))]\n\naic = []\nparams = []\n\nwith tqdm(total = len(pdq) * len(seasonal_pdq)) as pg:\n for i in pdq:\n for j in seasonal_pdq:\n pg.update(1)\n try:\n model = SARIMAX(df_train[\"y\"], order=(i), season_order = (j))\n model_fit = model.fit()\n # print(\"SARIMA:{}{}, AIC:{}\".format(i,j, round(model_fit.aic,2)))\n aic.append(round(model_fit.aic,2))\n params.append((i,j))\n except:\n continue",
" 50%|█████ | 163/324 [01:36<05:35, 2.08s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 51%|█████ | 164/324 [01:38<06:00, 2.26s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 51%|█████ | 165/324 [01:41<06:15, 2.36s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 51%|█████ | 166/324 [01:44<06:26, 2.45s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 52%|█████▏ | 167/324 [01:46<06:34, 2.51s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 52%|█████▏ | 168/324 [01:49<06:37, 2.55s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 52%|█████▏ | 169/324 [01:52<06:37, 2.57s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 52%|█████▏ | 170/324 [01:54<06:37, 2.58s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 53%|█████▎ | 171/324 [01:57<06:37, 2.60s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 53%|█████▎ | 172/324 [02:00<06:38, 2.62s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 53%|█████▎ | 173/324 [02:02<06:37, 2.63s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 54%|█████▎ | 174/324 [02:05<06:34, 2.63s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 54%|█████▍ | 175/324 [02:07<06:32, 2.63s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 54%|█████▍ | 176/324 [02:10<06:27, 2.62s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 55%|█████▍ | 177/324 [02:13<06:24, 2.61s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 55%|█████▍ | 178/324 [02:15<06:23, 2.63s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. 
Check mle_retvals\n\n 55%|█████▌ | 179/324 [02:18<06:21, 2.63s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 56%|█████▌ | 180/324 [02:21<06:18, 2.63s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 56%|█████▌ | 181/324 [02:23<06:16, 2.63s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 56%|█████▌ | 182/324 [02:27<07:09, 3.02s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 56%|█████▋ | 183/324 [02:31<07:44, 3.30s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 57%|█████▋ | 184/324 [02:35<08:11, 3.51s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 57%|█████▋ | 185/324 [02:39<08:29, 3.67s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 57%|█████▋ | 186/324 [02:43<08:36, 3.74s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 58%|█████▊ | 187/324 [02:47<08:39, 3.79s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 58%|█████▊ | 188/324 [02:51<08:41, 3.84s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 58%|█████▊ | 189/324 [02:55<08:40, 3.85s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 59%|█████▊ | 190/324 [02:59<08:40, 3.89s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 59%|█████▉ | 191/324 [03:03<08:38, 3.90s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 59%|█████▉ | 192/324 [03:07<08:40, 3.95s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 60%|█████▉ | 193/324 [03:11<08:40, 3.97s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 60%|█████▉ | 194/324 [03:15<08:35, 3.97s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. 
Check mle_retvals\n\n 60%|██████ | 195/324 [03:19<08:31, 3.96s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 60%|██████ | 196/324 [03:23<08:27, 3.96s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 61%|██████ | 197/324 [03:27<08:22, 3.96s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 61%|██████ | 198/324 [03:31<08:20, 3.97s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 61%|██████▏ | 199/324 [03:35<08:18, 3.99s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 62%|██████▏ | 200/324 [03:39<08:44, 4.23s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 62%|██████▏ | 201/324 [03:44<08:58, 4.38s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 62%|██████▏ | 202/324 [03:49<09:04, 4.46s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 63%|██████▎ | 203/324 [03:53<09:08, 4.53s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 63%|██████▎ | 204/324 [03:58<09:08, 4.57s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 63%|██████▎ | 205/324 [04:03<09:10, 4.62s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 64%|██████▎ | 206/324 [04:08<09:08, 4.65s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 64%|██████▍ | 207/324 [04:12<09:09, 4.70s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 64%|██████▍ | 208/324 [04:17<09:07, 4.72s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 65%|██████▍ | 209/324 [04:22<09:03, 4.72s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 65%|██████▍ | 210/324 [04:27<08:59, 4.74s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. 
Check mle_retvals\n\n 65%|██████▌ | 211/324 [04:31<08:55, 4.74s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 65%|██████▌ | 212/324 [04:36<08:49, 4.73s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 66%|██████▌ | 213/324 [04:41<08:42, 4.71s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 66%|██████▌ | 214/324 [04:46<08:39, 4.72s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 66%|██████▋ | 215/324 [04:50<08:33, 4.71s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 67%|██████▋ | 216/324 [04:55<08:26, 4.69s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 78%|███████▊ | 253/324 [05:07<00:23, 3.03it/s]/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:949: UserWarning:\n\nNon-stationary starting autoregressive parameters found. Using zeros as starting parameters.\n\n/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:961: UserWarning:\n\nNon-invertible starting MA parameters found. Using zeros as starting parameters.\n\n/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 78%|███████▊ | 254/324 [05:09<00:45, 1.53it/s]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 79%|███████▊ | 255/324 [05:11<01:10, 1.03s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 79%|███████▉ | 256/324 [05:13<01:39, 1.46s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 79%|███████▉ | 257/324 [05:16<01:57, 1.75s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 80%|███████▉ | 258/324 [05:18<02:10, 1.98s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 80%|███████▉ | 259/324 [05:20<02:17, 2.12s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 80%|████████ | 260/324 [05:23<02:22, 2.23s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. 
Check mle_retvals\n\n 81%|████████ | 261/324 [05:25<02:25, 2.31s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 81%|████████ | 262/324 [05:28<02:26, 2.36s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 81%|████████ | 263/324 [05:30<02:12, 2.17s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 81%|████████▏ | 264/324 [05:31<01:59, 1.99s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 82%|████████▏ | 265/324 [05:33<01:50, 1.87s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 82%|████████▏ | 266/324 [05:34<01:39, 1.72s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 82%|████████▏ | 267/324 [05:36<01:32, 1.63s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 83%|████████▎ | 268/324 [05:37<01:27, 1.57s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 83%|████████▎ | 269/324 [05:38<01:23, 1.51s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 83%|████████▎ | 270/324 [05:40<01:19, 1.47s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 84%|████████▎ | 271/324 [05:41<01:16, 1.44s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 84%|████████▍ | 272/324 [05:45<01:59, 2.30s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 84%|████████▍ | 273/324 [05:50<02:27, 2.89s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 85%|████████▍ | 274/324 [05:54<02:46, 3.32s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 85%|████████▍ | 275/324 [05:58<02:57, 3.63s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 85%|████████▌ | 276/324 [06:03<03:04, 3.84s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. 
Check mle_retvals\n\n 85%|████████▌ | 277/324 [06:07<03:06, 3.97s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 86%|████████▌ | 278/324 [06:11<03:07, 4.07s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 86%|████████▌ | 279/324 [06:16<03:06, 4.14s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 86%|████████▋ | 280/324 [06:20<03:03, 4.17s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 87%|████████▋ | 281/324 [06:24<03:01, 4.23s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 87%|████████▋ | 282/324 [06:29<02:58, 4.24s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 87%|████████▋ | 283/324 [06:33<02:53, 4.24s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 88%|████████▊ | 284/324 [06:37<02:49, 4.24s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 88%|████████▊ | 285/324 [06:41<02:45, 4.23s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 88%|████████▊ | 286/324 [06:45<02:40, 4.23s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 89%|████████▊ | 287/324 [06:50<02:37, 4.25s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 89%|████████▉ | 288/324 [06:54<02:33, 4.27s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 89%|████████▉ | 289/324 [06:58<02:30, 4.29s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 90%|████████▉ | 290/324 [07:04<02:34, 4.55s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 90%|████████▉ | 291/324 [07:09<02:35, 4.72s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 90%|█████████ | 292/324 [07:14<02:35, 4.87s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. 
Check mle_retvals\n\n 90%|█████████ | 293/324 [07:19<02:34, 4.98s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 91%|█████████ | 294/324 [07:24<02:31, 5.05s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 91%|█████████ | 295/324 [07:30<02:28, 5.11s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 91%|█████████▏| 296/324 [07:35<02:23, 5.13s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 92%|█████████▏| 297/324 [07:40<02:18, 5.13s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 92%|█████████▏| 298/324 [07:45<02:13, 5.13s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 92%|█████████▏| 299/324 [07:50<02:08, 5.13s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 93%|█████████▎| 300/324 [07:55<02:03, 5.13s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 93%|█████████▎| 301/324 [08:00<01:58, 5.14s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 93%|█████████▎| 302/324 [08:06<01:53, 5.18s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 94%|█████████▎| 303/324 [08:11<01:49, 5.19s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 94%|█████████▍| 304/324 [08:16<01:44, 5.22s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 94%|█████████▍| 305/324 [08:21<01:39, 5.23s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 94%|█████████▍| 306/324 [08:27<01:34, 5.23s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 95%|█████████▍| 307/324 [08:32<01:28, 5.21s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 95%|█████████▌| 308/324 [08:39<01:33, 5.83s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. 
Check mle_retvals\n\n 95%|█████████▌| 309/324 [08:46<01:32, 6.20s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 96%|█████████▌| 310/324 [08:53<01:30, 6.45s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 96%|█████████▌| 311/324 [09:00<01:26, 6.66s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 96%|█████████▋| 312/324 [09:08<01:22, 6.84s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 97%|█████████▋| 313/324 [09:15<01:16, 6.99s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 97%|█████████▋| 314/324 [09:22<01:10, 7.10s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 97%|█████████▋| 315/324 [09:30<01:04, 7.14s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 98%|█████████▊| 316/324 [09:37<00:57, 7.13s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 98%|█████████▊| 317/324 [09:44<00:50, 7.17s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 98%|█████████▊| 318/324 [09:51<00:43, 7.17s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 98%|█████████▊| 319/324 [09:58<00:35, 7.17s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 99%|█████████▉| 320/324 [10:05<00:28, 7.17s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 99%|█████████▉| 321/324 [10:13<00:21, 7.15s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n 99%|█████████▉| 322/324 [10:20<00:14, 7.20s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n100%|█████████▉| 323/324 [10:27<00:07, 7.18s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n100%|██████████| 324/324 [10:34<00:00, 7.15s/it]/usr/local/lib/python3.7/dist-packages/statsmodels/base/model.py:512: ConvergenceWarning:\n\nMaximum Likelihood optimization failed to converge. Check mle_retvals\n\n100%|██████████| 324/324 [10:41<00:00, 1.98s/it]\n"
],
[
"optimal = [(params[i],j) for i,j in enumerate(aic) if j == min(aic)]\nmodel_opt = SARIMAX(df_train[\"y\"], order = optimal[0][0][0], seasonal_order = optimal[0][0][1])\nmodel_opt_fit = model_opt.fit()\nmodel_opt_fit.summary()",
"_____no_output_____"
],
[
"model = SARIMAX(df_train[\"y\"], order=optimal[0][0][0], seasonal_order=optimal[0][0][1])\nmodel_fit = model.fit(disp=0)\nARIMA_forecast = model_fit.forecast(steps=24*14)\n\nplt.figure(figsize=(20,5))\nplt.plot(range(0,4320), df[\"y\"].iloc[1:], label=\"Real\")\n\nplt.plot(ARIMA_forecast, label=\"ARIMA\")\nplt.plot(range(4320-24*14,4320),result, label=\"Informer\")\nplt.plot(range(4320-24*14,4320), predict[-24*14:], label=\"LSTMa\")\nplt.plot(range(4320-24*14,4320),forecast['yhat'][-24*14:], label=\"Prophet\")\nplt.plot(range(4320-24*14,4320),pred_series[:24*14+23-23]+0.02294, label=\"Transformer\")\n\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,5))\nplt.plot(range(3319,4320), df[\"y\"].iloc[3320:], label=\"Real\")\n\nplt.plot(ARIMA_forecast, label=\"ARIMA\")\nplt.plot(range(4320-24*14,4320),result, label=\"Informer\")\nplt.plot(range(4320-24*14,4320), predict[-24*14:], label=\"LSTMa\")\nplt.plot(range(4320-24*14,4320),forecast['yhat'][-24*14:], label=\"Prophet\")\nplt.plot(range(4320-24*14,4320),pred_series[:24*14+23-23]+0.02294, label=\"Transformer\")\n\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"from sklearn.metrics import mean_squared_error\nfrom sklearn.metrics import mean_absolute_error\n\ndef MAPEval(y_pred, y_true):\n return np.mean(np.abs((y_true - y_pred) / y_true)) * 100\n\ndef MSE(y_true, y_pred):\n return np.mean(np.square((y_true - y_pred)))\n\ndef MAE(y_true, y_pred): \n return np.mean(np.abs((y_true - y_pred)))\n\n\n\nprint('Transformer')\nprint('-' * 40)\nprint('MAPE: {} |\\nMSE: {} |\\nMAE : {}\\n'.format(mape(pred_series[:24*14+23-23]+0.02294, target_series+0.02294), mean_squared_error(target_series+0.02294, pred_series[:24*14+23-23]+0.02294), mean_absolute_error(target_series+0.02294, pred_series[:24*14+23-23]+0.02294)))\n\nprint('Informer')\nprint('-' * 40)\nprint('MAPE: {} |\\nMSE: {} |\\nMAE : {}\\n'.format(mape(result, real[-24*14:]), mean_squared_error(real[-24*14:], result), mean_absolute_error(real[-24*14:], result)))\n\nprint('ARIMA')\nprint('-' * 40)\nprint('MAPE: {} |\\nMSE: {} |\\nMAE : {}\\n'.format(mape(ARIMA_forecast, df[\"y\"].iloc[-24*14:]), mean_squared_error(df[\"y\"].iloc[-24*14:], ARIMA_forecast), mean_absolute_error(df[\"y\"].iloc[-24*14:], ARIMA_forecast)))\n\nprint('Prophet')\nprint('-' * 40)\nprint('MAPE: {} |\\nMSE: {} |\\nMAE : {}\\n'.format(mape(forecast['yhat'][4320-24*14:],df[\"y\"][4320-24*14:]), mean_squared_error(df[\"y\"][4320-24*14:], forecast['yhat'][4320-24*14:]), mean_absolute_error(df[\"y\"][4320-24*14:], forecast['yhat'][4320-24*14:])))\n\nprint('LSTMa')\nprint('-' * 40)\nprint('MAPE: {} |\\nMSE: {} |\\nMAE : {}\\n'.format(mape(predict[-24*14:],real[-24*14:]), mean_squared_error(real[-24*14:], predict[-24*14:]), mean_absolute_error(real[-24*14:], predict[-24*14:])))",
"Transformer\n----------------------------------------\nMAPE: 217.37654209136963 |\nMSE: 0.1582442969083786 |\nMAE : 0.3236527144908905\n\nInformer\n----------------------------------------\nMAPE: 119.080629583259 |\nMSE: 0.06549985100523903 |\nMAE : 0.19162431005741273\n\nARIMA\n----------------------------------------\nMAPE: 1118.243950197205 |\nMSE: 4.103259031274165 |\nMAE : 1.743264802037423\n\nProphet\n----------------------------------------\nMAPE: 105.0988943500482 |\nMSE: 0.043528450789886146 |\nMAE : 0.16946382930205087\n\nLSTMa\n----------------------------------------\nMAPE: 100.54587154164933 |\nMSE: 0.04236059743523971 |\nMAE : 0.16292019188548787\n\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec919d83e39c299fdaccd08904c437080daf6d1d | 14,592 | ipynb | Jupyter Notebook | docker/jupyter-scipy-notebook-ee/test_jupyter_scipy_notebook_ee.ipynb | schnjaso2/ee-jupyter-contrib | 937e7b45547a72b2bc60cbcd466fe52700ca7280 | [
"MIT"
] | null | null | null | docker/jupyter-scipy-notebook-ee/test_jupyter_scipy_notebook_ee.ipynb | schnjaso2/ee-jupyter-contrib | 937e7b45547a72b2bc60cbcd466fe52700ca7280 | [
"MIT"
] | null | null | null | docker/jupyter-scipy-notebook-ee/test_jupyter_scipy_notebook_ee.ipynb | schnjaso2/ee-jupyter-contrib | 937e7b45547a72b2bc60cbcd466fe52700ca7280 | [
"MIT"
] | null | null | null | 26.010695 | 261 | 0.555921 | [
[
[
"# Overview\n\nThe purpose of this notebooks is to run simple tests on the libraries provided by the Docker image [Jupyter Notebook Scientific Python Stack + Earth Engine](https://github.com/gee-community/ee-jupyter-contrib/tree/master/docker/jupyter-scipy-notebook-ee).",
"_____no_output_____"
],
[
"# Earth Engine Packages\n\n## Earth Engine Python API\n\n\"The Earth Engine Python API is a client library that facilitates interacting with the Earth Engine servers using the Python programming language.\" \n\n* Homepage: https://earthengine.google.com/\n* Docs: https://developers.google.com/earth-engine/\n* Source code: https://github.com/google/earthengine-api",
"_____no_output_____"
]
],
[
[
"import ee\nprint(ee.__version__)",
"_____no_output_____"
]
],
[
[
"# Jupyter Project packages",
"_____no_output_____"
],
[
"## Jupyter Notebook\n\nThe core library for Jupyter Interactive Notebooks.\n\n* Docs: https://jupyter-notebook.readthedocs.io/\n* Source code: https://github.com/jupyter/notebook",
"_____no_output_____"
]
],
[
[
"import notebook\nprint(notebook.__version__)",
"_____no_output_____"
]
],
[
[
"## JupyterHub\n\n\"JupyterHub, a multi-user Hub, spawns, manages, and proxies multiple instances of the single-user Jupyter notebook server.\"\n\n* Docs: https://jupyterhub.readthedocs.io\n* Source code: https://github.com/jupyterhub/jupyterhub",
"_____no_output_____"
]
],
[
[
"import jupyterhub\nprint(jupyterhub.__version__)",
"_____no_output_____"
]
],
[
[
"## JupyterLab\n\n\"JupyterLab computational environment.\"\n\n* Docs: http://jupyterlab.readthedocs.io\n* Source code: https://github.com/jupyterlab/jupyterlab",
"_____no_output_____"
]
],
[
[
"import jupyterlab\nprint(jupyterlab.__version__)",
"_____no_output_____"
]
],
[
[
"## Jupyter Notebook Widgets\n\n\"Widgets are eventful python objects that have a representation in the browser, often as a control like a slider, textbox, etc.\"\n\n* Homepage: http://jupyter.org/widgets\n* Docs: https://ipywidgets.readthedocs.io\n* Source code: https://github.com/jupyter-widgets/ipywidgets\n",
"_____no_output_____"
]
],
[
[
"import ipywidgets\nprint(ipywidgets.__version__)",
"_____no_output_____"
],
[
"my_slider= ipywidgets.widgets.IntSlider()\ndisplay(my_slider)",
"_____no_output_____"
],
[
"my_slider.value",
"_____no_output_____"
]
],
[
[
"### ipyleaflet\n\nipyleaflet is a Jupyter Widget for interactive mapping, based on the [Leaflet](https://leafletjs.com/) Javascript library.\n\n* Source code: https://github.com/jupyter-widgets/ipyleaflet",
"_____no_output_____"
]
],
[
[
"import ipyleaflet\nprint(ipyleaflet.__version__)",
"_____no_output_____"
],
[
"my_map = ipyleaflet.Map(center=(53.35, -6.2), zoom=11)\nmy_map",
"_____no_output_____"
]
],
[
[
"### bqplot\n\n\"bqplot is a Grammar of Graphics-based interactive plotting framework for the Jupyter notebook.\" \n\n* Docs: https://bqplot.readthedocs.io/en/stable/\n* Source code: https://github.com/bloomberg/bqplot",
"_____no_output_____"
]
],
[
[
"import bqplot\nprint(bqplot.__version__)",
"_____no_output_____"
],
[
"import numpy as np\nsize = 100\nx_data = range(size)\nnp.random.seed(0)\ny_data = np.cumsum(np.random.randn(size) * 100.0)\ny_data_2 = np.cumsum(np.random.randn(size))\ny_data_3 = np.cumsum(np.random.randn(size) * 100.)\n\nsc_ord = bqplot.OrdinalScale()\nsc_y = bqplot.LinearScale()\nsc_y_2 = bqplot.LinearScale()\n\nord_ax = bqplot.Axis(label='Test X', scale=sc_ord, tick_format='0.0f', grid_lines='none')\ny_ax = bqplot.Axis(label='Test Y', scale=sc_y, \n orientation='vertical', tick_format='0.2f', \n grid_lines='solid')\ny_ax_2 = bqplot.Axis(label='Test Y 2', scale=sc_y_2, \n orientation='vertical', side='right', \n tick_format='0.0f', grid_lines='solid')\n\nline_chart = bqplot.Lines(x=x_data[:10], y = [y_data[:10], y_data_2[:10] * 100, y_data_3[:10]],\n scales={'x': sc_ord, 'y': sc_y},\n labels=['Line1', 'Line2', 'Line3'], \n display_legend=True)\n\nbar_chart = bqplot.Bars(x=x_data[:10], \n y=[y_data[:10], y_data_2[:10] * 100, y_data_3[:10]], \n scales={'x': sc_ord, 'y': sc_y_2},\n labels=['Bar1', 'Bar2', 'Bar3'],\n display_legend=True)\n\n# the line does not have a Y value set. So only the bars will be displayed\nbqplot.Figure(axes=[ord_ax, y_ax], marks=[bar_chart, line_chart], legend_location = 'bottom-left')",
"_____no_output_____"
]
],
[
[
"## JupyterLab Sidecar\n\n\"A sidecar output widget for JupyterLab\" \n\n* Source code: https://github.com/jupyter-widgets/jupyterlab-sidecar",
"_____no_output_____"
]
],
[
[
"import sidecar\nprint(sidecar.__version__)",
"_____no_output_____"
],
[
"from ipywidgets import IntSlider\n\nsc = sidecar.Sidecar(title='Sidecar Output')\nsl = IntSlider(description='Some slider')\nwith sc:\n display(sl)",
"_____no_output_____"
]
],
[
[
"## nbdime\n\n\"Jupyter Notebook Diff & Merge tools\"\n\n* docs: https://nbdime.readthedocs.io\n* source: https://github.com/jupyter/nbdime",
"_____no_output_____"
]
],
[
[
"import nbdime\nprint(nbdime.__version__)",
"_____no_output_____"
]
],
[
[
"# Visualization packages",
"_____no_output_____"
],
[
"## ipythonblocks\n\n\"Practice Python with colored grids in the IPython Notebook\"\n\n* Homepage: http://www.ipythonblocks.org\n* Source code: https://github.com/jiffyclub/ipythonblocks",
"_____no_output_____"
]
],
[
[
"import ipythonblocks\nprint(ipythonblocks.__version__)",
"_____no_output_____"
],
[
"grid = ipythonblocks.BlockGrid(10, 10, fill=(123, 234, 123))\ngrid",
"_____no_output_____"
]
],
[
[
"## Palettable\n\n\"Palettable is a library of color palettes for Python.\" \n\n* Docs: https://jiffyclub.github.io/palettable/\n* Source code: https://github.com/jiffyclub/palettable",
"_____no_output_____"
]
],
[
[
"import palettable\nprint(palettable.__version__)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom matplotlib.colors import LogNorm\nfrom palettable.colorbrewer.sequential import YlGnBu_9\n\n#normal distribution center at x=0 and y=5\nx = np.random.randn(100000)\ny = np.random.randn(100000)+5\n\nplt.hist2d(x, y, bins=40, norm=LogNorm(), cmap=YlGnBu_9.mpl_colormap)\nplt.colorbar()",
"_____no_output_____"
]
],
[
[
"## Altair\n\n\"Altair is a declarative statistical visualization library for Python.\" \n\n* Docs: https://altair-viz.github.io/\n* Source code: https://github.com/altair-viz/altair\n* Tutorials: https://github.com/altair-viz/altair_notebooks",
"_____no_output_____"
]
],
[
[
"import altair as alt\nprint(alt.__version__)",
"_____no_output_____"
],
[
"# load a simple dataset as a pandas DataFrame\nfrom vega_datasets import data\ncars = data.cars()\n\nalt.Chart(cars).mark_point().encode(\n x='Horsepower',\n y='Miles_per_Gallon',\n color='Origin',\n)",
"_____no_output_____"
]
],
[
[
"## PILLOW\n\n\"Pillow is the friendly PIL fork by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors.\"\n\n* Docs: http://pillow.readthedocs.io",
"_____no_output_____"
]
],
[
[
"import PIL\nprint(PIL.__version__)",
"_____no_output_____"
]
],
[
[
"## imageio\n\n\"Imageio is a Python library that provides an easy interface to read and write a wide range of image data, including animated images, video, volumetric data, and scientific formats.\"\n\n* Home: https://imageio.github.io/\n* Docs: http://imageio.readthedocs.io/\n* Source Code: https://github.com/imageio/imageio",
"_____no_output_____"
],
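imageio is described above but, unlike the other libraries in this test notebook, has no version-check cell; a minimal check in the same style as the others might look like the sketch below (assuming imageio is installed in the image).

```python
import numpy as np
import imageio

print(imageio.__version__)

# Round-trip a tiny synthetic image to confirm basic read/write support.
imageio.imwrite("test.png", np.zeros((4, 4), dtype=np.uint8))
print(imageio.imread("test.png").shape)
```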
[
"# Machine Learning\n\n## Tensorflow\n\n\"Computation using data flow graphs for scalable machine learning\"\n\nHomepage: https://www.tensorflow.org/\nDocs: https://www.tensorflow.org/api_docs/python/\nSource code: https://github.com/tensorflow/tensorflow\nTutorials: https://www.tensorflow.org/tutorials/",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nprint(tf.__version__)",
"_____no_output_____"
],
[
"from tensorflow.python.keras.datasets import mnist\nfrom tensorflow.python.keras.models import Sequential\nfrom tensorflow.python.keras.layers import Dense, Dropout\nfrom tensorflow.python.keras.optimizers import RMSprop\nfrom tensorflow.python import keras\n\nbatch_size = 128\nnum_classes = 10\nepochs = 2\n\n# the data, split between train and test sets\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\n\nx_train = x_train.reshape(60000, 784)\nx_test = x_test.reshape(10000, 784)\nx_train = x_train.astype('float32')\nx_test = x_test.astype('float32')\nx_train /= 255\nx_test /= 255\nprint(x_train.shape[0], 'train samples')\nprint(x_test.shape[0], 'test samples')\n\n# convert class vectors to binary class matrices\ny_train = keras.utils.to_categorical(y_train, num_classes)\ny_test = keras.utils.to_categorical(y_test, num_classes)\n\nmodel = Sequential()\nmodel.add(Dense(512, activation='relu', input_shape=(784,)))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(512, activation='relu'))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(num_classes, activation='softmax'))\n\nmodel.summary()\n\nmodel.compile(loss='categorical_crossentropy',\n optimizer=RMSprop(),\n metrics=['accuracy'])\n\nhistory = model.fit(x_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1,\n validation_data=(x_test, y_test))\nscore = model.evaluate(x_test, y_test, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
ec919ef890745bd71e5ada7c964569d2f61d1767 | 39,679 | ipynb | Jupyter Notebook | site/en-snapshot/tutorials/keras/regression.ipynb | phoenix-fork-tensorflow/docs-l10n | 2287738c22e3e67177555e8a41a0904edfcf1544 | [
"Apache-2.0"
] | 5,672 | 2018-08-27T18:49:33.000Z | 2022-03-31T07:52:12.000Z | site/en-snapshot/tutorials/keras/regression.ipynb | phoenix-fork-tensorflow/docs-l10n | 2287738c22e3e67177555e8a41a0904edfcf1544 | [
"Apache-2.0"
] | 1,635 | 2018-08-28T15:27:17.000Z | 2022-03-23T23:15:14.000Z | site/en-snapshot/tutorials/keras/regression.ipynb | phoenix-fork-tensorflow/docs-l10n | 2287738c22e3e67177555e8a41a0904edfcf1544 | [
"Apache-2.0"
] | 6,035 | 2018-08-27T19:13:09.000Z | 2022-03-31T08:55:13.000Z | 28.505029 | 420 | 0.513798 | [
[
[
"##### Copyright 2018 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
],
[
"#@title MIT License\n#\n# Copyright (c) 2017 François Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.",
"_____no_output_____"
]
],
[
[
"# Basic regression: Predict fuel efficiency",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/keras/regression\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/regression.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/regression.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/keras/regression.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"In a *regression* problem, the aim is to predict the output of a continuous value, like a price or a probability. Contrast this with a *classification* problem, where the aim is to select a class from a list of classes (for example, where a picture contains an apple or an orange, recognizing which fruit is in the picture).\n\nThis tutorial uses the classic [Auto MPG](https://archive.ics.uci.edu/ml/datasets/auto+mpg) dataset and demonstrates how to build models to predict the fuel efficiency of the late-1970s and early 1980s automobiles. To do this, you will provide the models with a description of many automobiles from that time period. This description includes attributes like cylinders, displacement, horsepower, and weight.\n\nThis example uses the Keras API. (Visit the Keras [tutorials](https://www.tensorflow.org/tutorials/keras) and [guides](https://www.tensorflow.org/guide/keras) to learn more.)",
"_____no_output_____"
]
],
[
[
"# Use seaborn for pairplot.\n!pip install -q seaborn",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\n# Make NumPy printouts easier to read.\nnp.set_printoptions(precision=3, suppress=True)",
"_____no_output_____"
],
[
"import tensorflow as tf\n\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\n\nprint(tf.__version__)",
"_____no_output_____"
]
],
[
[
"## The Auto MPG dataset\n\nThe dataset is available from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/).\n",
"_____no_output_____"
],
[
"### Get the data\nFirst download and import the dataset using pandas:",
"_____no_output_____"
]
],
[
[
"url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'\ncolumn_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',\n 'Acceleration', 'Model Year', 'Origin']\n\nraw_dataset = pd.read_csv(url, names=column_names,\n na_values='?', comment='\\t',\n sep=' ', skipinitialspace=True)",
"_____no_output_____"
],
[
"dataset = raw_dataset.copy()\ndataset.tail()",
"_____no_output_____"
]
],
[
[
"### Clean the data\n\nThe dataset contains a few unknown values:",
"_____no_output_____"
]
],
[
[
"dataset.isna().sum()",
"_____no_output_____"
]
],
[
[
"Drop those rows to keep this initial tutorial simple:",
"_____no_output_____"
]
],
[
[
"dataset = dataset.dropna()",
"_____no_output_____"
]
],
[
[
"The `\"Origin\"` column is categorical, not numeric. So the next step is to one-hot encode the values in the column with [pd.get_dummies](https://pandas.pydata.org/docs/reference/api/pandas.get_dummies.html).\n\nNote: You can set up the `tf.keras.Model` to do this kind of transformation for you but that's beyond the scope of this tutorial. Check out the [Classify structured data using Keras preprocessing layers](../structured_data/preprocessing_layers.ipynb) or [Load CSV data](../load_data/csv.ipynb) tutorials for examples.",
"_____no_output_____"
]
],
[
[
"dataset['Origin'] = dataset['Origin'].map({1: 'USA', 2: 'Europe', 3: 'Japan'})",
"_____no_output_____"
],
[
"dataset = pd.get_dummies(dataset, columns=['Origin'], prefix='', prefix_sep='')\ndataset.tail()",
"_____no_output_____"
]
],
[
[
"### Split the data into training and test sets\n\nNow, split the dataset into a training set and a test set. You will use the test set in the final evaluation of your models.",
"_____no_output_____"
]
],
[
[
"train_dataset = dataset.sample(frac=0.8, random_state=0)\ntest_dataset = dataset.drop(train_dataset.index)",
"_____no_output_____"
]
],
[
[
"### Inspect the data\n\nReview the joint distribution of a few pairs of columns from the training set.\n\nThe top row suggests that the fuel efficiency (MPG) is a function of all the other parameters. The other rows indicate they are functions of each other.",
"_____no_output_____"
]
],
[
[
"sns.pairplot(train_dataset[['MPG', 'Cylinders', 'Displacement', 'Weight']], diag_kind='kde')",
"_____no_output_____"
]
],
[
[
"Let's also check the overall statistics. Note how each feature covers a very different range:",
"_____no_output_____"
]
],
[
[
"train_dataset.describe().transpose()",
"_____no_output_____"
]
],
[
[
"### Split features from labels\n\nSeparate the target value—the \"label\"—from the features. This label is the value that you will train the model to predict.",
"_____no_output_____"
]
],
[
[
"train_features = train_dataset.copy()\ntest_features = test_dataset.copy()\n\ntrain_labels = train_features.pop('MPG')\ntest_labels = test_features.pop('MPG')",
"_____no_output_____"
]
],
[
[
"## Normalization\n\nIn the table of statistics it's easy to see how different the ranges of each feature are:",
"_____no_output_____"
]
],
[
[
"train_dataset.describe().transpose()[['mean', 'std']]",
"_____no_output_____"
]
],
[
[
"It is good practice to normalize features that use different scales and ranges.\n\nOne reason this is important is because the features are multiplied by the model weights. So, the scale of the outputs and the scale of the gradients are affected by the scale of the inputs.\n\nAlthough a model *might* converge without feature normalization, normalization makes training much more stable.\n\nNote: There is no advantage to normalizing the one-hot features—it is done here for simplicity. For more details on how to use the preprocessing layers, refer to the [Working with preprocessing layers](https://www.tensorflow.org/guide/keras/preprocessing_layers) guide and the [Classify structured data using Keras preprocessing layers](../structured_data/preprocessing_layers.ipynb) tutorial.",
"_____no_output_____"
],
[
"### The Normalization layer\n\nThe `tf.keras.layers.Normalization` is a clean and simple way to add feature normalization into your model.\n\nThe first step is to create the layer:",
"_____no_output_____"
]
],
[
[
"normalizer = tf.keras.layers.Normalization(axis=-1)",
"_____no_output_____"
]
],
[
[
"Then, fit the state of the preprocessing layer to the data by calling `Normalization.adapt`:",
"_____no_output_____"
]
],
[
[
"normalizer.adapt(np.array(train_features))",
"_____no_output_____"
]
],
[
[
"Calculate the mean and variance, and store them in the layer:",
"_____no_output_____"
]
],
[
[
"print(normalizer.mean.numpy())",
"_____no_output_____"
]
],
[
[
"When the layer is called, it returns the input data, with each feature independently normalized:",
"_____no_output_____"
]
],
[
[
"first = np.array(train_features[:1])\n\nwith np.printoptions(precision=2, suppress=True):\n print('First example:', first)\n print()\n print('Normalized:', normalizer(first).numpy())",
"_____no_output_____"
]
],
[
[
"## Linear regression\n\nBefore building a deep neural network model, start with linear regression using one and several variables.",
"_____no_output_____"
],
[
"### Linear regression with one variable\n\nBegin with a single-variable linear regression to predict `'MPG'` from `'Horsepower'`.\n\nTraining a model with `tf.keras` typically starts by defining the model architecture. Use a `tf.keras.Sequential` model, which [represents a sequence of steps](https://www.tensorflow.org/guide/keras/sequential_model).\n\nThere are two steps in your single-variable linear regression model:\n\n- Normalize the `'Horsepower'` input features using the `tf.keras.layers.Normalization` preprocessing layer.\n- Apply a linear transformation ($y = mx+b$) to produce 1 output using a linear layer (`tf.keras.layers.Dense`).\n\nThe number of _inputs_ can either be set by the `input_shape` argument, or automatically when the model is run for the first time.",
"_____no_output_____"
],
[
"First, create a NumPy array made of the `'Horsepower'` features. Then, instantiate the `tf.keras.layers.Normalization` and fit its state to the `horsepower` data:",
"_____no_output_____"
]
],
[
[
"horsepower = np.array(train_features['Horsepower'])\n\nhorsepower_normalizer = layers.Normalization(input_shape=[1,], axis=None)\nhorsepower_normalizer.adapt(horsepower)",
"_____no_output_____"
]
],
[
[
"Build the Keras Sequential model:",
"_____no_output_____"
]
],
[
[
"horsepower_model = tf.keras.Sequential([\n horsepower_normalizer,\n layers.Dense(units=1)\n])\n\nhorsepower_model.summary()",
"_____no_output_____"
]
],
[
[
"This model will predict `'MPG'` from `'Horsepower'`.\n\nRun the untrained model on the first 10 'Horsepower' values. The output won't be good, but notice that it has the expected shape of `(10, 1)`:",
"_____no_output_____"
]
],
[
[
"horsepower_model.predict(horsepower[:10])",
"_____no_output_____"
]
],
[
[
"Once the model is built, configure the training procedure using the Keras `Model.compile` method. The most important arguments to compile are the `loss` and the `optimizer`, since these define what will be optimized (`mean_absolute_error`) and how (using the `tf.keras.optimizers.Adam`).",
"_____no_output_____"
]
],
[
[
"horsepower_model.compile(\n optimizer=tf.optimizers.Adam(learning_rate=0.1),\n loss='mean_absolute_error')",
"_____no_output_____"
]
],
[
[
"Use Keras `Model.fit` to execute the training for 100 epochs:",
"_____no_output_____"
]
],
[
[
"%%time\nhistory = horsepower_model.fit(\n train_features['Horsepower'],\n train_labels,\n epochs=100,\n # Suppress logging.\n verbose=0,\n # Calculate validation results on 20% of the training data.\n validation_split = 0.2)",
"_____no_output_____"
]
],
[
[
"Visualize the model's training progress using the stats stored in the `history` object:",
"_____no_output_____"
]
],
[
[
"hist = pd.DataFrame(history.history)\nhist['epoch'] = history.epoch\nhist.tail()",
"_____no_output_____"
],
[
"def plot_loss(history):\n plt.plot(history.history['loss'], label='loss')\n plt.plot(history.history['val_loss'], label='val_loss')\n plt.ylim([0, 10])\n plt.xlabel('Epoch')\n plt.ylabel('Error [MPG]')\n plt.legend()\n plt.grid(True)",
"_____no_output_____"
],
[
"plot_loss(history)",
"_____no_output_____"
]
],
[
[
"Collect the results on the test set for later:",
"_____no_output_____"
]
],
[
[
"test_results = {}\n\ntest_results['horsepower_model'] = horsepower_model.evaluate(\n test_features['Horsepower'],\n test_labels, verbose=0)",
"_____no_output_____"
]
],
[
[
"Since this is a single variable regression, it's easy to view the model's predictions as a function of the input:",
"_____no_output_____"
]
],
[
[
"x = tf.linspace(0.0, 250, 251)\ny = horsepower_model.predict(x)",
"_____no_output_____"
],
[
"def plot_horsepower(x, y):\n plt.scatter(train_features['Horsepower'], train_labels, label='Data')\n plt.plot(x, y, color='k', label='Predictions')\n plt.xlabel('Horsepower')\n plt.ylabel('MPG')\n plt.legend()",
"_____no_output_____"
],
[
"plot_horsepower(x,y)",
"_____no_output_____"
]
],
[
[
"### Linear regression with multiple inputs",
"_____no_output_____"
],
[
"You can use an almost identical setup to make predictions based on multiple inputs. This model still does the same $y = mx+b$ except that $m$ is a matrix and $b$ is a vector.\n\nCreate a two-step Keras Sequential model again with the first layer being `normalizer` (`tf.keras.layers.Normalization(axis=-1)`) you defined earlier and adapted to the whole dataset:",
"_____no_output_____"
]
],
[
[
"linear_model = tf.keras.Sequential([\n normalizer,\n layers.Dense(units=1)\n])",
"_____no_output_____"
]
],
[
[
"When you call `Model.predict` on a batch of inputs, it produces `units=1` outputs for each example:",
"_____no_output_____"
]
],
[
[
"linear_model.predict(train_features[:10])",
"_____no_output_____"
]
],
[
[
"When you call the model, its weight matrices will be built—check that the `kernel` weights (the $m$ in $y=mx+b$) have a shape of `(9, 1)`:",
"_____no_output_____"
]
],
[
[
"linear_model.layers[1].kernel",
"_____no_output_____"
]
],
[
[
"Configure the model with Keras `Model.compile` and train with `Model.fit` for 100 epochs:",
"_____no_output_____"
]
],
[
[
"linear_model.compile(\n optimizer=tf.optimizers.Adam(learning_rate=0.1),\n loss='mean_absolute_error')",
"_____no_output_____"
],
[
"%%time\nhistory = linear_model.fit(\n train_features,\n train_labels,\n epochs=100,\n # Suppress logging.\n verbose=0,\n # Calculate validation results on 20% of the training data.\n validation_split = 0.2)",
"_____no_output_____"
]
],
[
[
"Using all the inputs in this regression model achieves a much lower training and validation error than the `horsepower_model`, which had one input:",
"_____no_output_____"
]
],
[
[
"plot_loss(history)",
"_____no_output_____"
]
],
[
[
"Collect the results on the test set for later:",
"_____no_output_____"
]
],
[
[
"test_results['linear_model'] = linear_model.evaluate(\n test_features, test_labels, verbose=0)",
"_____no_output_____"
]
],
[
[
"## Regression with a deep neural network (DNN)",
"_____no_output_____"
],
[
"In the previous section, you implemented two linear models for single and multiple inputs.\n\nHere, you will implement single-input and multiple-input DNN models.\n\nThe code is basically the same except the model is expanded to include some \"hidden\" non-linear layers. The name \"hidden\" here just means not directly connected to the inputs or outputs.",
"_____no_output_____"
],
[
"These models will contain a few more layers than the linear model:\n\n* The normalization layer, as before (with `horsepower_normalizer` for a single-input model and `normalizer` for a multiple-input model).\n* Two hidden, non-linear, `Dense` layers with the ReLU (`relu`) activation function nonlinearity.\n* A linear `Dense` single-output layer.\n\nBoth models will use the same training procedure so the `compile` method is included in the `build_and_compile_model` function below.",
"_____no_output_____"
]
],
[
[
"def build_and_compile_model(norm):\n model = keras.Sequential([\n norm,\n layers.Dense(64, activation='relu'),\n layers.Dense(64, activation='relu'),\n layers.Dense(1)\n ])\n\n model.compile(loss='mean_absolute_error',\n optimizer=tf.keras.optimizers.Adam(0.001))\n return model",
"_____no_output_____"
]
],
[
[
"### Regression using a DNN and a single input",
"_____no_output_____"
],
[
"Create a DNN model with only `'Horsepower'` as input and `horsepower_normalizer` (defined earlier) as the normalization layer:",
"_____no_output_____"
]
],
[
[
"dnn_horsepower_model = build_and_compile_model(horsepower_normalizer)",
"_____no_output_____"
]
],
[
[
"This model has quite a few more trainable parameters than the linear models:",
"_____no_output_____"
]
],
[
[
"dnn_horsepower_model.summary()",
"_____no_output_____"
]
],
[
[
"Train the model with Keras `Model.fit`:",
"_____no_output_____"
]
],
[
[
"%%time\nhistory = dnn_horsepower_model.fit(\n train_features['Horsepower'],\n train_labels,\n validation_split=0.2,\n verbose=0, epochs=100)",
"_____no_output_____"
]
],
[
[
"This model does slightly better than the linear single-input `horsepower_model`:",
"_____no_output_____"
]
],
[
[
"plot_loss(history)",
"_____no_output_____"
]
],
[
[
"If you plot the predictions as a function of `'Horsepower'`, you should notice how this model takes advantage of the nonlinearity provided by the hidden layers:",
"_____no_output_____"
]
],
[
[
"x = tf.linspace(0.0, 250, 251)\ny = dnn_horsepower_model.predict(x)",
"_____no_output_____"
],
[
"plot_horsepower(x, y)",
"_____no_output_____"
]
],
[
[
"Collect the results on the test set for later:",
"_____no_output_____"
]
],
[
[
"test_results['dnn_horsepower_model'] = dnn_horsepower_model.evaluate(\n test_features['Horsepower'], test_labels,\n verbose=0)",
"_____no_output_____"
]
],
[
[
"### Regression using a DNN and multiple inputs",
"_____no_output_____"
],
[
"Repeat the previous process using all the inputs. The model's performance slightly improves on the validation dataset.",
"_____no_output_____"
]
],
[
[
"dnn_model = build_and_compile_model(normalizer)\ndnn_model.summary()",
"_____no_output_____"
],
[
"%%time\nhistory = dnn_model.fit(\n train_features,\n train_labels,\n validation_split=0.2,\n verbose=0, epochs=100)",
"_____no_output_____"
],
[
"plot_loss(history)",
"_____no_output_____"
]
],
[
[
"Collect the results on the test set:",
"_____no_output_____"
]
],
[
[
"test_results['dnn_model'] = dnn_model.evaluate(test_features, test_labels, verbose=0)",
"_____no_output_____"
]
],
[
[
"## Performance",
"_____no_output_____"
],
[
"Since all models have been trained, you can review their test set performance:",
"_____no_output_____"
]
],
[
[
"pd.DataFrame(test_results, index=['Mean absolute error [MPG]']).T",
"_____no_output_____"
]
],
[
[
"These results match the validation error observed during training.",
"_____no_output_____"
],
[
"### Make predictions\n\nYou can now make predictions with the `dnn_model` on the test set using Keras `Model.predict` and review the loss:",
"_____no_output_____"
]
],
[
[
"test_predictions = dnn_model.predict(test_features).flatten()\n\na = plt.axes(aspect='equal')\nplt.scatter(test_labels, test_predictions)\nplt.xlabel('True Values [MPG]')\nplt.ylabel('Predictions [MPG]')\nlims = [0, 50]\nplt.xlim(lims)\nplt.ylim(lims)\n_ = plt.plot(lims, lims)\n",
"_____no_output_____"
]
],
[
[
"It appears that the model predicts reasonably well.\n\nNow, check the error distribution:",
"_____no_output_____"
]
],
[
[
"error = test_predictions - test_labels\nplt.hist(error, bins=25)\nplt.xlabel('Prediction Error [MPG]')\n_ = plt.ylabel('Count')",
"_____no_output_____"
]
],
[
[
"If you're happy with the model, save it for later use with `Model.save`:",
"_____no_output_____"
]
],
[
[
"dnn_model.save('dnn_model')",
"_____no_output_____"
]
],
[
[
"If you reload the model, it gives identical output:",
"_____no_output_____"
]
],
[
[
"reloaded = tf.keras.models.load_model('dnn_model')\n\ntest_results['reloaded'] = reloaded.evaluate(\n test_features, test_labels, verbose=0)",
"_____no_output_____"
],
[
"pd.DataFrame(test_results, index=['Mean absolute error [MPG]']).T",
"_____no_output_____"
]
],
[
[
"## Conclusion\n\nThis notebook introduced a few techniques to handle a regression problem. Here are a few more tips that may help:\n\n- Mean squared error (MSE) (`tf.losses.MeanSquaredError`) and mean absolute error (MAE) (`tf.losses.MeanAbsoluteError`) are common loss functions used for regression problems. MAE is less sensitive to outliers. Different loss functions are used for classification problems.\n- Similarly, evaluation metrics used for regression differ from classification.\n- When numeric input data features have values with different ranges, each feature should be scaled independently to the same range.\n- Overfitting is a common problem for DNN models, though it wasn't a problem for this tutorial. Visit the [Overfit and underfit](overfit_and_underfit.ipynb) tutorial for more help with this.",
"_____no_output_____"
]
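As a rough illustration of the loss-function tip in the conclusion (a toy example, not part of the original tutorial), the snippet below compares MSE and MAE on a prediction with one large outlier; MSE is dominated by the squared outlier term, while MAE grows only linearly.

```python
import tensorflow as tf

y_true = tf.constant([1.0, 2.0, 3.0, 100.0])  # last value is an outlier
y_pred = tf.constant([1.0, 2.0, 3.0, 4.0])

mse = tf.keras.losses.MeanSquaredError()(y_true, y_pred)
mae = tf.keras.losses.MeanAbsoluteError()(y_true, y_pred)

# Expected: MSE ~ 2304.0 (96**2 / 4), MAE ~ 24.0 (96 / 4).
print(float(mse), float(mae))
```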
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
ec91a37784fd521ab7e07321ecef7f8501120f6b | 17,597 | ipynb | Jupyter Notebook | hackathon/data_formats.ipynb | mrussl/toast-workshop-ucsd-2019 | d4781c4c8df4dd2a54d72179e4ab04e354088879 | [
"BSD-2-Clause"
] | 4 | 2019-10-08T18:17:01.000Z | 2019-12-04T02:19:35.000Z | hackathon/data_formats.ipynb | mrussl/toast-workshop-ucsd-2019 | d4781c4c8df4dd2a54d72179e4ab04e354088879 | [
"BSD-2-Clause"
] | 15 | 2019-10-08T14:41:19.000Z | 2019-11-05T17:07:22.000Z | hackathon/data_formats.ipynb | mrussl/toast-workshop-ucsd-2019 | d4781c4c8df4dd2a54d72179e4ab04e354088879 | [
"BSD-2-Clause"
] | 26 | 2019-08-12T03:28:32.000Z | 2019-10-23T22:21:50.000Z | 35.912245 | 256 | 0.514292 | [
[
[
"# Example Starting Point for New Data Formats",
"_____no_output_____"
]
],
[
[
"# Are you using a special reservation for a workshop?\n# If so, set it here:\nnersc_reservation = \"toast3\"\n\n# Load common tools for all lessons\nimport sys\nsys.path.insert(0, \"../lessons\")\nfrom lesson_tools import (\n check_nersc,\n)\nnersc_host, nersc_repo, nersc_resv = check_nersc(reservation=nersc_reservation)\n\n# Capture C++ output in the jupyter cells\n%reload_ext wurlitzer\n",
"_____no_output_____"
]
],
[
[
"## TOD Class\n\nThis is the stub of a TOD class to read one observation of data.",
"_____no_output_____"
]
],
[
[
"import os\n\nimport toast\nfrom toast.mpi import MPI\nfrom toast.tod import TOD\n\nclass NewTOD(TOD):\n # You can override the default names of cache keys here. They\n # are defined in the toast.TOD \n BORESIGHT_NAME = \"boresight\"\n BORESIGHT_AZEL_NAME = \"boresight_azel\"\n \"\"\"This class contains the timestream data.\n\n This loads data from a custom data format. Add more documentation here\n about what it is doing...\n \n Add more constructor arguments to get all the info you need to be\n able to read the data.\n\n Args:\n path (str): The path to an observation file.\n detquats (dict): Dictionary of detector names and quaternion\n offsets from the boresight.\n mpicomm (mpi4py.MPI.Comm): the MPI communicator over which this\n observation data is distributed.\n detranks (int): The dimension of the process grid in the detector\n direction. The MPI communicator size must be evenly divisible\n by this number.\n\n \"\"\"\n def __init__(self, path, detquats, mpicomm=None, detranks=1):\n self._path = path\n self._detquats = detquats\n \n # Figure out how many samples there are in this observation. Also,\n # if there are any kind of \"sub chunks\" in the observation that should\n # not be split up between processes (e.g. left and right azimuth\n # scans), then compute them here.\n\n nsamp = 1000 # Change this\n \n # This is just a list of one element (the whole observation). You\n # could specify the chunks in samples that should never be split up\n # between processes.\n sampsizes = [nsamp]\n \n # Here we assign unique IDs to every detector. This is used for\n # reproducible simulations. You can decide how to assign these for\n # your project. Here they just assigned based on the sorted list\n # of detector names.\n \n detnames = list(sorted(detquats.keys()))\n \n detindx = {x[1]: x[0] for x in enumerate(detnames)}\n\n # Call base class constructor to distribute data\n super().__init__(\n mpicomm, detnames, nsamp,\n detindx=detindx, detranks=detranks,\n sampsizes=sampsizes, meta=dict()\n )\n \n # If we are caching some data (e.g. boresight pointing, auxilliary\n # files needed by any read operation, etc) then do it here. 
Depending\n # on the data format, you may need to just load all data into the\n # self.cache object here.\n\n return\n\n def detoffset(self):\n return dict(self._detquats)\n \n # The methods below assume that the data was cached during construction.\n # If not, then you can read the different data products inside each method.\n # You can customize the \n\n def _get_boresight(self, start, n):\n # This assumes you cached the boresight pointing in RA/DEC\n # in the constructor.\n ref = self.cache.reference(self.BORESIGHT_NAME)[start:start+n, :]\n return ref\n\n def _put_boresight(self, start, data):\n ref = self.cache.reference(self.BORESIGHT_NAME)\n ref[start:(start+data.shape[0]), :] = data\n del ref\n return\n\n# def _get_boresight_azel(self, start, n):\n# ref = self.cache.reference(self.BORESIGHT_AZEL_NAME)[start:start+n, :]\n# return ref\n\n# def _put_boresight_azel(self, start, data):\n# ref = self.cache.reference(self.BORESIGHT_AZEL_NAME)\n# ref[start:(start+data.shape[0]), :] = data\n# del ref\n# return\n\n def _get(self, detector, start, n):\n name = \"{}_{}\".format(self.SIGNAL_NAME, detector)\n ref = self.cache.reference(name)[start:start+n]\n return ref\n\n def _put(self, detector, start, data):\n name = \"{}_{}\".format(self.SIGNAL_NAME, detector)\n ref = self.cache.reference(name)\n ref[start:(start+data.shape[0])] = data\n del ref\n return\n\n def _get_flags(self, detector, start, n):\n name = \"{}_{}\".format(self.FLAG_NAME, detector)\n ref = self.cache.reference(name)[start:start+n]\n return ref\n\n def _put_flags(self, detector, start, flags):\n name = \"{}_{}\".format(self.FLAG_NAME, detector)\n ref = self.cache.reference(name)\n ref[start:(start+flags.shape[0])] = flags\n del ref\n return\n\n def _get_common_flags(self, start, n):\n ref = self.cache.reference(self.COMMON_FLAG_NAME)[start:start+n]\n return ref\n\n def _put_common_flags(self, start, flags):\n ref = self.cache.reference(self.COMMON_FLAG_NAME)\n ref[start:(start+flags.shape[0])] = flags\n del ref\n return\n\n def _get_hwp_angle(self, start, n):\n if self.cache.exists(self.HWP_ANGLE_NAME):\n hwpang = self.cache.reference(self.HWP_ANGLE_NAME)[start:start+n]\n else:\n hwpang = None\n return hwpang\n\n def _put_hwp_angle(self, start, hwpang):\n ref = self.cache.reference(self.HWP_ANGLE_NAME)\n ref[start:(start + hwpang.shape[0])] = hwpang\n del ref\n return\n\n def _get_times(self, start, n):\n ref = self.cache.reference(self.TIMESTAMP_NAME)[start:start+n]\n tm = 1.0e-9 * ref.astype(np.float64)\n del ref\n return tm\n\n def _put_times(self, start, stamps):\n ref = self.cache.reference(self.TIMESTAMP_NAME)\n ref[start:(start+stamps.shape[0])] = np.array(1.0e9 * stamps,\n dtype=np.int64)\n del ref\n return\n\n def _get_pntg(self, detector, start, n):\n # Get boresight pointing (from disk or cache)\n bore = self._get_boresight(start, n)\n # Apply detector quaternion and return\n return qa.mult(bore, self._detquats[detector])\n\n def _put_pntg(self, detector, start, data):\n raise RuntimeError(\"This class computes detector pointing on the fly\")\n return\n\n def _get_position(self, start, n):\n ref = self.cache.reference(self.POSITION_NAME)[start:start+n, :]\n return ref\n\n def _put_position(self, start, pos):\n ref = self.cache.reference(self.POSITION_NAME)\n ref[start:(start+pos.shape[0]), :] = pos\n del ref\n return\n\n def _get_velocity(self, start, n):\n ref = self.cache.reference(self.VELOCITY_NAME)[start:start+n, :]\n return ref\n\n def _put_velocity(self, start, vel):\n ref = 
self.cache.reference(self.VELOCITY_NAME)\n ref[start:(start+vel.shape[0]), :] = vel\n del ref\n return",
"_____no_output_____"
]
],
[
[
"## Loading a Single Observation\n\nThis function creates one observation (i.e. a dictionary) with the TOD object and any other metadata.",
"_____no_output_____"
]
],
[
[
"def load_observation(path, mpicomm=None, detranks=1, **kwargs):\n \"\"\"Create an observation.\n\n Extra keyword args are passed to the TOD constructor.\n\n Args:\n path (str): The path to the observation.\n mpicomm (mpi4py.MPI.Comm): the MPI communicator over which this\n observation data is distributed.\n detranks (int): The dimension of the process grid in the detector\n direction. The MPI communicator size must be evenly divisible\n by this number.\n\n Returns:\n (dict): The observation dictionary.\n\n \"\"\"\n rank = 0\n if mpicomm is not None:\n rank = mpicomm.rank\n\n obs = dict()\n\n if rank == 0:\n # Rank zero should open up any files to get things needed to construct the TOD\n pass\n\n obs[\"tod\"] = NewTOD(path, detquats, mpicomm=mpicomm, detranks=detranks, **kwargs)\n return obs",
"_____no_output_____"
]
],
[
[
"## Load Balancing Observations\n\nThis function computes a \"weight\" for each observation based on the same information that will be given to the TOD constructor. Here we just return a weight based on the number of samples. This can be used for an approximate load balancing below.",
"_____no_output_____"
]
],
[
[
"def obsweight(path):\n \"\"\"Compute observation weight.\n\n Given a path to a \"file\", return the relative weight for this\n observation.\n\n Args:\n path (str): Path to the observation\n\n Returns:\n (float): Relative weight\n\n \"\"\"\n return 1.0",
"_____no_output_____"
]
],
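The stub above returns a constant weight. If your data format exposes the sample count (or even just the on-disk size of an observation) cheaply, a sample-proportional weight could look roughly like the sketch below; `obsweight_by_samples` and the size-as-proxy idea are illustrative only, not part of the TOAST API.

```python
import os

def obsweight_by_samples(path):
    # Hypothetical: weight each observation by its on-disk size as a
    # cheap stand-in for its number of samples.
    try:
        return float(os.path.getsize(path))
    except OSError:
        # Fall back to a uniform weight if the path cannot be inspected.
        return 1.0
```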
[
[
"## Loading a Dataset (Multiple Observations)\n\nThis function takes some parameters and distributes observations among process groups. Then every group creates their assigned observations.",
"_____no_output_____"
]
],
[
[
"from toast.dist import distribute_discrete\n\ndef load_data(dir, obs=None, comm=None, **kwargs):\n \"\"\"Loads data.\n\n This should take options for selecting observations based on some criteria.\n\n Additional keyword args are passed to the load_observation function.\n\n Args:\n dir (str): Top directory of data.\n obs (list): The list of observations to load.\n comm (toast.Comm): the toast Comm class for distributing the data.\n\n Returns:\n (toast.Data): The distributed data object.\n\n \"\"\"\n # the global communicator\n cworld = comm.comm_world\n # the communicator within the group\n cgroup = comm.comm_group\n\n # One process gets the list of observation directories\n obslist = list()\n weight = dict()\n\n worldrank = 0\n if cworld is not None:\n worldrank = cworld.rank\n\n if worldrank == 0:\n# for root, dirs, files in os.walk(dir, topdown=True):\n# for d in dirs:\n# # Get a list of directory names as the \"observations\". What you\n# # do here depends on how your data is organized.\n# obslist.append(d)\n# weight[d] = obsweight(os.path.join(root, dir))\n# break\n obslist = [\"foo\", \"bar\", \"blat\", \"obs_to_cut\"]\n obslist = sorted(obslist)\n # Filter by the requested obs\n fobs = list()\n if obs is not None:\n for ob in obslist:\n if ob in obs:\n fobs.append(ob)\n obslist = fobs\n\n # Communicate what observations we are using.\n if cworld is not None:\n obslist = cworld.bcast(obslist, root=0)\n weight = cworld.bcast(weight, root=0)\n\n # Distribute observations based on the relative weight.\n dweight = [weight[x] for x in obslist]\n distobs = distribute_discrete(dweight, comm.ngroups)\n\n # Distributed data\n data = Data(comm)\n\n # Now every group adds its observations to the list\n\n firstobs = distobs[comm.group][0]\n nobs = distobs[comm.group][1]\n for ob in range(firstobs, firstobs+nobs):\n opath = os.path.join(dir, obslist[ob])\n print(\"Loading {}\".format(opath))\n # In case something goes wrong on one process, make sure the job\n # is killed.\n try:\n data.obs.append(\n load_observation(opath, mpicomm=cgroup, **kwargs)\n )\n except:\n exc_type, exc_value, exc_traceback = sys.exc_info()\n lines = traceback.format_exception(exc_type, exc_value,\n exc_traceback)\n lines = [\"Proc {}: {}\".format(worldrank, x)\n for x in lines]\n print(\"\".join(lines), flush=True)\n if cworld is not None:\n cworld.Abort()\n\n return data",
"_____no_output_____"
],
[
"# Uncomment this when writing a file for MPI\n# %%writefile data_formats_mpi.py\n\nimport toast\nfrom toast.mpi import MPI\n\ncomm = toast.Comm()\n\ndata = load_data(\"data/directory\", obs=[\"foo\", \"bar\", \"blat\"], comm=comm)\n\nprint(data)\n",
"_____no_output_____"
],
[
"import subprocess as sp\n\ncommand = \"python data_formats_mpi.py\"\nrunstr = None\n\nif nersc_host is not None:\n runstr = \"export OMP_NUM_THREADS=4; srun -N 2 -C haswell -n 32 -c 4 --cpu_bind=cores -t 00:05:00\"\n if nersc_resv is not None:\n runstr = \"{} --reservation {}\".format(runstr, nersc_resv)\nelse:\n # Just use mpirun\n runstr = \"mpirun -np 4\"\n\nruncom = \"{} {}\".format(runstr, command)\nprint(runcom, flush=True)\n\n# Uncomment this line to actually submit the job\n# sp.check_call(runcom, stderr=sp.STDOUT, shell=True)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ec91a7502f83259cc41d91c8d4033e36415a75be | 11,542 | ipynb | Jupyter Notebook | HubSpot/HubSpot_Update_followers_from_linkedin.ipynb | techthiyanes/awesome-notebooks | 10ab4da1b94dfa101e908356a649609b0b17561a | [
"BSD-3-Clause"
] | null | null | null | HubSpot/HubSpot_Update_followers_from_linkedin.ipynb | techthiyanes/awesome-notebooks | 10ab4da1b94dfa101e908356a649609b0b17561a | [
"BSD-3-Clause"
] | null | null | null | HubSpot/HubSpot_Update_followers_from_linkedin.ipynb | techthiyanes/awesome-notebooks | 10ab4da1b94dfa101e908356a649609b0b17561a | [
"BSD-3-Clause"
] | null | null | null | 27.546539 | 300 | 0.620256 | [
[
[
"<img width=\"10%\" alt=\"Naas\" src=\"https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160\"/>",
"_____no_output_____"
],
[
"# HubSpot - Update followers from linkedin\n<a href=\"https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/HubSpot/HubSpot_Update_followers_from_linkedin.ipynb\" target=\"_parent\"><img src=\"https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg\"/></a>",
"_____no_output_____"
],
[
"**Tags:** #hubspot #crm #sales #contact #naas_drivers #linkedin #network #scheduler #naas #automation",
"_____no_output_____"
],
[
"**Author:** [Florent Ravenel](https://www.linkedin.com/in/florent-ravenel/)",
"_____no_output_____"
],
[
"## Input",
"_____no_output_____"
],
[
"### Import library",
"_____no_output_____"
]
],
[
[
"from naas_drivers import hubspot, linkedin\nimport naas\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"### Setup your HubSpot\n👉 Access your [HubSpot API key](https://knowledge.hubspot.com/integrations/how-do-i-get-my-hubspot-api-key)",
"_____no_output_____"
]
],
[
[
"HS_API_KEY = 'YOUR_HUBSPOT_API_KEY'",
"_____no_output_____"
]
],
[
[
"### Setup your LinkedIn\n👉 Get <a href='https://www.notion.so/LinkedIn-driver-Get-your-cookies-d20a8e7e508e42af8a5b52e33f3dba75'>your cookies</a>",
"_____no_output_____"
]
],
[
[
"LI_AT = 'YOUR_COOKIE_LI_AT' # EXAMPLE AQFAzQN_PLPR4wAAAXc-FCKmgiMit5FLdY1af3-2\nJSESSIONID = 'YOUR_COOKIE_JSESSIONID' # EXAMPLE ajax:8379907400220387585",
"_____no_output_____"
]
],
[
[
"### Setup Naas",
"_____no_output_____"
]
],
[
[
"naas.scheduler.add(cron=\"0 8 * * *\")\n\n#-> Uncomment the line below (by removing the hashtag) to remove your scheduler\n# naas.scheduler.delete()",
"_____no_output_____"
]
],
[
[
"## Model",
"_____no_output_____"
],
[
"### Get all contacts in HubSpot",
"_____no_output_____"
]
],
[
[
"properties_list = [\n \"hs_object_id\",\n \"firstname\",\n \"lastname\",\n \"linkedinbio\",\n \"linkedinconnections\",\n]\nhubspot_contacts = hubspot.connect(HS_API_KEY).contacts.get_all(properties_list)\nhubspot_contacts",
"_____no_output_____"
]
],
[
[
"### Filter to get linkedinconnections = \"Not Defined\" and \"linkedinbio\" = defined",
"_____no_output_____"
]
],
[
[
"df_to_update = hubspot_contacts.copy()\n\n# Cleaning\ndf_to_update = df_to_update.fillna(\"Not Defined\")\n\n# Filter on \"Not defined\"\ndf_to_update = df_to_update[(df_to_update.linkedinbio != \"Not Defined\") &\n (df_to_update.linkedinconnections == \"Not Defined\")]\n\n# Limit to last 50 contacts\ndf_to_update = df_to_update.sort_values(by=\"createdate\", ascending=False)[:50].reset_index(drop=True)\n\ndf_to_update",
"_____no_output_____"
]
],
[
[
"### Get followers from Linkedin",
"_____no_output_____"
]
],
[
[
"for _, row in df_to_update.iterrows():\n linkedinbio = row.linkedinbio\n \n # Get followers\n df = linkedin.connect(LI_AT, JSESSIONID).profile.get_network(linkedinbio)\n linkedinconnections = df.loc[0, \"FOLLOWERS_COUNT\"]\n \n # Get linkedinbio\n df_to_update.loc[_, \"linkedinconnections\"] = linkedinconnections\n \ndf_to_update",
"_____no_output_____"
]
],
[
[
"## Output",
"_____no_output_____"
],
[
"### Update followers in Hubspot",
"_____no_output_____"
]
],
[
[
"for _, row in df_to_update.iterrows():\n # Init data\n data = {}\n \n # Get data\n hs_object_id = row.hs_object_id\n linkedinconnections = row.linkedinconnections\n\n # Update LK Bio\n if linkedinconnections != None:\n data = {\"properties\": {\"linkedinconnections\": linkedinconnections}}\n hubspot.connect(HS_API_KEY).contacts.patch(hs_object_id, data)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
ec91d7b3b555a3af014f32b92778901398ce62ba | 49,314 | ipynb | Jupyter Notebook | tests/ipython-notebooks/Statsmodels.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | 1 | 2019-05-10T09:16:23.000Z | 2019-05-10T09:16:23.000Z | tests/ipython-notebooks/Statsmodels.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | null | null | null | tests/ipython-notebooks/Statsmodels.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | 1 | 2019-05-10T09:17:28.000Z | 2019-05-10T09:17:28.000Z | 122.977556 | 34,752 | 0.797826 | [
[
[
"# Statsmodels",
"_____no_output_____"
],
[
"Statsmodels is a Python module that allows users to explore data, estimate statistical models, and perform statistical tests. An extensive list of descriptive statistics, statistical tests, plotting functions, and result statistics are available for different types of data and each estimator.\n\nLibrary documentation: <a>http://statsmodels.sourceforge.net/</a>",
"_____no_output_____"
],
[
"### Linear Regression Models",
"_____no_output_____"
]
],
[
[
"# needed to display the graphs\n%matplotlib inline\nfrom pylab import *",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nfrom statsmodels.sandbox.regression.predstd import wls_prediction_std\nnp.random.seed(9876789)",
"/srv/venv/lib/python3.6/site-packages/statsmodels/compat/pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.\n from pandas.core import datetools\n"
],
[
"# create some artificial data\nnsample = 100\nx = np.linspace(0, 10, 100)\nX = np.column_stack((x, x**2))\nbeta = np.array([1, 0.1, 10])\ne = np.random.normal(size=nsample)",
"_____no_output_____"
],
[
"# add column of 1s for intercept\nX = sm.add_constant(X)\ny = np.dot(X, beta) + e",
"_____no_output_____"
],
[
"# fit model and print the summary\nmodel = sm.OLS(y, X)\nresults = model.fit()\nprint(results.summary())",
" OLS Regression Results \n==============================================================================\nDep. Variable: y R-squared: 1.000\nModel: OLS Adj. R-squared: 1.000\nMethod: Least Squares F-statistic: 4.020e+06\nDate: Fri, 19 Jan 2018 Prob (F-statistic): 2.83e-239\nTime: 08:48:50 Log-Likelihood: -146.51\nNo. Observations: 100 AIC: 299.0\nDf Residuals: 97 BIC: 306.8\nDf Model: 2 \nCovariance Type: nonrobust \n==============================================================================\n coef std err t P>|t| [0.025 0.975]\n------------------------------------------------------------------------------\nconst 1.3423 0.313 4.292 0.000 0.722 1.963\nx1 -0.0402 0.145 -0.278 0.781 -0.327 0.247\nx2 10.0103 0.014 715.745 0.000 9.982 10.038\n==============================================================================\nOmnibus: 2.042 Durbin-Watson: 2.274\nProb(Omnibus): 0.360 Jarque-Bera (JB): 1.875\nSkew: 0.234 Prob(JB): 0.392\nKurtosis: 2.519 Cond. No. 144.\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n"
],
[
"# individual results parameters can be accessed\nprint('Parameters: ', results.params)\nprint('R2: ', results.rsquared)",
"Parameters: [ 1.34233516 -0.04024948 10.01025357]\nR2: 0.999987936503\n"
],
[
"# example with non-linear relationship\nnsample = 50\nsig = 0.5\nx = np.linspace(0, 20, nsample)\nX = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))\nbeta = [0.5, 0.5, -0.02, 5.]\n\ny_true = np.dot(X, beta)\ny = y_true + sig * np.random.normal(size=nsample)\n\nres = sm.OLS(y, X).fit()\nprint(res.summary())",
" OLS Regression Results \n==============================================================================\nDep. Variable: y R-squared: 0.933\nModel: OLS Adj. R-squared: 0.928\nMethod: Least Squares F-statistic: 211.8\nDate: Fri, 19 Jan 2018 Prob (F-statistic): 6.30e-27\nTime: 08:48:52 Log-Likelihood: -34.438\nNo. Observations: 50 AIC: 76.88\nDf Residuals: 46 BIC: 84.52\nDf Model: 3 \nCovariance Type: nonrobust \n==============================================================================\n coef std err t P>|t| [0.025 0.975]\n------------------------------------------------------------------------------\nx1 0.4687 0.026 17.751 0.000 0.416 0.522\nx2 0.4836 0.104 4.659 0.000 0.275 0.693\nx3 -0.0174 0.002 -7.507 0.000 -0.022 -0.013\nconst 5.2058 0.171 30.405 0.000 4.861 5.550\n==============================================================================\nOmnibus: 0.655 Durbin-Watson: 2.896\nProb(Omnibus): 0.721 Jarque-Bera (JB): 0.360\nSkew: 0.207 Prob(JB): 0.835\nKurtosis: 3.026 Cond. No. 221.\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n"
],
[
"# look at some quantities of interest\nprint('Parameters: ', res.params)\nprint('Standard errors: ', res.bse)\nprint('Predicted values: ', res.predict())",
"Parameters: [ 0.46872448 0.48360119 -0.01740479 5.20584496]\nStandard errors: [ 0.02640602 0.10380518 0.00231847 0.17121765]\nPredicted values: [ 4.77072516 5.22213464 5.63620761 5.98658823 6.25643234\n 6.44117491 6.54928009 6.60085051 6.62432454 6.6518039\n 6.71377946 6.83412169 7.02615877 7.29048685 7.61487206\n 7.97626054 8.34456611 8.68761335 8.97642389 9.18997755\n 9.31866582 9.36587056 9.34740836 9.28893189 9.22171529\n 9.17751587 9.1833565 9.25708583 9.40444579 9.61812821\n 9.87897556 10.15912843 10.42660281 10.65054491 10.8063004\n 10.87946503 10.86825119 10.78378163 10.64826203 10.49133265\n 10.34519853 10.23933827 10.19566084 10.22490593 10.32487947\n 10.48081414 10.66779556 10.85485568 11.01006072 11.10575781]\n"
],
[
"# plot the true relationship vs. the prediction\nprstd, iv_l, iv_u = wls_prediction_std(res)\n\nfig, ax = plt.subplots(figsize=(8,6))\n\nax.plot(x, y, 'o', label=\"data\")\nax.plot(x, y_true, 'b-', label=\"True\")\nax.plot(x, res.fittedvalues, 'r--.', label=\"OLS\")\nax.plot(x, iv_u, 'r--')\nax.plot(x, iv_l, 'r--')\nax.legend(loc='best')",
"_____no_output_____"
]
],
[
[
"### Time-Series Analysis",
"_____no_output_____"
]
],
[
[
"from statsmodels.tsa.arima_process import arma_generate_sample",
"_____no_output_____"
],
[
"# generate some data\nnp.random.seed(12345)\narparams = np.array([.75, -.25])\nmaparams = np.array([.65, .35])",
"_____no_output_____"
],
[
"# set parameters\narparams = np.r_[1, -arparams]\nmaparam = np.r_[1, maparams]\nnobs = 250\ny = arma_generate_sample(arparams, maparams, nobs)",
"_____no_output_____"
],
[
"# add some dates information\ndates = sm.tsa.datetools.dates_from_range('1980m1', length=nobs)\ny = pd.Series(y, index=dates)\narma_mod = sm.tsa.ARMA(y, order=(2,2))\narma_res = arma_mod.fit(trend='nc', disp=-1)",
"_____no_output_____"
],
[
"print(arma_res.summary())",
" ARMA Model Results \n==============================================================================\nDep. Variable: y No. Observations: 250\nModel: ARMA(2, 2) Log Likelihood -245.887\nMethod: css-mle S.D. of innovations 0.645\nDate: Fri, 19 Jan 2018 AIC 501.773\nTime: 08:53:23 BIC 519.381\nSample: 01-31-1980 HQIC 508.860\n - 10-31-2000 \n==============================================================================\n coef std err z P>|z| [0.025 0.975]\n------------------------------------------------------------------------------\nar.L1.y 0.8411 0.403 2.089 0.038 0.052 1.630\nar.L2.y -0.2693 0.247 -1.092 0.276 -0.753 0.214\nma.L1.y 0.5352 0.412 1.299 0.195 -0.273 1.343\nma.L2.y 0.0157 0.306 0.051 0.959 -0.585 0.616\n Roots \n=============================================================================\n Real Imaginary Modulus Frequency\n-----------------------------------------------------------------------------\nAR.1 1.5617 -1.1289j 1.9271 -0.0996\nAR.2 1.5617 +1.1289j 1.9271 0.0996\nMA.1 -1.9835 +0.0000j 1.9835 0.5000\nMA.2 -32.1818 +0.0000j 32.1818 0.5000\n-----------------------------------------------------------------------------\n"
],
[
"Testing complete; Gopala",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec91de8c851eacd5fd6b1f59dff447c7c9a39cf9 | 26,334 | ipynb | Jupyter Notebook | src/python/plan_rec_demo.ipynb | ml4ai/tomcat-planrec | 70af6464851c641eb414fd1c818cfb5b1351e079 | [
"MIT"
] | null | null | null | src/python/plan_rec_demo.ipynb | ml4ai/tomcat-planrec | 70af6464851c641eb414fd1c818cfb5b1351e079 | [
"MIT"
] | 4 | 2021-06-17T15:21:25.000Z | 2021-07-18T20:20:07.000Z | src/python/plan_rec_demo.ipynb | ml4ai/tomcat-planrec | 70af6464851c641eb414fd1c818cfb5b1351e079 | [
"MIT"
] | null | null | null | 34.468586 | 120 | 0.388471 | [
[
[
"from PSDG_Domain import PSDG_Domain\nfrom sar_domain import actions, methods\nfrom nltk import Nonterminal\nfrom numpy import math,e\nimport copy\nfrom PSDG_plan_recognition import generate_initial_belief_state, update_belief_state, generate_belief_state_seq\nfrom pprint import pprint",
"_____no_output_____"
],
[
"sar = PSDG_Domain(methods,actions)\nnext_loc = [\n 'Entrance Lobby',\n 'Left Hallway',\n 'Security Office',\n 'Left Hallway',\n 'Break Room',\n 'Left Hallway',\n 'Executive Suite 1',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Entrance Walkway',\n 'Entrance Lobby',\n 'Left Hallway',\n 'Security Office',\n 'Left Hallway',\n 'Break Room',\n 'Left Hallway',\n 'Executive Suite 1',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Right Hallway',\n 'Center Hallway Top',\n 'Room 101',\n 'Center Hallway Top',\n 'Herbalife Conference Room',\n 'Center Hallway Top',\n 'Left Hallway',\n 'Center Hallway Top',\n 'Right Hallway',\n 'Room 107',\n 'Right Hallway',\n 'Center Hallway Top',\n 'Room 101',\n 'Center Hallway Top',\n 'Herbalife Conference Room',\n 'Room 107',\n 'Herbalife Conference Room',\n 'Room 107',\n 'Herbalife Conference Room',\n 'Room 107',\n 'Herbalife Conference Room',\n 'Herbalife Conference Room',\n 'Room 108',\n 'Room 109',\n 'Center Hallway Top',\n 'Room 110',\n 'Right Hallway',\n 'Room 111',\n 'Right Hallway',\n 'Room 107',\n 'Computer Farm',\n 'Room 107',\n 'Room 107',\n 'Computer Farm',\n 'Right Hallway',\n 'Room 109',\n 'Room 108',\n 'Room 107',\n 'Room 108',\n 'Room 109',\n 'Room 110',\n 'Room 110',\n 'Room 110',\n 'Right Hallway',\n 'Room 109',\n 'Room 111',\n 'Right Hallway',\n 'Computer Farm',\n 'Left Hallway',\n 'Break Room',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Right Hallway',\n 'Left Hallway',\n 'Executive Suite 2',\n 'Left Hallway',\n 'Room 111',\n 'Center Hallway Middle',\n 'Room 111',\n 'Center Hallway Middle',\n 'Right Hallway',\n 'Room 111',\n 'Room 105',\n 'Room 111',\n 'Room 105',\n 'Right Hallway',\n 'Room 103',\n 'Room 111',\n 'Room 103',\n 'Room 111',\n 'Right Hallway',\n 'Room 102',\n 'Room 111',\n 'Room 102',\n 'Right Hallway',\n 'Center Hallway Top',\n 'Room 101',\n 'Computer Farm',\n 'Room 101',\n 'Computer Farm',\n 'Room 101',\n 'Computer Farm',\n 'Room 101',\n 'Center Hallway Top',\n 'Computer Farm',\n 'Center Hallway Top',\n 'Left Hallway',\n \"King Chris's Office\",\n \"The King's Terrace\"]\n\n\nhallways = [\n 'Entrance Walkway',\n 'Entrance Lobby',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Center Hallway Top',\n 'Right Hallway',\n 'Center Hallway Bottom',\n 'Front Yard'\n]\n\nrooms = list(set([i for i in next_loc if not i in hallways]))\ninitial_state = {'time': 0, \n 'num_of_yellow_victims_found_in_adj_room': 0, \n 'num_of_yellow_victims_found_total': 0,\n 'num_of_green_victims_found_in_adj_room': 0,\n 'num_of_green_victims_found_total': 0,\n 'times_searched': 0,\n 'num_of_yellow_victims_found_in_current_room': 0,\n 'num_of_green_victims_found_in_current_room': 0,\n 'current_loc': 'Entrance Walkway',\n 'next_loc': next_loc,\n 'hallways': hallways,\n 'rooms': rooms,\n \"num_of_yellow_victims_triaged_in_current_room\": 0,\n \"num_of_green_victims_triaged_in_current_room\": 0,\n \"num_of_yellow_victims_triaged_total\": 0,\n \"num_of_green_victims_triaged_total\": 0,\n \"recent_search\": 0\n }\nsar.initialize_planning(initial_state)",
"_____no_output_____"
],
[
"q_0 = {'current_loc': 'Entrance Walkway',\n 'hallways': ['Entrance Walkway',\n 'Entrance Lobby',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Center Hallway Top',\n 'Right Hallway',\n 'Center Hallway Bottom',\n 'Front Yard'],\n 'next_loc': ['Entrance Lobby',\n 'Left Hallway',\n 'Security Office',\n 'Left Hallway',\n 'Break Room',\n 'Left Hallway',\n 'Executive Suite 1',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Entrance Walkway',\n 'Entrance Lobby',\n 'Left Hallway',\n 'Security Office',\n 'Left Hallway',\n 'Break Room',\n 'Left Hallway',\n 'Executive Suite 1',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Right Hallway',\n 'Center Hallway Top',\n 'Room 101',\n 'Center Hallway Top',\n 'Herbalife Conference Room',\n 'Center Hallway Top',\n 'Left Hallway',\n 'Center Hallway Top',\n 'Right Hallway',\n 'Room 107',\n 'Right Hallway',\n 'Center Hallway Top',\n 'Room 101',\n 'Center Hallway Top',\n 'Herbalife Conference Room',\n 'Room 107',\n 'Herbalife Conference Room',\n 'Room 107',\n 'Herbalife Conference Room',\n 'Room 107',\n 'Herbalife Conference Room',\n 'Herbalife Conference Room',\n 'Room 108',\n 'Room 109',\n 'Center Hallway Top',\n 'Room 110',\n 'Right Hallway',\n 'Room 111',\n 'Right Hallway',\n 'Room 107',\n 'Computer Farm',\n 'Room 107',\n 'Room 107',\n 'Computer Farm',\n 'Right Hallway',\n 'Room 109',\n 'Room 108',\n 'Room 107',\n 'Room 108',\n 'Room 109',\n 'Room 110',\n 'Room 110',\n 'Room 110',\n 'Right Hallway',\n 'Room 109',\n 'Room 111',\n 'Right Hallway',\n 'Computer Farm',\n 'Left Hallway',\n 'Break Room',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Right Hallway',\n 'Left Hallway',\n 'Executive Suite 2',\n 'Left Hallway',\n 'Room 111',\n 'Center Hallway Middle',\n 'Room 111',\n 'Center Hallway Middle',\n 'Right Hallway',\n 'Room 111',\n 'Room 105',\n 'Room 111',\n 'Room 105',\n 'Right Hallway',\n 'Room 103',\n 'Room 111',\n 'Room 103',\n 'Room 111',\n 'Right Hallway',\n 'Room 102',\n 'Room 111',\n 'Room 102',\n 'Right Hallway',\n 'Center Hallway Top',\n 'Room 101',\n 'Computer Farm',\n 'Room 101',\n 'Computer Farm',\n 'Room 101',\n 'Computer Farm',\n 'Room 101',\n 'Center Hallway Top',\n 'Computer Farm',\n 'Center Hallway Top',\n 'Left Hallway',\n \"King Chris's Office\",\n \"The King's Terrace\"],\n 'num_of_green_victims_found_in_adj_room': 0,\n 'num_of_green_victims_found_in_current_room': 0,\n 'num_of_green_victims_found_total': 0,\n 'num_of_green_victims_triaged_in_current_room': 0,\n 'num_of_green_victims_triaged_total': 0,\n 'num_of_yellow_victims_found_in_adj_room': 0,\n 'num_of_yellow_victims_found_in_current_room': 0,\n 'num_of_yellow_victims_found_total': 0,\n 'num_of_yellow_victims_triaged_in_current_room': 0,\n 'num_of_yellow_victims_triaged_total': 0,\n 'recent_search': 0,\n 'rooms': [\"King Chris's Office\",\n \"The King's Terrace\",\n 'Room 111',\n 'Room 105',\n 'Security Office',\n 'Room 103',\n 'Room 110',\n 'Herbalife Conference Room',\n 'Room 107',\n 'Room 101',\n 'Executive Suite 1',\n 'Break Room',\n 'Room 108',\n 'Computer Farm',\n 'Room 102',\n 'Room 109',\n 'Executive Suite 2'],\n 'time': 0,\n 'times_searched': 0}",
"_____no_output_____"
],
[
"q_1 = {'current_loc': 'Entrance Walkway',\n 'hallways': ['Entrance Walkway',\n 'Entrance Lobby',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Center Hallway Top',\n 'Right Hallway',\n 'Center Hallway Bottom',\n 'Front Yard'],\n 'next_loc': ['Entrance Lobby',\n 'Left Hallway',\n 'Security Office',\n 'Left Hallway',\n 'Break Room',\n 'Left Hallway',\n 'Executive Suite 1',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Entrance Walkway',\n 'Entrance Lobby',\n 'Left Hallway',\n 'Security Office',\n 'Left Hallway',\n 'Break Room',\n 'Left Hallway',\n 'Executive Suite 1',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Right Hallway',\n 'Center Hallway Top',\n 'Room 101',\n 'Center Hallway Top',\n 'Herbalife Conference Room',\n 'Center Hallway Top',\n 'Left Hallway',\n 'Center Hallway Top',\n 'Right Hallway',\n 'Room 107',\n 'Right Hallway',\n 'Center Hallway Top',\n 'Room 101',\n 'Center Hallway Top',\n 'Herbalife Conference Room',\n 'Room 107',\n 'Herbalife Conference Room',\n 'Room 107',\n 'Herbalife Conference Room',\n 'Room 107',\n 'Herbalife Conference Room',\n 'Herbalife Conference Room',\n 'Room 108',\n 'Room 109',\n 'Center Hallway Top',\n 'Room 110',\n 'Right Hallway',\n 'Room 111',\n 'Right Hallway',\n 'Room 107',\n 'Computer Farm',\n 'Room 107',\n 'Room 107',\n 'Computer Farm',\n 'Right Hallway',\n 'Room 109',\n 'Room 108',\n 'Room 107',\n 'Room 108',\n 'Room 109',\n 'Room 110',\n 'Room 110',\n 'Room 110',\n 'Right Hallway',\n 'Room 109',\n 'Room 111',\n 'Right Hallway',\n 'Computer Farm',\n 'Left Hallway',\n 'Break Room',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Right Hallway',\n 'Left Hallway',\n 'Executive Suite 2',\n 'Left Hallway',\n 'Room 111',\n 'Center Hallway Middle',\n 'Room 111',\n 'Center Hallway Middle',\n 'Right Hallway',\n 'Room 111',\n 'Room 105',\n 'Room 111',\n 'Room 105',\n 'Right Hallway',\n 'Room 103',\n 'Room 111',\n 'Room 103',\n 'Room 111',\n 'Right Hallway',\n 'Room 102',\n 'Room 111',\n 'Room 102',\n 'Right Hallway',\n 'Center Hallway Top',\n 'Room 101',\n 'Computer Farm',\n 'Room 101',\n 'Computer Farm',\n 'Room 101',\n 'Computer Farm',\n 'Room 101',\n 'Center Hallway Top',\n 'Computer Farm',\n 'Center Hallway Top',\n 'Left Hallway',\n \"King Chris's Office\",\n \"The King's Terrace\"],\n 'num_of_green_victims_found_in_adj_room': 0,\n 'num_of_green_victims_found_in_current_room': 0,\n 'num_of_green_victims_found_total': 0,\n 'num_of_green_victims_triaged_in_current_room': 0,\n 'num_of_green_victims_triaged_total': 0,\n 'num_of_yellow_victims_found_in_adj_room': 0,\n 'num_of_yellow_victims_found_in_current_room': 0,\n 'num_of_yellow_victims_found_total': 0,\n 'num_of_yellow_victims_triaged_in_current_room': 0,\n 'num_of_yellow_victims_triaged_total': 0,\n 'recent_search': 1,\n 'rooms': ['Room 108',\n 'Herbalife Conference Room',\n 'Executive Suite 2',\n \"King Chris's Office\",\n 'Room 102',\n 'Room 111',\n 'Room 101',\n 'Break Room',\n 'Security Office',\n 'Room 109',\n 'Room 105',\n 'Computer Farm',\n 'Room 103',\n 'Room 110',\n \"The King's Terrace\",\n 'Room 107',\n 'Executive Suite 1'],\n 'time': 5,\n 'times_searched': 1}",
"_____no_output_____"
],
[
"q_2 = {'current_loc': 'Entrance Lobby',\n 'hallways': ['Entrance Walkway',\n 'Entrance Lobby',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Center Hallway Top',\n 'Right Hallway',\n 'Center Hallway Bottom',\n 'Front Yard'],\n 'next_loc': ['Left Hallway',\n 'Security Office',\n 'Left Hallway',\n 'Break Room',\n 'Left Hallway',\n 'Executive Suite 1',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Entrance Walkway',\n 'Entrance Lobby',\n 'Left Hallway',\n 'Security Office',\n 'Left Hallway',\n 'Break Room',\n 'Left Hallway',\n 'Executive Suite 1',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Right Hallway',\n 'Center Hallway Top',\n 'Room 101',\n 'Center Hallway Top',\n 'Herbalife Conference Room',\n 'Center Hallway Top',\n 'Left Hallway',\n 'Center Hallway Top',\n 'Right Hallway',\n 'Room 107',\n 'Right Hallway',\n 'Center Hallway Top',\n 'Room 101',\n 'Center Hallway Top',\n 'Herbalife Conference Room',\n 'Room 107',\n 'Herbalife Conference Room',\n 'Room 107',\n 'Herbalife Conference Room',\n 'Room 107',\n 'Herbalife Conference Room',\n 'Herbalife Conference Room',\n 'Room 108',\n 'Room 109',\n 'Center Hallway Top',\n 'Room 110',\n 'Right Hallway',\n 'Room 111',\n 'Right Hallway',\n 'Room 107',\n 'Computer Farm',\n 'Room 107',\n 'Room 107',\n 'Computer Farm',\n 'Right Hallway',\n 'Room 109',\n 'Room 108',\n 'Room 107',\n 'Room 108',\n 'Room 109',\n 'Room 110',\n 'Room 110',\n 'Room 110',\n 'Right Hallway',\n 'Room 109',\n 'Room 111',\n 'Right Hallway',\n 'Computer Farm',\n 'Left Hallway',\n 'Break Room',\n 'Left Hallway',\n 'Center Hallway Middle',\n 'Right Hallway',\n 'Left Hallway',\n 'Executive Suite 2',\n 'Left Hallway',\n 'Room 111',\n 'Center Hallway Middle',\n 'Room 111',\n 'Center Hallway Middle',\n 'Right Hallway',\n 'Room 111',\n 'Room 105',\n 'Room 111',\n 'Room 105',\n 'Right Hallway',\n 'Room 103',\n 'Room 111',\n 'Room 103',\n 'Room 111',\n 'Right Hallway',\n 'Room 102',\n 'Room 111',\n 'Room 102',\n 'Right Hallway',\n 'Center Hallway Top',\n 'Room 101',\n 'Computer Farm',\n 'Room 101',\n 'Computer Farm',\n 'Room 101',\n 'Computer Farm',\n 'Room 101',\n 'Center Hallway Top',\n 'Computer Farm',\n 'Center Hallway Top',\n 'Left Hallway',\n \"King Chris's Office\",\n \"The King's Terrace\"],\n 'num_of_green_victims_found_in_adj_room': 0,\n 'num_of_green_victims_found_in_current_room': 0,\n 'num_of_green_victims_found_total': 0,\n 'num_of_green_victims_triaged_in_current_room': 0,\n 'num_of_green_victims_triaged_total': 0,\n 'num_of_yellow_victims_found_in_adj_room': 0,\n 'num_of_yellow_victims_found_in_current_room': 0,\n 'num_of_yellow_victims_found_total': 0,\n 'num_of_yellow_victims_triaged_in_current_room': 0,\n 'num_of_yellow_victims_triaged_total': 0,\n 'recent_search': 0,\n 'rooms': ['Room 108',\n 'Herbalife Conference Room',\n 'Executive Suite 2',\n \"King Chris's Office\",\n 'Room 102',\n 'Room 111',\n 'Room 101',\n 'Break Room',\n 'Security Office',\n 'Room 109',\n 'Room 105',\n 'Computer Farm',\n 'Room 103',\n 'Room 110',\n \"The King's Terrace\",\n 'Room 107',\n 'Executive Suite 1'],\n 'time': 8,\n 'times_searched': 0}",
"_____no_output_____"
],
[
"B = generate_initial_belief_state(sar,q_0,d=4)",
"_____no_output_____"
],
[
"B = update_belief_state(sar,B,q_0,q_1,d=4)",
"_____no_output_____"
],
[
"B = update_belief_state(sar,B,q_1,q_2,d=4)",
"_____no_output_____"
],
[
"pprint(B[1])",
"_____no_output_____"
],
[
"pprint(B[2])",
"_____no_output_____"
],
[
"pprint(B[3])",
"_____no_output_____"
],
[
"Q = [q_0,q_1,q_2]",
"_____no_output_____"
],
[
"B = generate_belief_state_seq(sar,B,Q,d=4)",
"_____no_output_____"
],
[
"pprint(B[3])",
"_____no_output_____"
],
[
"_,S = generate_belief_state_seq(sar,B,Q,d=4,include_sym_exp = True)",
"_____no_output_____"
],
[
"pprint(S[0])",
"_____no_output_____"
],
[
"pprint(S[1])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec91e3875f60ed53b081170863943e65557d4f3b | 744,504 | ipynb | Jupyter Notebook | content/NOTES 02.04 - PANDAS.ipynb | restrepo/ai4eng.v1 | ee143630570c83395c1c4cbe40f7f6904cc2d6f2 | [
"BSD-3-Clause"
] | null | null | null | content/NOTES 02.04 - PANDAS.ipynb | restrepo/ai4eng.v1 | ee143630570c83395c1c4cbe40f7f6904cc2d6f2 | [
"BSD-3-Clause"
] | null | null | null | content/NOTES 02.04 - PANDAS.ipynb | restrepo/ai4eng.v1 | ee143630570c83395c1c4cbe40f7f6904cc2d6f2 | [
"BSD-3-Clause"
] | null | null | null | 744,504 | 744,504 | 0.850568 | [
[
[
"# 02.04 - PANDAS",
"_____no_output_____"
]
],
[
[
"!wget --no-cache -O init.py -q https://raw.githubusercontent.com/rramosp/ai4eng.v1/main/content/init.py\nimport init; init.init(force_download=False); init.get_weblink()",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## `pandas` is mostly about manipulating tables of data\n\nsee this cheat sheet: https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf\n",
"_____no_output_____"
],
[
"## Pandas main object is a `DataFrame`\n\n- can read .csv, .excel, etc.\n",
"_____no_output_____"
]
],
[
[
"!head local/data/internet_facebook.dat",
"# Pais,Uso_Internet,Uso_Facebook\r\nArgentina,49.40,30.53\r\nAustralia,80.60,46.01\r\nBelgium,67.30,36.98\r\nBrazil,37.76,4.39\r\nCanada,72.30,52.08\r\nChile,50.90,46.14\r\nChina,22.40,0.05\r\nColombia,38.80,25.90\r\nEgypt,12.90,5.68\r\n"
],
[
"!wc local/data/weather_data_austin_2010.csv",
" 8760 17519 254046 local/data/weather_data_austin_2010.csv\r\n"
],
[
"df = pd.read_csv('local/data/internet_facebook.dat', index_col='# Pais')\ndf",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.tail()",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"df.index",
"_____no_output_____"
]
],
[
[
"**fix the index name**",
"_____no_output_____"
]
],
[
[
"df.index.name=\"Pais\"\ndf.head()",
"_____no_output_____"
],
[
"df.describe()",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nIndex: 33 entries, Argentina to Venezuela\nData columns (total 2 columns):\nUso_Internet 33 non-null float64\nUso_Facebook 33 non-null float64\ndtypes: float64(2)\nmemory usage: 792.0+ bytes\n"
]
],
[
[
"**a dataframe is made of `Series`**. Observe that each series has **its own type**",
"_____no_output_____"
]
],
[
[
"s1 = df[\"Uso_Internet\"]\ntype(s1)",
"_____no_output_____"
],
[
"s1",
"_____no_output_____"
]
],
[
[
"if the column name is not too fancy (empy spaces, accents, etc.) we can use columns names as python syntax.",
"_____no_output_____"
]
],
[
[
"df.Uso_Facebook",
"_____no_output_____"
]
],
[
[
"## DataFrame indexing\n\nis **NOT** exactly like numpy\n\n- first index\n - if string refers to columns\n - if `Series` of booleans is used as a filter\n \n- for selecting columns:\n - use `.loc` to select by Index\n - use `.iloc` to select by position ",
"_____no_output_____"
]
],
[
[
"df[\"Colombia\"]",
"_____no_output_____"
],
[
"df.loc[\"Colombia\"]",
"_____no_output_____"
]
],
[
[
"Index semantics is exact!!",
"_____no_output_____"
]
],
[
[
"df.loc[\"Colombia\":\"Spain\"]",
"_____no_output_____"
],
[
"df.iloc[10:15]",
"_____no_output_____"
]
],
[
[
"filtering",
"_____no_output_____"
]
],
[
[
"df[df.Uso_Internet>80]",
"_____no_output_____"
]
],
[
[
"combined conditions",
"_____no_output_____"
]
],
[
[
"df[(df.Uso_Internet>50)&(df.Uso_Facebook>50)]",
"_____no_output_____"
],
[
"df[(df.Uso_Internet>50)|(df.Uso_Facebook>50)]",
"_____no_output_____"
]
],
[
[
"## Managing data",
"_____no_output_____"
],
[
" \n \nobserve csv structure:\n- missing column name\n- missing data ",
"_____no_output_____"
]
],
[
[
"!head local/data/comptagevelo2009.csv",
"Date,,Berri1,Maisonneuve_1,Maisonneuve_2,Brébeuf\r\n01/01/2009,00:00,29,20,35,\r\n02/01/2009,00:00,19,3,22,\r\n03/01/2009,00:00,24,12,22,\r\n04/01/2009,00:00,24,8,15,\r\n05/01/2009,00:00,120,111,141,\r\n06/01/2009,00:00,261,146,236,\r\n07/01/2009,00:00,60,33,80,\r\n08/01/2009,00:00,24,14,14,\r\n09/01/2009,00:00,35,20,32,\r\n"
],
[
"d = pd.read_csv(\"local/data/comptagevelo2009.csv\")\nd",
"_____no_output_____"
],
[
"d.columns, d.shape\n",
"_____no_output_____"
]
],
[
[
"numerical features",
"_____no_output_____"
]
],
[
[
"d.describe()",
"_____no_output_____"
],
[
"d[\"Berri1\"].head()",
"_____no_output_____"
],
[
"d[\"Unnamed: 1\"].unique()\n",
"_____no_output_____"
],
[
"d[\"Berri1\"].unique()\n",
"_____no_output_____"
],
[
"d[\"Berri1\"].dtype, d[\"Date\"].dtype, d[\"Unnamed: 1\"].dtype\n",
"_____no_output_____"
],
[
"d.index\n",
"_____no_output_____"
]
],
[
[
"## Fixing data\n\nobserve we set one column as the index one, and we **convert** it to date object type",
"_____no_output_____"
]
],
[
[
"d.Date",
"_____no_output_____"
],
[
"d.index = pd.to_datetime(d.Date)\ndel(d[\"Date\"])\ndel(d[\"Unnamed: 1\"])\nd.head()",
"_____no_output_____"
],
[
"d.index",
"_____no_output_____"
]
],
[
[
"let's fix columns names",
"_____no_output_____"
]
],
[
[
"d.columns=[\"Berri\", \"Mneuve1\", \"Mneuve2\", \"Brebeuf\"]\nd.head()",
"_____no_output_____"
],
[
"for col in d.columns:\n print (col, np.sum(pd.isnull(d[col])))",
"Berri 0\nMneuve1 0\nMneuve2 0\nBrebeuf 187\n"
],
[
"d.shape",
"_____no_output_____"
],
[
"d['Brebeuf'].describe()",
"_____no_output_____"
],
[
"plt.hist(d.Brebeuf, bins=30);",
"_____no_output_____"
]
],
[
[
"**fix missing**!!!",
"_____no_output_____"
]
],
[
[
"d.Brebeuf.fillna(d.Brebeuf.mean(), inplace=True)\n",
"_____no_output_____"
],
[
"d['Brebeuf'].describe()",
"_____no_output_____"
],
[
"plt.hist(d.Brebeuf, bins=30);",
"_____no_output_____"
],
[
"d",
"_____no_output_____"
]
],
[
[
"let's make sure it is sorted",
"_____no_output_____"
]
],
[
[
"d.sort_index(inplace=True)\nd.head()",
"_____no_output_____"
]
],
[
[
"## Filtering",
"_____no_output_____"
]
],
[
[
"d[d.Berri>6000]\n",
"_____no_output_____"
],
[
"d[(d.Berri>6000) & (d.Brebeuf<7000)]\n",
"_____no_output_____"
]
],
[
[
"## Locating",
"_____no_output_____"
]
],
[
[
"d[d.Berri>5500].sort_index(axis=0)\n",
"_____no_output_____"
],
[
"d.iloc[100:110]\n",
"_____no_output_____"
]
],
[
[
"**dates as INDEX have special semantics**",
"_____no_output_____"
]
],
[
[
"d.loc[\"2009-10-01\":\"2009-10-10\"]\n",
"_____no_output_____"
]
],
[
[
"can do sorting across any criteria",
"_____no_output_____"
]
],
[
[
"d.sort_values(by=\"Berri\").head()\n",
"_____no_output_____"
]
],
[
[
"and chain operations",
"_____no_output_____"
]
],
[
[
"d.sort_values(by=\"Berri\").loc[\"2009-10-01\":\"2009-10-10\"]\n",
"_____no_output_____"
]
],
[
[
"## Time series operations",
"_____no_output_____"
]
],
[
[
"d.rolling(3).mean().head(10)\n",
"_____no_output_____"
],
[
"d.index = d.index + pd.Timedelta(\"5m\")\nd.head()",
"_____no_output_____"
],
[
"d.shift(freq=pd.Timedelta(days=365)).head()\n",
"_____no_output_____"
]
],
[
[
"## Downsampling",
"_____no_output_____"
]
],
[
[
"d.resample(pd.Timedelta(\"2d\")).first().head()\n",
"_____no_output_____"
],
[
"d.resample(pd.Timedelta(\"2d\")).mean().head()\n",
"_____no_output_____"
]
],
[
[
"## Upsampling",
"_____no_output_____"
]
],
[
[
"d.resample(pd.Timedelta(\"12h\")).first().head()\n",
"_____no_output_____"
],
[
"d.resample(pd.Timedelta(\"12h\")).fillna(method=\"pad\").head()\n",
"_____no_output_____"
]
],
[
[
"## Building Dataframes from other structures",
"_____no_output_____"
]
],
[
[
"\na = np.random.randint(10,size=(20,5))\na",
"_____no_output_____"
],
[
"k = pd.DataFrame(a, columns=[\"uno\", \"dos\", \"tres\", \"cuatro\", \"cinco\"], index=range(10,10+len(a)))\nk",
"_____no_output_____"
]
],
[
[
"## `.values` access the underlying `numpy` structure",
"_____no_output_____"
]
],
[
[
"d.values",
"_____no_output_____"
]
],
[
[
"## some out-of-the-box plotting\n\nbut recall that we always can do custom plotting",
"_____no_output_____"
]
],
[
[
"d.plot(figsize=(15,3))\n",
"_____no_output_____"
],
[
"plt.figure(figsize=(15,3))\nplt.plot(d.Berri)",
"_____no_output_____"
],
[
"d.Berri.cumsum().plot()\n",
"_____no_output_____"
],
[
"plt.scatter(d.Berri, d.Brebeuf)\n",
"_____no_output_____"
],
[
"pd.plotting.scatter_matrix(d, figsize=(10,10));",
"_____no_output_____"
]
],
[
[
"## Grouping",
"_____no_output_____"
]
],
[
[
"\nd[\"month\"] = [i.month for i in d.index]\nd.head()",
"_____no_output_____"
],
[
"d.groupby(\"month\").max()\n",
"_____no_output_____"
],
[
"d.groupby(\"month\").count()\n",
"_____no_output_____"
]
],
[
[
"## Time series\n\nobserve we can **establish at load time** many thing if the dataset is relatively clean",
"_____no_output_____"
]
],
[
[
"\ntiempo=pd.read_csv('local/data/weather_data_austin_2010.csv',parse_dates=['Date'], dayfirst=True ,index_col='Date')\ntiempo",
"_____no_output_____"
],
[
"tiempo.loc['2010-08-01':'2010-10-30']\n",
"_____no_output_____"
],
[
"tiempo.loc['2010-06'].head()\n",
"_____no_output_____"
],
[
"tiempo.sample(10)\n",
"_____no_output_____"
],
[
"tiempo.sample(frac=0.01)\n",
"_____no_output_____"
]
],
[
[
"## Resampling",
"_____no_output_____"
]
],
[
[
"tiempo.head()\n",
"_____no_output_____"
],
[
"tiempo.resample(\"5d\").mean().head()",
"_____no_output_____"
],
[
"tiempo.resample(\"5d\").mean().head()\n",
"_____no_output_____"
],
[
"tiempo.resample(\"5d\").mean().head()\n",
"_____no_output_____"
],
[
"tiempo.resample(\"30min\").mean()[:15]\n",
"_____no_output_____"
],
[
"\nsubt=tiempo.between_time(start_time='1:00',end_time='12:00')\nsubt",
"_____no_output_____"
],
[
"tiempo.index.weekday\n",
"_____no_output_____"
],
[
"tiempo.index.month\n",
"_____no_output_____"
],
[
"tiempo.index.day\n",
"_____no_output_____"
],
[
"tiempo.plot(style='.')\n",
"_____no_output_____"
],
[
"tiempo['2010-01'].plot()",
"_____no_output_____"
],
[
"tiempo['2010-01-04'].plot()",
"_____no_output_____"
]
],
[
[
"## Rolling operations",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n### permite obtener data frames directamente de internet\n!pip install yfinance",
"Collecting yfinance\n Downloading https://files.pythonhosted.org/packages/c2/31/8b374a12b90def92a4e27d0fc595fc43635f395984e36a075244d98bd265/yfinance-0.1.54.tar.gz\nCollecting pandas>=0.24\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/4a/6a/94b219b8ea0f2d580169e85ed1edc0163743f55aaeca8a44c2e8fc1e344e/pandas-1.0.3-cp37-cp37m-manylinux1_x86_64.whl (10.0MB)\n\u001b[K |████████████████████████████████| 10.0MB 453kB/s eta 0:00:01\n\u001b[?25hRequirement already satisfied: numpy>=1.15 in /opt/anaconda3/lib/python3.7/site-packages (from yfinance) (1.15.1)\nRequirement already satisfied: requests>=2.20 in /opt/anaconda3/lib/python3.7/site-packages (from yfinance) (2.22.0)\nCollecting multitasking>=0.0.7\n Downloading https://files.pythonhosted.org/packages/69/e7/e9f1661c28f7b87abfa08cb0e8f51dad2240a9f4f741f02ea839835e6d18/multitasking-0.0.9.tar.gz\nRequirement already satisfied: pytz>=2017.2 in /opt/anaconda3/lib/python3.7/site-packages (from pandas>=0.24->yfinance) (2018.5)\nRequirement already satisfied: python-dateutil>=2.6.1 in /opt/anaconda3/lib/python3.7/site-packages (from pandas>=0.24->yfinance) (2.7.3)\nRequirement already satisfied: idna<2.9,>=2.5 in /opt/anaconda3/lib/python3.7/site-packages (from requests>=2.20->yfinance) (2.8)\nRequirement already satisfied: certifi>=2017.4.17 in /opt/anaconda3/lib/python3.7/site-packages (from requests>=2.20->yfinance) (2019.9.11)\nRequirement already satisfied: chardet<3.1.0,>=3.0.2 in /opt/anaconda3/lib/python3.7/site-packages (from requests>=2.20->yfinance) (3.0.4)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/anaconda3/lib/python3.7/site-packages (from requests>=2.20->yfinance) (1.24.2)\nRequirement already satisfied: six>=1.5 in /opt/anaconda3/lib/python3.7/site-packages (from python-dateutil>=2.6.1->pandas>=0.24->yfinance) (1.13.0)\nBuilding wheels for collected packages: yfinance, multitasking\n Building wheel for yfinance (setup.py) ... \u001b[?25ldone\n\u001b[?25h Created wheel for yfinance: filename=yfinance-0.1.54-py2.py3-none-any.whl size=22411 sha256=d937a7a089b0883844df4d4f388c8bcc4dc145446c43e698a5181e6ebd099621\n Stored in directory: /home/rlx/.cache/pip/wheels/f9/e3/5b/ec24dd2984b12d61e0abf26289746c2436a0e7844f26f2515c\n Building wheel for multitasking (setup.py) ... \u001b[?25ldone\n\u001b[?25h Created wheel for multitasking: filename=multitasking-0.0.9-cp37-none-any.whl size=8368 sha256=bfe866419b7d2ac5e39bab8a20d2378ac727b56d6dc6e3f248e2a4115ea368bb\n Stored in directory: /home/rlx/.cache/pip/wheels/37/fa/73/d492849e319038eb4d986f5152e4b19ffb1bc0639da84d2677\nSuccessfully built yfinance multitasking\nInstalling collected packages: pandas, multitasking, yfinance\n Found existing installation: pandas 0.23.4\n Uninstalling pandas-0.23.4:\n Successfully uninstalled pandas-0.23.4\nSuccessfully installed multitasking-0.0.9 pandas-1.0.3 yfinance-0.1.54\n"
],
[
"import yfinance as yf\n",
"_____no_output_____"
],
[
"#define the ticker symbol\ntickerSymbol = 'MSFT'\n\n#get data on this ticker\ntickerData = yf.Ticker(tickerSymbol)\n\n#get the historical prices for this ticker\ngs = tickerData.history(period='1d', start='2010-1-1', end='2020-1-25')\n\n#see your data\ngs",
"_____no_output_____"
],
[
"gs.Close.rolling(10).mean().head(20)\n",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,3))\nplt.plot(gs.Close)\nplt.plot(gs.Close.rolling(50).mean())\n",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,3))\nplt.plot(gs.iloc[:400].Close, label=\"original\")\nplt.plot(gs.iloc[:400].Close.rolling(50).mean(), label=\"rolling\")\nplt.plot(gs.iloc[:400].Close.rolling(50, center=True).mean(), label=\"center\")\nplt.legend();",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,3))\nplt.plot(gs.iloc[:400].Close.rolling(10).mean())\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec91fc26c91fc21d4c804c19e69a4551c40feac2 | 458,225 | ipynb | Jupyter Notebook | jupyter_notebooks/matplotlib/tutorial/tutorial_by_chris_moffitt.ipynb | manual123/Nacho-Jupyter-Notebooks | e75523434b1a90313a6b44e32b056f63de8a7135 | [
"MIT"
] | 2 | 2021-02-13T05:52:05.000Z | 2022-02-08T09:52:35.000Z | matplotlib/tutorial/tutorial_by_chris_moffitt.ipynb | manual123/Nacho-Jupyter-Notebooks | e75523434b1a90313a6b44e32b056f63de8a7135 | [
"MIT"
] | null | null | null | matplotlib/tutorial/tutorial_by_chris_moffitt.ipynb | manual123/Nacho-Jupyter-Notebooks | e75523434b1a90313a6b44e32b056f63de8a7135 | [
"MIT"
] | null | null | null | 534.061772 | 145,580 | 0.929692 | [
[
[
"## Effectively Using Matplotlib\n\nFull article on [pbpython.com](http://pbpython.com/effective-matplotlib.html)",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image \nImage(filename='anatomy.png')",
"_____no_output_____"
],
[
"# Standard imports\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import FuncFormatter",
"_____no_output_____"
],
[
"# Ensure plots are displayed inline\n%matplotlib inline\n#%matplotlib notebook",
"_____no_output_____"
],
[
"# Read in some data to show some real world exampled\ndf = pd.read_excel(\"https://github.com/chris1610/pbpython/blob/master/data/sample-salesv3.xlsx?raw=true\")",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
]
],
[
[
"Summarize the data by customer and get the top 10 customers.\nAlso, clean up the column names for consistency",
"_____no_output_____"
]
],
[
[
"top_10 = (df.groupby('name')['ext price', 'quantity'].agg({'ext price': 'sum', 'quantity': 'count'})\n .sort_values(by='ext price', ascending=False))[:10].reset_index()\ntop_10.rename(columns={'name': 'Name', 'ext price': 'Sales', 'quantity': 'Purchases'}, inplace=True)",
"_____no_output_____"
],
[
"top_10",
"_____no_output_____"
]
],
[
[
"Look at available styles",
"_____no_output_____"
]
],
[
[
"plt.style.available",
"_____no_output_____"
]
],
[
[
"Use the ggplot style to improve the overall esthetics.",
"_____no_output_____"
]
],
[
[
"plt.style.use('ggplot')",
"_____no_output_____"
]
],
[
[
"Basic pandas plot to get started",
"_____no_output_____"
]
],
[
[
"top_10.plot(kind='barh', y=\"Sales\", x=\"Name\");",
"_____no_output_____"
]
],
[
[
"Get the figure and axes for future customization",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots()\ntop_10.plot(kind='barh', y=\"Sales\", x=\"Name\", ax=ax);",
"_____no_output_____"
]
],
[
[
"Set some limits and labels",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots()\ntop_10.plot(kind='barh', y=\"Sales\", x=\"Name\", ax=ax)\nax.set_xlim([-10000, 140000])\nax.set_xlabel('Total Revenue')\nax.set_ylabel('Customer');",
"_____no_output_____"
]
],
[
[
"Alternative api using set",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots()\ntop_10.plot(kind='barh', y=\"Sales\", x=\"Name\", ax=ax)\nax.set_xlim([-10000, 140000])\nax.set(title='2014 Revenue', xlabel='Total Revenue', ylabel='Customer');",
"_____no_output_____"
]
],
[
[
"Hide the legend since it is not useful in this case. Also change the size of the image.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(5, 6))\ntop_10.plot(kind='barh', y=\"Sales\", x=\"Name\", ax=ax)\nax.set_xlim([-10000, 140000])\nax.set(title='2014 Revenue', xlabel='Total Revenue', ylabel='Customer')\nax.legend().set_visible(False)",
"_____no_output_____"
]
],
[
[
"Add some annotations, and turn off the grid - just to show how it is done",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(5, 6))\ntop_10.plot(kind='barh', y=\"Sales\", x=\"Name\", ax=ax)\navg = top_10['Sales'].mean()\nax.set_xlim([-10000, 140000])\nax.set(title='2014 Revenue', xlabel='Total Revenue', ylabel='Customer')\nax.axvline(x=avg, color='b', label='Average', linestyle='--', linewidth=1)\nax.grid(False)\nax.legend().set_visible(False)",
"_____no_output_____"
]
],
[
[
"To clean up the currency in Total Revenue, we define a custom formatting function",
"_____no_output_____"
]
],
[
[
"def currency(x, pos):\n 'The two args are the value and tick position'\n if x >= 1000000:\n return '${:1.1f}M'.format(x*1e-6)\n return '${:1.0f}K'.format(x*1e-3)",
"_____no_output_____"
]
],
[
[
"Use the new formatter",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots()\ntop_10.plot(kind='barh', y=\"Sales\", x=\"Name\", ax=ax)\nax.set_xlim([-10000, 140000])\nax.set(title='2014 Revenue', xlabel='Total Revenue', ylabel='Customer')\nformatter = FuncFormatter(currency)\nax.xaxis.set_major_formatter(formatter)\nax.legend().set_visible(False)",
"_____no_output_____"
]
],
[
[
"Fully commented example",
"_____no_output_____"
]
],
[
[
"# Create the figure and the axes\nfig, ax = plt.subplots()\n\n# Plot the data and get the averaged\ntop_10.plot(kind='barh', y=\"Sales\", x=\"Name\", ax=ax)\navg = top_10['Sales'].mean()\n\n# Set limits and labels\nax.set_xlim([-10000, 140000])\nax.set(title='2014 Revenue', xlabel='Total Revenue', ylabel='Customer')\n\n# Add a line for the average\nax.axvline(x=avg, color='b', label='Average', linestyle='--', linewidth=1)\n\n# Annotate the new customers\nfor cust in [3, 5, 8]:\n ax.text(115000, cust, \"New Customer\")\n \n# Format the currency\nformatter = FuncFormatter(currency)\nax.xaxis.set_major_formatter(formatter)\n\n# Hide the legend\nax.legend().set_visible(False);",
"_____no_output_____"
]
],
[
[
"Add two plots to a figure",
"_____no_output_____"
]
],
[
[
"# Get the figure and the axes\nfig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(10,4))\n\n# Build the first plot\ntop_10.plot(kind='barh', x='Name', y='Sales', ax=ax0)\nax0.set(title='Revenue', xlabel='Total revenue', ylabel='Customers')\nformatter = FuncFormatter(currency)\nax0.xaxis.set_major_formatter(formatter)\n\n# Add average line to the first plot\nrevenue_average = top_10['Sales'].mean()\nax0.axvline(x=revenue_average, color='b', label='Average', linestyle='--', linewidth=1)\n\n# Build the second plot\ntop_10.plot(kind='barh', x='Name', y='Purchases', ax=ax1)\nax1.set(title='Units', xlabel='Total units', ylabel='')\n\n# Add average line to the second plot\npurchases_average = top_10['Purchases'].mean()\nax1.axvline(x=purchases_average, color='b', label='Average', linestyle='--', linewidth=1)\n\n# Title the figure\nfig.suptitle('2014 Sales Analysis', fontsize=14, fontweight='bold')\n\n# Hide the plot legends\nax0.legend().set_visible(False)\nax1.legend().set_visible(False)",
"_____no_output_____"
]
],
[
[
"Save some files",
"_____no_output_____"
]
],
[
[
"# Let's look at how to save the files\nfig.canvas.get_supported_filetypes()",
"_____no_output_____"
],
[
"fig.savefig('sales.png', transparent=False, dpi=80, bbox_inches=\"tight\")",
"_____no_output_____"
]
],
[
[
"Display the file to see what it looks like",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image",
"_____no_output_____"
],
[
"Image('sales.png')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec9207c0bd3add561db7347558cf2409e936472a | 48,084 | ipynb | Jupyter Notebook | Model Evaluation and Hyperparameter Tuning/P24.ipynb | ZohebAbai/Projects-both-in-Python-R | 18a60fb3159d5e480a614675bfbaa79614df43a7 | [
"MIT"
] | null | null | null | Model Evaluation and Hyperparameter Tuning/P24.ipynb | ZohebAbai/Projects-both-in-Python-R | 18a60fb3159d5e480a614675bfbaa79614df43a7 | [
"MIT"
] | null | null | null | Model Evaluation and Hyperparameter Tuning/P24.ipynb | ZohebAbai/Projects-both-in-Python-R | 18a60fb3159d5e480a614675bfbaa79614df43a7 | [
"MIT"
] | 1 | 2019-08-26T20:52:09.000Z | 2019-08-26T20:52:09.000Z | 122.040609 | 21,504 | 0.855544 | [
[
[
"## We want to 'Evaluate our model Performance' and then 'Improving our model Performance'.",
"_____no_output_____"
],
[
"### We shall implement k-fold cross validation and GridSearch in our earlier developed kernel svm model. ",
"_____no_output_____"
],
[
"### Hyperparameters are the parameters we chose ourselves while model building. It can be optimized.",
"_____no_output_____"
]
],
[
[
"# Importing the libraries\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd",
"_____no_output_____"
],
[
"# Importing the dataset\ndataset = pd.read_csv('Social_Network_Ads.csv')\nX = dataset.iloc[:, [2, 3]].values\ny = dataset.iloc[:, 4].values",
"_____no_output_____"
],
[
"dataset.head()",
"_____no_output_____"
],
[
"# Splitting the dataset into the Training set and Test set\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)",
"_____no_output_____"
],
[
"# Feature Scaling\nfrom sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_test)",
"_____no_output_____"
],
[
"# Fitting Kernel SVM to the Training set\nfrom sklearn.svm import SVC\nclassifier = SVC(kernel = 'rbf', random_state = 0)\nclassifier.fit(X_train, y_train)",
"_____no_output_____"
],
[
"# Predicting the Test set results\ny_pred = classifier.predict(X_test)",
"_____no_output_____"
],
[
"# Making the Confusion Matrix\nfrom sklearn.metrics import confusion_matrix\nconfusion_matrix(y_test, y_pred)",
"_____no_output_____"
],
[
"# Applying k-Fold Cross Validation\nfrom sklearn.model_selection import cross_val_score\naccuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10)\nprint(accuracies.mean())\nprint(accuracies.std())",
"0.9005302187615868\n0.06388957356626285\n"
]
],
[
[
"### Here we have high mean of 10 accuracies for 10 model evaluation. Also all the different accuracies doesn't vary much from the mean accuracy as variance is low enough. Thus our model has low bias and low variance. That's good!",
"_____no_output_____"
]
],
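[
[
"# Optional illustration (not in the original notebook): print the individual fold scores\n# behind the mean/std above; a narrow spread around the mean is what 'low variance' means here.\n# Assumes the `accuracies` array from the cross_val_score cell is still in scope.\nprint(accuracies)\nprint('Deviation of each fold from the mean: ', accuracies - accuracies.mean())",
"_____no_output_____"
]
],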
[
[
"# Applying Grid Search to find the best model and the best parameters\nfrom sklearn.model_selection import GridSearchCV\nparameters = [{'C': [1, 10, 100, 1000], 'kernel': ['linear']},\n {'C': [1, 10, 100, 1000], 'kernel': ['rbf'], 'gamma': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]}]\ngrid_search = GridSearchCV(estimator = classifier,\n param_grid = parameters,\n scoring = 'accuracy',\n cv = 10,\n n_jobs = -1)\ngrid_search = grid_search.fit(X_train, y_train)\nprint(grid_search.best_score_)\nprint(grid_search.best_params_)",
"0.9033333333333333\n{'C': 1, 'gamma': 0.7, 'kernel': 'rbf'}\n"
]
],
[
[
"### The accuracy after gridsearch is very close to our model, which indicates that our model hyperparameters are good. ",
"_____no_output_____"
]
],
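[
[
"### A small follow-up sketch (not part of the original notebook): since GridSearchCV uses refit=True by default, the fitted search object keeps a copy of the best model refit on the full training set, so we can double-check the tuned hyperparameters on the held-out test set.",
"_____no_output_____"
]
],
[
[
"# grid_search, X_test and y_test come from the cells above\nbest_model = grid_search.best_estimator_\nprint(best_model)\nprint('Test set accuracy of the tuned model: ', best_model.score(X_test, y_test))",
"_____no_output_____"
]
],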
[
[
"# Visualising the Training set results\nfrom matplotlib.colors import ListedColormap\nX_set, y_set = X_train, y_train\nX1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),\n np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))\nplt.contourf(X1, X2, grid_search.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),\n alpha = 0.75, cmap = ListedColormap(('yellow', 'white')))\nplt.xlim(X1.min(), X1.max())\nplt.ylim(X2.min(), X2.max())\nfor i, j in enumerate(np.unique(y_set)):\n plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],\n color = ListedColormap(('red', 'green'))(i), label = j)\nplt.title('Kernel SVM (Training set)')\nplt.xlabel('Age')\nplt.ylabel('Estimated Salary')\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"# Visualising the Test set results\nfrom matplotlib.colors import ListedColormap\nX_set, y_set = X_test, y_test\nX1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),\n np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))\nplt.contourf(X1, X2, grid_search.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),\n alpha = 0.75, cmap = ListedColormap(('yellow', 'white')))\nplt.xlim(X1.min(), X1.max())\nplt.ylim(X2.min(), X2.max())\nfor i, j in enumerate(np.unique(y_set)):\n plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],\n color = ListedColormap(('red', 'green'))(i), label = j)\nplt.title('Kernel SVM (Test set)')\nplt.xlabel('Age')\nplt.ylabel('Estimated Salary')\nplt.legend()\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec9216d3e3d0d45e903f18c839fe9ea33abfe20b | 50,951 | ipynb | Jupyter Notebook | notebooks/Computational Seismology/Summation-by-Parts/1d/sp_elasticwave1D.ipynb | krischer/seismo_live_build | e4e8e59d9bf1b020e13ac91c0707eb907b05b34f | [
"CC-BY-3.0"
] | 3 | 2020-07-11T10:01:39.000Z | 2020-12-16T14:26:03.000Z | notebooks/Computational Seismology/Summation-by-Parts/1d/sp_elasticwave1D.ipynb | krischer/seismo_live_build | e4e8e59d9bf1b020e13ac91c0707eb907b05b34f | [
"CC-BY-3.0"
] | null | null | null | notebooks/Computational Seismology/Summation-by-Parts/1d/sp_elasticwave1D.ipynb | krischer/seismo_live_build | e4e8e59d9bf1b020e13ac91c0707eb907b05b34f | [
"CC-BY-3.0"
] | 3 | 2020-11-11T05:05:41.000Z | 2022-03-12T09:36:24.000Z | 120.167453 | 33,088 | 0.813468 | [
[
[
"<div style='background-image: url(\"../../../share/images/header.svg\") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>\n <div style=\"float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px\">\n <div style=\"position: relative ; top: 50% ; transform: translatey(-50%)\">\n <div style=\"font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%\">Computational Seismology</div>\n <div style=\"font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)\"> SBP-SAT finite difference method for the 1D elastic wave equation in first order form </div>\n </div>\n </div>\n</div>",
"_____no_output_____"
],
[
"This notebook is based on the paper [Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models](https://pangea.stanford.edu/~edunham/publications/Duru_Dunham_FD3d_JCP16.pdf), and on the theory of summation-by-parts (SBP) finite difference methods and weak implementation of boundary conditions using the simultaneous-approximation-term (SAT).\n\n\n##### Authors:\n* Kenneth Duru\n\n---",
"_____no_output_____"
],
[
"## Basic Equations ##\n\nWe consider in this notebook a system of hyperbolic PDE, the elastic wave equation, in 1D. The source-free elastic wave equation in a heterogeneous 1D medium is \n\n\\begin{align}\n\\rho(x)\\frac{\\partial v(x,t)}{\\partial t} -\\frac{\\partial\\sigma(x,t)}{\\partial x} & = 0\\\\\n\\frac{1}{\\mu(x)}\\frac{\\partial\\sigma(x,t)}{\\partial t} -\\frac{\\partial v(x,t)}{\\partial x} & = 0 \n\\end{align}\n\nwith $\\rho(x)$ the density, $\\mu(x)$ the shear modulus and $x \\in [0, L]$. At the boundaries $ x = 0, x = L$ we pose the general well-posed linear boundary conditions\n\n\\begin{equation}\n\\begin{split}\nB_0(v, \\sigma, Z_{s}, r_0): =\\frac{Z_{s}}{2}\\left({1-r_0}\\right){v} -\\frac{1+r_0}{2} {\\sigma} = 0, \\quad \\text{at} \\quad x = 0, \\\\\n B_L(v, \\sigma, Z_{s}, r_n): =\\frac{Z_{s}}{2} \\left({1-r_n}\\right){v} + \\frac{1+r_n}{2}{\\sigma} = 0, \\quad \\text{at} \\quad x = L.\n \\end{split}\n\\end{equation}\n\nwith the reflection coefficients $r_0$, $r_n$ being real numbers and $|r_0|, |r_n| \\le 1$. \n\nNote that at $x = 0$, while $r_0 = -1$ yields a clamped wall, $r_0 = 0$ yields an absorbing boundary, and with $r_0 = 1$ we have a free-surface boundary condition. Similarly, at $x = L$, $r_n = -1$ yields a clamped wall, $r_n = 0$ yields an absorbing boundary, and $r_n = 1$ gives a free-surface boundary condition.\n\nWe introduce the mechanical energy defined by\n\\begin{equation}\nE(t) = \\int_0^L{\\left(\\frac{\\rho(x)}{2} v^2(x, t) + \\frac{1}{2\\mu(x)}\\sigma^2(x, t)\\right) dx},\n\\end{equation}\n\nwhere $E(t)$ is the sum of the kinetic energy and the strain energy.\nWe have \n\n\\begin{equation}\n\\frac{d E(t)}{dt} = -v(0, t)\\sigma(0, t) + v(L, t)\\sigma(L, t) \\le 0.\n\\end{equation}\n\nFrom the boundary conditions, it is easy to check that $v(0, t)\\sigma(0, t) \\ge 0$ and $v(L, t)\\sigma(L, t) \\le 0$, for all $|r_0|, |r_n| \\le 1$. This energy loss through the boundaries is what the numerical method should emulate. \n\n1) Discretize the spatial domain $x$ into $N$ discrete nodes with the uniform spatial step $\\Delta{x} = L/(N-1)$, denote the unknown fields at the nodes: $\\mathbf{v}\\left(t\\right) = [v_1\\left(t\\right), v_2\\left(t\\right), \\cdots, v_N\\left(t\\right)]$, and $\\boldsymbol{\\sigma}\\left(t\\right) = [\\sigma_1\\left(t\\right), \\sigma_2\\left(t\\right), \\cdots, \\sigma_N\\left(t\\right)]$.\n\n\n2) At the grid-point $x_j = (j-1)\\Delta{x}$: Approximate the spatial derivative by a finite difference operator $\\partial v/\\partial x\\Big|_{x = x_j} \\approx \\left(\\mathbf{D}\\mathbf{v}\\right)_j $. Here $\\mathbf{D}$ is a finite difference matrix satisfying the summation-by-parts property:\n\n\\begin{align}\n\\mathbf{H}\\mathbf{D} = \\mathbf{Q}, \\quad \\mathbf{Q} + \\mathbf{Q} = \\left(\\boldsymbol{e}_{N}\\boldsymbol{e}_{N}^T -\\boldsymbol{e}_{1}\\boldsymbol{e}_{1}^T\\right), \\quad \\mathbf{H}^T = \\mathbf{H} > 0,\n\\end{align}\n\nwhere, $\\boldsymbol{e}_{0} = [1, 0, \\dots, 0 ]^T, \\quad \\boldsymbol{e}_{L} = [ 0, 0, \\dots, 1 ]^T$ and $\\mathbf{H}$ defines a dicrete norm. 
We consider only diagonal norm SBP operators with $H_{jj} = h_j > 0$, and define the quadrature rule\n\n\\begin{equation}\n \\sum_{i = 1}^{N} f(x_j)h_j \\approx \\int_{0}^{L}f(x) dx.\n\\end{equation}\n\nThe second order accurate SBP operator for the first derivative is:\n\\begin{align}\n\\left(\\mathbf{D}\\mathbf{v}\\right)_j = \\frac{v_{j+1}-v_{j-1}}{2 \\Delta{x}}, \\quad j = 2, 3, \\cdots N-1, \\quad\n\\left(\\mathbf{D}\\mathbf{v}\\right)_1 = \\frac{v_{2}-v_{1}}{\\Delta{x}},\\quad\n\\left(\\mathbf{D}\\mathbf{v}\\right)_N = \\frac{v_{N}-v_{N-1}}{\\Delta{x}}, \\quad j = N.\n\\end{align}\n\nNote that the interior stencils are centered, with second order accuracy and the boundary stencils are one-sided and first order accurate. \n\nHigher order SBP operators can be found in the book: High Order Difference Methods for Time Dependent PDE, by B. Gustafsson. In this notebook we implement SBP operators with interior accuracy 2, 4 and 6. The implementation of the spatial derivative operators can be found in the file first_derivative_sbp_operators.py\n\nTo construct a stable semi-discrete approximation we replace the spatial derivatives by the SBP operators, and add the boundary conditions as SAT-terms with special penalty weights having:\n\n\\begin{align}\n\\frac{d \\mathbf{v}(t)}{d t} = {\\boldsymbol{\\rho}}^{-1}\n\\left(\\mathbf{D} \\boldsymbol{\\sigma}(t) - \\underbrace{\\mathbf{H}^{-1}\\left(\\tau_{11}\\boldsymbol{e}_{1}B_0\\left(v_1, \\sigma_1, Z_{s}, r_0\\right) + \\tau_{12}\\boldsymbol{e}_{N}B_L\\left(v_N, \\sigma_N, Z_{s}, r_n\\right)\\right)}_{SAT \\to 0}\\right),\n\\end{align}\n\n\\begin{align}\n\\frac{d \\boldsymbol{\\sigma}(t)}{d t} = \\boldsymbol{\\mu}\n\\left(\\mathbf{D} \\mathbf{v}(t) + \\underbrace{\\mathbf{H}^{-1}\\left(\\tau_{21}\\frac{\\boldsymbol{e}_{1}}{Z_{s}}B_0\\left(v_1, \\sigma_1, Z_{s}, r_0\\right) - \\tau_{22}\\frac{\\boldsymbol{e}_{N}}{Z_{s}}B_L\\left(v_N, \\sigma_N, Z_{s}, r_n\\right)\\right)}_{SAT \\to 0}\\right).\n\\end{align}\nHere $\\tau_{ij}$ are penalty parameters determined by requiring stability.\n\nApproximate the mechanical energy by the above quadrature rule, having \n\\begin{align}\n\\mathcal{E}( t) = \\sum_{j}^{N}\\frac{1}{2}\\left(\\rho_jv_j^2 + \\frac{1}{\\mu_j}\\sigma_j^2\\right)h_j > 0.\n\\end{align}\n\nBy chosing the penalty parameters $\\tau_{ij} = 1$, the semi-discrete approximation satisfies the energy estimate:\n\\begin{align}\n\\frac{d \\mathcal{E}( t)}{d t} = -\\frac{1}{2}\\left(\\left(1-r_0\\right)Zv_1^2 + \\frac{\\left(1+r_0\\right)}{Z}\\sigma_1^2 +\n\\left(1-r_n\\right)Zv_N^2 + \\frac{\\left(1+r_n\\right)}{Z}\\sigma_N^2\\right) \\le 0.\n\\end{align}\n\n\n3) The discrete mechanical energy can never grow in time, and thus the semidiscrete numerical approximation is asymptotically stable.\n\n4) Time integration can be performed using any stable time stepping scheme. This notebook implements the fourth order accurate Runge-Kutta method. \n\nTo keep the problem simple, we use as spatial initial condition a Gauss function with half-width $\\delta$\n\n\\begin{equation}\nv(x,t=0) = e^{-1/\\delta^2 (x - x_{o})^2}, \\quad \\sigma(x,t=0) = 0\n\\end{equation}",
"_____no_output_____"
],
[
"**** Exercises****\n\n",
"_____no_output_____"
]
],
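[
[
"Below is a minimal, self-contained sketch (it is **not** part of the notebook's helper modules such as `first_derivative_sbp_operators.py`): it builds the second-order SBP operator $\\mathbf{D}$ and the diagonal norm $\\mathbf{H}$ given above on a small grid and verifies that $\\mathbf{H}\\mathbf{D} + \\left(\\mathbf{H}\\mathbf{D}\\right)^T$ reduces to the boundary matrix $\\mathrm{diag}(-1, 0, \\dots, 0, 1)$, which is exactly why the discrete energy can only change through the boundary terms. The grid size and spacing below are illustrative choices.",
"_____no_output_____"
]
],
[
[
"# Sketch of the second-order SBP first-derivative operator described above (illustrative only)\nimport numpy as np\n\ndef sbp_2nd_order(N, dx):\n    # interior rows: centered differences; boundary rows: one-sided, first-order stencils\n    D = np.zeros((N, N))\n    for j in range(1, N - 1):\n        D[j, j - 1], D[j, j + 1] = -1.0/(2*dx), 1.0/(2*dx)\n    D[0, 0], D[0, 1] = -1.0/dx, 1.0/dx\n    D[-1, -2], D[-1, -1] = -1.0/dx, 1.0/dx\n    # diagonal norm H: trapezoidal quadrature weights\n    h = dx*np.ones(N)\n    h[0] = h[-1] = 0.5*dx\n    return D, np.diag(h)\n\nN, dx = 8, 0.1\nD, H = sbp_2nd_order(N, dx)\nQ = H @ D\nB = np.zeros((N, N)); B[0, 0] = -1.0; B[-1, -1] = 1.0\nprint(np.allclose(Q + Q.T, B))   # True: energy changes come only from the boundary terms\nprint(D @ np.ones(N))            # the derivative of a constant is (numerically) zero",
"_____no_output_____"
]
],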
[
[
"# Parameters initialization and plotting the simulation\n# Import necessary routines\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport time_integrator\nimport rate\nimport utils\nimport timeit\n\n#plt.switch_backend(\"TkAgg\") # plots in external window\nplt.switch_backend(\"nbagg\") # plots within this notebook",
"_____no_output_____"
],
[
"# Initializations\nL = 10.0 # length of the domain (km)\nt = 0.0 # initial time\ntend = 1.45 # final time\nnx = 501 # grid points in x \ndx = L/(nx-1) # grid increment in x\ncs = 3.464 # velocity (km/s) (can be an array) \niplot = 5 # snapshot frequency\nrho = 2.6702 # density [g/cm^3]\nmu = rho*cs**2 # shear modulus [GPa]\nZs = rho*cs # shear impedance \n\norder = 6 # order of accuracy\n\n\n#Initialize the domain\ny = np.zeros((nx, 1))\n\n# Initial particle velocity perturbation and discretize the domain\nfor j in range(0, nx):\n y[j, :] = j*dx # discrete domain\n\n\n\n# Time stepping parameters\ncfl = 1.0 # CFL number\ndt = (cfl/cs)*dx # Time step\nnt = int(round(tend/dt)) # number of time steps\nn = 0 # counter\n\n# Boundary condition reflection coefficients \nr0 = 1 # r=0:absorbing, r=1:free-surface, r=-1: clamped \nr1 = 1 # r=0:absorbing, r=1:free-surface, r=-1: clamped\n\n# penalty parameters\ntau_11 = 1 \ntau_12 = 1\ntau_21 = 1 \ntau_22 = 1\n\n# Initialize: particle velocity (v); and shear stress (s)\nv = np.zeros((nx, 1))\ns = np.zeros((nx, 1))\n\nU = np.zeros((nx, 1))\nV = np.zeros((nx, 1))\nU_t = np.zeros((nx, 1))\nV_t = np.zeros((nx, 1))\nU_x = np.zeros((nx, 1))\nV_x = np.zeros((nx, 1))\n\n \n\n# Difference between analyticla and numerical solutions\nEV = [0] # initialize errors in V (velocity)\nEU = [0] # initialize errors in U (stress)\nT = [0] # later append every time steps to this",
"_____no_output_____"
],
[
"# Computation and plotting\n\n# Initialize animated plot for velocity and stress\nfig1 = plt.figure(figsize=(10,10))\nax1 = fig1.add_subplot(4,1,1)\nline1 = ax1.plot(y, v, 'r', y, U, 'k--')\nplt.title('numerical vs exact')\nplt.xlabel('x [km]')\nplt.ylabel('velocity [m/s]')\n\nax2 = fig1.add_subplot(4,1,2)\nline2 = ax2.plot(y, s, 'r', y, V, 'k--')\nplt.title('numerical vs exact')\nplt.xlabel('x[km]')\nplt.ylabel('stress [MPa]')\n\n# Initialize error plot (for velocity and stress)\nax3 = fig1.add_subplot(4,1,3)\nline3 = ax3.plot(T, EV, 'r')\nplt.title('relative error in particle velocity')\nplt.xlabel('time [s]')\nax3.set_ylim([10**-5, 1])\nplt.ylabel('error')\n\nax4 = fig1.add_subplot(4,1,4)\nline4 = ax4.plot(T, EU, 'r') \nplt.ylabel('error')\nplt.xlabel('time[t]')\nax4.set_ylim([10**-5, 1])\nplt.title('relative error in stress')\n\nplt.tight_layout()\nplt.ion()\nplt.show()\n\n\nt=0 # initial time\n\nforcing = 1.0 # forcing function, forcing = 1, and no forcing function, forcing = 0\n\n# type of initial data: Gaussian or Sinusoidal\ntype_0 = 'Gaussian'\n#type_0 = 'Sinusoidal'\n\n\nif type_0 in ('Sinusoidal'):\n forcing = 1.0 # we must use forcing for Sinusoidal initial condition\n\n# L2-norm normalizer\n# Generate conditions for normalization\nrate.mms(v, s, U_t, V_t, U_x, V_x, y, 0.65, type_0)\nA = (np.linalg.norm(v)) \nB = (np.linalg.norm(s))\n\n\n# Loop through time and evolve the wave-fields using ADER time-stepping scheme of N+1 order of accuracy\nstart = timeit.default_timer()\n\n# Generate initial conditions\nrate.mms(v, s, U_t, V_t, U_x, V_x, y, t, type_0)\n\nfor t in utils.drange (0.0, tend+dt,dt):\n n = n+1\n \n # compute numerical solution \n time_integrator.elastic_RK4(v, s, v, s, rho, mu, nx, dx, order, y, t, dt, r0, r1, tau_11,\\\n tau_21, tau_12, tau_22, type_0, forcing)\n \n # Analytical solution\n rate.mms(U, V, U_t, V_t, U_x, V_x, y, t+dt, type_0)\n \n # compute error and append to the error array\n EU.append(np.linalg.norm(U-v)/A)\n EV.append(np.linalg.norm(V-s)/B)\n \n \n T.append(t)\n\n # Updating plots\n if n % iplot == 0: \n for l in line1:\n l.remove()\n del l \n for l in line2:\n l.remove()\n del l\n for l in line3:\n l.remove()\n del l \n for l in line4:\n l.remove()\n del l \n\n # Display lines\n line1 = ax1.plot(y, v, 'r', y, U, 'k--')\n ax1.legend(iter(line1),('Numerical', 'Analytical'))\n line2 = ax2.plot(y, s, 'r', y, V, 'k--')\n ax2.legend(iter(line2),('Numerical', 'Analytical'))\n line3 = ax3.plot(T, EU, 'k--')\n ax3.set_yscale(\"log\")#, nonposx='clip')\n line4 = ax4.plot(T, EV, 'k--')\n ax4.set_yscale(\"log\")#, nonposx='clip')\n plt.gcf().canvas.draw()\n \nplt.ioff()\nplt.show()\n\n# Simulation end time\nstop = timeit.default_timer()\nprint('total simulation time = ', stop - start) # print the time required for simulation\nprint('spatial order of accuracy = ', order) # print the polynomial degree used\nprint('number of grid points = ', nx) # print the degree of freedom\nprint('maximum relative error in particle velocity = ', max(EU)) # max. relative error in particle velocity\nprint('maximum relative error in stress = ', max(EV)) # max. relative error in stress",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
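
A quick aside on the error bookkeeping used in the record above: at every time step it appends the L2 norm of (analytical minus numerical) divided by a reference norm. The self-contained sketch below reproduces only that bookkeeping; the sine wave and the perturbed "numerical" field are illustrative assumptions standing in for the notebook's rate.mms and time_integrator.elastic_RK4 helpers, which are not reproduced here.

import numpy as np

L, nx, cs = 10.0, 501, 3.464               # domain length (km), grid points, wave speed (km/s)
x = np.linspace(0.0, L, nx)

def analytical_v(x, t):
    # placeholder manufactured solution (assumption for illustration only)
    return np.sin(2.0 * np.pi * (x - cs * t) / L)

A = np.linalg.norm(analytical_v(x, 0.65))  # normalizer, as in the notebook's call at t = 0.65

EV, T = [0.0], [0.0]
for t in np.arange(0.0, 1.45, 0.1):
    v_exact = analytical_v(x, t)
    v_num = v_exact + 1e-3 * np.random.randn(nx)     # stand-in for the computed solution
    EV.append(np.linalg.norm(v_exact - v_num) / A)   # relative L2 error at this step
    T.append(t)

print("maximum relative error =", max(EV))
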
ec921afe0f29c50402dada9573d408e2c310035c | 53,325 | ipynb | Jupyter Notebook | basic/learn_pytorch04.ipynb | e8035669/pytorch_learning | 8a9131e1eb0dc587c5edfe776356616c4f0d57ab | [
"MIT"
] | null | null | null | basic/learn_pytorch04.ipynb | e8035669/pytorch_learning | 8a9131e1eb0dc587c5edfe776356616c4f0d57ab | [
"MIT"
] | null | null | null | basic/learn_pytorch04.ipynb | e8035669/pytorch_learning | 8a9131e1eb0dc587c5edfe776356616c4f0d57ab | [
"MIT"
] | null | null | null | 149.789326 | 22,620 | 0.885326 | [
[
[
"import torch\nimport torchvision\nimport torchvision.transforms as transforms",
"_____no_output_____"
],
[
"transform = transforms.Compose(\n [transforms.ToTensor(),\n transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5))])\n\ntrainset = torchvision.datasets.CIFAR10(\n root='./data', train=True, download=True,\n transform=transform)\ntrainloader = torch.utils.data.DataLoader(\n trainset, batch_size=4, shuffle=True, num_workers=2)\n\ntestset = torchvision.datasets.CIFAR10(\n root='./data', train=False, download=True,\n transform=transform)\ntestloader = torch.utils.data.DataLoader(\n testset, batch_size=4, shuffle=False, num_workers=2)\n\nclasses = ('plane', 'car', 'bird', 'cat',\n 'deer', 'dog', 'frog', 'horse',\n 'ship', 'truck')",
"Files already downloaded and verified\nFiles already downloaded and verified\n"
],
[
"import matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline\n\ndef imshow(img):\n img = img / 2 + 0.5\n npimg = img.numpy()\n plt.imshow(np.transpose(npimg, (1, 2, 0)))\n plt.show()\n \ndataiter = iter(trainloader)\nimages, labels = dataiter.next()\n\nimshow(torchvision.utils.make_grid(images))\n\nprint(' '.join('%5s' % classes[labels[j]] for j in range(4)))",
"_____no_output_____"
],
[
"import torch.nn as nn\nimport torch.nn.functional as F\n\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(3, 6, 5)\n self.pool = nn.MaxPool2d(2, 2)\n self.conv2 = nn.Conv2d(6, 16, 5)\n self.fc1 = nn.Linear(16*5*5, 120)\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, 10)\n \n def forward(self, x):\n x = self.pool(F.relu(self.conv1(x)))\n x = self.pool(F.relu(self.conv2(x)))\n x = x.view(-1, 16*5*5)\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = self.fc3(x)\n return x",
"_____no_output_____"
],
[
"device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nnet = Net().to(device)",
"_____no_output_____"
],
[
"import torch.optim as optim\ncriterion = nn.CrossEntropyLoss()\noptimizer = optim.SGD(\n net.parameters(), lr = 0.001, momentum=0.9)\n",
"_____no_output_____"
],
[
"for epoch in range(2):\n running_loss = 0.0\n for i, data in enumerate(trainloader, 0):\n inputs, labels = data\n inputs, labels = inputs.to(device), labels.to(device)\n optimizer.zero_grad()\n \n outputs = net(inputs)\n loss = criterion(outputs, labels)\n loss.backward()\n optimizer.step()\n \n running_loss += loss.item()\n if i % 2000 == 1999:\n print('[%d, %5d] loss: %.3f' %\n (epoch + 1, i + 1, running_loss / 2000))\n running_loss = 0\nprint('Finished Training')",
"[1, 2000] loss: 2.196\n[1, 4000] loss: 1.876\n[1, 6000] loss: 1.676\n[1, 8000] loss: 1.588\n[1, 10000] loss: 1.509\n[1, 12000] loss: 1.471\n[2, 2000] loss: 1.385\n[2, 4000] loss: 1.383\n[2, 6000] loss: 1.354\n[2, 8000] loss: 1.328\n[2, 10000] loss: 1.289\n[2, 12000] loss: 1.285\nFinished Training\n"
],
[
"dataiter = iter(testloader)\nimages, labels = dataiter.next()\n\nimshow(torchvision.utils.make_grid(images))\nprint('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))",
"_____no_output_____"
],
[
"outputs = net(images.to(device))",
"_____no_output_____"
],
[
"_, predicted = torch.max(outputs, 1)\n\nprint('Predicted: ', ' '.join('%5s' % \n classes[predicted[j]] for j in range(4)))",
"Predicted: cat ship plane plane\n"
],
[
"net = net.to(device)\ncorrect = 0\ntotal = 0\nwith torch.no_grad():\n for data in testloader:\n images, labels = data\n images, labels = images.to(device), labels.to(device)\n outputs = net(images)\n _, predicted = torch.max(outputs.data, 1)\n total += labels.size(0)\n correct += (predicted == labels).sum().item()\n\nprint('Accuracy of the network on the 10000 test images: %d %%' % (\n 100 * correct / total))",
"Accuracy of the network on the 10000 test images: 56 %\n"
],
[
"class_correct = list(0. for i in range(10))\nclass_total = list(0. for i in range(10))\nwith torch.no_grad():\n for data in testloader:\n images, labels = data\n images, labels = images.to(device), labels.to(device)\n outputs = net(images)\n _, predicted = torch.max(outputs, 1)\n c = (predicted == labels).squeeze()\n for i in range(4):\n label = labels[i]\n class_correct[label] += c[i].item()\n class_total[label] += 1\n\n\nfor i in range(10):\n print('Accuracy of %5s : %2d %%' % (\n classes[i], 100 * class_correct[i] / class_total[i]))",
"Accuracy of plane : 68 %\nAccuracy of car : 66 %\nAccuracy of bird : 41 %\nAccuracy of cat : 42 %\nAccuracy of deer : 48 %\nAccuracy of dog : 37 %\nAccuracy of frog : 73 %\nAccuracy of horse : 50 %\nAccuracy of ship : 66 %\nAccuracy of truck : 68 %\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
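
The imshow helper in the record above undoes transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) with img / 2 + 0.5. A small sketch of why that is the exact inverse, using a random tensor as a stand-in for a CIFAR-10 image:

import torch

x = torch.rand(3, 32, 32)             # fake image with values in [0, 1]
mean, std = 0.5, 0.5

normalized = (x - mean) / std         # what Normalize does per channel, mapping [0, 1] to [-1, 1]
recovered = normalized / 2 + 0.5      # the imshow un-normalization; equals normalized * std + mean here

print(torch.allclose(x, recovered))   # True
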
ec9247f3112d6258bf8969d98cdadac10f595b6c | 11,902 | ipynb | Jupyter Notebook | notebooks/chapter03_notebook/07_webcam_py2.ipynb | svaksha/cookbook-code | 960becec4cc48f14991ed9d8525d5bcd21bc42a7 | [
"BSD-2-Clause"
] | 5 | 2015-11-26T14:18:23.000Z | 2018-06-08T00:46:35.000Z | notebooks/chapter03_notebook/07_webcam_py2.ipynb | kunalj101/cookbook-code | adcbdeb6b92e448350ce2643003a2a0719e574ca | [
"BSD-2-Clause"
] | null | null | null | notebooks/chapter03_notebook/07_webcam_py2.ipynb | kunalj101/cookbook-code | adcbdeb6b92e448350ce2643003a2a0719e574ca | [
"BSD-2-Clause"
] | 8 | 2015-11-14T23:18:50.000Z | 2019-08-20T22:47:07.000Z | 37.545741 | 471 | 0.451605 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ec92533d2ab5a459df7e6101f40999a1284bd57b | 8,148 | ipynb | Jupyter Notebook | notebooks/pull_entity_sentiment_script.ipynb | safurrier/Entity-Sentiment-Extraction | cdbe9638cc9ea6e5ebd8190a3c3f05f73924fb1a | [
"MIT"
] | 1 | 2020-05-02T15:30:37.000Z | 2020-05-02T15:30:37.000Z | notebooks/pull_entity_sentiment_script.ipynb | safurrier/Entity-Sentiment-Extraction | cdbe9638cc9ea6e5ebd8190a3c3f05f73924fb1a | [
"MIT"
] | null | null | null | notebooks/pull_entity_sentiment_script.ipynb | safurrier/Entity-Sentiment-Extraction | cdbe9638cc9ea6e5ebd8190a3c3f05f73924fb1a | [
"MIT"
] | null | null | null | 28 | 154 | 0.548233 | [
[
[
"## Imports",
"_____no_output_____"
]
],
[
[
"import spacy\nimport textacy\nimport pandas as pd\nimport os\nimport ruamel.yaml as yaml\nimport datetime\nimport logging\nimport sys\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
]
],
[
[
"## Change to root directory",
"_____no_output_____"
]
],
[
[
"NO_CONFIG_ERR_MSG = \"\"\"No config file found. Root directory is determined by presence of \"config.yaml\" file.\"\"\" \n\noriginal_wd = os.getcwd()\n\n# Number of times to move back in directory\nnum_retries = 10\nfor x in range(0, num_retries):\n # try to load config file \n try:\n with open(\"config.yaml\", 'r') as stream:\n cfg = yaml.safe_load(stream)\n # If not found move back one directory level\n except FileNotFoundError:\n os.chdir('../')\n # If reached the max number of directory levels change to original wd and print error msg\n if x+1 == num_retries:\n os.chdir(original_wd)\n print(NO_CONFIG_ERR_MSG) ",
"_____no_output_____"
]
],
[
[
"## Import local code",
"_____no_output_____"
]
],
[
[
"# ## Add current wd to path for localimports\npath = os.getcwd()\n\nif path not in sys.path:\n sys.path.append(path) \n\nfrom src.convenience_functions.textacy_convenience_functions import load_textacy_corpus\nfrom src.convenience_functions.textacy_convenience_functions import entity_statements\nfrom src.convenience_functions.textacy_convenience_functions import list_of_entity_statements\nfrom src.convenience_functions.textacy_convenience_functions import dask_df_apply\nfrom src.textblob_entity_sentiment import textblob_entity_sentiment",
"_____no_output_____"
]
],
[
[
"## Create log file",
"_____no_output_____"
]
],
[
[
"now = datetime.datetime.now().strftime(\"%Y-%m-%d %H-%M\")\nlogging.basicConfig(filename='logs/{}.txt'.format(now), \n level=logging.INFO,\n filemode='w',\n format='%(asctime)s - %(levelname)s - %(message)s')",
"_____no_output_____"
]
],
[
[
"## Load Data",
"_____no_output_____"
]
],
[
[
"logging.info(\"\"\"Reading in data from {}\"\"\".format(cfg['input_filepath']))\n\n\ndf = pd.read_csv(cfg['input_filepath'])",
"_____no_output_____"
]
],
[
[
"## Dask Multiprocessing of applied textacy docs",
"_____no_output_____"
],
[
"Using dask to multiprocess the loading of textacy docs for each text\n\n1. Use dask to create partitioned dataframe\n\n2. To each partition map an apply that creates textacy docs from the Policy_Text column\n\n3. Concatenate back to original df",
"_____no_output_____"
]
],
[
[
"logging.info(\"\"\"Creating textacy Doc objects using the text found in the '{}' column\"\"\".format(cfg['text_col']))\n\ndf = dask_df_apply(df, cfg['text_col'], inplace=True)",
"_____no_output_____"
]
],
[
[
"## Extracting Entity Text, Counts and Sentiments",
"_____no_output_____"
],
[
"#### For each entity selected, return the count of entity occurence as well as mean, min and max of sentiments of sentences that contain said entity",
"_____no_output_____"
]
],
[
[
"logging.info(\"\"\"Extracting the following descriptive stats for entity sentiments: {} \"\"\".format(cfg['sentiment_descriptive_stats']))\n\nlogging.info(\"\"\"Extracting the sentiments for the following entities: {} \"\"\".format(cfg['entities']))\n\nsentiments = [textblob_entity_sentiment(df=df, \n textacy_col='textacy_doc', \n entity=entity, \n inplace=False,\n keep_stats=cfg['sentiment_descriptive_stats']) \n for entity\n in cfg['entities']]\n# Concat to single df\nsentiments = pd.concat(sentiments, axis=1)",
"_____no_output_____"
]
],
[
[
"#### Concat sentiment features and original df",
"_____no_output_____"
]
],
[
[
"texts_with_sentiment_info = pd.concat([df, sentiments], axis=1).drop(labels=['textacy_doc'], axis=1)",
"_____no_output_____"
],
[
"texts_with_sentiment_info.columns",
"_____no_output_____"
]
],
[
[
"## Export features",
"_____no_output_____"
]
],
[
[
"now = datetime.datetime.now().strftime(\"%Y-%m-%d %H-%M\")\narchive_output_path = 'output/{}.csv'.format(now)\nlogging.info(\"\"\"Outputting sentiments to {}\"\"\".format(archive_output_path))\ntexts_with_sentiment_info.to_csv(archive_output_path, index=False)\nprint(\"\"\"Outputting sentiments to {}\"\"\".format(archive_output_path))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
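
The record above delegates the actual scoring to a local textblob_entity_sentiment helper that is not shown. A rough, self-contained illustration of the same idea (collect the sentences mentioning an entity and summarize their polarity) is sketched below; it calls spaCy and TextBlob directly, the sample text is invented, and it should be read as an approximation of the approach rather than the repository's implementation.

import pandas as pd
import spacy
from textblob import TextBlob

nlp = spacy.load("en_core_web_sm")    # assumes the small English model is installed
doc = nlp("Acme raised prices again. Customers were unhappy with Acme. Globex reported a great quarter.")

rows = []
for ent in doc.ents:
    polarity = TextBlob(ent.sent.text).sentiment.polarity    # polarity of the sentence holding the entity
    rows.append({"entity": ent.text, "polarity": polarity})

rows = pd.DataFrame(rows, columns=["entity", "polarity"])
print(rows.groupby("entity")["polarity"].agg(["count", "mean", "min", "max"]))
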
ec92718ef1e52a78894ba0ec3821ccf4d1b8b0f2 | 11,731 | ipynb | Jupyter Notebook | notebooks/istio_example.ipynb | songzhiwei7/seldon-core | 43fd3b39780b71aec8b30094025c6215114523c6 | [
"Apache-2.0"
] | 1 | 2020-10-10T07:46:00.000Z | 2020-10-10T07:46:00.000Z | notebooks/istio_example.ipynb | songzhiwei7/seldon-core | 43fd3b39780b71aec8b30094025c6215114523c6 | [
"Apache-2.0"
] | null | null | null | notebooks/istio_example.ipynb | songzhiwei7/seldon-core | 43fd3b39780b71aec8b30094025c6215114523c6 | [
"Apache-2.0"
] | null | null | null | 22.009381 | 378 | 0.562441 | [
[
[
"# Example Seldon Core Deployments using Helm\n<img src=\"images/deploy-graph.png\" alt=\"predictor with canary\" title=\"ml graph\"/>",
"_____no_output_____"
],
[
"## Setup Cluster and Ingress\n\nUse the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Setup-Cluster) with [Istio Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Istio). Instructions [also online](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).",
"_____no_output_____"
]
],
[
[
"!kubectl create namespace seldon",
"_____no_output_____"
],
[
"!kubectl config set-context $(kubectl config current-context) --namespace=seldon",
"_____no_output_____"
]
],
[
[
"## Configure Istio\n\nFor this example we will create the default istio gateway for seldon which needs to be called `seldon-gateway`. You can supply your own gateway by adding to your SeldonDeployments resources the annotation `seldon.io/istio-gateway` with values the name of your istio gateway.",
"_____no_output_____"
],
[
"Create a gateway for our istio-ingress",
"_____no_output_____"
]
],
[
[
"%%writefile resources/seldon-gateway.yaml\napiVersion: networking.istio.io/v1alpha3\nkind: Gateway\nmetadata:\n name: seldon-gateway\n namespace: istio-system\nspec:\n selector:\n istio: ingressgateway # use istio default controller\n servers:\n - port:\n number: 80\n name: http\n protocol: HTTP\n hosts:\n - \"*\"",
"_____no_output_____"
],
[
"!kubectl create -f resources/seldon-gateway.yaml -n istio-system",
"_____no_output_____"
]
],
[
[
"Ensure the istio ingress gatewaty is port-forwarded to localhost:8004\n\n * Istio: `kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8004:80`",
"_____no_output_____"
]
],
[
[
"ISTIO_GATEWAY=\"localhost:8004\"",
"_____no_output_____"
]
],
[
[
"## Start Seldon Core\n\nUse the setup notebook to [Install Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Install-Seldon-Core) with Istio Ingress. Instructions [also online](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).",
"_____no_output_____"
],
[
"## Serve Single Model",
"_____no_output_____"
]
],
[
[
"!helm install mymodel ../helm-charts/seldon-single-model --set 'model.image=seldonio/mock_classifier_rest:1.3'",
"_____no_output_____"
],
[
"!helm template mymodel ../helm-charts/seldon-single-model --set 'model.image=seldonio/mock_classifier_rest:1.3' | pygmentize -l json",
"_____no_output_____"
],
[
"!kubectl rollout status deploy/mymodel-default-0-model",
"_____no_output_____"
]
],
[
[
"### Get predictions",
"_____no_output_____"
]
],
[
[
"from seldon_core.seldon_client import SeldonClient\nsc = SeldonClient(deployment_name=\"mymodel\",namespace=\"seldon\",gateway_endpoint=ISTIO_GATEWAY)",
"_____no_output_____"
]
],
[
[
"#### REST Request",
"_____no_output_____"
]
],
[
[
"r = sc.predict(gateway=\"istio\",transport=\"rest\")\nassert(r.success==True)\nprint(r)",
"_____no_output_____"
],
[
"!helm delete mymodel",
"_____no_output_____"
]
],
[
[
"## Serve AB Test",
"_____no_output_____"
]
],
[
[
"!helm install myabtest ../helm-charts/seldon-abtest",
"_____no_output_____"
],
[
"!helm template ../helm-charts/seldon-abtest | pygmentize -l json",
"_____no_output_____"
],
[
"!kubectl rollout status deploy/myabtest-default-0-classifier-1\n!kubectl rollout status deploy/myabtest-default-1-classifier-2",
"_____no_output_____"
]
],
[
[
"### Get predictions",
"_____no_output_____"
]
],
[
[
"from seldon_core.seldon_client import SeldonClient\nsc = SeldonClient(deployment_name=\"myabtest\",namespace=\"seldon\",gateway_endpoint=ISTIO_GATEWAY)",
"_____no_output_____"
]
],
[
[
"#### REST Request",
"_____no_output_____"
]
],
[
[
"r = sc.predict(gateway=\"istio\",transport=\"rest\")\nassert(r.success==True)\nprint(r)",
"_____no_output_____"
],
[
"!helm delete myabtest",
"_____no_output_____"
]
],
[
[
"## Serve Multi-Armed Bandit",
"_____no_output_____"
]
],
[
[
"!helm install mymab ../helm-charts/seldon-mab",
"_____no_output_____"
],
[
"!helm template ../helm-charts/seldon-mab | pygmentize -l json",
"_____no_output_____"
],
[
"!kubectl rollout status deploy/mymab-default-0-classifier-1\n!kubectl rollout status deploy/mymab-default-1-classifier-2\n!kubectl rollout status deploy/mymab-default-2-eg-router",
"_____no_output_____"
]
],
[
[
"### Get predictions",
"_____no_output_____"
]
],
[
[
"from seldon_core.seldon_client import SeldonClient\nsc = SeldonClient(deployment_name=\"mymab\",namespace=\"seldon\",gateway_endpoint=ISTIO_GATEWAY)",
"_____no_output_____"
]
],
[
[
"#### REST Request",
"_____no_output_____"
]
],
[
[
"r = sc.predict(gateway=\"istio\",transport=\"rest\")\nassert(r.success==True)\nprint(r)",
"_____no_output_____"
],
[
"!helm delete mymab",
"_____no_output_____"
]
],
[
[
"## Serve with Shadow\n\nWe'll use a pre-packaged model server but the 'shadow' flag can be set on any predictor.",
"_____no_output_____"
]
],
[
[
"!pygmentize ./resources/istio_shadow.yaml",
"_____no_output_____"
],
[
"!kubectl apply -f ./resources/istio_shadow.yaml",
"_____no_output_____"
],
[
"!kubectl rollout status deploy/iris-default-0-iris-default\n!kubectl rollout status deploy/iris-shadow-0-iris-shadow",
"_____no_output_____"
],
[
"from seldon_core.seldon_client import SeldonClient\nsc = SeldonClient(deployment_name=\"iris\",namespace=\"seldon\",gateway_endpoint=ISTIO_GATEWAY)",
"_____no_output_____"
],
[
"r = sc.predict(gateway=\"istio\",transport=\"rest\",shape=(1,4))\nassert(r.success==True)\nprint(r)",
"_____no_output_____"
]
],
[
[
"The traffic should go to both the default predictor and the shadow. If desired this can be checked in istio dashboards in the same way as with the istio canary example. When shadowing only the responses from the default predictor are used.",
"_____no_output_____"
]
],
[
[
"!kubectl delete -f ./resources/istio_shadow.yaml",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
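
The SeldonClient calls above hide the underlying HTTP exchange. For orientation, a plain-requests sketch of an equivalent call is shown below; the /seldon/<namespace>/<deployment>/api/v1.0/predictions path and the ndarray payload follow Seldon Core's commonly documented REST contract, but both should be treated as assumptions to check against the installed Seldon version.

import requests

ISTIO_GATEWAY = "localhost:8004"      # same port-forward as used in the notebook
namespace, deployment = "seldon", "mymodel"

# Path and payload shape are assumptions taken from Seldon Core's REST documentation.
url = "http://{}/seldon/{}/{}/api/v1.0/predictions".format(ISTIO_GATEWAY, namespace, deployment)
payload = {"data": {"ndarray": [[1.0]]}}

resp = requests.post(url, json=payload, timeout=10)
print(resp.status_code, resp.json())
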
ec9274bbcbea48201d5b0fc6e13af5bb3d876905 | 29,987 | ipynb | Jupyter Notebook | pandas/pandas_scikit.ipynb | shevkunov/workout | d36b84f4341d36a6c45553a1c7fa7d147370fba8 | [
"BSD-3-Clause"
] | null | null | null | pandas/pandas_scikit.ipynb | shevkunov/workout | d36b84f4341d36a6c45553a1c7fa7d147370fba8 | [
"BSD-3-Clause"
] | null | null | null | pandas/pandas_scikit.ipynb | shevkunov/workout | d36b84f4341d36a6c45553a1c7fa7d147370fba8 | [
"BSD-3-Clause"
] | null | null | null | 39.404731 | 376 | 0.459399 | [
[
[
"# scikit learning\n\nhttps://youtu.be/t4319ffzRg0?list=PLQVvvaa0QuDc-3szzjeP6N6b0aDrrKyL-\n\n\nhttp://scikit-learn.org/stable/tutorial/machine_learning_map/index.html\nscikit-learn cheat-sheet",
"_____no_output_____"
]
],
[
[
"import quandl;\nimport pandas as pd;\n\nimport pickle;\n\nimport matplotlib.pyplot as plt;\nfrom matplotlib import style;\nstyle.use(\"ggplot\");\n\nimport numpy as np;\n\nfrom statistics import mean;\n\nfrom sklearn import svm, preprocessing, cross_validation;",
"/usr/local/lib/python2.7/dist-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.\n \"This module will be removed in 0.20.\", DeprecationWarning)\n"
],
[
"api_key = open(\"quandlapikey.txt\", \"r\").read();\n\ndef mortgage_30y_resampled():\n df = quandl.get(\"FMAC/MORTG\", trim_start = \"1975-01-01\", authtoken = api_key); \n df[\"Value\"] = (df[\"Value\"] - df[\"Value\"][0]) / df[\"Value\"][0] * 100.0;\n df.columns = [\"M30\"];\n return df.resample(\"M\").mean();\n\ndef state_list():\n fiddy_states = pd.read_html(\"https://simple.wikipedia.org/wiki/List_of_U.S._states\");\n return fiddy_states[0][0][1:];\n\ndef grap_initial_state_data_start_pct():\n states = state_list();\n main_df = pd.DataFrame();\n for ab in states:\n querry = \"FMAC/HPI_\" + ab;\n df = quandl.get(querry, authtoken = api_key);\n df.columns = [ab]; \n df[ab] = (df[ab] - df[ab][0]) / df[ab][0] * 100.0; # <-------\n if main_df.empty:\n main_df = df;\n else:\n main_df = main_df.join(df);\n\n pickle_out = open(\"./data/fiddy_states.pickle\", \"wb\");\n pickle.dump(main_df, pickle_out);\n pickle_out.close();\n \ndef HPI_Benchmark():\n df = quandl.get(\"FMAC/HPI_USA\", authtoken = api_key);\n df.columns = [\"US\"]; \n df[\"US\"] = (df[\"US\"] - df[\"US\"][0]) / df[\"US\"][0] * 100.0; # <-------\n return df;\n",
"_____no_output_____"
],
[
"def sp500_data():\n df = quandl.get(\"YAHOO/INDEX_GSPC\", trim_start = \"1975-01-01\", authtoken = api_key);\n df[\"Adjusted Close\"] = (df[\"Adjusted Close\"] - df[\"Adjusted Close\"][0]) / df[\"Adjusted Close\"][0] * 100.0; # <-------\n df = df.resample(\"M\").mean();\n df.rename(columns={\"Adjusted Close\":\"sp500\"}, inplace = True);\n df = df[\"sp500\"];\n return df;",
"_____no_output_____"
],
[
"df = sp500_data();\nprint(df.head());",
"Date\n1975-01-31 3.323491\n1975-02-28 14.049322\n1975-03-31 19.367785\n1975-04-30 20.636734\n1975-05-31 28.287322\nFreq: M, Name: sp500, dtype: float64\n"
],
[
"def gdp_data():\n df = quandl.get(\"BCB/4385\", trim_start = \"1975-01-01\", authtoken = api_key);\n df[\"Value\"] = (df[\"Value\"] - df[\"Value\"][0]) / df[\"Value\"][0] * 100.0; # <-------\n df = df.resample(\"M\").mean();\n df.rename(columns={\"Value\":\"GDP\"}, inplace = True);\n df = df[\"GDP\"];\n return df;\n ",
"_____no_output_____"
],
[
"def us_unemployment():\n df = quandl.get(\"ECPI/JOB_G\", trim_start = \"1975-01-01\", authtoken = api_key);\n df[\"Unemployment Rate\"] = (df[\"Unemployment Rate\"] - df[\"Unemployment Rate\"][0]) / df[\"Unemployment Rate\"][0] * 100.0; # <-------\n df = df.resample(\"1D\").mean();\n df = df.resample(\"M\").mean();\n return df;",
"_____no_output_____"
],
[
"sp500 = sp500_data();\nUS_GDP = gdp_data();\nUS_uneployment = us_unemployment();\n\nm30 = mortgage_30y_resampled();\nHPI_data = pd.read_pickle(\"./data/fiddy_states.pickle\");\nHPI_bench = HPI_Benchmark();",
"_____no_output_____"
],
[
"HPI = HPI_data.join([HPI_bench, m30, US_uneployment, US_GDP, sp500]);\nprint(HPI.head());\nprint(HPI.corr().head());\n# we have nans!!",
" AL AK AZ AR CA CO \\\nDate \n1975-01-31 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 \n1975-02-28 0.626500 1.485775 1.688504 0.846192 0.356177 1.159639 \n1975-03-31 1.358575 3.006473 3.261346 1.581956 1.575690 2.299449 \n1975-04-30 2.254726 4.593530 4.475810 2.183669 3.573196 3.359028 \n1975-05-31 3.107829 6.327600 5.139617 2.786248 5.241395 4.226895 \n\n CT DE FL GA ... VA \\\nDate ... \n1975-01-31 0.000000 0.000000 0.000000 0.000000 ... 0.000000 \n1975-02-28 2.123926 0.142451 3.938796 -0.902841 ... 0.987288 \n1975-03-31 3.719898 0.387918 9.798243 -1.282758 ... 1.707474 \n1975-04-30 4.616778 0.891619 16.974819 -1.068371 ... 2.238392 \n1975-05-31 4.901787 1.752086 17.891884 -0.676830 ... 2.684036 \n\n WA WV WI WY US M30 \\\nDate \n1975-01-31 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 \n1975-02-28 0.397561 2.495069 1.404920 1.438502 0.639523 -3.393425 \n1975-03-31 1.015723 5.093474 2.791371 2.962512 1.681678 -5.620361 \n1975-04-30 1.757887 7.772685 4.034165 4.545011 3.047044 -6.468717 \n1975-05-31 2.426439 10.469784 5.047628 6.080901 3.922540 -5.514316 \n\n Unemployment Rate GDP sp500 \nDate \n1975-01-31 0.000000 NaN 3.323491 \n1975-02-28 0.000000 NaN 14.049322 \n1975-03-31 6.172840 NaN 19.367785 \n1975-04-30 8.641975 NaN 20.636734 \n1975-05-31 11.111111 NaN 28.287322 \n\n[5 rows x 55 columns]\n AL AK AZ AR CA CO CT \\\nAL 1.000000 0.946418 0.937592 0.995119 0.942310 0.965541 0.953146 \nAK 0.946418 1.000000 0.910237 0.967983 0.921818 0.960382 0.884600 \nAZ 0.937592 0.910237 1.000000 0.936454 0.976776 0.919486 0.917688 \nAR 0.995119 0.967983 0.936454 1.000000 0.945774 0.976929 0.944746 \nCA 0.942310 0.921818 0.976776 0.945774 1.000000 0.938870 0.942463 \n\n DE FL GA ... VA WA WV \\\nAL 0.982824 0.929395 0.978346 ... 0.975277 0.985060 0.982097 \nAK 0.938068 0.900713 0.899525 ... 0.962226 0.958805 0.972376 \nAZ 0.948801 0.994380 0.949167 ... 0.958568 0.956881 0.919514 \nAR 0.977921 0.928081 0.968410 ... 0.979944 0.984041 0.990395 \nCA 0.967911 0.985869 0.951573 ... 0.979699 0.965064 0.936562 \n\n WI WY US M30 Unemployment Rate GDP \\\nAL 0.991161 0.946695 0.983404 -0.796453 -0.313968 0.679765 \nAK 0.930831 0.987373 0.950294 -0.734748 -0.077318 0.831059 \nAZ 0.935717 0.909381 0.969580 -0.673961 -0.363260 0.461065 \nAR 0.986244 0.964090 0.984300 -0.788178 -0.274777 0.738729 \nCA 0.945139 0.919537 0.986693 -0.733805 -0.309076 0.513147 \n\n sp500 \nAL 0.912874 \nAK 0.894500 \nAZ 0.857531 \nAR 0.919558 \nCA 0.873121 \n\n[5 rows x 55 columns]\n"
],
[
"HPI.dropna(inplace = True);\nprint(HPI.head());\nprint(HPI.corr().head());",
" AL AK AZ AR CA \\\nDate \n1990-01-31 97.377466 62.960664 125.925384 90.013079 407.639219 \n1990-02-28 97.306825 64.745987 125.954481 90.292939 409.393746 \n1990-03-31 97.640926 68.843816 125.677184 90.749096 412.358848 \n1990-04-30 98.347335 75.545290 125.537118 91.190719 414.727667 \n1990-05-31 99.135672 84.269489 125.820526 91.693828 415.785301 \n\n CO CT DE FL GA \\\nDate \n1990-01-31 128.607770 289.087700 202.214111 133.668588 116.794322 \n1990-02-28 128.817810 286.729447 203.585164 133.302517 116.293285 \n1990-03-31 129.016696 283.918663 203.903611 133.217207 115.513095 \n1990-04-30 129.123827 280.649292 203.811564 133.568341 115.303254 \n1990-05-31 129.785821 277.504604 203.813374 134.063840 115.679836 \n\n ... VA WA WV WI \\\nDate ... \n1990-01-31 ... 179.224441 225.298844 68.476763 102.969030 \n1990-02-28 ... 178.921132 233.971612 68.825487 104.289628 \n1990-03-31 ... 179.104137 242.490216 69.416380 105.932415 \n1990-04-30 ... 180.099895 250.784574 70.283645 107.582128 \n1990-05-31 ... 181.135479 258.186412 71.468265 108.807348 \n\n WY US M30 Unemployment Rate GDP \\\nDate \n1990-01-31 74.740767 198.780437 4.984093 -33.333333 0.000000 \n1990-02-28 75.353989 199.407324 8.165429 -34.567901 -2.697505 \n1990-03-31 76.287112 200.269331 8.907741 -35.802469 -2.065278 \n1990-04-30 77.735727 201.307039 9.968187 -33.333333 -10.874318 \n1990-05-31 79.764564 202.362724 11.134677 -33.333333 1.006296 \n\n sp500 \nDate \n1990-01-31 384.083446 \n1990-02-28 370.529148 \n1990-03-31 381.937898 \n1990-04-30 381.529238 \n1990-05-31 398.718477 \n\n[5 rows x 55 columns]\n AL AK AZ AR CA CO CT \\\nAL 1.000000 0.972248 0.901313 0.994846 0.877794 0.961665 0.927413 \nAK 0.972248 1.000000 0.839755 0.981834 0.855834 0.922324 0.930099 \nAZ 0.901313 0.839755 1.000000 0.886004 0.958469 0.842439 0.912187 \nAR 0.994846 0.981834 0.886004 1.000000 0.880304 0.969541 0.932475 \nCA 0.877794 0.855834 0.958469 0.880304 1.000000 0.845729 0.968665 \n\n DE FL GA ... VA WA WV \\\nAL 0.956129 0.886165 0.936673 ... 0.953386 0.986904 0.987465 \nAK 0.962020 0.841722 0.851612 ... 0.963128 0.966984 0.992558 \nAZ 0.921635 0.991186 0.934123 ... 0.927210 0.935524 0.850917 \nAR 0.956638 0.879779 0.925560 ... 0.959097 0.977613 0.994293 \nCA 0.949543 0.984781 0.911907 ... 0.958405 0.917135 0.848311 \n\n WI WY US M30 Unemployment Rate GDP \\\nAL 0.984120 0.976964 0.959813 -0.797310 0.065500 0.655117 \nAK 0.947880 0.995386 0.937370 -0.850842 0.259508 0.778482 \nAZ 0.898546 0.835989 0.954788 -0.569351 -0.190969 0.392354 \nAR 0.988203 0.981145 0.961420 -0.831055 0.104737 0.681297 \nCA 0.895063 0.835664 0.975569 -0.620295 -0.027545 0.410782 \n\n sp500 \nAL 0.806501 \nAK 0.731637 \nAZ 0.709411 \nAR 0.797588 \nCA 0.644640 \n\n[5 rows x 55 columns]\n"
],
[
"HPI.to_pickle(\"./data/HPI.pickle\");",
"_____no_output_____"
],
[
"housing_data = pd.read_pickle(\"./data/HPI.pickle\");\nhousing_data = housing_data.pct_change();\nprint(housing_data.head());",
" AL AK AZ AR CA CO \\\nDate \n1990-01-31 NaN NaN NaN NaN NaN NaN \n1990-02-28 -0.000725 0.028356 0.000231 0.003109 0.004304 0.001633 \n1990-03-31 0.003433 0.063291 -0.002202 0.005052 0.007243 0.001544 \n1990-04-30 0.007235 0.097343 -0.001114 0.004866 0.005745 0.000830 \n1990-05-31 0.008016 0.115483 0.002258 0.005517 0.002550 0.005127 \n\n CT DE FL GA ... VA \\\nDate ... \n1990-01-31 NaN NaN NaN NaN ... NaN \n1990-02-28 -0.008158 0.006780 -0.002739 -0.004290 ... -0.001692 \n1990-03-31 -0.009803 0.001564 -0.000640 -0.006709 ... 0.001023 \n1990-04-30 -0.011515 -0.000451 0.002636 -0.001817 ... 0.005560 \n1990-05-31 -0.011205 0.000009 0.003710 0.003266 ... 0.005750 \n\n WA WV WI WY US M30 \\\nDate \n1990-01-31 NaN NaN NaN NaN NaN NaN \n1990-02-28 0.038495 0.005093 0.012825 0.008205 0.003154 0.638298 \n1990-03-31 0.036409 0.008585 0.015752 0.012383 0.004323 0.090909 \n1990-04-30 0.034205 0.012494 0.015573 0.018989 0.005182 0.119048 \n1990-05-31 0.029515 0.016855 0.011389 0.026099 0.005244 0.117021 \n\n Unemployment Rate GDP sp500 \nDate \n1990-01-31 NaN NaN NaN \n1990-02-28 0.037037 -inf -0.035290 \n1990-03-31 0.035714 -0.234375 0.030790 \n1990-04-30 -0.068966 4.265306 -0.001070 \n1990-05-31 0.000000 -1.092539 0.045054 \n\n[5 rows x 55 columns]\n"
],
[
"housing_data.replace([np.inf, -np.inf], np.nan, inplace = True);\nhousing_data.dropna(inplace = True);\nprint(housing_data.head());",
" AL AK AZ AR CA CO \\\nDate \n1990-03-31 0.003433 0.063291 -0.002202 0.005052 0.007243 0.001544 \n1990-04-30 0.007235 0.097343 -0.001114 0.004866 0.005745 0.000830 \n1990-05-31 0.008016 0.115483 0.002258 0.005517 0.002550 0.005127 \n1990-06-30 0.004616 0.103961 0.003969 0.006417 0.003595 0.007326 \n1990-07-31 0.000083 0.069205 0.001866 0.006427 0.004748 0.003203 \n\n CT DE FL GA ... VA \\\nDate ... \n1990-03-31 -0.009803 0.001564 -0.000640 -0.006709 ... 0.001023 \n1990-04-30 -0.011515 -0.000451 0.002636 -0.001817 ... 0.005560 \n1990-05-31 -0.011205 0.000009 0.003710 0.003266 ... 0.005750 \n1990-06-30 -0.007019 -0.000411 0.003081 0.002858 ... 0.003002 \n1990-07-31 -0.003224 -0.003613 0.002929 0.001878 ... 0.002215 \n\n WA WV WI WY US M30 \\\nDate \n1990-03-31 0.036409 0.008585 0.015752 0.012383 0.004323 0.090909 \n1990-04-30 0.034205 0.012494 0.015573 0.018989 0.005182 0.119048 \n1990-05-31 0.029515 0.016855 0.011389 0.026099 0.005244 0.117021 \n1990-06-30 0.018036 0.017798 0.009686 0.027659 0.005103 -0.304762 \n1990-07-31 0.008122 0.012048 0.009039 0.021413 0.003712 -0.164384 \n\n Unemployment Rate GDP sp500 \nDate \n1990-03-31 0.035714 -0.234375 0.030790 \n1990-04-30 -0.068966 4.265306 -0.001070 \n1990-05-31 0.000000 -1.092539 0.045054 \n1990-06-30 0.074074 3.115183 0.036200 \n1990-07-31 -0.103448 0.441476 -0.001226 \n\n[5 rows x 55 columns]\n"
],
[
"housing_data[\"US_HPI_future\"] = housing_data[\"US\"].shift(-1);\nprint(housing_data[[\"US_HPI_future\", \"US\"]].head());",
" US_HPI_future US\nDate \n1990-03-31 0.005182 0.004323\n1990-04-30 0.005244 0.005182\n1990-05-31 0.005103 0.005244\n1990-06-30 0.003712 0.005103\n1990-07-31 0.000489 0.003712\n"
],
[
"def create_labels(cur_hpi, fut_hpi):\n if fut_hpi > cur_hpi:\n return 1;\n else:\n return 0;\n\nhousing_data[\"label\"] = list(map(create_labels, housing_data[\"US\"], housing_data[\"US_HPI_future\"])); # wow\n#pd.Series.map may be useful also\nprint(housing_data[[\"US_HPI_future\", \"US\", \"label\"]].head());",
" US_HPI_future US label\nDate \n1990-03-31 0.005182 0.004323 1\n1990-04-30 0.005244 0.005182 1\n1990-05-31 0.005103 0.005244 0\n1990-06-30 0.003712 0.005103 0\n1990-07-31 0.000489 0.003712 0\n"
],
[
"def moving_average(values):\n return mean(values);\n\nhousing_data[\"ma_apply_example\"] = housing_data[\"M30\"].rolling(window = 10).apply(moving_average);\nprint(housing_data[[\"M30\", \"ma_apply_example\"]]);",
" M30 ma_apply_example\nDate \n1990-03-31 0.090909 NaN\n1990-04-30 0.119048 NaN\n1990-05-31 0.117021 NaN\n1990-06-30 -0.304762 NaN\n1990-07-31 -0.164384 NaN\n1990-08-31 0.098361 NaN\n1990-09-30 0.119403 NaN\n1990-10-31 0.000000 NaN\n1990-11-30 -0.226667 NaN\n1990-12-31 -0.586207 -0.073728\n1991-01-31 -0.125000 -0.095319\n1991-02-28 -1.285714 -0.235795\n1991-03-31 -2.166667 -0.464164\n1991-04-30 -0.142857 -0.447973\n1991-05-31 -0.333333 -0.464868\n1991-06-30 3.750000 -0.099704\n1991-07-31 -0.210526 -0.132697\n1991-08-31 -2.266667 -0.359364\n1991-09-30 1.210526 -0.215644\n1991-10-31 0.357143 -0.121310\n1991-11-30 0.263158 -0.082494\n1991-12-31 0.291667 0.075244\n1992-01-31 0.075269 0.299438\n1992-02-29 -0.330000 0.280724\n1992-03-31 -0.268657 0.287191\n1992-04-30 0.183673 -0.069441\n1992-05-31 0.310345 -0.017354\n1992-06-30 0.210526 0.230365\n1992-07-31 0.413043 0.150617\n1992-08-31 0.115385 0.126441\n... ... ...\n2009-07-31 0.049875 0.025387\n2009-08-31 0.007126 0.030820\n2009-09-30 0.030660 0.030480\n2009-10-31 0.025172 0.010243\n2009-11-30 0.015625 0.005220\n2009-12-31 -0.010989 0.005723\n2010-01-31 -0.022222 0.000478\n2010-02-28 0.009091 -0.002902\n2010-03-31 0.004505 -0.001370\n2010-04-30 -0.029148 0.007969\n2010-05-31 0.048499 0.007832\n2010-06-30 0.033040 0.010423\n2010-07-31 0.038380 0.011195\n2010-08-31 0.026694 0.011347\n2010-09-30 0.016000 0.011385\n2010-10-31 0.023622 0.014846\n2010-11-30 -0.013462 0.015722\n2010-12-31 -0.079922 0.006821\n2011-01-31 -0.010593 0.005311\n2011-02-28 -0.040685 0.004157\n2011-03-31 0.024554 0.001763\n2011-04-30 0.000000 -0.001541\n2011-05-31 0.043573 -0.001022\n2011-06-30 0.027140 -0.000977\n2011-07-31 -0.008130 -0.003390\n2011-08-31 0.057377 -0.000015\n2011-09-30 0.031008 0.004432\n2011-10-31 0.007519 0.013176\n2011-11-30 0.014925 0.015728\n2011-12-31 0.005515 0.020348\n\n[262 rows x 2 columns]\n"
]
],
[
[
"# in fact, this notebook starts here",
"_____no_output_____"
]
],
[
[
"X = np.array(housing_data.drop([\"US_HPI_future\", \"label\", \"ma_apply_example\"], 1));\nprint(X);\nX = preprocessing.scale(X);\nprint(X);",
"[[ 3.43348369e-03 6.32908563e-02 -2.20155868e-03 ..., 3.57142857e-02\n -2.34375000e-01 3.07904247e-02]\n [ 7.23476661e-03 9.73431485e-02 -1.11449223e-03 ..., -6.89655172e-02\n 4.26530612e+00 -1.06996554e-03]\n [ 8.01583787e-03 1.15483024e-01 2.25756447e-03 ..., 0.00000000e+00\n -1.09253876e+00 4.50535306e-02]\n ..., \n [ -1.38395068e-02 -7.53032397e-03 6.90772620e-04 ..., -1.11111111e-01\n 2.01368115e-02 3.02062770e-02]\n [ -6.51605813e-03 -7.08517825e-03 6.67412020e-03 ..., -2.50000000e-01\n 9.48695005e-04 1.68855043e-02]\n [ 3.60639587e-04 -5.30536889e-03 7.65538356e-03 ..., -3.33333333e-01\n -2.21278017e-02 1.46240254e-02]]\n[[ 0.05008063 4.0429684 -0.3829891 ..., 0.1928799 -0.39143479\n 0.59380827]\n [ 0.64096895 6.46761673 -0.30118641 ..., -0.34198895 4.28569477\n -0.18863566]\n [ 0.76238214 7.75924253 -0.04743625 ..., 0.01039524 -1.28344091\n 0.944089 ]\n ..., \n [-2.63490981 -0.99975986 -0.16533868 ..., -0.55733484 -0.1268861\n 0.57946247]\n [-1.49652038 -0.96806385 0.28491336 ..., -1.26699743 -0.14683092\n 0.25232405]\n [-0.42757592 -0.84133488 0.35875427 ..., -1.69279498 -0.17081746\n 0.19678548]]\n"
],
[
"y = np.array(housing_data[\"label\"]);\nprint(y);",
"[1 1 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 1 1 0 0 1 1 0 0 0 1 1 1 0 0 0 0 0 1 1 1\n 1 0 0 0 0 0 1 0 1 1 1 1 1 0 0 0 0 1 0 0 1 1 1 1 1 0 0 0 0 0 1 1 1 1 1 1 0\n 0 0 0 0 1 0 0 1 1 1 1 0 0 0 0 0 1 1 0 1 1 1 1 0 0 0 0 0 1 1 0 1 1 1 1 1 0\n 0 0 0 0 1 1 0 0 1 1 0 0 0 0 0 1 0 0 1 1 1 1 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0\n 0 0 0 0 0 1 1 1 1 0 1 1 0 0 0 0 0 1 1 1 1 1 0 0 0 0 1 0 0 1 1 1 1 0 0 0 0\n 0 0 0 0 1 1 1 1 0 0 0 0 0 1 0 1 1 1 1 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0\n 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 1 1 0 1 1 1 1 1 0 0 0 0 0\n 1 0 0]\n"
],
[
"X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size = 0.2);\nclf = svm.SVC(kernel = \"linear\");\nclf.fit(X_train, y_train);\n\nprint(clf.score(X_test, y_test));",
"0.754716981132\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
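
The run above emits a DeprecationWarning because sklearn.cross_validation was removed in later scikit-learn releases. A sketch of the same scale/split/fit flow with the current module layout is shown below; the random matrix is only a stand-in for the scaled housing feature matrix and labels.

import numpy as np
from sklearn.model_selection import train_test_split   # replaces sklearn.cross_validation
from sklearn.preprocessing import scale
from sklearn.svm import SVC

X = scale(np.random.randn(260, 54))     # stand-in for the housing features (scaled)
y = np.random.randint(0, 2, size=260)   # stand-in for the up/down labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = SVC(kernel="linear")
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
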
ec92a4ee9953fc29c31f5de7aefcf172a55e7c31 | 28,995 | ipynb | Jupyter Notebook | ode_pve/GITT.ipynb | ode-pve/ODE_PVE | a19b26c2aa260820b8c0e51bae5f654c7de97ba0 | [
"MIT"
] | null | null | null | ode_pve/GITT.ipynb | ode-pve/ODE_PVE | a19b26c2aa260820b8c0e51bae5f654c7de97ba0 | [
"MIT"
] | null | null | null | ode_pve/GITT.ipynb | ode-pve/ODE_PVE | a19b26c2aa260820b8c0e51bae5f654c7de97ba0 | [
"MIT"
] | null | null | null | 123.382979 | 5,690 | 0.676806 | [
[
[
"# import packages\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom sympy import *\nfrom scipy.optimize import fsolve\n%matplotlib inline",
"_____no_output_____"
],
[
"# set up constants\nlength = 25 #(nm)\ndx = length/12 #(nm)\ndt = 0.016667*332/12 #(s)\nM = 2.75*10**(-15) #(m mol/J s) it should be determined by further aproach but assumed as a constant here\nCmax = 0.02119 #(mol/cm3)\nF = 96500 #(C/mol)\nk2 = -4.8\nb2 = 7.57\nCeaCmax = 0.041 #(Cea/Cmax)\nCebCmax = 0.006 #(Ceb/Cmax)\nEeq = 3.4276",
"_____no_output_____"
],
[
"# set up constant for f(xi)\na = 218414\nb = 288001\nc = 122230\nd = 12466",
"_____no_output_____"
],
[
"# import two-phase data\ndf = pd.read_excel('D.xlsx', sheet_name='4')\nC = df['Li Fraction'] # the concentration in two-phase region\nA = df['SymbolA']",
"_____no_output_____"
],
[
"A",
"_____no_output_____"
],
[
"def myFunctionA(AA):\n \n for i in range(0,12):\n A[i] = AA[i]\n \n F = np.empty((12))\n \n F[0] = A[0] - C[0]\n F[11] = A[11] - 0\n \n for i in range(1,11):\n \n F[i] = (C[i]-2*A[i]) * ( M*( (C[i]-2*A[i])/Cmax*F*(k2*(C[i]-A[i])/Cmax+b2) - (CebCmax-CeaCmax)*F*Eeq + a*(C[i]/Cmax)**3 - b*(C[i]/Cmax)**2 + c*(C[i]/Cmax) + d ) ) - (A[i+1]-A[i])**2/(A[i+1]-2*A[i]+A[i-1])*dx/dt - (C[i+1]-A[i+1]-C[i]+A[i])**2/(C[i+1]-A[i+1]-2*C[i]+2*A[i]+C[i-1]-A[i-1])*dx/dt \n \n return F\n\nAAGuess = np.linspace(0, 1, 12)\nAA = fsolve(myFunctionA,AAGuess)\nprint(AA)",
"C:\\Users\\user\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:4: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n after removing the cwd from sys.path.\nC:\\Users\\user\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:13: RuntimeWarning: divide by zero encountered in double_scalars\n del sys.path[0]\n"
],
[
"def myFunctionA(AA):\n \n for i in range(0,12):\n A[i] = AA[i]\n \n F = np.empty((12))\n \n F[0] = A[0] - C[0]\n F[11] = A[11] - 0\n \n for i in range(1,11):\n \n F[i] = (C[i]-2*A[i]) * ( a*(C[i]-A[i])**2 + b*(C[i]-A[i]) - a*(C[i]-A[i])*C[i] - b*C[i] + c*C[i]**3 - d*C[i]**2 + e*C[i] - f) - (A[i+1]-A[i])**2/(A[i+1]-2*A[i]+A[i-1])*dx/dt - (C[i+1]-A[i+1]-C[i]+A[i])**2/(C[i+1]-A[i+1]-2*C[i]+2*A[i]+C[i-1]-A[i-1])*dx/dt\n \n return F\n\nAAGuess = np.linspace(0, 1, 12)\nAA = fsolve(myFunctionA,AAGuess)\nprint(AA)",
"C:\\Users\\user\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:4: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n after removing the cwd from sys.path.\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
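
One pitfall worth flagging in the record above: inside myFunctionA the residual array is also named F, which shadows the Faraday constant F = 96500 defined in the constants cell, so the electrochemical terms in the residual expression no longer refer to that constant. A toy fsolve sketch that keeps the residual vector under its own name is given below; the equations are arbitrary placeholders, not the GITT model.

import numpy as np
from scipy.optimize import fsolve

F = 96500.0                             # Faraday constant (C/mol) keeps its name

def residuals(A):
    res = np.empty(3)                   # residual vector gets a separate name
    res[0] = A[0] - 1.0
    res[1] = A[1] ** 2 - 4.0
    res[2] = A[2] - F / 96500.0         # the module-level constant F is still usable here
    return res

A0 = np.array([0.5, 1.0, 0.5])          # rough initial guess
print(fsolve(residuals, A0))            # approximately [1., 2., 1.]
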
ec92be9b334383d4aafa416df759606ab669316c | 19,540 | ipynb | Jupyter Notebook | Code/.ipynb_checkpoints/whale_analysis-checkpoint.ipynb | aliabolhassani/whale-analysis | 9d366681e2e0d502c9c4d174d5ec8c58bb20afa6 | [
"ADSL"
] | null | null | null | Code/.ipynb_checkpoints/whale_analysis-checkpoint.ipynb | aliabolhassani/whale-analysis | 9d366681e2e0d502c9c4d174d5ec8c58bb20afa6 | [
"ADSL"
] | null | null | null | Code/.ipynb_checkpoints/whale_analysis-checkpoint.ipynb | aliabolhassani/whale-analysis | 9d366681e2e0d502c9c4d174d5ec8c58bb20afa6 | [
"ADSL"
] | null | null | null | 23.317422 | 348 | 0.566121 | [
[
[
" # A Whale off the Port(folio)\n ---\n\n In this assignment, you'll get to use what you've learned this week to evaluate the performance among various algorithmic, hedge, and mutual fund portfolios and compare them against the S&P TSX 60 Index.",
"_____no_output_____"
]
],
[
[
"# Initial imports\nimport pandas as pd\nimport numpy as np\nimport datetime as dt\nfrom pathlib import Path\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"# Data Cleaning\n\nIn this section, you will need to read the CSV files into DataFrames and perform any necessary data cleaning steps. After cleaning, combine all DataFrames into a single DataFrame.\n\nFiles:\n\n* `whale_returns.csv`: Contains returns of some famous \"whale\" investors' portfolios.\n\n* `algo_returns.csv`: Contains returns from the in-house trading algorithms from Harold's company.\n\n* `sp_tsx_history.csv`: Contains historical closing prices of the S&P TSX 60 Index.",
"_____no_output_____"
]
],
[
[
"## Whale Returns\n\n# Read the Whale Portfolio daily returns and clean the data.\nwhale_returns_path = Path(\"./Resources/whale_returns.csv\")\nalgo_returns_path = Path(\"./Resources/algo_returns.csv\")\nsp_tsx_history_path = Path(\"./Resources/sp_tsx_history.csv\")\n\nwhale_returns_df = pd.read_csv(whale_returns_path, index_col=\"Date\", parse_dates=True, infer_datetime_format=True)\nalgo_returns_df = pd.read_csv(algo_returns_path, index_col=\"Date\", parse_dates=True, infer_datetime_format=True)\nsp_tsx_history_df = pd.read_csv(sp_tsx_history_path, index_col=\"Date\", parse_dates=True, infer_datetime_format=True)\n\n# Visually check if tehre is any anomaly \nwhale_returns_df.dtypes\nalgo_returns_df.dtypes\nsp_tsx_history_df.dtypes\n\n# Converting types to float64\n# whale_returns_df[\"SOROS FUND MANAGEMENT LLC\"] = whale_returns_df[\"SOROS FUND MANAGEMENT LLC\"].astype('float')\n# whale_returns_df[\"SOROS FUND MANAGEMENT LLC\"].dtype\n\n# Cleansing rows having null values\nwhale_returns_df.dropna(inplace=True)\nalgo_returns_df.dropna(inplace=True)\nsp_tsx_history_df.dropna(inplace=True)\n\n# Deduplicating\n# Checking if there is any\nwhale_returns_df.duplicated().sum()\nalgo_returns_df.duplicated()\nsp_tsx_history_df.duplicated().sum()\n\n# Seeing the duplicated rows\n# algo_returns_df.loc[(int(algo_returns_df['Algo 1']) > 0)]\n\n\nwhale_returns_df = whale_returns_df.drop_duplicates().copy()\nalgo_returns_df = algo_returns_df.drop_duplicates().copy()\nsp_tsx_history_df = sp_tsx_history_df.drop_duplicates().copy()\n\n# Checking if duplicates removed\nwhale_returns_df.duplicated().sum()\nalgo_returns_df.duplicated().sum()\nsp_tsx_history_df.duplicated().sum()\n\n\n# Combining all DataFrames into a single DataFrame\n",
"_____no_output_____"
],
[
"# Reading whale returns\n",
"_____no_output_____"
],
[
"# Count nulls\n",
"_____no_output_____"
],
[
"# Drop nulls\n",
"_____no_output_____"
]
],
[
[
"## Algorithmic Daily Returns\n\nRead the algorithmic daily returns and clean the data.",
"_____no_output_____"
]
],
[
[
"# Reading algorithmic returns\n",
"_____no_output_____"
],
[
"# Count nulls\n",
"_____no_output_____"
],
[
"# Drop nulls\n",
"_____no_output_____"
]
],
[
[
"## S&P TSX 60 Returns\n\nRead the S&P TSX 60 historic closing prices and create a new daily returns DataFrame from the data. ",
"_____no_output_____"
]
],
[
[
"# Reading S&P TSX 60 Closing Prices\n",
"_____no_output_____"
],
[
"# Check Data Types\n",
"_____no_output_____"
],
[
"# Fix Data Types\n",
"_____no_output_____"
],
[
"# Calculate Daily Returns\n",
"_____no_output_____"
],
[
"# Drop nulls\n",
"_____no_output_____"
],
[
"# Rename `Close` Column to be specific to this portfolio.\n",
"_____no_output_____"
]
],
[
[
"## Combine Whale, Algorithmic, and S&P TSX 60 Returns",
"_____no_output_____"
]
],
[
[
"# Join Whale Returns, Algorithmic Returns, and the S&P TSX 60 Returns into a single DataFrame with columns for each portfolio's returns.\n",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"# Conduct Quantitative Analysis\n\nIn this section, you will calculate and visualize performance and risk metrics for the portfolios.",
"_____no_output_____"
],
[
"## Performance Anlysis\n\n#### Calculate and Plot the daily returns.",
"_____no_output_____"
]
],
[
[
"# Plot daily returns of all portfolios\n",
"_____no_output_____"
]
],
[
[
"#### Calculate and Plot cumulative returns.",
"_____no_output_____"
]
],
[
[
"# Calculate cumulative returns of all portfolios\n\n# Plot cumulative returns\n",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"## Risk Analysis\n\nDetermine the _risk_ of each portfolio:\n\n1. Create a box plot for each portfolio. \n2. Calculate the standard deviation for all portfolios.\n4. Determine which portfolios are riskier than the S&P TSX 60.\n5. Calculate the Annualized Standard Deviation.",
"_____no_output_____"
],
[
"### Create a box plot for each portfolio\n",
"_____no_output_____"
]
],
[
[
"# Box plot to visually show risk\n",
"_____no_output_____"
]
],
[
[
"### Calculate Standard Deviations",
"_____no_output_____"
]
],
[
[
"# Calculate the daily standard deviations of all portfolios\n",
"_____no_output_____"
]
],
[
[
"### Determine which portfolios are riskier than the S&P TSX 60",
"_____no_output_____"
]
],
[
[
"# Calculate the daily standard deviation of S&P TSX 60\n\n# Determine which portfolios are riskier than the S&P TSX 60\n",
"_____no_output_____"
]
],
[
[
"### Calculate the Annualized Standard Deviation",
"_____no_output_____"
]
],
[
[
"# Calculate the annualized standard deviation (252 trading days)\n",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"## Rolling Statistics\n\nRisk changes over time. Analyze the rolling statistics for Risk and Beta. \n\n1. Calculate and plot the rolling standard deviation for the S&P TSX 60 using a 21-day window.\n2. Calculate the correlation between each stock to determine which portfolios may mimick the S&P TSX 60.\n3. Choose one portfolio, then calculate and plot the 60-day rolling beta for it and the S&P TSX 60.",
"_____no_output_____"
],
[
"### Calculate and plot rolling `std` for all portfolios with 21-day window",
"_____no_output_____"
]
],
[
[
"# Calculate the rolling standard deviation for all portfolios using a 21-day window\n\n# Plot the rolling standard deviation\n",
"_____no_output_____"
]
],
[
[
"### Calculate and plot the correlation",
"_____no_output_____"
]
],
[
[
"# Calculate the correlation\n\n# Display de correlation matrix\n",
"_____no_output_____"
]
],
[
[
"### Calculate and Plot Beta for a chosen portfolio and the S&P 60 TSX",
"_____no_output_____"
]
],
[
[
"# Calculate covariance of a single portfolio\n\n# Calculate variance of S&P TSX\n\n# Computing beta\n\n# Plot beta trend\n",
"_____no_output_____"
]
],
[
[
"## Rolling Statistics Challenge: Exponentially Weighted Average \n\nAn alternative way to calculate a rolling window is to take the exponentially weighted moving average. This is like a moving window average, but it assigns greater importance to more recent observations. Try calculating the [`ewm`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ewm.html) with a 21-day half-life.",
"_____no_output_____"
]
],
[
[
"# Use `ewm` to calculate the rolling window\n",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"# Sharpe Ratios\nIn reality, investment managers and thier institutional investors look at the ratio of return-to-risk, and not just returns alone. After all, if you could invest in one of two portfolios, and each offered the same 10% return, yet one offered lower risk, you'd take that one, right?\n\n### Using the daily returns, calculate and visualize the Sharpe ratios using a bar plot",
"_____no_output_____"
]
],
[
[
"# Annualized Sharpe Ratios\n",
"_____no_output_____"
],
[
"# Visualize the sharpe ratios as a bar plot\n",
"_____no_output_____"
]
],
[
[
"### Determine whether the algorithmic strategies outperform both the market (S&P TSX 60) and the whales portfolios.\n\nWrite your answer here!",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
],
[
"# Create Custom Portfolio\n\nIn this section, you will build your own portfolio of stocks, calculate the returns, and compare the results to the Whale Portfolios and the S&P TSX 60. \n\n1. Choose 3-5 custom stocks with at last 1 year's worth of historic prices and create a DataFrame of the closing prices and dates for each stock.\n2. Calculate the weighted returns for the portfolio assuming an equal number of shares for each stock.\n3. Join your portfolio returns to the DataFrame that contains all of the portfolio returns.\n4. Re-run the performance and risk analysis with your portfolio to see how it compares to the others.\n5. Include correlation analysis to determine which stocks (if any) are correlated.",
"_____no_output_____"
],
[
"## Choose 3-5 custom stocks with at last 1 year's worth of historic prices and create a DataFrame of the closing prices and dates for each stock.\n\nFor this demo solution, we fetch data from three companies listes in the S&P TSX 60 index.\n\n* `SHOP` - [Shopify Inc](https://en.wikipedia.org/wiki/Shopify)\n\n* `OTEX` - [Open Text Corporation](https://en.wikipedia.org/wiki/OpenText)\n\n* `L` - [Loblaw Companies Limited](https://en.wikipedia.org/wiki/Loblaw_Companies)",
"_____no_output_____"
]
],
[
[
"# Reading data from 1st stock\n",
"_____no_output_____"
],
[
"# Reading data from 2nd stock\n",
"_____no_output_____"
],
[
"# Reading data from 3rd stock\n",
"_____no_output_____"
],
[
"# Combine all stocks in a single DataFrame\n",
"_____no_output_____"
],
[
"# Reset Date index\n",
"_____no_output_____"
],
[
"# Reorganize portfolio data by having a column per symbol\n",
"_____no_output_____"
],
[
"# Calculate daily returns\n\n# Drop NAs\n\n# Display sample data\n",
"_____no_output_____"
]
],
[
[
"## Calculate the weighted returns for the portfolio assuming an equal number of shares for each stock",
"_____no_output_____"
]
],
[
[
"# Set weights\nweights = [1/3, 1/3, 1/3]\n\n# Calculate portfolio return\n\n# Display sample data\n",
"_____no_output_____"
]
],
[
[
"## Join your portfolio returns to the DataFrame that contains all of the portfolio returns",
"_____no_output_____"
]
],
[
[
"# Join your returns DataFrame to the original returns DataFrame\n",
"_____no_output_____"
],
[
"# Only compare dates where return data exists for all the stocks (drop NaNs)\n",
"_____no_output_____"
]
],
[
[
"## Re-run the risk analysis with your portfolio to see how it compares to the others",
"_____no_output_____"
],
[
"### Calculate the Annualized Standard Deviation",
"_____no_output_____"
]
],
[
[
"# Calculate the annualized `std`\n",
"_____no_output_____"
]
],
[
[
"### Calculate and plot rolling `std` with 21-day window",
"_____no_output_____"
]
],
[
[
"# Calculate rolling standard deviation\n\n# Plot rolling standard deviation\n",
"_____no_output_____"
]
],
[
[
"### Calculate and plot the correlation",
"_____no_output_____"
]
],
[
[
"# Calculate and plot the correlation\n",
"_____no_output_____"
]
],
[
[
"### Calculate and Plot the 60-day Rolling Beta for Your Portfolio compared to the S&P 60 TSX",
"_____no_output_____"
]
],
[
[
"# Calculate and plot Beta\n",
"_____no_output_____"
]
],
[
[
"### Using the daily returns, calculate and visualize the Sharpe ratios using a bar plot",
"_____no_output_____"
]
],
[
[
"# Calculate Annualzied Sharpe Ratios\n",
"_____no_output_____"
],
[
"# Visualize the sharpe ratios as a bar plot\n",
"_____no_output_____"
]
],
[
[
"### How does your portfolio do?\n\nWrite your answer here!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
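
Most analysis cells in the checkpoint above are left as TODO stubs. For reference, a compact sketch of the standard formulas those stubs ask for (annualized standard deviation, annualized Sharpe ratio, 21-day rolling volatility, 60-day rolling beta) follows, using randomly generated daily returns as a stand-in for the combined portfolio DataFrame; the column names are illustrative.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
returns = pd.DataFrame(rng.normal(0.0005, 0.01, size=(252, 3)),
                       columns=["Algo 1", "SOROS", "S&P TSX 60"])

annual_std = returns.std() * np.sqrt(252)            # annualized volatility (252 trading days)
sharpe = (returns.mean() * 252) / annual_std         # annualized Sharpe ratio, risk-free rate taken as 0
rolling_std = returns.rolling(window=21).std()       # 21-day rolling standard deviation

cov = returns["Algo 1"].rolling(window=60).cov(returns["S&P TSX 60"])
var = returns["S&P TSX 60"].rolling(window=60).var()
rolling_beta = cov / var                             # 60-day rolling beta against the index

print(annual_std.round(3), sharpe.round(2), rolling_beta.dropna().head(), sep="\n\n")
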
ec92c4c94e5c8161bbd4cc1411e32ab27dc5af52 | 12,846 | ipynb | Jupyter Notebook | Denoising Autoencoder.ipynb | sushmit0109/medicalImagingCodes | 795873d4ada3c922ae27a403115a18c2d398e92f | [
"Apache-2.0"
] | null | null | null | Denoising Autoencoder.ipynb | sushmit0109/medicalImagingCodes | 795873d4ada3c922ae27a403115a18c2d398e92f | [
"Apache-2.0"
] | null | null | null | Denoising Autoencoder.ipynb | sushmit0109/medicalImagingCodes | 795873d4ada3c922ae27a403115a18c2d398e92f | [
"Apache-2.0"
] | null | null | null | 30.368794 | 108 | 0.550054 | [
[
[
"import numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom skimage import io\n\nfrom tensorflow.keras import layers\nfrom tensorflow.keras.datasets import mnist\nfrom tensorflow.keras.models import Model\n\nfrom tqdm import tqdm\n\n\ndef noise(array):\n \"\"\"\n Adds random noise to each image in the supplied array.\n \"\"\"\n\n noise_factor = 0.1\n noisy_array = array + noise_factor * np.random.normal(\n loc=0.0, scale=1.0, size=array.shape\n )\n\n return np.clip(noisy_array, 0.0, 1.0)\n\n\ndef display(array1, array2):\n \"\"\"\n Displays ten random images from each one of the supplied arrays.\n \"\"\"\n\n n = 10\n\n indices = np.random.randint(len(array1), size=n)\n images1 = array1[indices, :]\n images2 = array2[indices, :]\n\n plt.figure(figsize=(20, 4))\n for i, (image1, image2) in enumerate(zip(images1, images2)):\n ax = plt.subplot(2, n, i + 1)\n plt.imshow(image1.reshape(256, 256))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n ax = plt.subplot(2, n, i + 1 + n)\n plt.imshow(image2.reshape(256, 256))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n plt.show()\n\n \n \nimport os\nimport sys\nimport random\nimport warnings\n\nimport numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\nfrom PIL import Image\nfrom tqdm import tqdm\nfrom itertools import chain\nfrom skimage.io import imread, imshow, imread_collection, concatenate_images\nfrom skimage.transform import resize\nfrom skimage.morphology import label\nfrom PIL import ImageFile\n\nfrom tensorflow.keras.models import Model, load_model\nfrom tensorflow.keras.layers import Input\nfrom tensorflow.keras.layers import Dropout, Lambda\nfrom tensorflow.keras.layers import Conv2D, Conv2DTranspose, BatchNormalization\nfrom tensorflow.keras.layers import MaxPooling2D\nfrom tensorflow.keras.layers import concatenate\nfrom tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint\nfrom tensorflow.keras import backend as K\n\nimport tensorflow as tf",
"_____no_output_____"
],
[
"from glob import glob \nfrom skimage import data, color\nfrom skimage.transform import rescale, resize, downscale_local_mean\nfrom numpy import reshape\n\n\ntrain_filenames = glob('/kaggle/input/chest-xray-pneumonia/chest_xray/train/NORMAL/*.jpeg')\ntest_filenames = glob('/kaggle/input/chest-xray-pneumonia/chest_xray/train/PNEUMONIA/*.jpeg')",
"_____no_output_____"
],
[
"train = []\ntest = []\n\nfor each in tqdm(train_filenames):\n each = io.imread(each)\n each = resize(each, (256, 256), anti_aliasing=True)\n train.append(each)\n \nfor each in tqdm(test_filenames):\n each = io.imread(each)\n each = resize(each, (256, 256), anti_aliasing=True)\n test.append(each)",
"_____no_output_____"
],
[
"xtrain = np.expand_dims(train, -1)",
"_____no_output_____"
],
[
"train_data = xtrain[:1000]\nvalid_data = xtrain[1000:]",
"_____no_output_____"
],
[
"np.shape(train_data)",
"_____no_output_____"
],
[
"from tensorflow.keras import layers\n\n\ninput = layers.Input(shape=(256, 256, 1))\n\n# Encoder\nx = layers.Conv2D(32, (3, 3), activation=\"relu\", padding=\"same\")(input)\nx = layers.MaxPooling2D((2, 2), padding=\"same\")(x)\nx = layers.Conv2D(32, (3, 3), activation=\"relu\", padding=\"same\")(x)\nx = layers.MaxPooling2D((2, 2), padding=\"same\")(x)\n\n# Decoder\nx = layers.Conv2DTranspose(32, (3, 3), strides=2, activation=\"relu\", padding=\"same\")(x)\nx = layers.Conv2DTranspose(32, (3, 3), strides=2, activation=\"relu\", padding=\"same\")(x)\nx = layers.Conv2D(1, (3, 3), activation=\"sigmoid\", padding=\"same\")(x)\n\n# Autoencoder\nautoencoder = Model(input, x)\nautoencoder.compile(optimizer=\"adam\", loss=\"mse\")\nautoencoder.summary()\n",
"_____no_output_____"
],
[
"# Unet model: https://www.kaggle.com/advaitsave/tensorflow-2-nuclei-segmentation-unet\n# Any UNET implementation will work. I chose this one because it written using simple logics. \n\n\ninputs = Input((256, 256, 1))\n\n\nc1 = Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (inputs)\nc1 = BatchNormalization()(c1)\nc1 = Dropout(0.1) (c1)\nc1 = Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (c1)\nc1 = BatchNormalization()(c1)\np1 = MaxPooling2D((2, 2)) (c1)\n\nc2 = Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (p1)\nc2 = BatchNormalization()(c2)\nc2 = Dropout(0.1) (c2)\nc2 = Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (c2)\nc2 = BatchNormalization()(c2)\np2 = MaxPooling2D((2, 2)) (c2)\n\nc3 = Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (p2)\nc3 = BatchNormalization()(c3)\nc3 = Dropout(0.2) (c3)\nc3 = Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (c3)\nc3 = BatchNormalization()(c3)\np3 = MaxPooling2D((2, 2)) (c3)\n\nc4 = Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (p3)\nc4 = BatchNormalization()(c4)\nc4 = Dropout(0.2) (c4)\nc4 = Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (c4)\nc4 = BatchNormalization()(c4)\np4 = MaxPooling2D(pool_size=(2, 2)) (c4)\n\nc5 = Conv2D(512, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (p4)\nc5 = BatchNormalization()(c5)\nc5 = Dropout(0.3) (c5)\nc5 = Conv2D(512, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (c5)\nc5 = BatchNormalization()(c5)\n\nu6 = Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same') (c5)\nu6 = concatenate([u6, c4])\nc6 = Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (u6)\nc6 = BatchNormalization()(c6)\nc6 = Dropout(0.2) (c6)\nc6 = Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (c6)\nc6 = BatchNormalization()(c6)\n\nu7 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same') (c6)\nu7 = concatenate([u7, c3])\nc7 = Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (u7)\nc7 = BatchNormalization()(c7)\nc7 = Dropout(0.2) (c7)\nc7 = Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (c7)\nc7 = BatchNormalization()(c7)\n\nu8 = Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same') (c7)\nu8 = concatenate([u8, c2])\nc8 = Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (u8)\nc8 = BatchNormalization()(c8)\nc8 = Dropout(0.1) (c8)\nc8 = Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (c8)\nc8 = BatchNormalization()(c8)\n\nu9 = Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same') (c8)\nu9 = concatenate([u9, c1], axis=3)\nc9 = Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (u9)\nc9 = BatchNormalization()(c9)\nc9 = Dropout(0.1) (c9)\nc9 = Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same') (c9)\nc9 = BatchNormalization()(c9)\n\noutputs = Conv2D(1, (1, 1), activation='sigmoid') (c9)",
"_____no_output_____"
],
[
"model = Model(inputs=[inputs], outputs=[outputs])\nmodel.compile(optimizer='adam', loss='mse', metrics=['accuracy'])\nmodel.summary()",
"_____no_output_____"
],
[
"# pre training\n\nmodel.fit(\n x=train_data,\n y=train_data,\n epochs=200,\n batch_size=4,\n shuffle=True,\n validation_data=(valid_data, valid_data),\n)",
"_____no_output_____"
],
[
"noisy_train_data = noise(train_data)\nnoisy_valid_data = noise(valid_data)",
"_____no_output_____"
],
[
"model.fit(\n x=noisy_train_data,\n y=train_data,\n epochs=100,\n batch_size=4,\n shuffle=True,\n validation_data=(noisy_valid_data, valid_data),\n)",
"_____no_output_____"
],
[
"predictions = model.predict(noisy_valid_data)\n#display(noisy_test_data, predictions)",
"_____no_output_____"
],
[
"io.imshow(predictions[157])",
"_____no_output_____"
],
[
"io.imshow(noisy_valid_data[157])",
"_____no_output_____"
],
[
"io.imshow(valid_data[157])",
"_____no_output_____"
],
[
"from skimage.metrics import structural_similarity as ssim\n\nssimArr = []\n\nfor idx, each in enumerate(predictions):\n ssimArr.append(ssim(each, valid_data[idx], data_range=1.0 - 0.0, multichannel=True))\n\nnp.mean(ssimArr)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec92d9ce3ffa23840b67f8adc42053ded5be9bb8 | 23,533 | ipynb | Jupyter Notebook | Recursivity/rand_circles.ipynb | zchuri/IntroPython | e6e3a698bc092cab4930a2b4a531261046f5757b | [
"MIT"
] | null | null | null | Recursivity/rand_circles.ipynb | zchuri/IntroPython | e6e3a698bc092cab4930a2b4a531261046f5757b | [
"MIT"
] | null | null | null | Recursivity/rand_circles.ipynb | zchuri/IntroPython | e6e3a698bc092cab4930a2b4a531261046f5757b | [
"MIT"
] | null | null | null | 84.046429 | 5,824 | 0.651468 | [
[
[
"# Generate random circles",
"_____no_output_____"
],
[
"# Draw circles\n# Clear workspace\n%reset -f\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef circulo(x, y, r):\n teta = np.arange(0, 2*np.pi, 0.01)\n X = np.round(x + r*np.cos(teta))\n Y = np.round(y + r*np.sin(teta))\n return {\"X\": X, \"Y\": Y}\n\nres = circulo(100,200,20)\nprint(res)\n",
"{'X': array([120., 120., 120., 120., 120., 120., 120., 120., 120., 120., 120.,\n 120., 120., 120., 120., 120., 120., 120., 120., 120., 120., 120.,\n 120., 119., 119., 119., 119., 119., 119., 119., 119., 119., 119.,\n 119., 119., 119., 119., 119., 119., 118., 118., 118., 118., 118.,\n 118., 118., 118., 118., 118., 118., 118., 117., 117., 117., 117.,\n 117., 117., 117., 117., 117., 117., 116., 116., 116., 116., 116.,\n 116., 116., 116., 115., 115., 115., 115., 115., 115., 115., 114.,\n 114., 114., 114., 114., 114., 114., 113., 113., 113., 113., 113.,\n 113., 113., 112., 112., 112., 112., 112., 112., 111., 111., 111.,\n 111., 111., 111., 110., 110., 110., 110., 110., 110., 109., 109.,\n 109., 109., 109., 109., 108., 108., 108., 108., 108., 107., 107.,\n 107., 107., 107., 106., 106., 106., 106., 106., 106., 105., 105.,\n 105., 105., 105., 104., 104., 104., 104., 104., 103., 103., 103.,\n 103., 103., 102., 102., 102., 102., 102., 101., 101., 101., 101.,\n 101., 100., 100., 100., 100., 100., 99., 99., 99., 99., 99.,\n 98., 98., 98., 98., 98., 97., 97., 97., 97., 97., 96.,\n 96., 96., 96., 96., 95., 95., 95., 95., 95., 94., 94.,\n 94., 94., 94., 94., 93., 93., 93., 93., 93., 92., 92.,\n 92., 92., 92., 91., 91., 91., 91., 91., 91., 90., 90.,\n 90., 90., 90., 90., 89., 89., 89., 89., 89., 89., 88.,\n 88., 88., 88., 88., 88., 87., 87., 87., 87., 87., 87.,\n 87., 86., 86., 86., 86., 86., 86., 86., 85., 85., 85.,\n 85., 85., 85., 85., 84., 84., 84., 84., 84., 84., 84.,\n 84., 84., 83., 83., 83., 83., 83., 83., 83., 83., 83.,\n 82., 82., 82., 82., 82., 82., 82., 82., 82., 82., 82.,\n 82., 81., 81., 81., 81., 81., 81., 81., 81., 81., 81.,\n 81., 81., 81., 81., 81., 81., 80., 80., 80., 80., 80.,\n 80., 80., 80., 80., 80., 80., 80., 80., 80., 80., 80.,\n 80., 80., 80., 80., 80., 80., 80., 80., 80., 80., 80.,\n 80., 80., 80., 80., 80., 80., 80., 80., 80., 80., 80.,\n 80., 80., 80., 80., 80., 80., 80., 81., 81., 81., 81.,\n 81., 81., 81., 81., 81., 81., 81., 81., 81., 81., 81.,\n 81., 81., 82., 82., 82., 82., 82., 82., 82., 82., 82.,\n 82., 82., 83., 83., 83., 83., 83., 83., 83., 83., 83.,\n 83., 84., 84., 84., 84., 84., 84., 84., 84., 85., 85.,\n 85., 85., 85., 85., 85., 85., 86., 86., 86., 86., 86.,\n 86., 86., 87., 87., 87., 87., 87., 87., 88., 88., 88.,\n 88., 88., 88., 89., 89., 89., 89., 89., 89., 90., 90.,\n 90., 90., 90., 90., 91., 91., 91., 91., 91., 91., 92.,\n 92., 92., 92., 92., 93., 93., 93., 93., 93., 93., 94.,\n 94., 94., 94., 94., 95., 95., 95., 95., 95., 96., 96.,\n 96., 96., 96., 97., 97., 97., 97., 97., 98., 98., 98.,\n 98., 98., 99., 99., 99., 99., 99., 100., 100., 100., 100.,\n 100., 101., 101., 101., 101., 101., 102., 102., 102., 102., 102.,\n 103., 103., 103., 103., 103., 104., 104., 104., 104., 104., 105.,\n 105., 105., 105., 105., 105., 106., 106., 106., 106., 106., 107.,\n 107., 107., 107., 107., 108., 108., 108., 108., 108., 108., 109.,\n 109., 109., 109., 109., 110., 110., 110., 110., 110., 110., 111.,\n 111., 111., 111., 111., 111., 112., 112., 112., 112., 112., 112.,\n 113., 113., 113., 113., 113., 113., 113., 114., 114., 114., 114.,\n 114., 114., 114., 115., 115., 115., 115., 115., 115., 115., 116.,\n 116., 116., 116., 116., 116., 116., 116., 116., 117., 117., 117.,\n 117., 117., 117., 117., 117., 117., 118., 118., 118., 118., 118.,\n 118., 118., 118., 118., 118., 118., 118., 119., 119., 119., 119.,\n 119., 119., 119., 119., 119., 119., 119., 119., 119., 119., 119.,\n 119., 120., 120., 120., 120., 120., 120., 120., 120., 120., 120.,\n 120., 120., 120., 120., 120., 120., 120., 
120., 120., 120., 120.,\n 120., 120.]), 'Y': array([200., 200., 200., 201., 201., 201., 201., 201., 202., 202., 202.,\n 202., 202., 203., 203., 203., 203., 203., 204., 204., 204., 204.,\n 204., 205., 205., 205., 205., 205., 206., 206., 206., 206., 206.,\n 206., 207., 207., 207., 207., 207., 208., 208., 208., 208., 208.,\n 209., 209., 209., 209., 209., 209., 210., 210., 210., 210., 210.,\n 210., 211., 211., 211., 211., 211., 211., 212., 212., 212., 212.,\n 212., 212., 213., 213., 213., 213., 213., 213., 213., 214., 214.,\n 214., 214., 214., 214., 214., 215., 215., 215., 215., 215., 215.,\n 215., 216., 216., 216., 216., 216., 216., 216., 216., 216., 217.,\n 217., 217., 217., 217., 217., 217., 217., 217., 218., 218., 218.,\n 218., 218., 218., 218., 218., 218., 218., 218., 218., 219., 219.,\n 219., 219., 219., 219., 219., 219., 219., 219., 219., 219., 219.,\n 219., 219., 219., 220., 220., 220., 220., 220., 220., 220., 220.,\n 220., 220., 220., 220., 220., 220., 220., 220., 220., 220., 220.,\n 220., 220., 220., 220., 220., 220., 220., 220., 220., 220., 220.,\n 220., 220., 220., 220., 220., 220., 220., 220., 220., 220., 220.,\n 220., 220., 220., 220., 219., 219., 219., 219., 219., 219., 219.,\n 219., 219., 219., 219., 219., 219., 219., 219., 219., 219., 218.,\n 218., 218., 218., 218., 218., 218., 218., 218., 218., 218., 217.,\n 217., 217., 217., 217., 217., 217., 217., 217., 217., 216., 216.,\n 216., 216., 216., 216., 216., 216., 215., 215., 215., 215., 215.,\n 215., 215., 215., 214., 214., 214., 214., 214., 214., 214., 213.,\n 213., 213., 213., 213., 213., 212., 212., 212., 212., 212., 212.,\n 211., 211., 211., 211., 211., 211., 210., 210., 210., 210., 210.,\n 210., 209., 209., 209., 209., 209., 209., 208., 208., 208., 208.,\n 208., 207., 207., 207., 207., 207., 207., 206., 206., 206., 206.,\n 206., 205., 205., 205., 205., 205., 204., 204., 204., 204., 204.,\n 203., 203., 203., 203., 203., 202., 202., 202., 202., 202., 201.,\n 201., 201., 201., 201., 200., 200., 200., 200., 200., 199., 199.,\n 199., 199., 199., 198., 198., 198., 198., 198., 197., 197., 197.,\n 197., 197., 196., 196., 196., 196., 196., 195., 195., 195., 195.,\n 195., 195., 194., 194., 194., 194., 194., 193., 193., 193., 193.,\n 193., 192., 192., 192., 192., 192., 192., 191., 191., 191., 191.,\n 191., 190., 190., 190., 190., 190., 190., 189., 189., 189., 189.,\n 189., 189., 188., 188., 188., 188., 188., 188., 187., 187., 187.,\n 187., 187., 187., 187., 186., 186., 186., 186., 186., 186., 186.,\n 185., 185., 185., 185., 185., 185., 185., 184., 184., 184., 184.,\n 184., 184., 184., 184., 184., 183., 183., 183., 183., 183., 183.,\n 183., 183., 183., 182., 182., 182., 182., 182., 182., 182., 182.,\n 182., 182., 182., 182., 181., 181., 181., 181., 181., 181., 181.,\n 181., 181., 181., 181., 181., 181., 181., 181., 181., 180., 180.,\n 180., 180., 180., 180., 180., 180., 180., 180., 180., 180., 180.,\n 180., 180., 180., 180., 180., 180., 180., 180., 180., 180., 180.,\n 180., 180., 180., 180., 180., 180., 180., 180., 180., 180., 180.,\n 180., 180., 180., 180., 180., 180., 180., 180., 180., 180., 181.,\n 181., 181., 181., 181., 181., 181., 181., 181., 181., 181., 181.,\n 181., 181., 181., 181., 181., 182., 182., 182., 182., 182., 182.,\n 182., 182., 182., 182., 182., 183., 183., 183., 183., 183., 183.,\n 183., 183., 183., 183., 184., 184., 184., 184., 184., 184., 184.,\n 184., 185., 185., 185., 185., 185., 185., 185., 185., 186., 186.,\n 186., 186., 186., 186., 186., 187., 187., 187., 187., 187., 187.,\n 188., 188., 188., 188., 188., 
188., 188., 189., 189., 189., 189.,\n 189., 189., 190., 190., 190., 190., 190., 191., 191., 191., 191.,\n 191., 191., 192., 192., 192., 192., 192., 193., 193., 193., 193.,\n 193., 193., 194., 194., 194., 194., 194., 195., 195., 195., 195.,\n 195., 196., 196., 196., 196., 196., 197., 197., 197., 197., 197.,\n 198., 198., 198., 198., 198., 199., 199., 199., 199., 199., 200.,\n 200., 200.])}\n"
],
[
"# Draw in empty image\nnpix = 256\nimg = np.zeros([npix, npix])\n\nix = res[\"X\"]\niy = res[\"Y\"]\n\nnx = len(ix)\n\nfor i in range(nx):\n if ix[i] >= 1 and ix[i] <= 255 and iy[i] >= 1 and iy[i] <= 255:\n img[int(ix[i]),int(iy[i])] = 255\n\nplt.imshow(img, cmap=\"gray\")\n\n",
"_____no_output_____"
],
[
"# Generate random centers\nrn = 10\nxc = np.random.choice(npix, rn)\nyc = np.random.choice(npix, rn)\nrc = np.random.choice(20, rn)\n\nfor i in range(rn):\n res = circulo(xc[i], yc[i], rc[i])\n ix = res[\"X\"]\n iy = res[\"Y\"]\n nx = len(ix)\n for i in range(nx):\n if ix[i] >= 1 and ix[i] <= 255 and iy[i] >= 1 and iy[i] <= 255:\n img[int(ix[i]),int(iy[i])] = 255\n\nplt.imshow(img, cmap=\"gray\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
ec92e19d327acb528b79cbb53218405160c2fc57 | 2,824 | ipynb | Jupyter Notebook | Assignment1.ipynb | MTL2022/Capstone_Physics | ffd0cf6f3820a9b5e35392d9f937133181ffcf3c | [
"MIT"
] | null | null | null | Assignment1.ipynb | MTL2022/Capstone_Physics | ffd0cf6f3820a9b5e35392d9f937133181ffcf3c | [
"MIT"
] | null | null | null | Assignment1.ipynb | MTL2022/Capstone_Physics | ffd0cf6f3820a9b5e35392d9f937133181ffcf3c | [
"MIT"
] | null | null | null | 25.672727 | 355 | 0.559844 | [
[
[
"## Assignment 1: Introduction to Python",
"_____no_output_____"
]
],
[
[
"from numpy import sqrt",
"_____no_output_____"
]
],
[
[
"From <u> Computational Physics </u> by Newman\n\nExercise 2.1:\n\nA ball is dropped from a tower of height $h$ with an initial velocity of zero. \nWrite a function that takes the height of the tower in meters as an argument and then calculates and returns the time it takes until the ball hits the ground (ignoring air resistance). <b> Use $g = 10\\ m/s^2$ </b>\n\nYou may find the following kinematic equation to be helpful:\n$$ x_f = x_0 + v_0 t + \\frac{1}{2} a t^2 $$",
"_____no_output_____"
]
],
[
[
"def time_to_fall(h):\n \"\"\"\n Calculates the amount of time it takes a ball to fall from a tower of height h with intial velocity zero.\n Parameters:\n h (float) - the height of the tower in meters\n Returns:\n (float) time in seconds\n \n \"\"\"\n # TO DO: Complete this function\n return (h/5)**(1/2)",
"_____no_output_____"
]
],
[
[
"Below I've added `assert` statements. These statements are useful ways to test functionality. They are often referred to as unit tests because they test a single unit or function of code. These statements will produce an `AssertionError` if your function does not produce the expected result. Otherwise, they will run silently and produce no result.",
"_____no_output_____"
]
],
[
[
"assert(time_to_fall(0) == 0)\nassert(time_to_fall(20) == 2)",
"_____no_output_____"
]
],
[
[
"<b> ADD ONE MORE ASSERT STATEMENT BELOW </b>",
"_____no_output_____"
]
],
[
[
"#TO DO: Add an assert statement in this cell!\nassert(time_to_fall(125) == 5)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec92e7ff318afd4c907ba72272ed1589c7f1db61 | 5,466 | ipynb | Jupyter Notebook | Trim.ipynb | BaezCrdrm/ProyectoTI2Final | 26b64377a933779160f6f0cfcfb96b7564173db4 | [
"MIT"
] | 1 | 2019-08-06T00:41:19.000Z | 2019-08-06T00:41:19.000Z | Trim.ipynb | BaezCrdrm/ProyectoTI2Final | 26b64377a933779160f6f0cfcfb96b7564173db4 | [
"MIT"
] | null | null | null | Trim.ipynb | BaezCrdrm/ProyectoTI2Final | 26b64377a933779160f6f0cfcfb96b7564173db4 | [
"MIT"
] | 1 | 2019-08-07T03:48:45.000Z | 2019-08-07T03:48:45.000Z | 22.493827 | 98 | 0.398463 | [
[
[
"# Trim",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"ls",
"c3.csv \u001b[0m\u001b[01;34msample_data\u001b[0m/\n"
],
[
"productos = pd.read_csv(\"c3.csv\", )",
"_____no_output_____"
],
[
"productos.columns",
"_____no_output_____"
],
[
"productos['Por'] = productos['Por'].astype('str')\nproductos['Por'] = productos['Por'].replace(r'\\n',' ', regex=True)\nproductos['Por'] = productos['Por'].map(lambda x: ' '.join(x.split()))",
"_____no_output_____"
],
[
"productos['Por'][2]",
"_____no_output_____"
],
[
"productos['Por'] = productos['Por'].str.strip()",
"_____no_output_____"
],
[
"productos['Por'][2]",
"_____no_output_____"
],
[
"pwd",
"_____no_output_____"
],
[
"productos.to_csv('/content/l1.csv')",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9315c5b5aa38d7b8489b539ccb11b6b175e499 | 32,036 | ipynb | Jupyter Notebook | hr/day20-important-features.ipynb | csiu/kaggle | 73bf42e854b6c8b475879def85debf46fe3b357c | [
"MIT"
] | null | null | null | hr/day20-important-features.ipynb | csiu/kaggle | 73bf42e854b6c8b475879def85debf46fe3b357c | [
"MIT"
] | null | null | null | hr/day20-important-features.ipynb | csiu/kaggle | 73bf42e854b6c8b475879def85debf46fe3b357c | [
"MIT"
] | null | null | null | 62.570313 | 18,534 | 0.733893 | [
[
[
"DAY 20 - Mar 16, 2017",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier\nimport pandas as pd",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n%matplotlib inline\n\n# Load libs\nimport psycopg2\nimport pandas.io.sql as pdsql\n\n# Specify our database\ndbname=\"hr\"\nname_of_table = \"survey\"\n\n# Connect to database\nconn = psycopg2.connect(dbname=dbname)\n\n# Make database query\ndf = pdsql.read_sql_query(\"SELECT * FROM %s;\" % name_of_table, conn)\ndf.head()",
"_____no_output_____"
]
],
[
[
"Yesterday we explored the [Kaggle hr data](https://www.kaggle.com/ludobenistant/hr-analytics) to answer a few dashboard type questions. \n\nMoving beyond the scope of a data analyst and into the scope of a data scientist, I asked the following:\n\nGiven the list of available features, are we able to predict a person's salary? And are we able to identify features that are more informative with regards to the person's salary?",
"_____no_output_____"
]
],
[
[
"features = df.columns[1:-2]\nlist(features)",
"_____no_output_____"
],
[
"X = df[features]\nY = df[\"salary\"]",
"_____no_output_____"
],
[
"# Split the data set: train on 80% and test on 20%\nn = len(X)\nn_80 = int(n * .8)\n\nX_train = X[:n_80]\nY_train = Y[:n_80]\nX_test = X[n_80:]\nY_test = Y[n_80:]",
"_____no_output_____"
],
[
"rfc = RandomForestClassifier()\nrfc.fit(X_train, Y_train)",
"_____no_output_____"
],
[
"sum(rfc.predict(X_test) == Y_test)/len(X_test)",
"_____no_output_____"
],
[
"feature_importance = pd.DataFrame({\"Importance\":rfc.feature_importances_}, index=features)\nfeature_importance.sort_values(by=\"Importance\", ascending=False, inplace=True)\nfeature_importance",
"_____no_output_____"
],
[
"feature_importance.plot(kind=\"bar\", legend=False)\nplt.ylabel(\"Importance\")",
"_____no_output_____"
]
],
[
[
"Most important features in predicting salary is (1) average_montly_hours, (2) last_evaluation, and (3) satisfaction_level",
"_____no_output_____"
],
[
"Considering only these features, we are able to make prediction wit 94% accuracy and without those features 57% accuracy.",
"_____no_output_____"
]
],
[
[
"features = [\"average_montly_hours\", \"last_evaluation\", \"satisfaction_level\"]\n\nX = df[features]\nY = df[\"salary\"]\n\nX_train = X[:n_80]\nY_train = Y[:n_80]\nX_test = X[n_80:]\nY_test = Y[n_80:]\n\nrfc = RandomForestClassifier()\nrfc.fit(X_train, Y_train)\n\nsum(rfc.predict(X_test) == Y_test)/len(X_test)",
"_____no_output_____"
],
[
"features = [\"time_spend_company\", \"number_project\", \"work_accident\", \"left_workplace\", \"promotion_last_5years\"]\n\nX = df[features]\nY = df[\"salary\"]\n\nX_train = X[:n_80]\nY_train = Y[:n_80]\nX_test = X[n_80:]\nY_test = Y[n_80:]\n\nrfc = RandomForestClassifier()\nrfc.fit(X_train, Y_train)\n\nsum(rfc.predict(X_test) == Y_test)/len(X_test)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
ec931acd70ea12b2b6aa7bc318c2f3614dbcf7ea | 5,068 | ipynb | Jupyter Notebook | notebooks/mpc-playground/mpc_bob.ipynb | xnutsive/OpenMined | a337505276c4bf5c3e2320f2ebf45256832663a4 | [
"Apache-2.0"
] | null | null | null | notebooks/mpc-playground/mpc_bob.ipynb | xnutsive/OpenMined | a337505276c4bf5c3e2320f2ebf45256832663a4 | [
"Apache-2.0"
] | null | null | null | notebooks/mpc-playground/mpc_bob.ipynb | xnutsive/OpenMined | a337505276c4bf5c3e2320f2ebf45256832663a4 | [
"Apache-2.0"
] | 1 | 2020-05-27T10:09:17.000Z | 2020-05-27T10:09:17.000Z | 21.844828 | 69 | 0.488161 | [
[
[
"import notebook_importer",
"_____no_output_____"
],
[
"import spdz\nimport random\nimport numpy as np\nimport zmq",
"importing notebook from spdz.ipynb\n"
],
[
"# Bob is party 1\nparty = 1\n\n# Connect to zmq\ncontext = zmq.Context()\nsocket = context.socket(zmq.REP)\nsocket.bind(\"tcp://*:5555\")\n\n#TODO: tmp solution remove ASAP\nspdz.spdz_socket = socket\nspdz.socket_party = party",
"_____no_output_____"
],
[
"# Input dataset\nX = np.array([\n [0,0,1],\n [0,1,1],\n [1,0,1],\n [1,1,1]\n])\n\n# Output dataset\ny = np.array([[0,0,1,1]]).T\n\n# Split input into shares \n#X_shares = spdz.share_vec(spdz.wrap(X))\nX_alice, X_bob = spdz.secure(X)\n\n# Split output into shares\ny_alice, y_bob = spdz.secure(y)",
"_____no_output_____"
],
[
"# Bob receives initial weights from Alice\nsyn0_bob = spdz.swap_shares(np.array(\"OK\"), party, socket)\n\n# Bob sends X and y to Alice\nprint(spdz.swap_shares(X_alice, party, socket))\nprint(spdz.swap_shares(y_alice, party, socket))",
"OK\nOK\n"
],
[
"# Multiplication Test\na, b = spdz.PrivateValue.secure(3)\nt = spdz.swap_shares(a, party, socket)\nres = t * b\nother = spdz.swap_shares(res, party, socket)\nprint(spdz.decode_vec(other + res))",
"15.0\n"
],
[
"def np_sigmoid(x):\n return 1 / (1 + np.exp(-x))\n\nsigmoid = spdz.SigmoidInterpolated10(party, socket)\n\n#Sigmoid test:\nresult = sigmoid.evaluate(X_bob)\nresult_alice = spdz.swap_shares(result, party, socket)\n\nprint(\"sigmoid result: \")\nprint(spdz.decode_vec(result_alice + result))\n\nprint(\"np_sigmoid result: \")\nprint(np_sigmoid(X))",
"sigmoid result: \n[[ 0.5 0.5 0.7078829]\n [ 0.5 0.7078829 0.7078829]\n [ 0.7078829 0.5 0.7078829]\n [ 0.7078829 0.7078829 0.7078829]]\nnp_sigmoid result: \n[[ 0.5 0.5 0.73105858]\n [ 0.5 0.73105858 0.73105858]\n [ 0.73105858 0.5 0.73105858]\n [ 0.73105858 0.73105858 0.73105858]]\n"
],
[
"# Train in sync with Alice\n\nnetwork = spdz.TwoLayerNetwork(sigmoid, party, socket)\nnetwork.train(X_bob, y_bob, syn0_bob)\n\nweights = network.print_weights()\n\nprint(\"predictions: \")\npreds = network.predict(X_bob)\npreds_alice = spdz.swap_shares(preds, party, socket)\nprint(spdz.decode_vec(preds_alice + preds))",
"Layer 0 weights: \n[[ 4.7238776]\n [-0.2363572]\n [-2.1361938]]\npredictions: \n[[ 0.1111069]\n [ 0.0845082]\n [ 0.9361187]\n [ 0.9132718]]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9323c78ea1a0684333e362cc68bcb6bf8f2a15 | 2,233 | ipynb | Jupyter Notebook | 11 - Introduction to Python/3_Basic Python Syntax/6_Indexing Elements (1:18)/Indexing Elements - Solution_Py2.ipynb | olayinka04/365-data-science-courses | 7d71215432f0ef07fd3def559d793a6f1938d108 | [
"Apache-2.0"
] | 3 | 2020-03-24T12:58:37.000Z | 2020-08-03T17:22:35.000Z | 11 - Introduction to Python/3_Basic Python Syntax/6_Indexing Elements (1:18)/Indexing Elements - Solution_Py2.ipynb | olayinka04/365-data-science-courses | 7d71215432f0ef07fd3def559d793a6f1938d108 | [
"Apache-2.0"
] | null | null | null | 11 - Introduction to Python/3_Basic Python Syntax/6_Indexing Elements (1:18)/Indexing Elements - Solution_Py2.ipynb | olayinka04/365-data-science-courses | 7d71215432f0ef07fd3def559d793a6f1938d108 | [
"Apache-2.0"
] | 1 | 2021-10-19T23:59:37.000Z | 2021-10-19T23:59:37.000Z | 16.29927 | 96 | 0.458576 | [
[
[
"## Indexing",
"_____no_output_____"
],
[
"*Suggested Answers follow (usually there are multiple ways to solve a problem in Python).*",
"_____no_output_____"
],
[
"Extract the letter 'B' from \"Bingo!\".",
"_____no_output_____"
]
],
[
[
"'Bingo!'[0]",
"_____no_output_____"
],
[
"\"Bingo!\"[0]",
"_____no_output_____"
]
],
[
[
"Extract the letter \"u\" from \"Constitution\".",
"_____no_output_____"
]
],
[
[
"\"Constitution\"[7]",
"_____no_output_____"
],
[
"'Constitution' [7]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec934ac9150df6f49b53e52baae4c584bb8a47eb | 36,511 | ipynb | Jupyter Notebook | getting-started/spark-jdbc.ipynb | omesser/tutorials | 4ffa0cc474ffe3bb6c673e89aa1361990fdf5bd7 | [
"Apache-2.0"
] | null | null | null | getting-started/spark-jdbc.ipynb | omesser/tutorials | 4ffa0cc474ffe3bb6c673e89aa1361990fdf5bd7 | [
"Apache-2.0"
] | null | null | null | getting-started/spark-jdbc.ipynb | omesser/tutorials | 4ffa0cc474ffe3bb6c673e89aa1361990fdf5bd7 | [
"Apache-2.0"
] | null | null | null | 58.046105 | 542 | 0.51212 | [
[
[
"# Spark JDBC to Databases\n\n- [Overview](#spark-jdbc-overview)\n- [Setup](#spark-jdbc-setup)\n - [Define Environment Variables](#spark-jdbc-define-envir-vars)\n - [Initiate a Spark JDBC Session](#spark-jdbc-init-session)\n - [Load Driver Packages Dynamically](#spark-jdbc-init-dynamic-pkg-load)\n - [Load Driver Packages Locally](#spark-jdbc-init-local-pkg-load)\n- [Connect to Databases Using Spark JDBC](#spark-jdbc-connect-to-dbs)\n - [Connect to a MySQL Database](#spark-jdbc-to-mysql)\n - [Connecting to a Public MySQL Instance](#spark-jdbc-to-mysql-public)\n - [Connecting to a Test or Temporary MySQL Instance](#spark-jdbc-to-mysql-test-or-temp)\n - [Connect to a PostgreSQL Database](#spark-jdbc-to-postgresql)\n - [Connect to an Oracle Database](#spark-jdbc-to-oracle)\n - [Connect to an MS SQL Server Database](#spark-jdbc-to-ms-sql-server)\n - [Connect to a Redshift Database](#spark-jdbc-to-redshift)\n- [Cleanup](#spark-jdbc-cleanup)\n - [Delete Data](#spark-jdbc-delete-data)\n - [Release Spark Resources](#spark-jdbc-release-spark-resources)",
"_____no_output_____"
],
[
"<a id=\"spark-jdbc-overview\"></a>\n## Overview\n\nSpark SQL includes a data source that can read data from other databases using Java database connectivity (**JDBC**).\nThe results are returned as a Spark DataFrame that can easily be processed in Spark SQL or joined with other data sources.\nFor more information, see the [Spark documentation](https://spark.apache.org/docs/2.3.1/sql-programming-guide.html#jdbc-to-other-databases).",
"_____no_output_____"
],
[
"<a id=\"spark-jdbc-setup\"></a>\n## Setup",
"_____no_output_____"
],
[
"<a id=\"spark-jdbc-define-envir-vars\"></a>\n### Define Environment Variables\n\nBegin by initializing some environment variables.\n\n> **Note:** You need to edit the following code to assign valid values to the database variables (`DB_XXX`).",
"_____no_output_____"
]
],
[
[
"import os\n\n# Read Iguazio Data Science Platform (\"the platform\") environment variables into local variables\nV3IO_USER = os.getenv('V3IO_USERNAME')\nV3IO_HOME = os.getenv('V3IO_HOME')\nV3IO_HOME_URL = os.getenv('V3IO_HOME_URL')\n\n# Define database environment variables\n# TODO: Edit the variable definitions to assign valid values for your environment.\n%env DB_HOST = \"\" # Database host as a fully qualified name (FQN)\n%env DB_PORT = \"\" # Database port number\n%env DB_DRIVER = \"\" # Database driver [mysql/postgresql|oracle:thin|sqlserver]\n%env DB_Name = \"\" # Database|schema name\n%env DB_TABLE = \"\" # Table name\n%env DB_USER = \"\" # Database username\n%env DB_PASSWORD = \"\" # Database user password\n\nos.environ[\"PYSPARK_SUBMIT_ARGS\"] = \"--packages mysql:mysql-connector-java:5.1.39 pyspark-shell\"",
"env: DB_HOST=\"\" # Database host's fully qualified name\nenv: DB_PORT=\"\" # Port num of the database\nenv: DB_DRIVER=\"\" # Database Driver [postgresql|mysql|oracle:thin|sqlserver]\nenv: DB_Name=\"\" # Database|Schema Name\nenv: DB_TABLE=\"\" # Table Name\nenv: DB_USER=\"\" # Database User Name\nenv: DB_PASSWORD=\"\" # Database User's Password\n"
]
],
[
[
"<a id=\"spark-jdbc-init-session\"></a>\n### Initiate a Spark JDBC Session\n\nYou can select between two methods for initiating a Spark session with JDBC drivers (\"Spark JDBC session\"):\n\n- [Load Driver Packages Dynamically](#spark-jdbc-init-dynamic-pkg-load) (preferred)\n- [Load Driver Packages Locally](#spark-jdbc-init-local-pkg-load)",
"_____no_output_____"
],
[
"<a id=\"spark-jdbc-init-dynamic-pkg-load\"></a>\n#### Load Driver Packages Dynamically\n\nThe preferred method for initiating a Spark JDBC session is to load the required JDBC driver packages dynamically from https://spark-packages.org/ by doing the following:\n\n1. Set the `PYSPARK_SUBMIT_ARGS` environment variable to `\"--packages <group>:<name>:<version> pyspark-shell\"`.\n2. Initiate a new spark session.\n\nThe following example demonstrates how to initiate a Spark session that uses version 5.1.39 of the **mysql-connector-java** MySQL JDBC database driver (`mysql:mysql-connector-java:5.1.39`).",
"_____no_output_____"
]
],
[
[
"from pyspark.conf import SparkConf\nfrom pyspark.sql import SparkSession\n\n# Configure the Spark JDBC driver package\n# TODO: Replace `mysql:mysql-connector-java:5.1.39` with the required driver-pacakge information.\nos.environ[\"PYSPARK_SUBMIT_ARGS\"] = \"--packages mysql:mysql-connector-java:5.1.39 pyspark-shell\"\n\n# Initiate a new Spark session; you can change the application name\nspark = SparkSession.builder.appName(\"Spark JDBC tutorial\").getOrCreate()",
"_____no_output_____"
]
],
[
[
"<a id=\"spark-jdbc-init-local-pkg-load\"></a>\n#### Load Driver Packages Locally\n\nYou can also load the Spark JDBC driver package from the local file system of your Iguazio Data Science Platform (\"the platform\").\nIt's recommended that you use this method only if you don't have internet connection (\"dark-site installations\") or if there's no official Spark package for your database.\nThe platform comes pre-deployed with MySQL, PostgreSQL, Oracle, Redshift, and MS SQL Server JDBC driver packages, which are found in the **/spark/3rd_party** directory (**$SPARK_HOME/3rd_party**).\nYou can also copy additional driver packages or different versions of the pre-deployed drivers to the platform — for example, from the **Data** dashboard page.\n\nTo load a JDBC driver package locally, you need to set the `spark.driver.extraClassPath` and `spark.executor.extraClassPath` Spark configuration properties to the path to a Spark JDBC driver package in the platform's file system.\nYou can do this using either of the following alternative methods:\n\n- Preconfigure the path to the driver package —\n\n 1. In your Spark-configuration file — **$SPARK_HOME/conf/spark-defaults.conf** — set the `extraClassPath` configuration properties to the path to the relevant driver package:\n ```python\n spark.driver.extraClassPath = \"<path to a JDBC driver package>\"\n spark.executor.extraClassPath = \"<path to a JDBC driver package>\"\n ```\n 2. Initiate a new spark session.\n\n- Configure the path to the driver package as part of the initiation of a new Spark session:\n ```python\n spark = SparkSession.builder. \\\n appName(\"<app name>\"). \\\n config(\"spark.driver.extraClassPath\", \"<path to a JDBC driver package>\"). \\\n config(\"spark.executor.extraClassPath\", \"<path to a JDBC driver package>\"). \\\n getOrCreate()\n ```\n\nThe following example demonstrates how to initiate a Spark session that uses the pre-deployed version 8.0.13 of the **mysql-connector-java** MySQL JDBC database driver (**/spark/3rd_party/mysql-connector-java-8.0.13.jar**)",
"_____no_output_____"
]
],
[
[
"from pyspark.conf import SparkConf\nfrom pyspark.sql import SparkSession\n\n# METHOD I\n# Edit your Spark configuration file ($SPARK_HOME/conf/spark-defaults.conf), set the `spark.driver.extraClassPath` and\n# `spark.executor.extraClassPath` properties to the local file-system path to a pre-deployed Spark JDBC driver package.\n# Replace \"/spark/3rd_party/mysql-connector-java-8.0.13.jar\" with the relevant path.\n# spark.driver.extraClassPath = \"/spark/3rd_party/mysql-connector-java-8.0.13.jar\"\n# spark.executor.extraClassPath = \"/spark/3rd_party/mysql-connector-java-8.0.13.jar\"\n#\n# Then, initiate a new Spark session; you can change the application name.\n# spark = SparkSession.builder.appName(\"Spark JDBC tutorial\").getOrCreate()\n\n# METHOD II\n# Initiate a new Spark Session; you can change the application name.\n# Set the same `extraClassPath` configuration properties as in Method #1 as part of the initiation command.\n# Replace \"/spark/3rd_party/mysql-connector-java-8.0.13.jar\" with the relevant path.\nlocal file-system path to a pre-deployed Spark JDBC driver package\nspark = SparkSession.builder. \\\n appName(\"Spark JDBC tutorial\"). \\\n config(\"spark.driver.extraClassPath\", \"/spark/3rd_party/mysql-connector-java-8.0.13.jar\"). \\\n config(\"spark.executor.extraClassPath\", \"/spark/3rd_party/mysql-connector-java-8.0.13.jar\"). \\\n getOrCreate()",
"_____no_output_____"
],
[
"import pprint\n\n# Verify your configuration: run the following code to list the current Spark configurations, and check the output to verify that the\n# `spark.driver.extraClassPath` and `spark.executor.extraClassPath` properties are set to the correct local driver-pacakge path.\nconf = spark.sparkContext._conf.getAll()\n\npprint.pprint(conf)",
"[('spark.sql.catalogImplementation', 'in-memory'),\n ('spark.driver.extraLibraryPath', '/hadoop/etc/hadoop'),\n ('spark.app.id', 'app-20190704070308-0001'),\n ('spark.executor.memory', '2G'),\n ('spark.executor.id', 'driver'),\n ('spark.jars',\n 'file:///spark/v3io-libs/v3io-hcfs_2.11.jar,file:///spark/v3io-libs/v3io-spark2-object-dataframe_2.11.jar,file:///spark/v3io-libs/v3io-spark2-streaming_2.11.jar,file:///igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),\n ('spark.cores.max', '4'),\n ('spark.executorEnv.V3IO_ACCESS_KEY', 'bb79fffa-7582-4fd2-9347-a350335801fc'),\n ('spark.driver.extraClassPath',\n '/spark/3rd_party/mysql-connector-java-8.0.13.jar'),\n ('spark.executor.extraJavaOptions', '\"-Dsun.zip.disableMemoryMapping=true\"'),\n ('spark.driver.port', '33751'),\n ('spark.driver.host', '10.233.92.91'),\n ('spark.executor.extraLibraryPath', '/hadoop/etc/hadoop'),\n ('spark.submit.pyFiles',\n '/igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),\n ('spark.app.name', 'Spark JDBC tutorial'),\n ('spark.repl.local.jars',\n 'file:///spark/v3io-libs/v3io-hcfs_2.11.jar,file:///spark/v3io-libs/v3io-spark2-object-dataframe_2.11.jar,file:///spark/v3io-libs/v3io-spark2-streaming_2.11.jar,file:///igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),\n ('spark.rdd.compress', 'True'),\n ('spark.serializer.objectStreamReset', '100'),\n ('spark.files',\n 'file:///igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),\n ('spark.executor.cores', '1'),\n ('spark.executor.extraClassPath',\n '/spark/3rd_party/mysql-connector-java-8.0.13.jar'),\n ('spark.submit.deployMode', 'client'),\n ('spark.driver.extraJavaOptions', '\"-Dsun.zip.disableMemoryMapping=true\"'),\n ('spark.ui.showConsoleProgress', 'true'),\n ('spark.executorEnv.V3IO_USERNAME', 'iguazio'),\n ('spark.master', 'spark://spark-jddcm4iwas-qxw13-master:7077')]\n"
]
],
[
[
"<a id=\"spark-jdbc-connect-to-dbs\"></a>\n## Connect to Databases Using Spark JDBC",
"_____no_output_____"
],
[
"<a id=\"spark-jdbc-to-mysql\"></a>\n### Connect to a MySQL Database\n\n- [Connecting to a Public MySQL Instance](#spark-jdbc-to-mysql-public)\n- [Connecting to a Test or Temporary MySQL Instance](#spark-jdbc-to-mysql-test-or-temp)",
"_____no_output_____"
],
[
"<a id=\"spark-jdbc-to-mysql-public\"></a>\n#### Connect to a Public MySQL Instance",
"_____no_output_____"
]
],
[
[
"#Loading data from a JDBC source\ndfMySQL = spark.read \\\n .format(\"jdbc\") \\\n .option(\"url\", \"jdbc:mysql://mysql-rfam-public.ebi.ac.uk:4497/Rfam\") \\\n .option(\"dbtable\", \"Rfam.family\") \\\n .option(\"user\", \"rfamro\") \\\n .option(\"password\", \"\") \\\n .option(\"driver\", \"com.mysql.jdbc.Driver\") \\\n .load()\n\ndfMySQL.show()",
"+--------+-------------+---------+--------------------+--------------------+--------------------+----------------+--------------+------------+--------------------+--------------------+------------------+--------------------+--------------------+--------+--------+--------------+----------+--------------------+--------------------+-----------------+--------------------+---------------+--------+------------+---------+------------+--------------+----+----+---------------+-------+----------+-------------------+-------------------+\n|rfam_acc| rfam_id|auto_wiki| description| author| seed_source|gathering_cutoff|trusted_cutoff|noise_cutoff| comment| previous_id| cmbuild| cmcalibrate| cmsearch|num_seed|num_full|num_genome_seq|num_refseq| type| structure_source|number_of_species|number_3d_structures|num_pseudonokts|tax_seed|ecmli_lambda| ecmli_mu|ecmli_cal_db|ecmli_cal_hits|maxl|clen|match_pair_node|hmm_tau|hmm_lambda| created| updated|\n+--------+-------------+---------+--------------------+--------------------+--------------------+----------------+--------------+------------+--------------------+--------------------+------------------+--------------------+--------------------+--------+--------+--------------+----------+--------------------+--------------------+-----------------+--------------------+---------------+--------+------------+---------+------------+--------------+----+----+---------------+-------+----------+-------------------+-------------------+\n| RF00001| 5S_rRNA| 1302| 5S ribosomal RNA|Griffiths-Jones S...|Szymanski et al, ...| 38.0| 38.0| 37.9|5S ribosomal RNA ...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 712| 139932| 0| 0| Gene; rRNA;|Published; PMID:1...| 8022| 0| null| | 0.59496| -5.32219| 1600000| 213632| 305| 119| true|-3.7812| 0.71822|2013-10-03 20:41:44|2019-01-04 15:01:52|\n| RF00002| 5_8S_rRNA| 1303| 5.8S ribosomal RNA|Griffiths-Jones S...|Wuyts et al, Euro...| 42.0| 42.0| 41.9|5.8S ribosomal RN...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 61| 4716| 0| 0| Gene; rRNA;|Published; PMID:1...| 587| 0| null| | 0.65546| -9.33442| 1600000| 410495| 277| 154| true|-3.5135| 0.71791|2013-10-03 20:47:00|2019-01-04 15:01:52|\n| RF00003| U1| 1304| U1 spliceosomal RNA|Griffiths-Jones S...|Zwieb C, The uRNA...| 40.0| 40.0| 39.9|U1 is a small nuc...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 100| 15436| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 837| 0| null| | 0.6869| -8.54663| 1600000| 421575| 267| 166| true|-3.7781| 0.71616|2013-10-03 20:57:11|2019-01-04 15:01:52|\n| RF00004| U2| 1305| U2 spliceosomal RNA|Griffiths-Jones S...|The uRNA database...| 46.0| 46.0| 45.9|U2 is a small nuc...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 208| 16562| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 1102| 0| null| | 0.55222| -9.81359| 1600000| 403693| 301| 193| true|-3.5144| 0.71292|2013-10-03 20:58:30|2019-01-04 15:01:52|\n| RF00005| tRNA| 1306| tRNA|Eddy SR, Griffith...| Eddy SR| 29.0| 29.0| 28.9|Transfer RNA (tRN...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 954| 1429129| 0| 0| Gene; tRNA;|Published; PMID:8...| 9934| 0| null| | 0.64375| -4.21424| 1600000| 283681| 253| 71| true|-2.6167| 0.73401|2013-10-03 21:00:26|2019-01-04 15:01:52|\n| RF00006| Vault| 1307| Vault RNA|Bateman A, Gardne...|Published; PMID:1...| 34.0| 34.1| 33.9|This family of RN...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 73| 564| 0| 0| Gene;|Published; PMID:1...| 94| 0| 
null| | 0.63669| -4.8243| 1600000| 279629| 406| 101| true|-3.5531| 0.71855|2013-10-03 22:04:04|2019-01-04 15:01:52|\n| RF00007| U12| 1308|U12 minor spliceo...|Griffiths-Jones S...|Shukla GC and Pad...| 53.0| 53.0| 52.9|The U12 small nuc...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 62| 531| 0| 0|Gene; snRNA; spli...|Predicted; Griffi...| 336| 0| null| | 0.55844| -9.95163| 1600000| 493455| 520| 155| true|-3.1678| 0.71782|2013-10-03 22:04:07|2019-01-04 15:01:52|\n| RF00008| Hammerhead_3| 1309|Hammerhead ribozy...| Bateman A| Bateman A| 29.0| 29.0| 28.9|The hammerhead ri...| Hammerhead|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 82| 3098| 0| 0| Gene; ribozyme;|Published; PMID:7...| 176| 0| null| | 0.63206| -3.83325| 1600000| 199872| 394| 58| true| -4.375| 0.71923|2013-10-03 22:04:11|2019-01-04 15:01:52|\n| RF00009| RNaseP_nuc| 1310| Nuclear RNase P|Griffiths-Jones S...|Brown JW, The Rib...| 28.0| 28.0| 27.9|Ribonuclease P (R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 116| 1237| 0| 0| Gene; ribozyme;|Published; PMID:9...| 763| 0| null| | 0.7641| -8.04053| 1600000| 274636|1082| 303| true|-4.3673| 0.70576|2013-10-03 22:04:14|2019-01-04 15:01:52|\n| RF00010|RNaseP_bact_a| 2441|Bacterial RNase P...|Griffiths-Jones S...|Brown JW, The Rib...| 100.0| 100.5| 99.6|Ribonuclease P (R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 458| 6023| 0| 0| Gene; ribozyme;|Published; PMID:9...| 6324| 0| null| | 0.76804| -8.48988| 1600000| 366265| 873| 367| true|-4.3726| 0.70355|2013-10-03 22:04:21|2019-01-04 15:01:52|\n| RF00011|RNaseP_bact_b| 2441|Bacterial RNase P...|Griffiths-Jones S...|Brown JW, The Rib...| 97.0| 97.1| 96.6|Ribonuclease P (R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 114| 676| 0| 0| Gene; ribozyme;|Published; PMID:9...| 767| 0| null| | 0.69906| -8.4903| 1600000| 418092| 675| 366| true|-4.0357| 0.70361|2013-10-03 22:04:51|2019-01-04 15:01:52|\n| RF00012| U3| 1312|Small nucleolar R...| Gardner PP, Marz M|Published; PMID:1...| 34.0| 34.0| 33.9|Small nucleolar R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 87| 3924| 0| 0|Gene; snRNA; snoR...|Published; PMID:1...| 416| 0| null| | 0.59795| -9.77278| 1600000| 400072| 326| 218| true|-3.8301| 0.71077|2013-10-03 22:04:54|2019-01-04 15:01:52|\n| RF00013| 6S| 2461| 6S / SsrS RNA|Bateman A, Barric...| Barrick JE| 48.0| 48.0| 47.9|E. 
coli 6S RNA wa...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 149| 3576| 0| 0| Gene;|Published; PMID:1...| 3309| 0| null| | 0.56243|-10.04259| 1600000| 331091| 277| 188| true|-3.5895| 0.71351|2013-10-03 22:05:06|2019-01-04 15:01:52|\n| RF00014| DsrA| 1237| DsrA RNA| Bateman A| Bateman A| 60.0| 61.5| 57.6|DsrA RNA regulate...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 5| 35| 0| 0| Gene; sRNA;|Published; PMID:9...| 39| 0| null| | 0.53383| -8.38474| 1600000| 350673| 177| 85| true|-3.3562| 0.71888|2013-02-01 11:56:19|2019-01-04 15:01:52|\n| RF00015| U4| 1314| U4 spliceosomal RNA| Griffiths-Jones SR|Zwieb C, The uRNA...| 46.0| 46.0| 45.9|U4 small nuclear ...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 170| 7522| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 1025| 0| null| | 0.58145| -8.85604| 1600000| 407516| 575| 140| true|-3.5007| 0.71795|2013-10-03 22:05:22|2019-01-04 15:01:52|\n| RF00016| SNORD14| 1242|Small nucleolar R...|Griffiths-Jones S...| Griffiths-Jones SR| 64.0| 64.1| 63.9|U14 small nucleol...| U14|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 18| 1182| 0| 0|Gene; snRNA; snoR...| Predicted; PFOLD| 221| 0| null| | 0.63073| -3.65386| 1600000| 232910| 229| 116| true| -3.128| 0.71819|2013-02-01 11:56:23|2019-01-04 15:01:52|\n| RF00017| Metazoa_SRP| 1315|Metazoan signal r...| Gardner PP|Published; PMID:1...| 70.0| 70.0| 69.9|The signal recogn...|SRP_euk_arch; 7SL...|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 91| 42386| 0| 0| Gene;|Published; PMID:1...| 402| 0| null| | 0.64536| -9.85267| 1600000| 488632| 514| 301| true|-4.0177| 0.70604|2013-10-03 22:07:53|2019-01-04 15:01:52|\n| RF00018| CsrB| 2460|CsrB/RsmB RNA family|Bateman A, Gardne...| Bateman A| 71.0| 71.4| 70.9|The CsrB RNA bind...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 38| 254| 0| 0| Gene; sRNA;|Predicted; PFOLD;...| 196| 0| null| | 0.69326| -9.81172| 1600000| 546392| 555| 356| true|-4.0652| 0.70388|2013-10-03 23:07:27|2019-01-04 15:01:52|\n| RF00019| Y_RNA| 1317| Y RNA|Griffiths-Jones S...|Griffiths-Jones S...| 38.0| 38.0| 37.9|Y RNAs are compon...| Y1; Y2; Y3; Y5|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 104| 8521| 0| 0| Gene;|Published; PMID:1...| 123| 0| null| | 0.59183| -5.14312| 1600000| 189478| 249| 98| true|-2.8418| 0.7187|2013-10-03 23:07:38|2019-01-04 15:01:52|\n| RF00020| U5| 1318| U5 spliceosomal RNA|Griffiths-Jones S...|Zwieb C, The uRNA...| 40.0| 40.0| 39.9|U5 RNA is a compo...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 180| 7524| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 1001| 0| null| | 0.50732| -5.54774| 1600000| 339349| 331| 116| true|-4.1327| 0.7182|2013-10-03 23:08:43|2019-01-04 15:01:52|\n+--------+-------------+---------+--------------------+--------------------+--------------------+----------------+--------------+------------+--------------------+--------------------+------------------+--------------------+--------------------+--------+--------+--------------+----------+--------------------+--------------------+-----------------+--------------------+---------------+--------+------------+---------+------------+--------------+----+----+---------------+-------+----------+-------------------+-------------------+\nonly showing top 20 rows\n\n"
]
],
[
[
"<a id=\"spark-jdbc-to-mysql-test-or-temp\"></a>\n#### Connect to a Test or Temporary MySQL Instance\n\n> **Note:** The following code won't work if the MySQL instance has been shut down.",
"_____no_output_____"
]
],
[
[
"dfMySQL = spark.read \\\n .format(\"jdbc\") \\\n .option(\"url\", \"jdbc:mysql://172.31.33.215:3306/db1\") \\\n .option(\"dbtable\", \"db1.fruit\") \\\n .option(\"user\", \"root\") \\\n .option(\"password\", \"my-secret-pw\") \\\n .option(\"driver\", \"com.mysql.jdbc.Driver\") \\\n .load()\n\ndfMySQL.show()",
"_____no_output_____"
]
],
[
[
"<a id=\"spark-jdbc-to-postgresql\"></a>\n### Connect to a PostgreSQL Database",
"_____no_output_____"
]
],
[
[
"# Load data from a JDBC source\ndfPS = spark.read \\\n .format(\"jdbc\") \\\n .option(\"url\", \"jdbc:postgresql:dbserver\") \\\n .option(\"dbtable\", \"schema.tablename\") \\\n .option(\"user\", \"username\") \\\n .option(\"password\", \"password\") \\\n .load()\n\ndfPS2 = spark.read \\\n .jdbc(\"jdbc:postgresql:dbserver\", \"schema.tablename\",\n properties={\"user\": \"username\", \"password\": \"password\"})\n\n# Specify DataFrame column data types on read\ndfPS3 = spark.read \\\n .format(\"jdbc\") \\\n .option(\"url\", \"jdbc:postgresql:dbserver\") \\\n .option(\"dbtable\", \"schema.tablename\") \\\n .option(\"user\", \"username\") \\\n .option(\"password\", \"password\") \\\n .option(\"customSchema\", \"id DECIMAL(38, 0), name STRING\") \\\n .load()\n\n# Save data to a JDBC source\ndfPS.write \\\n .format(\"jdbc\") \\\n .option(\"url\", \"jdbc:postgresql:dbserver\") \\\n .option(\"dbtable\", \"schema.tablename\") \\\n .option(\"user\", \"username\") \\\n .option(\"password\", \"password\") \\\n .save()\n\ndfPS2.write \\\n properties={\"user\": \"username\", \"password\": \"password\"})\n\n# Specify create table column data types on write\ndfPS.write \\\n .option(\"createTableColumnTypes\", \"name CHAR(64), comments VARCHAR(1024)\") \\\n .jdbc(\"jdbc:postgresql:dbserver\", \"schema.tablename\", properties={\"user\": \"username\", \"password\": \"password\"})",
"_____no_output_____"
]
],
[
[
"<a id=\"spark-jdbc-to-oracle\"></a>\n### Connect to an Oracle Database",
"_____no_output_____"
]
],
[
[
"# Read a table from Oracle (table: hr.emp)\ndfORA = spark.read \\\n .format(\"jdbc\") \\\n .option(\"url\", \"jdbc:oracle:thin:username/password@//hostname:portnumber/SID\") \\\n .option(\"dbtable\", \"hr.emp\") \\\n .option(\"user\", \"db_user_name\") \\\n .option(\"password\", \"password\") \\\n .option(\"driver\", \"oracle.jdbc.driver.OracleDriver\") \\\n .load()\n\ndfORA.printSchema()\n\ndfORA.show()\n\n# Read a query from Oracle\nquery = \"(select empno,ename,dname from emp, dept where emp.deptno = dept.deptno) emp\"\n\ndfORA1 = spark.read \\\n .format(\"jdbc\") \\\n .option(\"url\", \"jdbc:oracle:thin:username/password@//hostname:portnumber/SID\") \\\n .option(\"dbtable\", query) \\\n .option(\"user\", \"db_user_name\") \\\n .option(\"password\", \"password\") \\\n .option(\"driver\", \"oracle.jdbc.driver.OracleDriver\") \\\n .load()\n\ndfORA1.printSchema()\n\ndfORA1.show()",
"_____no_output_____"
]
],
[
[
"<a id=\"spark-jdbc-to-ms-sql-server\"></a>\n### Connect to an MS SQL Server Database",
"_____no_output_____"
]
],
[
[
"# Read a table from MS SQL Server\ndfMS = spark.read \\\n .format(\"jdbc\") \\\n .options(url=\"jdbc:sqlserver:username/password@//hostname:portnumber/DB\") \\\n .option(\"dbtable\", \"db_table_name\") \\\n .option(\"user\", \"db_user_name\") \\\n .option(\"password\", \"password\") \\\n .option(\"driver\", \"com.microsoft.sqlserver.jdbc.SQLServerDriver\" ) \\\n .load()\n\ndfMS.printSchema()\n\ndfMS.show()",
"_____no_output_____"
]
],
[
[
"<a id=\"spark-jdbc-to-redshift\"></a>\n### Connect to a Redshift Database",
"_____no_output_____"
]
],
[
[
"# Read data from a table\ndfRS = spark.read \\\n .format(\"com.databricks.spark.redshift\") \\\n .option(\"url\", \"jdbc:redshift://redshifthost:5439/database?user=username&password=pass\") \\\n .option(\"dbtable\", \"my_table\") \\\n .option(\"tempdir\", \"s3n://path/for/temp/data\") \\\n .load()\n\n# Read data from a query\ndfRS = spark.read \\\n .format(\"com.databricks.spark.redshift\") \\\n .option(\"url\", \"jdbc:redshift://redshifthost:5439/database?user=username&password=pass\") \\\n .option(\"query\", \"select x, count(*) my_table group by x\") \\\n .option(\"tempdir\", \"s3n://path/for/temp/data\") \\\n .load()\n\n# Write data back to a table\ndfRS.write \\\n .format(\"com.databricks.spark.redshift\") \\\n .option(\"url\", \"jdbc:redshift://redshifthost:5439/database?user=username&password=pass\") \\\n .option(\"dbtable\", \"my_table_copy\") \\\n .option(\"tempdir\", \"s3n://path/for/temp/data\") \\\n .mode(\"error\") \\\n .save()\n\n# Use IAM role-based authentication\ndfRS.write \\\n .format(\"com.databricks.spark.redshift\") \\\n .option(\"url\", \"jdbc:redshift://redshifthost:5439/database?user=username&password=pass\") \\\n .option(\"dbtable\", \"my_table_copy\") \\\n .option(\"tempdir\", \"s3n://path/for/temp/data\") \\\n .option(\"aws_iam_role\", \"arn:aws:iam::123456789000:role/redshift_iam_role\") \\\n .mode(\"error\") \\\n .save()",
"_____no_output_____"
]
],
[
[
"<a id=\"spark-jdbc-cleanup\"></a>\n## Cleanup\n\nPrior to exiting, release disk space, computation, and memory resources consumed by the active session:\n\n- [Delete Data](#spark-jdbc-delete-data)\n- [Release Spark Resources](#spark-jdbc-release-spark-resources)",
"_____no_output_____"
],
[
"<a id=\"spark-jdbc-delete-data\"></a>\n### Delete Data\n\nYou can optionally delete any of the directories or files that you created.\nSee the instructions in the [Creating and Deleting Container Directories](https://www.iguazio.com/docs/tutorials/latest-release/getting-started/containers/#create-delete-container-dirs) tutorial.\nFor example, the following code uses a local file-system command to delete a **<running user>/examples/spark-jdbc** directory in the \"users\" container.\nEdit the path, as needed, then remove the comment mark (`#`) and run the code.",
"_____no_output_____"
]
],
[
[
"# !rm -rf /User/examples/spark-jdbc/",
"_____no_output_____"
]
],
[
[
"<a id=\"spark-jdbc-release-spark-resources\"></a>\n### Release Spark Resources\n\nWhen you're done, run the following command to stop your Spark session and release its computation and memory resources:",
"_____no_output_____"
]
],
[
[
"spark.stop()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec93548cb41575f9f21bd2522c674f540f4af8d9 | 399,885 | ipynb | Jupyter Notebook | hw_5/hw_5.ipynb | benstahl92/python-ay250-homeworks | 268643d904f94f0ca12244236fa9e21b56b8f88f | [
"MIT"
] | null | null | null | hw_5/hw_5.ipynb | benstahl92/python-ay250-homeworks | 268643d904f94f0ca12244236fa9e21b56b8f88f | [
"MIT"
] | null | null | null | hw_5/hw_5.ipynb | benstahl92/python-ay250-homeworks | 268643d904f94f0ca12244236fa9e21b56b8f88f | [
"MIT"
] | null | null | null | 110.771468 | 102,996 | 0.751233 | [
[
[
"# imports\nimport pandas as pd\nimport sqlite3\nfrom urllib.request import urlopen\nfrom urllib.error import URLError\nfrom bs4 import BeautifulSoup\nfrom datetime import datetime\nfrom collections import OrderedDict\nimport numpy as np\nimport itertools as it\nfrom geopy.distance import great_circle # for calculating distances\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Read, explore and organize supplied airport data",
"_____no_output_____"
]
],
[
[
"# read supplied csv files of airport info\nica = pd.read_csv('data/ICAO_airports.csv') # info for all (I think) airports\nta = pd.read_csv('data/top_airports.csv') # info for top 50 airports",
"_____no_output_____"
],
[
"# explore ica data\nica.head()",
"_____no_output_____"
],
[
"# explore ta data\nta.head()",
"_____no_output_____"
],
[
"# we can join the data from the two files by merging the dataframes on the common airport codes\nica['ICAO'] = ica['ident']\ncombined_info = ta.merge(ica, on='ICAO')\ncombined_info",
"_____no_output_____"
]
],
[
[
"## Build out weather scraping functionality",
"_____no_output_____"
]
],
[
[
"def wund_ex_query(airport, start_date, day, mnth, yr, syr, smnth, keep_cols, results):\n '''\n helper function to fn: wund_ap_hist_scrape - not intended to be used as a standalone\n '''\n \n # assemble url\n url = 'https://www.wunderground.com/history/airport/{}/{}/CustomHistory.html?dayend={}&monthend={}&yearend={}'.format(\n airport, start_date, day, mnth, yr)\n \n # go to url and read html response\n response = urlopen(url)\n html = response.read()\n response.close()\n \n # parse response into beautiful soup format\n soup = BeautifulSoup(html, \"html.parser\")\n \n # the table we need is the second one - extract it\n table = soup.findAll('table')[1]\n \n # iterate through all rows in table\n for row_cnt, row in enumerate(table.findAll('tr')):\n \n # all header rows contain the word 'sum' and will be ignored\n if 'sum' not in row.text:\n \n # make list to hold results for row\n row_res = []\n \n # iterate through all columns in current row\n for idx, col in enumerate(row.findAll('td')):\n \n # check if index is one of the columns requested in keep_cols\n if idx in keep_cols.values():\n \n # extract column value as string and append to row_res\n row_res.append(''.join([s.rstrip() for s in col.findAll(text=True)]))\n \n # deal with date formatting\n if (row_res == []) and (row_cnt != 0):\n if smnth < 12:\n smnth += 1\n elif smnth == 12:\n syr += 1\n smnth = 1\n elif row_res != []:\n # deal with missing date rows by using the previous day's values\n if (results != []) and (int(row_res[0]) != 1) and (int(row_res[0]) > int(results[-1][0].split('/')[2]) + 1):\n for i in range(int(row_res[0]) - int(results[-1][0].split('/')[2]) - 1):\n tmp = results[-1]\n old_date = tmp[0].split('/')\n new_date = '{}/{}/{}'.format(old_date[0], old_date[1], 1 + int(old_date[2]))\n results.append([new_date] + tmp[1:])\n row_res[0] = '{}/{}/{}'.format(syr, smnth, row_res[0])\n results.append(row_res)\n else:\n row_res[0] = '{}/{}/{}'.format(syr, smnth, row_res[0])\n results.append(row_res)",
"_____no_output_____"
],
[
"def wund_ap_hist_scrape(airport = 'KSFO', start_date = '2008/1/1', end_date = 'today',\n keep_cols = {'day_of_month': 0, 'high_temp': 1, 'avg_temp': 2, 'low_temp': 3,\n 'avg_humidity': 8, 'precipitation': 19}):\n '''\n scrapes weather underground historical data for a given airport over a specified date range\n and returns dataframe with results\n \n Parameters\n -----------\n airport : airport code (four letter string)\n start_date : starting date required for history (fmt: YYYY/MM/DD)\n end_date : ending date required for history (fmt: YYYY/MM/DD or 'today')\n keep_cols : dictionary with keys of strings to be used as column names in output df and values that\n are the indices of the columns in weather underground table that correspond to them\n \n Returns\n --------\n pandas dataframe of results over specified date range with columns labeled from keep_cols dictionary\n '''\n \n # we will need the keep_cols dict to be ordered by value\n keep_cols = OrderedDict(sorted(keep_cols.items(), key = lambda t:t[1]))\n \n # determine end date based on input and put into required format\n if end_date == 'today':\n now = datetime.now()\n yr = now.year\n mnth = now.month\n day = now.day\n end_date = '{}/{}/{}'.format(yr, mnth, day)\n else:\n tmp = end_date.split('/')\n yr = tmp[0]\n mnth = tmp[1]\n day = tmp[2]\n \n # extract start yr and month\n tmp = start_date.split('/')\n syr = int(tmp[0])\n smnth = int(tmp[1])\n \n # make list to hold results\n results = []\n \n # calculate difference in years between start and end to determine how many queries to make\n yr_diff = (datetime.strptime(end_date, '%Y/%m/%d') - datetime.strptime(start_date, '%Y/%m/%d')).total_seconds() / (60 * 60 * 24 * 365.25)\n \n # break request into the appropraite number of queries\n if yr_diff < 0:\n raise ValueError('start date must be earlier than end date!')\n elif yr_diff <= 1:\n wund_ex_query(airport, start_date, day, mnth, yr, syr, smnth, keep_cols, results)\n elif yr_diff > 1:\n wund_ex_query(airport, start_date, 31, 12, syr, syr, smnth, keep_cols, results)\n for yy in range(syr + 1, syr + int(yr_diff)):\n wund_ex_query(airport, '{}/1/1'.format(yy), 31, 12, yy, yy, 1, keep_cols, results)\n wund_ex_query(airport, '{}/1/1'.format(yr), day, mnth, yr, yr, 1, keep_cols, results)\n \n # collect results into dataframe, cast columns to appropriate type, and return\n df = pd.DataFrame(results, columns=keep_cols.keys())\n for col in ['high_temp', 'avg_temp', 'low_temp', 'avg_humidity', 'precipitation']:\n df[col] = pd.to_numeric(df[col], errors='coerce')\n return df.fillna(0)",
"_____no_output_____"
]
],
[
[
"## Create and write data to sqlite3 database",
"_____no_output_____"
]
],
[
[
"# create database and structure\n\nconnection = sqlite3.connect('data/airports.db')\ncursor = connection.cursor()\n\n# command to create airport_info table\nsql_airport_info_cmd = \"\"\"CREATE TABLE airport_info (\n id INTEGER PRIMARY KEY AUTOINCREMENT,\n ICAO TEXT,\n name TEXT,\n nearest_city TEXT,\n latitude_deg FLOAT,\n longitude_deg FLOAT,\n elev FLOAT,\n enplanements INT)\"\"\"\n\n# commad to create airport_weather table\nsql_airport_weather_cmd = \"\"\"CREATE TABLE airport_weather (\n id INTEGER PRIMARY KEY AUTOINCREMENT,\n ICAO TEXT,\n date DATE,\n high_temp FLOAT,\n avg_temp FLOAT,\n low_temp FLOAT,\n avg_humidity FLOAT,\n precipitation FLOAT)\"\"\"\n\ncursor.execute(sql_airport_info_cmd)\ncursor.execute(sql_airport_weather_cmd)\nconnection.commit()",
"_____no_output_____"
],
[
"# NB: this will take a WHILE to run\n# had to resort to a while loop with error handling b/c weather underground kept interfering with the scrape\n\n# populate tables\nfor i, row in combined_info.iterrows():\n \n print('Iteration: {}'.format(i))\n \n # get params from combined info dataframe\n info_params = (row['ICAO'], row['Airport'], row['City'], row['latitude_deg'], row['longitude_deg'],\n row['elevation_ft'], row['Enplanements'])\n \n # write into airport_info table\n cursor.execute('INSERT INTO airport_info' + \\\n '(ICAO, name, nearest_city, latitude_deg, longitude_deg, elev, enplanements) ' +\\\n 'VALUES (?, ?, ?, ?, ?, ?, ?)', info_params)\n \n # query for dataframe of weather info\n while True:\n try:\n weather_df = wund_ap_hist_scrape(airport = row['ICAO'])\n break\n except URLError:\n print('Interrupted: trying again...')\n pass\n \n # add all rows from weather_df to airport_weather table\n for wi, wrow in weather_df.iterrows():\n \n # get weather params from row\n w_params = (row['ICAO'], datetime.strptime(wrow['day_of_month'], '%Y/%m/%d'), wrow['high_temp'], \n wrow['avg_temp'], wrow['low_temp'], wrow['avg_humidity'], wrow['precipitation'])\n \n # write into table\n cursor.execute('INSERT INTO airport_weather' + \\\n '(ICAO, date, high_temp, avg_temp, low_temp, avg_humidity, precipitation) ' + \\\n 'VALUES (?, ?, ?, ?, ?, ? , ?)', w_params)\n\nconnection.commit()",
"Iteration: 0\nIteration: 1\nInterrupted: trying again...\nIteration: 2\nIteration: 3\nIteration: 4\nIteration: 5\nIteration: 6\nIteration: 7\nIteration: 8\nIteration: 9\nIteration: 10\nIteration: 11\nIteration: 12\nIteration: 13\nIteration: 14\nIteration: 15\nIteration: 16\nIteration: 17\nIteration: 18\nIteration: 19\nInterrupted: trying again...\nIteration: 20\nIteration: 21\nIteration: 22\nIteration: 23\nIteration: 24\nIteration: 25\nIteration: 26\nIteration: 27\nIteration: 28\nInterrupted: trying again...\nInterrupted: trying again...\nIteration: 29\nIteration: 30\nIteration: 31\nIteration: 32\nIteration: 33\nIteration: 34\nIteration: 35\nIteration: 36\nIteration: 37\nIteration: 38\nIteration: 39\nIteration: 40\nIteration: 41\nIteration: 42\nInterrupted: trying again...\nIteration: 43\nIteration: 44\nIteration: 45\nIteration: 46\nIteration: 47\nIteration: 48\nIteration: 49\n"
]
],
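The retry loop above simply loops on `URLError`; a small, hypothetical refinement (not part of the original run) would be to back off between attempts so Weather Underground is not hit again immediately. It reuses the `wund_ap_hist_scrape` function defined earlier in this notebook:

```python
import time
from urllib.error import URLError

# Hypothetical helper: retry the scrape with an exponentially growing pause between attempts.
def scrape_with_backoff(airport, max_tries=5, base_delay=2.0):
    for attempt in range(max_tries):
        try:
            return wund_ap_hist_scrape(airport=airport)
        except URLError:
            wait = base_delay * (2 ** attempt)
            print('Interrupted: retrying in {:.0f}s...'.format(wait))
            time.sleep(wait)
    raise RuntimeError('Giving up on {} after {} attempts'.format(airport, max_tries))
```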
[
[
"## analysis",
"_____no_output_____"
]
],
[
[
"# set up connetion to db so that lists of values can be returned instead of lists of tuples\nconnection.row_factory = lambda cursor, row: row[0]\nc = connection.cursor()",
"_____no_output_____"
],
[
"# pull airport code from database\n#code_cmd = 'SELECT ICAO FROM airport_info where ICAO != \"KIAH\" and ICAO != \"KLGA\" and ICAO != \"KMSY\" and ICAO != \"KIND\"'\ncode_cmd = 'SELECT ICAO FROM airport_info'\ncodes = c.execute(code_cmd).fetchall()\nlen(codes)",
"_____no_output_____"
],
[
"# iterate through all permutations of pairs of airport codes, calculate correlation coefficients, and store results\nres_dict = {'pair': [], 'hT_1': [], 'hT_3': [], 'hT_7': [], 'p_1': [], 'p_3': [], 'p_7': [], 'dist': [], 'long_diff': []}\nfor pair in it.permutations(codes, 2):\n \n # pull needed weather info for each airport\n p0T = c.execute('SELECT high_temp FROM airport_weather WHERE ICAO = \"{}\"'.format(pair[0])).fetchall()\n p0pr = c.execute('SELECT precipitation FROM airport_weather WHERE ICAO = \"{}\"'.format(pair[0])).fetchall()\n p1T = c.execute('SELECT high_temp FROM airport_weather WHERE ICAO = \"{}\"'.format(pair[1])).fetchall()\n p1pr = c.execute('SELECT precipitation FROM airport_weather WHERE ICAO = \"{}\"'.format(pair[1])).fetchall()\n \n # correlation strengths are computed as the average of the off-diagonal elements computed by\n # corrcoef between data for one airport and data for the other airport advanced by 1,3, or 7 days\n res_dict['hT_1'].append(np.mean(np.corrcoef(p0T[:-1], p1T[1:])[0,1]))\n res_dict['hT_3'].append(np.mean(np.corrcoef(p0T[:-3], p1T[3:])[0,1]))\n res_dict['hT_7'].append(np.mean(np.corrcoef(p0T[:-7], p1T[7:])[0,1]))\n res_dict['p_1'].append(np.mean(np.corrcoef(p0pr[:-1], p1pr[1:])[0,1]))\n res_dict['p_3'].append(np.mean(np.corrcoef(p0pr[:-3], p1pr[3:])[0,1]))\n res_dict['p_7'].append(np.mean(np.corrcoef(p0pr[:-7], p1pr[7:])[0,1]))\n \n # pull location coordinates and calculate distances\n lat0 = c.execute('SELECT latitude_deg FROM airport_info WHERE ICAO = \"{}\"'.format(pair[0])).fetchall()[0]\n long0 = c.execute('SELECT longitude_deg FROM airport_info WHERE ICAO = \"{}\"'.format(pair[0])).fetchall()[0]\n lat1 = c.execute('SELECT latitude_deg FROM airport_info WHERE ICAO = \"{}\"'.format(pair[1])).fetchall()[0]\n long1 = c.execute('SELECT longitude_deg FROM airport_info WHERE ICAO = \"{}\"'.format(pair[1])).fetchall()[0]\n \n # calculate and store distance between airports\n res_dict['dist'].append(great_circle((lat0,long0),(lat1,long1)).miles)\n \n # calculate and store longitude difference\n res_dict['long_diff'].append(long1-long0)\n \n # store pair\n res_dict['pair'].append(pair)\n \ncorrelations = pd.DataFrame(res_dict)",
"_____no_output_____"
],
[
"correlations",
"_____no_output_____"
]
],
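The loop above measures how well one airport's weather predicts another's 1, 3 or 7 days later by shifting one series before calling `np.corrcoef`. A tiny self-contained sketch of that lagged-correlation idea on toy data (not the scraped series):

```python
import numpy as np

# Toy illustration: series b echoes series a two days later, so the k=2 lag is the most correlated.
rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = np.roll(a, 2) + 0.1 * rng.normal(size=200)

for k in (1, 2, 3):
    r = np.corrcoef(a[:-k], b[k:])[0, 1]
    print('lag {}: r = {:.2f}'.format(k, r))
```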
[
[
"## Plot Results",
"_____no_output_____"
]
],
[
[
"def plot_res_one_each(df, x_vals = {'dist': 'Distance (mi)', 'long_diff': 'Longitude Difference (Deg.)'},\n y_vals = {'hT_1': 'High Temperature (1 Day)', 'hT_3': 'High Temperature (3 Day)', 'hT_7': 'High Temperature (7 Day)'}):\n '''\n make plots of results\n '''\n \n # generate figure and axes\n fig, ax = plt.subplots(len(x_vals), len(y_vals), figsize = (4* len(y_vals), 4* len(x_vals)))\n \n # iterate through all parameters\n for xi, x in enumerate(x_vals.keys()):\n for yi, y in enumerate(y_vals.keys()):\n \n # label plots\n ax[xi,yi].set_xlabel(x_vals[x])\n ax[xi,yi].set_ylabel('Correlation Strength')\n ax[xi,yi].set_title(y_vals[y])\n \n # select needed subset of data for plot\n tmp_df = df.sort_values(y, ascending = False).iloc[:10]\n \n # plot data and add labels\n ax[xi,yi].plot(tmp_df[x], tmp_df[y], 'k*')\n for i, row in tmp_df.iterrows():\n xlim = ax[xi,yi].get_xlim()\n ax[xi,yi].text(row[x] + 0.03*(xlim[1] - xlim[0]), row[y],\n '{}\\n{}'.format(row['pair'][0], row['pair'][1]), size='smaller', va='center')\n \n plt.tight_layout()",
"_____no_output_____"
],
[
"plot_res_one_each(correlations)",
"_____no_output_____"
],
[
"plot_res_one_each(correlations, y_vals = {'p_1': 'Precipitation (1 Day)', 'p_3': 'Precipitation (3 Day)', 'p_7': 'Precipitation (7 Day)'})",
"_____no_output_____"
],
[
"# I think the data may be easier to interpret by combining each set of three horizontal panes into one\ndef plot_res_together(df, x_vals = {'dist': 'Distance (mi)', 'long_diff': 'Longitude Difference (Deg.)'},\n y_vals = {'hT_1': 'High Temperature (1 Day)', 'hT_3': 'High Temperature (3 Day)', 'hT_7': 'High Temperature (7 Day)'},\n colors = ['k','r','b']):\n '''\n make plots of results\n '''\n \n # generate figure and axes\n fig, ax = plt.subplots(1, len(x_vals), figsize = (12, 3 * len(x_vals)))\n \n # iterate through all parameters\n for xi, x in enumerate(x_vals.keys()):\n for yi, y in enumerate(y_vals.keys()):\n \n # label plots\n ax[xi].set_xlabel(x_vals[x])\n ax[xi].set_ylabel('Correlation Strength')\n #ax[xi].set_title(y_vals[y])\n \n # select needed subset of data for plot\n tmp_df = df.sort_values(y, ascending = False).iloc[:10]\n \n # plot data and add labels\n ax[xi].plot(tmp_df[x], tmp_df[y], '*', label = y_vals[y])\n ax[xi].legend(loc = 'best')\n #for i, row in tmp_df.iterrows():\n # xlim = ax[xi].get_xlim()\n # ax[xi].text(row[x] + 0.01*(xlim[1] - xlim[0]), row[y],\n # '{}\\n{}'.format(row['pair'][0], row['pair'][1]), size='smaller', va='center')\n \n plt.tight_layout()",
"_____no_output_____"
],
[
"plot_res_together(correlations)\nprint('Temperature Results')",
"Temperature Results\n"
]
],
[
[
"By visualizing our temperature results in the plots above, there are several observations to make:\n* Maximum temperature correlations show a hierarchy - in terms of the top ten pairs, cities were always more correlated at one day than at three, and always more correlated at three days than seven. This is exactly what I had expected.\n* The correlations between cities after one day were much more localized in terms the parameter they are plotted against (ie distance or longitude difference) than after three days or seven days. In terms of distances between cities, the best correlated pairs after one day were roughly 400 miles apart on average with relatively small variance. In terms of lingitude difference they were between 5 and 10 degrees, again with relatively low variance.\n* The correlations appear to have relatively similar variances in terms of correlation strength, though they also show a hierarchy towards being more dispersed for later times.",
"_____no_output_____"
]
],
[
[
"plot_res_together(correlations, y_vals = {'p_1': 'Precipitation (1 Day)', 'p_3': 'Precipitation (3 Day)', 'p_7': 'Precipitation (7 Day)'})\nprint('Precipitation Results')",
"Precipitation Results\n"
]
],
[
[
"When looking at the precipitatation results, we can again see qualitatively similar features.\n* There is again a hierarchy in correlation strengths, although the correlations at one day are markedly higher than those at three and seven days which are much closer together. It is worth noting that the correlations were much weaker in an absolute sense for precipitation than for high temperature. In the precipitation case, the strongest correlation might have a strength of roughly 0.45 where as with high temperature the highest is closer to 0.95.\n* Again correlations between cities are much more localized in terms of distance or longitude distance after one day than after three or seven days. In the case of precipitation after three or seven days, there appear to be outliers that boost the variances considerably. For there to be a correlation at distances so large as 3000 mi seems hard to believe (though with a strength of 0.1, the presence of these points is not very telling).\n* In terms of the variances for correlation strength, one day has perhaps the largest variance for precipitation, where as for high temperature this was the opposite.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ec9355b0c8ffe60bb82b267b5948e50853f08d67 | 9,183 | ipynb | Jupyter Notebook | preprocessing/pre-covid.ipynb | CarlOwOs/Apolo-COVID-cough-predictor | 090dfc0527f70756aad3085011f7d60786c0c65f | [
"MIT"
] | 3 | 2020-12-20T14:33:02.000Z | 2020-12-20T15:19:33.000Z | preprocessing/pre-covid.ipynb | CarlOwOs/Apolo-COVID-cough-predictor | 090dfc0527f70756aad3085011f7d60786c0c65f | [
"MIT"
] | null | null | null | preprocessing/pre-covid.ipynb | CarlOwOs/Apolo-COVID-cough-predictor | 090dfc0527f70756aad3085011f7d60786c0c65f | [
"MIT"
] | 3 | 2020-12-19T12:21:46.000Z | 2021-05-06T10:56:22.000Z | 33.271739 | 494 | 0.543395 | [
[
[
"import numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torchaudio\nimport torchvision \nfrom tqdm import tqdm\nfrom torch.utils.data import DataLoader\nimport matplotlib.pyplot as plt\nimport os\nimport shutil\n\n\ndevice = 'cpu'\nif torch.cuda.is_available():\n device = 'cuda'\n torch.cuda.manual_seed_all(1)",
"/Users/Carlos/Documents/Anaconda/anaconda3/lib/python3.6/site-packages/torchaudio/backend/utils.py:54: UserWarning: \"sox\" backend is being deprecated. The default backend will be changed to \"sox_io\" backend in 0.8.0 and \"sox\" backend will be removed in 0.9.0. Please migrate to \"sox_io\" backend. Please refer to https://github.com/pytorch/audio/issues/903 for the detail.\n '\"sox\" backend is being deprecated. '\n"
],
[
"basepath = './Cough_dataset/'\nlabeled_path = basepath + 'Labeled_audio/'\npos_path = labeled_path + 'Pos/'\nneg_path = labeled_path + 'Neg/'\npos_asymp_path = labeled_path + 'Pos_asymp/'",
"_____no_output_____"
],
[
"'''\nWe can extract both cough audios recordings from each participant in a single directory.\n'''\n\npositives_path = labeled_path + 'Positives_audios/'\nif not os.path.exists(positives_path):\n os.makedirs(positives_path)\n\nfor i in os.listdir(pos_path):\n if i != '.DS_Store':\n\n participant_path = pos_path + i\n \n if ('cough-heavy.wav' in os.listdir(participant_path)):\n old_path = participant_path + '/cough-heavy.wav'\n new_path = positives_path + i + '_cough-heavy.wav'\n shutil.copy(old_path, new_path)\n\n if ('cough-shallow.wav' in os.listdir(participant_path)):\n old_path = participant_path + '/cough-shallow.wav'\n new_path = positives_path + i + '_cough-shallow.wav'\n shutil.copy(old_path, new_path)",
"_____no_output_____"
],
[
"'''\nWe can extract both cough audios recordings from each participant in a single directory.\n'''\n\nnegative_path = labeled_path + 'Negative_audios/'\nif not os.path.exists(negative_path):\n os.makedirs(negative_path)\n\nfor i in os.listdir(neg_path):\n if i != '.DS_Store':\n\n participant_path = neg_path + i\n\n if ('cough-heavy.wav' in os.listdir(participant_path)):\n old_path = participant_path + '/cough-heavy.wav'\n new_path = negative_path + i + '_cough-heavy.wav'\n shutil.copy(old_path, new_path)\n\n if ('cough-shallow.wav' in os.listdir(participant_path)):\n old_path = participant_path + '/cough-shallow.wav'\n new_path = negative_path + i + '_cough-shallow.wav'\n shutil.copy(old_path, new_path)",
"_____no_output_____"
],
[
"'''\nWe can extract both cough audios recordings from each participant in a single directory.\n'''\n\nasymp_path = labeled_path + 'Asymp_audios/'\nif not os.path.exists(asymp_path):\n os.makedirs(asymp_path)\n\nfor i in os.listdir(pos_asymp_path):\n\n participant_path = pos_asymp_path + i\n\n if ('cough-heavy.wav' in os.listdir(participant_path)):\n old_path = participant_path + '/cough-heavy.wav'\n new_path = asymp_path + i + '_cough-heavy.wav'\n shutil.copy(old_path, new_path)\n\n if ('cough-shallow.wav' in os.listdir(participant_path)):\n old_path = participant_path + '/cough-shallow.wav'\n new_path = asymp_path + i + '_cough-shallow.wav'\n shutil.copy(old_path, new_path)",
"_____no_output_____"
],
[
"labeled_path = './Cough_dataset/Labeled_audio/cough/'\n# move folders to new dir",
"_____no_output_____"
],
[
"train_path = labeled_path + 'TRAIN/'\ntest_path = labeled_path + 'TEST/'\n\nif not os.path.exists(train_path):\n os.makedirs(train_path)\n os.makedirs(train_path + 'covid/')\n os.makedirs(train_path + 'no_covid/')\n #os.makedirs(train_path + 'asymp/')\n\n\nif not os.path.exists(test_path):\n os.makedirs(test_path)\n os.makedirs(test_path + 'covid/')\n os.makedirs(test_path + 'no_covid/')\n #os.makedirs(test_path + 'asymp/')\n \n# The partition of the data is defined as 70%\nlen(os.listdir(labeled_path + 'Positives_audios'))\ncovid_path = labeled_path + 'Positives_audios/'\n\nlen(os.listdir(labeled_path + 'Negative_audios'))\nnocovid_path = labeled_path + 'Negative_audios/'\n\n#len(os.listdir(labeled_path + 'Asymp_audios'))\n#asymp_path = labeled_path + 'Asymp_audios/'\n\nmax_len = int(len(os.listdir(covid_path))*0.7)\nfor i in os.listdir(covid_path):\n\n len_train = len(os.listdir(train_path + 'covid/'))\n old_path = covid_path + i\n\n if (len_train >= max_len):\n new_path = test_path + 'covid/' + i\n shutil.move(old_path, new_path)\n else:\n new_path = train_path + 'covid/' + i\n shutil.move(old_path, new_path)\n\n if len(os.listdir(covid_path)) == 0:\n os.rmdir(covid_path)\n \nmax_len = int(len(os.listdir(nocovid_path))*0.7)\nfor i in os.listdir(nocovid_path):\n\n len_train = len(os.listdir(train_path + 'no_covid/'))\n old_path = nocovid_path + i\n\n if (len_train >= max_len):\n new_path = test_path + 'no_covid/' + i\n shutil.move(old_path, new_path)\n else:\n new_path = train_path + 'no_covid/' + i\n shutil.move(old_path, new_path)\n\n if len(os.listdir(nocovid_path)) == 0:\n os.rmdir(nocovid_path)\n\n'''\nmax_len = int(len(os.listdir(asymp_path))*0.7)\nfor i in os.listdir(asymp_path):\n\n len_train = len(os.listdir(train_path + 'asymp/'))\n old_path = asymp_path + i\n\n if (len_train >= max_len):\n new_path = test_path + 'asymp/' + i\n shutil.move(old_path, new_path)\n else:\n new_path = train_path + 'asymp/' + i\n shutil.move(old_path, new_path)\n\n if len(os.listdir(asymp_path)) == 0:\n os.rmdir(asymp_path)\n'''",
"_____no_output_____"
],
[
"#anxufa totes les noves carpetes a una carpeta que se digue cough",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9357093219a336ef94a3b6ff162c89f1412db6 | 11,674 | ipynb | Jupyter Notebook | docs/examples/custom/README.ipynb | axsaucedo/MLServer | c11b1beda321b7070b510a0d47ec3fa6cfd78e27 | [
"ECL-2.0",
"Apache-2.0",
"BSD-2-Clause",
"BSD-3-Clause"
] | 191 | 2020-06-16T18:07:27.000Z | 2022-03-29T05:56:23.000Z | docs/examples/custom/README.ipynb | axsaucedo/MLServer | c11b1beda321b7070b510a0d47ec3fa6cfd78e27 | [
"ECL-2.0",
"Apache-2.0",
"BSD-2-Clause",
"BSD-3-Clause"
] | 347 | 2020-08-21T02:22:04.000Z | 2022-03-31T12:27:26.000Z | docs/examples/custom/README.ipynb | axsaucedo/MLServer | c11b1beda321b7070b510a0d47ec3fa6cfd78e27 | [
"ECL-2.0",
"Apache-2.0",
"BSD-2-Clause",
"BSD-3-Clause"
] | 56 | 2020-06-22T14:29:25.000Z | 2022-03-25T21:58:48.000Z | 32.518106 | 225 | 0.553195 | [
[
[
"# Serving a custom model\n\nThe `mlserver` package comes with inference runtime implementations for `scikit-learn` and `xgboost` models.\nHowever, some times we may also need to roll out our own inference server, with custom logic to perform inference.\nTo support this scenario, MLServer makes it really easy to create your own extensions, which can then be containerised and deployed in a production environment.",
"_____no_output_____"
],
[
"## Overview\n\nIn this example, we will train a [`numpyro` model](http://num.pyro.ai/en/stable/). \nThe `numpyro` library streamlines the implementation of probabilistic models, abstracting away advanced inference and training algorithms.\n\nOut of the box, `mlserver` doesn't provide an inference runtime for `numpyro`.\nHowever, through this example we will see how easy is to develop our own.",
"_____no_output_____"
],
[
"## Training\n\nThe first step will be to train our model.\nThis will be a very simple bayesian regression model, based on an example provided in the [`numpyro` docs](https://nbviewer.jupyter.org/github/pyro-ppl/numpyro/blob/master/notebooks/source/bayesian_regression.ipynb).\n\nSince this is a probabilistic model, during training we will compute an approximation to the posterior distribution of our model using MCMC.",
"_____no_output_____"
]
],
[
[
"# Original source code and more details can be found in:\n# https://nbviewer.jupyter.org/github/pyro-ppl/numpyro/blob/master/notebooks/source/bayesian_regression.ipynb\n\n\nimport numpyro\nimport numpy as np\nimport pandas as pd\n\nfrom numpyro import distributions as dist\nfrom jax import random\nfrom numpyro.infer import MCMC, NUTS\n\nDATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'\ndset = pd.read_csv(DATASET_URL, sep=';')\n\nstandardize = lambda x: (x - x.mean()) / x.std()\n\ndset['AgeScaled'] = dset.MedianAgeMarriage.pipe(standardize)\ndset['MarriageScaled'] = dset.Marriage.pipe(standardize)\ndset['DivorceScaled'] = dset.Divorce.pipe(standardize)\n\ndef model(marriage=None, age=None, divorce=None):\n a = numpyro.sample('a', dist.Normal(0., 0.2))\n M, A = 0., 0.\n if marriage is not None:\n bM = numpyro.sample('bM', dist.Normal(0., 0.5))\n M = bM * marriage\n if age is not None:\n bA = numpyro.sample('bA', dist.Normal(0., 0.5))\n A = bA * age\n sigma = numpyro.sample('sigma', dist.Exponential(1.))\n mu = a + M + A\n numpyro.sample('obs', dist.Normal(mu, sigma), obs=divorce)\n\n# Start from this source of randomness. We will split keys for subsequent operations.\nrng_key = random.PRNGKey(0)\nrng_key, rng_key_ = random.split(rng_key)\n\nnum_warmup, num_samples = 1000, 2000\n\n# Run NUTS.\nkernel = NUTS(model)\nmcmc = MCMC(kernel, num_warmup, num_samples)\nmcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)\nmcmc.print_summary()",
"_____no_output_____"
]
],
[
[
"### Saving our trained model\n\nNow that we have _trained_ our model, the next step will be to save it so that it can be loaded afterwards at serving-time.\nNote that, since this is a probabilistic model, we will only need to save the traces that approximate the posterior distribution over latent parameters.\n\nThis will get saved in a `numpyro-divorce.json` file.",
"_____no_output_____"
]
],
[
[
"import json\n\nsamples = mcmc.get_samples()\nserialisable = {}\nfor k, v in samples.items():\n serialisable[k] = np.asarray(v).tolist()\n \nmodel_file_name = \"numpyro-divorce.json\"\nwith open(model_file_name, 'w') as model_file:\n json.dump(serialisable, model_file)",
"_____no_output_____"
]
],
[
[
"## Serving\n\nThe next step will be to serve our model using `mlserver`. \nFor that, we will first implement an extension which serve as the _runtime_ to perform inference using our custom `numpyro` model.",
"_____no_output_____"
],
[
"### Custom inference runtime\n\nOur custom inference wrapper should be responsible of:\n\n- Loading the model from the set samples we saved previously.\n- Running inference using our model structure, and the posterior approximated from the samples.\n",
"_____no_output_____"
]
],
[
[
"%%writefile models.py\nimport json\nimport numpyro\nimport numpy as np\n\nfrom typing import Dict\nfrom jax import random\nfrom mlserver import MLModel, types\nfrom mlserver.utils import get_model_uri\nfrom numpyro.infer import Predictive\nfrom numpyro import distributions as dist\n\n\nclass NumpyroModel(MLModel):\n async def load(self) -> bool:\n model_uri = await get_model_uri(self._settings)\n with open(model_uri) as model_file:\n raw_samples = json.load(model_file)\n\n self._samples = {}\n for k, v in raw_samples.items():\n self._samples[k] = np.array(v)\n\n self._predictive = Predictive(self._model, self._samples)\n\n self.ready = True\n return self.ready\n\n async def predict(self, payload: types.InferenceRequest) -> types.InferenceResponse:\n inputs = self._extract_inputs(payload)\n predictions = self._predictive(rng_key=random.PRNGKey(0), **inputs)\n\n obs = predictions[\"obs\"]\n obs_mean = obs.mean()\n\n return types.InferenceResponse(\n id=payload.id,\n model_name=self.name,\n model_version=self.version,\n outputs=[\n types.ResponseOutput(\n name=\"obs_mean\",\n shape=obs_mean.shape,\n datatype=\"FP32\",\n data=np.asarray(obs_mean).tolist(),\n )\n ],\n )\n\n def _extract_inputs(self, payload: types.InferenceRequest) -> Dict[str, np.ndarray]:\n inputs = {}\n for inp in payload.inputs:\n inputs[inp.name] = np.array(inp.data)\n\n return inputs\n\n def _model(self, marriage=None, age=None, divorce=None):\n a = numpyro.sample(\"a\", dist.Normal(0.0, 0.2))\n M, A = 0.0, 0.0\n if marriage is not None:\n bM = numpyro.sample(\"bM\", dist.Normal(0.0, 0.5))\n M = bM * marriage\n if age is not None:\n bA = numpyro.sample(\"bA\", dist.Normal(0.0, 0.5))\n A = bA * age\n sigma = numpyro.sample(\"sigma\", dist.Exponential(1.0))\n mu = a + M + A\n numpyro.sample(\"obs\", dist.Normal(mu, sigma), obs=divorce)",
"_____no_output_____"
]
],
[
[
"### Settings files\n\nThe next step will be to create 2 configuration files: \n\n- `settings.json`: holds the configuration of our server (e.g. ports, log level, etc.).\n- `model-settings.json`: holds the configuration of our model (e.g. input type, runtime to use, etc.).",
"_____no_output_____"
],
[
"#### `settings.json`",
"_____no_output_____"
]
],
[
[
"%%writefile settings.json\n{\n \"debug\": \"true\"\n}",
"_____no_output_____"
]
],
[
[
"#### `model-settings.json`",
"_____no_output_____"
]
],
[
[
"%%writefile model-settings.json\n{\n \"name\": \"numpyro-divorce\",\n \"implementation\": \"models.NumpyroModel\",\n \"parameters\": {\n \"uri\": \"./numpyro-divorce.json\",\n \"version\": \"v0.1.0\",\n }\n}",
"_____no_output_____"
]
],
[
[
"### Start serving our model\n\nNow that we have our config in-place, we can start the server by running `mlserver start .`. This needs to either be ran from the same directory where our config files are or pointing to the folder where they are.\n\n```shell\nmlserver start .\n```\n\nSince this command will start the server and block the terminal, waiting for requests, this will need to be ran in the background on a separate terminal.",
"_____no_output_____"
],
[
"### Send test inference request\n\n\nWe now have our model being served by `mlserver`.\nTo make sure that everything is working as expected, let's send a request from our test set.\n\nFor that, we can use the Python types that `mlserver` provides out of box, or we can build our request manually.",
"_____no_output_____"
]
],
[
[
"import requests\n\nx_0 = [28.0]\ninference_request = {\n \"inputs\": [\n {\n \"name\": \"marriage\",\n \"shape\": [1],\n \"datatype\": \"FP32\",\n \"data\": x_0\n }\n ]\n}\n\nendpoint = \"http://localhost:8080/v2/models/numpyro-divorce/versions/v0.1.0/infer\"\nresponse = requests.post(endpoint, json=inference_request)\n\nresponse.json()",
"_____no_output_____"
]
]
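The request above is built as a raw dict; the text also mentions the Python types that `mlserver` ships. A rough sketch of that alternative - the class names `InferenceRequest` and `RequestInput` from `mlserver.types` are assumed here, mirroring the fields used in `models.py`:

```python
# Sketch only: build the same payload with mlserver's request types instead of a raw dict.
from mlserver.types import InferenceRequest, RequestInput

inference_request = InferenceRequest(
    inputs=[RequestInput(name="marriage", shape=[1], datatype="FP32", data=[28.0])]
)

# The request object serialises to the same JSON structure used in the requests.post() call above.
print(inference_request.dict())
```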
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
ec935e822b5e7a8c9ad98d5467aecc246bfc89a5 | 345,868 | ipynb | Jupyter Notebook | notebooks/ob18/c1/julia.ipynb | jpfairbanks/epicookbook | 2879766110fcd683a420df6612d179e718cc4aa7 | [
"MIT"
] | 1 | 2019-01-11T14:43:53.000Z | 2019-01-11T14:43:53.000Z | notebooks/ob18/c1/julia.ipynb | jpfairbanks/epicookbook | 2879766110fcd683a420df6612d179e718cc4aa7 | [
"MIT"
] | null | null | null | notebooks/ob18/c1/julia.ipynb | jpfairbanks/epicookbook | 2879766110fcd683a420df6612d179e718cc4aa7 | [
"MIT"
] | 2 | 2019-01-11T14:47:30.000Z | 2019-01-11T14:48:22.000Z | 121.229583 | 5,331 | 0.577397 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ec937cc2b501af8c53ea28e1677aa2398a14038d | 44,137 | ipynb | Jupyter Notebook | docs/reprocess.ipynb | Darkonia/pydsge | 90d42edf53ef029d2476b5fb86b236554a1f6208 | [
"MIT"
] | 2 | 2020-08-10T20:59:25.000Z | 2021-04-08T18:05:33.000Z | docs/reprocess.ipynb | Darkonia/pydsge | 90d42edf53ef029d2476b5fb86b236554a1f6208 | [
"MIT"
] | 3 | 2020-06-20T01:25:22.000Z | 2021-01-17T18:46:00.000Z | docs/reprocess.ipynb | Darkonia/pydsge | 90d42edf53ef029d2476b5fb86b236554a1f6208 | [
"MIT"
] | 2 | 2020-08-10T20:59:38.000Z | 2020-12-24T01:47:38.000Z | 164.078067 | 36,252 | 0.89016 | [
[
[
"# Processing Estimation Results\n\nThis section shows how to obtain and process estimation results.",
"_____no_output_____"
]
],
[
[
"# only necessary if you run this in a jupyter notebook\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\n# import the base class:\nfrom pydsge import * \n# import all the useful stuff from grgrlib:\nfrom grgrlib import *",
"_____no_output_____"
]
],
[
[
"### Loading and printing stats\n\nThe meta data on estimation results is stored in the numpy-fileformat ``*.npz`` and is by default suffixed by the tag ``_meta``. An example is uploaded with the package. Lets load it...",
"_____no_output_____"
]
],
[
[
"print(meta_data)\nmod = DSGE.load(meta_data)",
"/home/gboehl/repos/pydsge/pydsge/examples/dfi_doc0_meta.npz\n"
]
],
[
[
"As before, the `mod` object now collects all information and methods for the estimated model. That means you can do all the stuff that you could to before, like running `irfs` or the filter. It also stores some info on the estimation:",
"_____no_output_____"
]
],
[
[
"info = mod.info()",
"Title: dfi_doc0\nDate: 2020-10-23 13:18:26.014793\nDescription: dfi, crisis sample\nParameters: 11\nChains: 40\nLast 200 of 705 samples\n\n"
]
],
[
[
"The ``mod`` object provides access to the estimation stats:",
"_____no_output_____"
]
],
[
[
"summary = mod.mcmc_summary()",
" distribution pst_mean sd/df mean sd mode hpd_5 hpd_95 \\\ntheta beta 0.500 0.100 0.784 0.027 0.808 0.740 0.827 \nsigma normal 1.500 0.375 2.174 0.248 2.213 1.728 2.549 \nphi_pi normal 1.500 0.250 2.183 0.137 2.050 1.963 2.401 \nphi_y normal 0.125 0.050 0.108 0.011 0.107 0.090 0.126 \nrho_u beta 0.500 0.200 0.953 0.006 0.954 0.943 0.961 \nrho_r beta 0.500 0.200 0.452 0.067 0.465 0.352 0.568 \nrho_z beta 0.500 0.200 0.995 0.002 0.996 0.992 0.999 \nrho beta 0.750 0.100 0.803 0.023 0.812 0.767 0.842 \nsig_u inv_gamma_dynare 0.100 2.000 0.152 0.018 0.151 0.123 0.180 \nsig_r inv_gamma_dynare 0.100 2.000 0.099 0.009 0.089 0.085 0.114 \nsig_z inv_gamma_dynare 0.100 2.000 0.121 0.031 0.091 0.070 0.168 \n\n mc_error \ntheta 0.001 \nsigma 0.008 \nphi_pi 0.004 \nphi_y 0.000 \nrho_u 0.000 \nrho_r 0.002 \nrho_z 0.000 \nrho 0.001 \nsig_u 0.001 \nsig_r 0.000 \nsig_z 0.001 \nMarginal data density: -30.5771\nMean acceptance fraction: 0.245\n"
]
],
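Since `mod.mcmc_summary()` returns a plain `pandas.DataFrame`, the table can be exported directly; a minimal sketch (the file name is arbitrary):

```python
# The summary is a regular DataFrame, so the usual pandas exports apply.
print(summary.round(3).to_latex())       # LaTeX table, e.g. for a paper appendix
summary.to_csv('dfi_doc0_summary.csv')   # or keep a copy on disk
```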
[
[
"The ``summary`` is a `pandas.DataFrame` object, so you can do fancy things with it like ``summary.to_latex()``. Give it a try.\n\n## Posterior sampling\n\nOne major interest is of course to be able to sample from the posterior. Get a sample of 100 draws:",
"_____no_output_____"
]
],
[
[
"pars = mod.get_par('posterior', nsamples=100, full=True)",
"_____no_output_____"
]
],
[
[
"Now, essentially everything is quite similar than before, when we sampled from the prior distribution. Let us run a batch of impulse response functions with these",
"_____no_output_____"
]
],
[
[
"ir0 = mod.irfs(('e_r',1,0), pars)\n\n# plot them:\nv = ['y','Pi','r','x']\nfig, ax, _ = pplot(ir0[0][...,mod.vix(v)], labels=v)",
"_____no_output_____"
]
],
[
[
"Note that you can also alter the parameters from the posterior. Lets assume you want to see what would happen if sigma is always one. Then you could create a parameter set like:",
"_____no_output_____"
]
],
[
[
"pars_sig1 = [mod.set_par('sigma',1,p) for p in pars]",
"_____no_output_____"
]
],
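As a quick illustration of what such a counterfactual can be used for, the impulse responses from above can be re-run with the altered draws - a sketch that reuses the shock and plotting code from the earlier cell:

```python
# Same monetary policy shock as before, but under the counterfactual draws with sigma fixed at 1.
ir_sig1 = mod.irfs(('e_r', 1, 0), pars_sig1)

v = ['y', 'Pi', 'r', 'x']
fig, ax, _ = pplot(ir_sig1[0][..., mod.vix(v)], labels=v)
```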
[
[
"This is in particular interesting if you e.g. want to study the effects of structural monetary policy. We can also extract the smoothened shocks to do some more interesting exercises. But before that, we have to load the filter used during the estimation:",
"_____no_output_____"
]
],
[
[
"# load filter:\nmod.load_estim()\n# extract shocks:\nepsd = mod.extract(pars, nsamples=1, bound_sigma=4)",
"[estimation:] Model operational. 12 states, 3 observables, 81 data points.\n[extract:] Extraction requires filter in non-reduced form. Recreating filter instance.\n"
]
],
[
[
"``epsd`` is a dictionary containing the smoothed means, smoothened observables, the respective shocks and the parameters used for that, as explained in the previous section. Now that we have the shocks, we can again do a historical decomposition or run counterfactual experiments. The `bound_sigma` parameter adjusts the range in which the CMAES algoritm searches for a the set of shocks to fit the time series (in terms of shock standard deviations). A good model combined with a data set without strong irregularities (such as financial crises) should not need a high `bound_sigma`. The default value is to search within 4 shock standard deviations.",
"_____no_output_____"
]
]
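A quick way to see what the extraction returned, assuming the dictionary-like structure described above (the exact keys depend on the pydsge version):

```python
# Peek at the extracted object: key names and array shapes.
for key, value in epsd.items():
    print(key, getattr(value, 'shape', type(value)))
```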
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ec93925b5d689fbe5c95f3a3bcecd1bcb4a6da21 | 21,309 | ipynb | Jupyter Notebook | nbs/012_callback.gblend.ipynb | sjdlloyd/tsai | 98d9c02b8429708819d373b475deb9e99f0ab7df | [
"Apache-2.0"
] | null | null | null | nbs/012_callback.gblend.ipynb | sjdlloyd/tsai | 98d9c02b8429708819d373b475deb9e99f0ab7df | [
"Apache-2.0"
] | null | null | null | nbs/012_callback.gblend.ipynb | sjdlloyd/tsai | 98d9c02b8429708819d373b475deb9e99f0ab7df | [
"Apache-2.0"
] | null | null | null | 61.765217 | 2,094 | 0.584166 | [
[
[
"# default_exp callback.gblend",
"_____no_output_____"
]
],
[
[
"# Gradient Blending\n\n> Callback used to apply gradient blending to multi-modal models.",
"_____no_output_____"
],
[
"This is an unofficial PyTorch implementation by Ignacio Oguiza ([email protected]) based on: Wang, W., Tran, D., & Feiszli, M. (2020). **What Makes Training Multi-Modal Classification Networks Hard?**. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12695-12705).",
"_____no_output_____"
]
],
[
[
"#export\nfrom fastai.callback.all import *\nfrom tsai.imports import *\nfrom tsai.utils import *\nfrom tsai.data.preprocessing import *\nfrom tsai.data.transforms import *\nfrom tsai.models.layers import *",
"_____no_output_____"
],
[
"#export\nclass GBlendLoss(Module):\n \"Wrapper loss used by the gradient blending callback to allow weights applied to each modality.\"\n\n def __init__(self, crit=None, w=None):\n self.crit = ifnone(crit, CrossEntropyLossFlat())\n self.w = w\n \n def forward(self, preds, target):\n # unweighted loss\n if not is_listy(preds): return self.crit(preds, target)\n \n # weighted loss\n if self.w is None: self.w = tensor([1.] * len(preds))\n loss = 0\n for i, pred in enumerate(preds): loss += self.crit(pred, target) * self.w[i]\n return loss / sum(self.w)",
"_____no_output_____"
],
[
"# export\nclass GBlend(Callback):\n r\"\"\"A callback to implement multi-modal gradient blending.\n \n This is an unofficial PyTorch implementation by Ignacio Oguiza of - [email protected] based on: Wang, W., Tran, D., & Feiszli, M. (2020). \n What Makes Training Multi-Modal Classification Networks Hard?. In Proceedings of the IEEE/CVF Conference on Computer Vision and \n Pattern Recognition (pp. 12695-12705).\n \"\"\"\n\n def __init__(self, V_pct=.1, n:Union[None, int, tuple, list]=(10, 5), sel_metric:Optional[str]=None, show_plot:bool=False, path:str='./data/gblend'): \n \n r\"\"\"\n Args:\n V_pct : subset of train where OGR will be measured (to estimate L*)\n n : None: offline learning, int: super-epoch (online learning), tuple: (warmup super-epoch, super-epoch)(online learning with warm up)\n sel_metric : which metric will be used to calculate overfitting and generalization during training. If None, loss will be used.\n show_plot : will show a plot with the wieghts at the end of training\n \"\"\"\n assert V_pct < 1, 'V_pct must be < 1'\n self.V_pct, self.n, self.sel_metric, self.show_plot = V_pct, n, sel_metric, show_plot\n self.metric_idx = None\n self.path = Path(path)\n if not os.path.exists(self.path): os.makedirs(self.path)\n\n def before_fit(self):\n \n # model\n self.M = self.model.M \n self.old_multi_output = self.learn.model.multi_output\n self.learn.model.multi_output = True\n\n #loss\n if cls_name(self.learn.loss_func) != 'GBlendLoss': self.learn.loss_func = GBlendLoss(crit=self.learn.loss_func)\n\n # calculate super_epochs\n if self.n is None: \n self.super_epochs = [0]\n else: \n if is_listy(self.n): \n self.wu_n = self.n[0]\n self.n = self.n[1]\n else: \n self.wu_n = self.n\n rng = range(int(max(0, self.n_epoch - self.wu_n) / self.n + 1))\n self.super_epochs = []\n for i in rng: \n self.super_epochs.append((i * self.wu_n) if i <= 1 else int((i + self.wu_n / self.n - 1) * self.n))\n self.super_epochs.append(self.n_epoch)\n \n # create T'(Tp) and V dataloaders\n n_out = len(self.learn.dls.train.dataset.ptls) - self.learn.dls.train.dataset.n_inp\n train_targets = self.learn.dls.train.dataset.ptls[-n_out]\n Tp_idx, V_idx = get_splits(train_targets, valid_size=self.V_pct)\n _Tp_train_dls = []\n _V_train_dls = []\n self.learn.new_dls = []\n for dl in self.learn.dls[0].loaders: # train MixedDataLoaders\n _Tp_dl = get_subset_dl(dl, Tp_idx)\n _V_dl = get_subset_dl(dl, V_idx)\n _Tp_train_dls.append(_Tp_dl)\n _V_train_dls.append(_V_dl) \n self.learn.new_dls.append(DataLoaders(_Tp_dl, _V_dl, device=self.learn.dls.device))\n self.learn.new_dls.append(MixedDataLoaders(MixedDataLoader(*_Tp_train_dls, shuffle=True), # train - train\n MixedDataLoader(*_V_train_dls, shuffle=False), # train - valid\n device=self.learn.dls.device))\n \n # prepare containers\n self.learn.LT = []\n self.learn.LV = []\n\n def before_train(self):\n if self.epoch in self.super_epochs[:-1] and not 'LRFinder' in [cls_name(cb) for cb in self.learn.cbs]: \n self.train_epochs = np.diff(self.super_epochs)[self.super_epochs.index(self.epoch)]\n \n #compute weights\n self.learn.save('gblend_learner')\n torch.save(self.learn.model, self.path/'gblend_model')\n w = self.compute_weights()\n if self.epoch == 0: self.learn.ws = [w]\n else: self.learn.ws.append(w)\n self.learn = self.learn.load('gblend_learner')\n self.learn.loss_func.w = w\n\n def compute_weights(self):\n\n # _LT0 = []\n # _LV0 = []\n _LT = []\n _LV = []\n for i in range(self.M + 1): \n model = torch.load(self.path/'gblend_model')\n learn = 
Learner(self.learn.new_dls[i], model.m[i], loss_func=GBlendLoss(), \n opt_func=self.learn.opt_func, metrics=self.learn.metrics)\n learn.model.multi_output = False\n learn.remove_cbs(learn.cbs[1])\n learn.add_cb(Recorder(train_metrics=True))\n with learn.no_bar():\n with learn.no_logging(): \n learn.fit_one_cycle(self.train_epochs, pct_start=0)\n if self.metric_idx is None and self.sel_metric is not None:\n metric_names = learn.recorder.metric_names[1:-1]\n self.metric_idx = [i for i,m in enumerate(metric_names) if self.sel_metric in m]\n else: self.metric_idx = [0, 1]\n metric_values = learn.recorder.values[-1][self.metric_idx]\n _LT.append(metric_values[0])\n _LV.append(metric_values[1])\n\n # if self.epoch == 0: self.compute_previous_metrics()\n self.compute_previous_metrics()\n self.learn.LT.append(_LT)\n self.learn.LV.append(_LV)\n\n LT1 = array(self.learn.LT[-2])\n LT2 = array(self.learn.LT[-1])\n LV1 = array(self.learn.LV[-2])\n LV2 = array(self.learn.LV[-1])\n\n ΔG = (LV1 - LV2) if self.metric_idx[0] == 0 else (LV2 - LV1)\n O1 = (LV1 - LT1) if self.metric_idx[0] == 0 else (LT1 - LV1)\n O2 = (LV2 - LT2) if self.metric_idx[0] == 0 else (LT2 - LV2)\n\n ΔG = np.maximum(0, ΔG)\n\n ΔO = O2 - O1\n ΔO2 = np.maximum(1e-8, (O2 - O1)**2)\n w = np.maximum(1e-8, np.nan_to_num(ΔG / ΔO2))\n w = w / w.sum()\n w = w.tolist()\n return w\n\n def compute_previous_metrics(self):\n if self.metric_idx[0] == 0: metric = self.loss_func\n else: metric = self.learn.metrics[(min(array(self.metric_idx) - 2) - 1) // 2]\n _LT = []\n _LV = []\n with torch.no_grad():\n for i in range(self.M + 1):\n model = torch.load(self.path/'gblend_model')\n model.multi_output = False\n model = model.m[i]\n _train_metrics = []\n _valid_metrics = []\n for j,dl in enumerate(self.learn.new_dls[i]):\n it = iter(dl)\n _preds = []\n _targets = []\n for b in it: \n _preds.extend(model(*b[:-1]))\n _targets.extend(b[-1])\n _preds, _targets = stack(_preds), stack(_targets)\n try: _metric_values = metric(_preds, _targets).cpu().item()\n except: _metric_values = metric(torch.argmax(_preds, 1), _targets).cpu().item()\n if j == 0: _LT.append(_metric_values)\n else: _LV.append(_metric_values)\n self.learn.LT.append(_LT)\n self.learn.LV.append(_LV)\n\n def after_fit(self):\n if hasattr(self.learn, \"ws\") and self.show_plot:\n widths = np.diff(self.super_epochs)\n cum_ws = 0\n for i in range(self.M + 1):\n plt.bar(self.super_epochs[:-1] + widths/2, stack(self.learn.ws)[:, i], bottom=cum_ws, width=widths, \n label=f'k={i+1}' if i < self.M else f'fused')\n cum_ws += stack(self.learn.ws)[:, i]\n plt.xlim(0, self.super_epochs[-1])\n plt.ylim(0, 1)\n plt.xticks(self.super_epochs)\n plt.legend(loc='best')\n plt.title('Online G-Blend Weights by modality')\n plt.show()\n\n self.learn.model.multi_output = self.old_multi_output",
"_____no_output_____"
],
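A toy numeric illustration of the weighting rule implemented in `compute_weights` above: per-modality train/validation losses at two checkpoints are turned into weights via the ratio of generalization gain to squared change in overfitting, clipped and normalised (the loss values below are made up):

```python
import numpy as np

# Made-up train (LT) and validation (LV) losses for three heads at two consecutive checkpoints.
LT1, LT2 = np.array([0.9, 1.1, 0.8]), np.array([0.5, 0.9, 0.4])
LV1, LV2 = np.array([1.0, 1.2, 0.9]), np.array([0.8, 1.15, 0.6])

dG = np.maximum(0, LV1 - LV2)              # generalization gain: validation loss went down
O1, O2 = LV1 - LT1, LV2 - LT2              # overfitting measure at each checkpoint
dO2 = np.maximum(1e-8, (O2 - O1) ** 2)     # squared change in overfitting
w = np.maximum(1e-8, np.nan_to_num(dG / dO2))
w = w / w.sum()
print(w)                                   # relative weight per head, as used by GBlendLoss
```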
[
"from fastai.data.transforms import *\nfrom tsai.data.all import *\nfrom tsai.models.utils import *\nfrom tsai.models.XCM import *\nfrom tsai.models.TabModel import *\nfrom tsai.models.MultiInputNet import *",
"_____no_output_____"
],
[
"dsid = 'NATOPS'\nX, y, splits = get_UCR_data(dsid, split_data=False)\nts_features_df = get_ts_features(X, y)",
"Feature Extraction: 100%|██████████| 40/40 [00:05<00:00, 7.22it/s]\n"
],
[
"# raw ts\ntfms = [None, [Categorize()]]\nbatch_tfms = TSStandardize()\nts_dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms)\nts_model = build_ts_model(XCM, dls=ts_dls, window_perc=.5)\n\n# ts features\ncat_names = None\ncont_names = ts_features_df.columns[:-2]\ny_names = 'target'\ntab_dls = get_tabular_dls(ts_features_df, cat_names=cat_names, cont_names=cont_names, y_names=y_names, splits=splits)\ntab_model = build_tabular_model(TabModel, dls=tab_dls)\n\n# mixed\nmixed_dls = get_mixed_dls(ts_dls, tab_dls)\nMultiModalNet = MultiInputNet(ts_model, tab_model, c_out=mixed_dls.c)\ngblend = GBlend(V_pct=.5, n=(10, 5), sel_metric=None)\nlearn = Learner(mixed_dls, MultiModalNet, metrics=[accuracy, RocAuc()], cbs=gblend)\nlearn.fit_one_cycle(1, 1e-3)",
"_____no_output_____"
],
[
"#hide\nout = create_scripts(); beep(out)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9396dd2c317a8ef3de80a00fc36ad0aba2e0fb | 22,194 | ipynb | Jupyter Notebook | Notebook/Text & Semantic Analysis.ipynb | SidharthRai/Semantics-Analysis | e16f137ffe592ac9b41c6d8cabe1ee73223ff42f | [
"MIT"
] | null | null | null | Notebook/Text & Semantic Analysis.ipynb | SidharthRai/Semantics-Analysis | e16f137ffe592ac9b41c6d8cabe1ee73223ff42f | [
"MIT"
] | null | null | null | Notebook/Text & Semantic Analysis.ipynb | SidharthRai/Semantics-Analysis | e16f137ffe592ac9b41c6d8cabe1ee73223ff42f | [
"MIT"
] | null | null | null | 46.045643 | 2,834 | 0.568262 | [
[
[
"#!pip install Algorithmia\n#!pip install --upgrade aylien-apiclient",
"_____no_output_____"
]
],
[
[
"Different APIs for text analytics and SEMANTIC ANALYSIS using machine learning were tried including :\n\nAlgorithmia - Many text analytics, NLP and entity extraction algorithms are available as part of their cloud based offering Algorithmia algorithms tried out include:\n\nPart of speech tagging using OpenNLP: http://opennlp.apache.org/ The Part of Speech Tagger marks tokens with their corresponding word type based on the token itself and the context of the token. A token might have multiple pos tags depending on the token and the context. The OpenNLP POS Tagger uses a probability model to predict the correct pos tag out of the tag set. To limit the possible tags for a token a tag dictionary can be used which increases the tagging and runtime performance of the tagger. Parts are tagged according to the conventions of the Penn Treebank Project (https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html). For example, a plural noun is denoted NNS, a singular or mass noun is NN, and a determiner (such as a/an, every, no, the,another, any, some, each, etc.) as DT.\nTokenizer: https://algorithmia.com/algorithms/ApacheOpenNLP/TokenizeBySentence\nAuto tagging of text: Algorithm uses a variant of nlp/LDA to extract tags / keywords - https://algorithmia.com/algorithms/nlp/AutoTag\nAylien - Classification by Taxonomy: https://developer.aylien.com/\n\nUse LDA to Classify Text Documents - LDA is an algorithm that can be used to generate topics to understand a document’s general theme: http://blog.algorithmia.com/lda-algorithm-classify-text-documents/\n\nMonkeyLearn: Taxonomy Classifier: https://app.monkeylearn.com/main/classifiers/cl_b7qAkDMz/tab/tree-sandbox/\n\nOutput - Python Dictionary data structure inside Algorithmia\n\nTesseract OCR in Algorithmia: https://algorithmia.com/algorithms/tesseractocr/OCR\n\nCreate PDF using ReportLab PLUS: https://www.reportlab.com/reportlabplus/",
"_____no_output_____"
]
],
[
[
"#Text Analysis or Natural Language Processing (NLP) - Algorithmia API\nimport Algorithmia\nclient = Algorithmia.client('sim3x6PzEv6m2icRR+23rqTTcOo1')\n\n#from Algorithmia.acl import ReadAcl, AclType\n#Next create a data collection called nlp_directory:\n\nimport os\nos.listdir(\".\")\n\n# Set your Data URI -- a jpg \ninput = {\"src\":\"data://shamitb/ocr/ai.jpg\"}\n\n#setting up a client object \nclient_algo = Algorithmia.client('sim3x6PzEv6m2icRR+23rqTTcOo1')\n\n#passing the algo name for OCR detection\nalgo = client_algo.algo('tesseractocr/OCR/0.1.0')\n\n#applying the algorithm\nresponse = algo.pipe(input).result\n\n#input object after being processsed by algorithm is produced\nprint(response['result'])\n\n\n",
"BETWEEN THE LINES 5v ALEERYGUFFANTI\n\n \n\nLearn to Love Artificial Intelligence\n\nHOW COGNITIVE COMPUTING IS CHANGING THE WAY CPG\nCOMPANIES CONDUCT BUSINESS\n\nThe Oscar-winning film \"Her\" tells the story ofa man who falls in love with the\nartificial intelligence (Al) personal assistant on his smartphone. That may sound\nfar-fetched, but many consumer packaged goods (CPG) companies are already\nenamored. Relying on cognitive computing systems that help harness big data\ninsights provides a competitive edge in a changing business landscape. Really,\nWhat's not to love? We asked Stephen DeAngelis, chief executive officer of En-\nterra Solutions lwww.entenasnlutinns.com), to break down the appeal.\n\nWhat is cognitive computing and how can it further data initiatives?\n\nDEANGELIS: Researchers, hospitals and especially businesses are amassing\nterabytes of data every day that could reveal everything from more effective\ndisease treatments to which products consumers are likely to purchase. To get at\nthat information, big data analytics have become de rigueur, and global spend'\ning on big data is expected to hit $1 18 billion in 2018. But all that data requires a\nnew degree ofanalysis. That's where cognitive computing —computer systems\n\nAnt-amt,“ m thinl/ Ma Inavn i mm“ in annlltArc am “an. at Humanism\"\n\nSTEPHEN\nDEANGELIS\nOED\n\nEnterra Solutions\n\n\"Computers are\nadept at recog'\nnizing patterns\nand making\nconnections.\nSetting one\nloose to sift\nthrough moun-\ntains of data,\n"
],
[
"print('\\n*******************************\\n TAXONOMY : \\n')\n#classification of data takes place here into categories, by the algorithm\n# **************** Aylien - Taxonomy ****************\nfrom aylienapiclient import textapi\nclient_aylien = textapi.Client(\"a19bb245\", \"2623b77754833e2711998a0b0bdad9db\")\n#response object from requested api is made the input for algo.\ntext = response\nclassifications = client_aylien.ClassifyByTaxonomy({\"text\": text, \"taxonomy\": \"iab-qag\"})\nfor category in classifications['categories']:\n print(category['label'])",
"\n*******************************\n TAXONOMY : \n\nTechnology & Computing\nData Centers\n"
],
[
"print('\\n*******************************\\n AUTO TAGS : \\n')\n#tags are being generated from the result of last \n# ************** Algorithmia - Auto - tag *******************\ninput = text['result']\n#tags are created over results from the text read by ocr algo over words which occur the most\nalgo = client.algo('nlp/AutoTag/1.0.0')\nresponse2 = algo.pipe(input)\nfor category in response2.result:\n print(category)\nprint(response2.result)",
"\n*******************************\n AUTO TAGS : \n\nbig\ncognitive\ncomputing\ndata\ndeangelis\nlove\nsolutions\nstephen\n['big', 'cognitive', 'computing', 'data', 'deangelis', 'love', 'solutions', 'stephen']\n"
],
[
"print('\\n*******************************\\n ENTITIES : \\n')\n# **************** Algorithmia - Entities ****************\ntext = response['result']\ntext.encode('ascii', 'ignore')\n#the OCR resutlt is again used to indentify entities like numbers, organizaions, date in the data by classifying them\nalgo = client.algo('StanfordNLP/NamedEntityRecognition/0.2.0')\nentities = algo.pipe(text)\nprint(entities.result)\nentities = entities.result",
"\n*******************************\n ENTITIES : \n\n[[['Oscar-winning', 'MISC']], [], [], [], [['Stephen', 'PERSON'], ['DeAngelis', 'PERSON']], [], [['DEANGELIS', 'PERSON'], ['every', 'SET'], ['day', 'SET']], [['$', 'MONEY'], ['1', 'MONEY'], ['18', 'MONEY'], ['billion', 'MONEY'], ['2018', 'DATE']], [], [['Ma', 'PERSON'], ['Inavn', 'PERSON']], [['STEPHEN', 'PERSON'], ['DEANGELIS', 'PERSON'], ['Enterra', 'ORGANIZATION'], ['Solutions', 'ORGANIZATION']], [['one', 'NUMBER']]]\n"
],
[
"print('\\n*******************************\\n DOCUMENT SIMILARITY : \\n')\n# **************** Algorithmia - TextSimilarity ****************\ninput = {\"files\": [[\"doc1\", \"the document about tigers\"], [\"doc2\", \"the movie about cars\"], [\"doc3\", \"the document about cats\"]]}\nprint(input)\n#document similarity is checked from three other documents doc1, doc2, doc3 compared to files/our data \nalgo = client.algo('PetiteProgrammer/TextSimilarity/0.1.2')\nprint(algo.pipe(input).result)",
"\n*******************************\n DOCUMENT SIMILARITY : \n\n{'files': [['doc1', 'the document about tigers'], ['doc2', 'the movie about cars'], ['doc3', 'the document about cats']]}\n[[0.6463776036916613, 'doc1', 'doc3'], [0.2595456571080647, 'doc2', 'doc3'], [0.1548141369835987, 'doc1', 'doc2']]\n"
],
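The similarity call above compares three toy snippets with each other; the same input shape can also include the OCR'd article itself. A sketch reusing the `text` variable and the same algorithm (the comparison documents here are made up):

```python
# Sketch: include the OCR'd article as one of the documents passed to TextSimilarity.
input = {"files": [["ocr_text", text],
                   ["doc_ai", "an article about artificial intelligence in consumer goods companies"],
                   ["doc_cars", "the movie about cars"]]}
algo = client.algo('PetiteProgrammer/TextSimilarity/0.1.2')
print(algo.pipe(input).result)
```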
[
"print('\\n*******************************\\n SENTENCE PARSING : \\n')\n\"\"\"\nParsing is a traditional grammatical exercise that involves breaking down a text into its component\nparts of speech with an explanation of the form, function, and syntactic relationship of each part.\n\"\"\"\n# **************** Algorithmia - SENTENCE PARSING ****************\ninput = {\n \"src\":\"Algorithmia is the best online platform available for machine learning and text analytics.\",\n \"format\":\"conll\",\n \"language\":\"english\"\n}\n#prepositions, nouns, verbs have been omitted from the text.\nalgo = client.algo('deeplearning/Parsey/1.0.2')\nprint(algo.pipe(input).result)",
"\n*******************************\n SENTENCE PARSING : \n\n{'output': {'sentences': [{'words': [{'dep_relation': 'nsubj', 'extra_deps': [''], 'features': {'Number': 'Sing', 'fPOS': 'PROPN++NNP'}, 'form': 'Algorithmia', 'head': 6, 'index': 1, 'language_pos': 'NNP', 'lemma': '', 'misc': '', 'universal_pos': 'PROPN'}, {'dep_relation': 'cop', 'extra_deps': [''], 'features': {'Mood': 'Ind', 'Number': 'Sing', 'Person': '3', 'Tense': 'Pres', 'VerbForm': 'Fin', 'fPOS': 'VERB++VBZ'}, 'form': 'is', 'head': 6, 'index': 2, 'language_pos': 'VBZ', 'lemma': '', 'misc': '', 'universal_pos': 'VERB'}, {'dep_relation': 'det', 'extra_deps': [''], 'features': {'Definite': 'Def', 'PronType': 'Art', 'fPOS': 'DET++DT'}, 'form': 'the', 'head': 6, 'index': 3, 'language_pos': 'DT', 'lemma': '', 'misc': '', 'universal_pos': 'DET'}, {'dep_relation': 'amod', 'extra_deps': [''], 'features': {'Degree': 'Sup', 'fPOS': 'ADJ++JJS'}, 'form': 'best', 'head': 6, 'index': 4, 'language_pos': 'JJS', 'lemma': '', 'misc': '', 'universal_pos': 'ADJ'}, {'dep_relation': 'amod', 'extra_deps': [''], 'features': {'Degree': 'Pos', 'fPOS': 'ADJ++JJ'}, 'form': 'online', 'head': 6, 'index': 5, 'language_pos': 'JJ', 'lemma': '', 'misc': '', 'universal_pos': 'ADJ'}, {'dep_relation': 'ROOT', 'extra_deps': [''], 'features': {'Number': 'Sing', 'fPOS': 'NOUN++NN'}, 'form': 'platform', 'head': 0, 'index': 6, 'language_pos': 'NN', 'lemma': '', 'misc': '', 'universal_pos': 'NOUN'}, {'dep_relation': 'amod', 'extra_deps': [''], 'features': {'Degree': 'Pos', 'fPOS': 'ADJ++JJ'}, 'form': 'available', 'head': 6, 'index': 7, 'language_pos': 'NN', 'lemma': '', 'misc': '', 'universal_pos': 'NOUN'}, {'dep_relation': 'mark', 'extra_deps': [''], 'features': {'fPOS': 'ADP++IN'}, 'form': 'for', 'head': 10, 'index': 8, 'language_pos': 'IN', 'lemma': '', 'misc': '', 'universal_pos': 'ADP'}, {'dep_relation': 'nsubj', 'extra_deps': [''], 'features': {'Number': 'Sing', 'fPOS': 'NOUN++NN'}, 'form': 'machine', 'head': 10, 'index': 9, 'language_pos': 'NN', 'lemma': '', 'misc': '', 'universal_pos': 'NOUN'}, {'dep_relation': 'advcl', 'extra_deps': [''], 'features': {'VerbForm': 'Ger', 'fPOS': 'VERB++VBG'}, 'form': 'learning', 'head': 7, 'index': 10, 'language_pos': 'VBG', 'lemma': '', 'misc': '', 'universal_pos': 'VERB'}, {'dep_relation': 'cc', 'extra_deps': [''], 'features': {'fPOS': 'CONJ++CC'}, 'form': 'and', 'head': 10, 'index': 11, 'language_pos': 'CC', 'lemma': '', 'misc': '', 'universal_pos': 'CONJ'}, {'dep_relation': 'compound', 'extra_deps': [''], 'features': {'Number': 'Sing', 'fPOS': 'NOUN++NN'}, 'form': 'text', 'head': 13, 'index': 12, 'language_pos': 'NN', 'lemma': '', 'misc': '', 'universal_pos': 'NOUN'}, {'dep_relation': 'conj', 'extra_deps': [''], 'features': {'fPOS': 'PUNCT++.'}, 'form': 'analytics.', 'head': 10, 'index': 13, 'language_pos': 'ADD', 'lemma': '', 'misc': '', 'universal_pos': 'X'}]}]}}\n"
],
[
"print('\\n*******************************\\n CO-REFERENCE : \\n')\n# ****************** CO REFERENCE **********************\nalgo = client.algo('StanfordNLP/DeterministicCoreferenceResolution/0.1.1')\n\"\"\"\nClassifications where references are found, meaning full details that could be used from the data are resulted by this algo\n{'terra Solutions lwww.entenasnlutinns.com': ['it']}\n\"\"\"\nprint(algo.pipe(text).result)",
"\n*******************************\n CO-REFERENCE : \n\n[{'THE LINES': ['That']}, {'Love Artificial Intelligence HOW COGNITIVE COMPUTING IS CHANGING THE WAY CPG COMPANIES CONDUCT BUSINESS': ['Her']}, {'THE WAY CPG COMPANIES': ['companies']}, {'the artificial intelligence -LRB- Al -RRB- personal assistant': ['his']}, {'Stephen DeAngelis , chief executive officer of En - terra Solutions lwww.entenasnlutinns.com -RRB-': ['Stephen DeAngelis', 'DEANGELIS', 'STEPHEN DEANGELIS']}, {'terra Solutions lwww.entenasnlutinns.com': ['it']}, {'data': []}, {'a new degree ofanalysis': ['That']}]\n"
],
[
"print('\\n*******************************\\n PART-OF-SPEECH (POS) TAGGER : \\n')\n# ****************** PART-OF-SPEECH (POS) TAGGER **********************\nalgo = client.algo('ApacheOpenNLP/POSTagger/0.1.1')\nprint(text)\n#tags part-of-speech and returns an array but throwing an error - all the inputs to be in json -- our reslut is string not getting fixes for that\ntext = response['result']\nprint(algo.pipe(text).result)",
"_____no_output_____"
],
[
"print('\\n*******************************\\n TOKENIZE : \\n')\n# ****************** TOKENIZE **********************\nalgo = client.algo('ApacheOpenNLP/TokenizeBySentence/0.1.0')\nprint(algo.pipe(text))",
"_____no_output_____"
],
[
"print('\\n*******************************\\n LDA : \\n')\n# ****************** LDA **********************\n#classify text in a document to a particular topic.\nalgo = client.algo('ApacheOpenNLP/SentenceDetection/0.1.0')\nsentences = algo.pipe(text)\n#print(sentences)\nalgo = client.algo('nlp/LDA/1.0.0')\ninput = {\n \"docsList\": sentences.result,\n \"mode\": \"quality\"\n}\n\nprint(input)\nLDA = algo.pipe(input).result\nprint(LDA)",
"\n*******************************\n LDA : \n\n{'docsList': ['BETWEEN THE LINES 5v ALEERYGUFFANTI\\n\\n \\n\\nLearn to Love Artificial Intelligence\\n\\nHOW COGNITIVE COMPUTING IS CHANGING THE WAY CPG\\nCOMPANIES CONDUCT BUSINESS\\n\\nThe Oscar-winning film \"Her\" tells the story ofa man who falls in love with the\\nartificial intelligence (Al) personal assistant on his smartphone.', 'That may sound\\nfar-fetched, but many consumer packaged goods (CPG) companies are already\\nenamored.', 'Relying on cognitive computing systems that help harness big data\\ninsights provides a competitive edge in a changing business landscape.', \"Really,\\nWhat's not to love?\", 'We asked Stephen DeAngelis, chief executive officer of En-\\nterra Solutions lwww.entenasnlutinns.com), to break down the appeal.', 'What is cognitive computing and how can it further data initiatives?', 'DEANGELIS: Researchers, hospitals and especially businesses are amassing\\nterabytes of data every day that could reveal everything from more effective\\ndisease treatments to which products consumers are likely to purchase.', \"To get at\\nthat information, big data analytics have become de rigueur, and global spend'\\ning on big data is expected to hit $1 18 billion in 2018.\", 'But all that data requires a\\nnew degree ofanalysis.', 'That\\'s where cognitive computing —computer systems\\n\\nAnt-amt,“ m thinl/ Ma Inavn i mm“ in annlltArc am “an. at Humanism\"\\n\\nSTEPHEN\\nDEANGELIS\\nOED\\n\\nEnterra Solutions\\n\\n\"Computers are\\nadept at recog\\'\\nnizing patterns\\nand making\\nconnections.', 'Setting one\\nloose to sift\\nthrough moun-\\ntains of data,'], 'mode': 'quality'}\n[{'ant-amt': 1, 'cognitive': 2, 'computer': 1, 'connections': 1, 'data': 2, 'nizing': 1, 'ofanalysis': 1, 'recog': 1}, {'companies': 2, 'humanism': 1, 'loose': 1, 'love': 2, 'moun': 1, 'setting': 1, 'sift': 1, 'tains': 1}, {'break': 1, 'data': 2, 'disease': 1, 'hospitals': 1, 'initiatives': 1, 'lwww.entenasnlutinns.com': 1, 'requires': 1, 'solutions': 1}, {'adept': 1, 'big': 2, 'computers': 1, 'data': 2, 'enterra': 1, 'making': 1, 'oed': 1, 'patterns': 1}]\n"
],
[
"summ_text = response['result']\n#summarizes the text - result of OCR\nalgo = client.algo('nlp/Summarizer/0.1.8')\nsumm = algo.pipe(summ_text).result\nprint(algo.pipe(summ_text).result)",
"BETWEEN THE LINES 5v ALEERYGUFFANTI\n\n \n\nLearn to Love Artificial Intelligence\n\nHOW COGNITIVE COMPUTING IS CHANGING THE WAY CPG\nCOMPANIES CONDUCT BUSINESS\n\nThe Oscar-winning film \"Her\" tells the story ofa man who falls in love with the\nartificial intelligence (Al) personal assistant on his smartphone. Relying on cognitive computing systems that help harness big data\ninsights provides a competitive edge in a changing business landscape. We asked Stephen DeAngelis, chief executive officer of En-\nterra Solutions lwww.\n"
],
[
"#Sentiment Analysis\nprint('\\n*******************************\\n Sentiments : \\n')\nalgo = client.algo('nlp/SentimentAnalysis/1.0.5')\nsentiment = []\nfor category in response2.result:\n s = algo.pipe(category).result\n print(\"Sentiment Score (\",category,\"): \", s)\n sentiment.append(s)\n\n#checking the level of sentiment\nimport numpy\nsentiment = numpy.asarray(sentiment)\nhow_much_senti = sentiment.var()\n#Var returns the variance of the array elements, a measure of the spread of a distribution. \n#The variance is computed for the flattened array by default, otherwise over the specified axis.\nprint(how_much_senti)\n#this variance value can affect the result of our classification",
"\n*******************************\n Sentiments : \n\nSentiment Score ( big ): 2\nSentiment Score ( cognitive ): 2\nSentiment Score ( computing ): 2\nSentiment Score ( data ): 2\nSentiment Score ( deangelis ): 2\nSentiment Score ( love ): 4\nSentiment Score ( solutions ): 3\nSentiment Score ( stephen ): 2\n0.484375\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9399e2762ee71f2ecb69ac35f5918c995819af | 268,888 | ipynb | Jupyter Notebook | ANTARES-TOM-AEON-GEM.ipynb | lchjoel1031/aas237-splinter-session | 5fae938b3dfb3dfca04a009e51964ffefe033d7e | [
"BSD-3-Clause"
] | null | null | null | ANTARES-TOM-AEON-GEM.ipynb | lchjoel1031/aas237-splinter-session | 5fae938b3dfb3dfca04a009e51964ffefe033d7e | [
"BSD-3-Clause"
] | null | null | null | ANTARES-TOM-AEON-GEM.ipynb | lchjoel1031/aas237-splinter-session | 5fae938b3dfb3dfca04a009e51964ffefe033d7e | [
"BSD-3-Clause"
] | null | null | null | 1,088.615385 | 201,716 | 0.956592 | [
[
[
"# Triggering observation of ANTARES locus objects with TOM\n\n\n**This notebook requires the installation of ANTARES client (https://noao.gitlab.io/antares/client/) and TOMtoolkit (https://tom-toolkit.readthedocs.io/en/latest/introduction/getting_started.html#installing-the-tom-toolkit-and-django). For more detail on programmatic access of TOMtoolkit, please see (https://tom-toolkit.readthedocs.io/en/stable/common/scripts.html).**\n\n\nWe can arrange follow up observations of intriguing ANTARES locus/alert using the facilities within the Astronomical Event Observatory Network (AEON). This can be conviently done with the TOMtoolkit as follows.\n\nThe first step is to define the target information (name, ra, dec, etc.)",
"_____no_output_____"
]
],
[
[
"from antares_client.search import get_by_id, get_by_ztf_object_id\n#get locus by ANTARES ID\nlocus = get_by_id(\"ANT2018c7igm\")\n\n#get locus by ZTF ID\n#locus = get_by_ztf_id(\"ZTF18abhjrcf\")\n\nprint(locus.locus_id, locus.ra, locus.dec)\n\nimport os\nos.environ[\"DJANGO_ALLOW_ASYNC_UNSAFE\"] = \"true\"\nfrom tom_targets.models import Target\nt = Target.objects.create(name=locus.locus_id, type='SIDEREAL', ra=locus.ra, dec=locus.dec)",
"ANT2018c7igm 280.69272450119047 -12.904123628571426\n"
]
],
[
[
"The next step is to populate the observation form",
"_____no_output_____"
]
],
[
[
"from tom_observations.facilities.gemini import GEMFacility, GEMObservationForm\n\ntarget = Target.objects.get(name=locus.locus_id)\n\nif target.dec < 0.0:\n obsid = 'GS-2019A-TOO-1-2'\nelse:\n obsid = 'GN-2019A-TOO-1-1'\n\nform = GEMObservationForm({\n 'target_id': target.id,\n 'obsid': [ obsid ],\n 'ready': 'true',\n 'posangle': 0.,\n 'exptimes': '',\n 'brightness': None,\n 'group': 'ANTARES',\n 'note': '',\n 'window_start': '',\n 'eltype': 'none',\n 'gstarg': '',\n}) ",
"_____no_output_____"
]
],
[
[
"We can check if there is any error of the observation form using form.is_valid()",
"_____no_output_____"
]
],
[
[
"form.is_valid()",
"_____no_output_____"
]
],
[
[
"Once the form is validated, we can submit it using the following command:",
"_____no_output_____"
]
],
[
[
"observation_ids = GEMFacility().submit_observation(form.observation_payload())\n\nprint(observation_ids)",
"['11']\n"
]
],
[
[
"We can also create a record of the observation request",
"_____no_output_____"
]
],
[
[
"from tom_observations.models import ObservationRecord\nfor observation_id in observation_ids:\n print(observation_id + ' triggered!')\n record = ObservationRecord.objects.create(\n target=target,\n facility='GEM',\n parameters=form.serialize_parameters(),\n observation_id=observation_id\n )\n print(record)",
"Observation change state hook: ANT2018c7igm @ GEM from None to \n"
]
],
[
[
"Now we can see a pending observation in TOM\n\n",
"_____no_output_____"
],
[
"We can also see the observation request in the Gemini OT:\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
ec93a6b2a2bba457866ac10cfa332c94c70d8f48 | 1,869 | ipynb | Jupyter Notebook | Kadane's Algorithm.ipynb | kkoo1122/Leetcode_Practice | f9500d561d4747dc0df3472bbc5e21b51431cac8 | [
"BSD-2-Clause"
] | null | null | null | Kadane's Algorithm.ipynb | kkoo1122/Leetcode_Practice | f9500d561d4747dc0df3472bbc5e21b51431cac8 | [
"BSD-2-Clause"
] | null | null | null | Kadane's Algorithm.ipynb | kkoo1122/Leetcode_Practice | f9500d561d4747dc0df3472bbc5e21b51431cac8 | [
"BSD-2-Clause"
] | null | null | null | 21.482759 | 55 | 0.452648 | [
[
[
"def kadane(A):\n max_current , max_global = A[0] , A[0]\n for i in range(1,len(A)):\n max_current=max(A[i],max_current+A[i])\n if max_current>max_global:\n max_global=max_current \n return max_global\n\nkadane([-2,1,-3,4,-1,2,1,-5,4])",
"_____no_output_____"
],
[
"def kadane(A):\n max_current , max_global = A[0] , A[0]\n res=[]\n start=0\n for i in range(1,len(A)):\n if A[i]>max_current+A[i]:\n start=i \n max_current=max(A[i],max_current+A[i])\n if max_current>max_global:\n res=A[start:i+1]\n max_global=max_current \n return max_global,res\n\nkadane([-2,1,-3,4,-1,2,1,-5,4])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
ec93b1499137c2ce2ae82fb1e286e56fd7458a17 | 15,651 | ipynb | Jupyter Notebook | Modulo1/5. Flujo de Control.ipynb | IsabelMamani/PythonMamani | c9213388caf47474fa95b4ba3f38d9c1f220f279 | [
"Apache-2.0"
] | null | null | null | Modulo1/5. Flujo de Control.ipynb | IsabelMamani/PythonMamani | c9213388caf47474fa95b4ba3f38d9c1f220f279 | [
"Apache-2.0"
] | null | null | null | Modulo1/5. Flujo de Control.ipynb | IsabelMamani/PythonMamani | c9213388caf47474fa95b4ba3f38d9c1f220f279 | [
"Apache-2.0"
] | null | null | null | 22.71553 | 401 | 0.49639 | [
[
[
"# FLUJO DE CONTROL",
"_____no_output_____"
],
[
"El flujo de control ayudará a nuestro programa en la toma de decisiones",
"_____no_output_____"
],
[
"<center> <img src='https://4.bp.blogspot.com/_QWwb06N-r1Y/TMeaj_jf7aI/AAAAAAAAARA/CUtpkzy0Pjo/s1600/selectiva.png' width=\"400\" height=\"250\"></center>",
"_____no_output_____"
],
[
"## 1. Flujo de control (if - else) ",
"_____no_output_____"
],
[
"El uso de condiciones nos permite tener un mayor control sobre el flujo del programa.",
"_____no_output_____"
]
],
[
[
"# Uso de la sentencia if (si)\nif True:\n print('hola')",
"hola\n"
],
[
"# Sentencia else (si no)\nx=9\nif x==8 :\n print('el valor de x es 8')\nelse:\n print('el valor de x es distinto de 8')",
"el valor de x es distinto de 8\n"
],
[
"# uso de If anidado\na = 5\nb = 10\nif a == 5:\n print(\"a vale\",a)\n \n if b == 10:\n print('y b vale',b)",
"a vale 5\ny b vale 10\n"
]
],
[
[
"## 2. Sentencia elif (sino si)",
"_____no_output_____"
],
[
"Se encadena a un if u otro elif para comprobar múltiples condiciones, siempre que las anteriores no se ejecuten:",
"_____no_output_____"
]
],
[
[
"# Sentencia else (si no)\nx=9\nif x==8 :\n print('el valor de x es 8')\nelif x==9:\n print('el valor de x es 9')\nelif x==10:\n print('el valor de x es 10')\nelse:\n print('el valor de x es distinto de 8, 9 o 10')",
"el valor de x es 9\n"
]
],
[
[
"# EJERCICIOS",
"_____no_output_____"
],
[
"#### 1.\nCrear un programa que permita decidir a una persona cruzar la calle o no según:\n- Si semáforo esta en verde cruzar la calle\n- Si semáforo esta en rojo o amarillo no cruzar\n\nLa persona debe poder ingresar el estado del semáforo por teclado",
"_____no_output_____"
]
],
[
[
"semaforo = input('El semaforo tiene color: ')",
"El semaforo tiene color: rojo\n"
],
[
"#semaforo.lower() # a minusculas\n\nsemaforo.upper() # a mayusculas",
"_____no_output_____"
],
[
"semaforo = semaforo.lower()",
"_____no_output_____"
],
[
"if semaforo == 'verde':\n print('cruzar la calle')\nelif semaforo == 'rojo'or semaforo == 'amarillo':\n print('no cruzar')\nelse:\n print('no entiendo')",
"no cruzar\n"
]
],
[
[
"#### 2.\nEscribir un programa que pregunte al usuario su edad y muestre por pantalla si es mayor de edad o no.",
"_____no_output_____"
]
],
[
[
"# 1. preguntando la edad de una persona\n\nedad = int(input('Ingrese su edad: '))",
"Ingrese su edad: 16\n"
],
[
"if edad >=18:\n print('La persona es mayor de edad')\nelse:\n print('La persona es menor de edad')",
"La persona es menor de edad\n"
]
],
[
[
"#### 3.\nEscribir un programa que almacene la cadena de caracteres <b>contraseña</b> en una variable, pregunte al usuario por la contraseña e imprima por pantalla si la contraseña introducida por el usuario coincide con la guardada en la variable sin tener en cuenta mayúsculas y minúsculas.",
"_____no_output_____"
]
],
[
[
"c='contraseña'\ncu=input('Introduir contraseña:')\nif cu==c:\n print(\"contraseña válida\")\nelse:\n print(\"contraseña inválida\")",
"Introduir contraseña: vale\n"
]
],
[
[
"### 4. \nEscribir un programa que pida al usuario un número entero y muestre por pantalla si es par o impar.\n",
"_____no_output_____"
]
],
[
[
"numero = int(input('Ingrese un numero entero: '))",
"Ingrese un numero entero: 6\n"
],
[
"# numero % 2 -> resto de un numero\n\nif numero % 2 == 0:\n print(f'El numero ingresado {numero} es par')\nelse:\n print('el numero ingresado {} NO es par'.format(numero))",
"El numero ingresado 6 es par\n"
],
[
"'el numero ingresado {} NO es par'.format(numero)",
"_____no_output_____"
],
[
"f'El numero ingresado {numero} es par'",
"_____no_output_____"
]
],
[
[
"#### 5.\nLos tramos impositivos para la declaración de la renta en un determinado país son los siguientes:\n\n<table>\n <thead>\n <th style=\"text-align: center\">Renta</th>\n <th style=\"text-align: center\">% de Impuesto</th>\n </thead>\n <tbody>\n <tr>\n <td style=\"text-align: center\">Menos de 10000€</td>\n <td style=\"text-align: center\">5%</td>\n </tr>\n <tr>\n <td style=\"text-align: center\">Entre 10000€ y 20000€</td>\n <td style=\"text-align: center\">15%</td>\n </tr>\n <tr>\n <td style=\"text-align: center\">Entre 20000€ y 35000€</td>\n <td style=\"text-align: center\">20%</td>\n </tr>\n <tr>\n <td style=\"text-align: center\">Entre 35000€ y 60000€</td>\n <td style=\"text-align: center\">30%</td>\n </tr>\n <tr>\n <td style=\"text-align: center\">Más de 60000€</td>\n <td style=\"text-align: center\">45%</td>\n </tr>\n </tbody>\n</table>\n",
"_____no_output_____"
],
[
"Realizar un programa que pueda decir el % de impuestos que una persona deba pagar según su sueldo",
"_____no_output_____"
]
],
[
[
"sueldo=float(input(\"Ingresar sueldo:\"))\nif sueldo<10000:\n print(\"Impuesto es 5%\")\nelif 10000<=sueldo<20000:\n print(\"Impuesto es 15%\")\nelif 20000<=sueldo<35000:\n print(\"Impuesto es 20%\")\nelif 35000<=sueldo<60000:\n print(\"Impuesto es 30%\")\nelse:\n print(\"Impuesto es de 45%\")",
"Ingresar sueldo: 930\n"
]
],
[
[
"#### 6. \nRealiza un programa que lea dos números por teclado y permita elegir entre 3 opciones en un menú:\n\n- Mostrar una suma de los dos números\n- Mostrar una resta de los dos números (el primero menos el segundo)\n- Mostrar una multiplicación de los dos números\n- En caso de introducir una opción inválida, el programa informará de que no es correcta.\n",
"_____no_output_____"
]
],
[
[
"a=float(input(\"Ingresar primer numero\"))\nb=float(input(\"Ingresar segundo numero\"))\nop=int(input(print(\"\"\"\nEscribe 1: Sumar los números\nEscribe 2: Restar los números\nEscribe 3: Multiplicar los números\n\"\"\")))\nif op==1:\n print(\"La suma es: \",a+b)\nelif op==2:\n print(\"La resta es: \",a-b)\nelif op==3:\n print(\"La multiplicación es: \",a*b)\nelse:\n print(\"Opción inválida\")",
"Ingresar primer numero 30\nIngresar segundo numero 4\n"
]
],
[
[
"#### 7.\nLa pizzería Bella Napoli ofrece pizzas vegetarianas y no vegetarianas a sus clientes. Los ingredientes para cada tipo de pizza aparecen a continuación.\n\n- Ingredientes vegetarianos: Pimiento y tofu.\n- Ingredientes no vegetarianos: Peperoni, Jamón y Salmón.\n\nEscribir un programa que pregunte al usuario si quiere una pizza vegetariana o no, y en función de su respuesta le muestre un menú con los ingredientes disponibles para que elija. Solo se puede eligir un ingrediente además de la mozzarella y el tomate que están en todas la pizzas. Al final se debe mostrar por pantalla si la pizza elegida es vegetariana o no y todos los ingredientes que lleva.",
"_____no_output_____"
]
],
[
[
"op=str(input(\"¿Quiere pizza vegetariana?\"))\nif op.lower()=='si':\n ing=input(\"1. Pimiento, 2. Tofu\")\n if ing==1:\n print('''Pizza vegetariana. \n Ingredientes:Pimiento,mozzarella,tomate''')\n else:\n print('''Pizza vegetariana. \n Ingredientes:Tofu,mozzarella,tomate''')\nelse:\n ing2=input(\"\"\"Escoger sólo un ingrediente: 1. Peperoni, 2. Jamón, 3. Salmón\"\"\")\n if ing2==1:\n print('''Pizza NO vegetariana. \n Ingredientes:Peperoni,mozzarella,tomate''') \n elif ing2==2:\n print('''Pizza NO vegetariana. \n Ingredientes:Jamón,mozzarella,tomate''')\n else:\n print('''Pizza NO vegetariana. \n Ingredientes:Salmón,mozzarella,tomate''')",
"¿Quiere pizza vegetariana? NO\nEscoger sólo un ingrediente: 1. Peperoni, 2. Jamón, 3. Salmón 2\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec93c5d61444af86b3cfc6c2dcdd469db2fbd3e4 | 6,772 | ipynb | Jupyter Notebook | 2020_Python_Programming_Practical_3.ipynb | wenhong25/2020python | 145fed1179ec4914cea818f13ab12a61ef3c0364 | [
"MIT"
] | null | null | null | 2020_Python_Programming_Practical_3.ipynb | wenhong25/2020python | 145fed1179ec4914cea818f13ab12a61ef3c0364 | [
"MIT"
] | null | null | null | 2020_Python_Programming_Practical_3.ipynb | wenhong25/2020python | 145fed1179ec4914cea818f13ab12a61ef3c0364 | [
"MIT"
] | null | null | null | 25.94636 | 306 | 0.450975 | [
[
[
"<a href=\"https://colab.research.google.com/github/Vraidd/2020python/blob/master/2020_Python_Programming_Practical_3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# **2020 Python Programming Practical 3**\n\nIf you do not have one already, create a [GitHub](https://github.com) account using your DHS Mail.\n\nCreate a public repository 2020python\n\nFile --> Save a copy in GitHub under your 2020python repository\n\nAlso share this colab file with edit access with [email protected]\n\n",
"_____no_output_____"
],
[
"**Q1. (Displaying an integer reversed)**\n\nWrite a function reverse_int(n) to display an integer in reverse order:\n\nFor example, reverse_int(3456) displays 6543.",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"**Q2. (Displaying patterns)**\n\nWrite a function display_pattern(n) to display a pattern as follows:\n```\n 1\n 2 1\n 3 2 1\n...\nn n-1 ... 3 2 1 \n```\n\n\n",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"**Q3. (Computing GCD)**\n\nWrite a function gcd(m, n) that returns the greatest common divisor between two positive integers:\n\nTest your program with gcd(24, 16) and gcd(255, 25).",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"**Q4. (Summing series)**\n\nWrite a function m_series(i) to compute the following series:\n\nm(i) = ½ + ⅔ + … + i/(i+1)\n\nGenerate the following table:\n```\ni m(i) \n1 0.5000 \n2 1.1667 \n... \n19 16.4023 \n20 17.3546 \n```",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"**Q5. (Prime number)**\n\nWrite a function to determine whether an integer is a prime number. An integer greater than 1 is a prime number if its only divisor is 1 or itself. For example, is_prime(11) returns True, and is_prime(9) returns False.\n\nUse the is_prime(n) function to find the first thousand prime numbers and display every ten prime numbers in a row, as follows:\n```\n2 3 5 7 11 13 17 19 23 29\n31 37 41 43 47 53 59 61 67 71\n73 79 83 89 97 ...\n...\n```",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"**Q6. (Convert from kilograms to pounds)**\n\nWrite a function print_matrix(n) that displays an n by n matrix, where n is a positive integer entered by the user. Each element is 0 or 1, which is generated randomly. A 3 by 3 matrix may look like this:\n```\n0 1 0\n0 0 0\n1 1 1 \n```",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"**Q7. (Converting milliseconds to hours, minutes, and seconds)**\n\nWrite a method convert_ms(n) that converts milliseconds to hours, minutes, and seconds. The method returns a string as hours:minutes:seconds. For example, convert_ms(5500) returns a string 0:0:5, convert_ms(100000) returns a string 0:1:40, and convert_ms(555550000) returns a string 154:19:10.\n\n",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec93d3c652acb4dd30875a3b4f8a62885382fb36 | 12,929 | ipynb | Jupyter Notebook | nlp/keywords_extraction.ipynb | luozhouyang/machine-learning-notes | 332bea905398891fed4a98aa139eac02c88cb5ae | [
"Apache-2.0"
] | 73 | 2018-09-07T06:47:18.000Z | 2022-01-25T06:14:41.000Z | nlp/keywords_extraction.ipynb | luozhouyang/machine-learning-notes | 332bea905398891fed4a98aa139eac02c88cb5ae | [
"Apache-2.0"
] | 2 | 2018-10-18T06:40:19.000Z | 2019-11-16T01:48:39.000Z | nlp/keywords_extraction.ipynb | luozhouyang/machine-learning-notes | 332bea905398891fed4a98aa139eac02c88cb5ae | [
"Apache-2.0"
] | 47 | 2018-09-27T10:50:21.000Z | 2022-01-25T06:20:23.000Z | 29.585812 | 131 | 0.460592 | [
[
[
"# 关键词抽取\n\n\n* TF-IDF\n* TextRank\n* [EmbedRank](https://github.com/luozhouyang/embedrank)",
"_____no_output_____"
]
],
[
[
"import os\nimport re",
"_____no_output_____"
],
[
"!pip install -q jieba",
"_____no_output_____"
],
[
"import jieba",
"_____no_output_____"
]
],
[
[
"## TF-IDF\n\n* **TF**: term frequency, 词语在文档中出现的次数\n* **IDF**: inverse doucment frequence, 包含改词语的文档占总文档数量的比例的倒数\n\n$$tf = \\frac{count(w)}{\\sum_{w_i} count(w_i)}$$\n\n$$idf = \\log{\\frac{N}{\\sum_{i=1}^N I(w, N_i)}}$$\n\n防止分母为零,需要平滑处理,一般采用 **+1** 平滑\n\n$$idf = \\log{\\frac{N+1}{\\sum_{i=1}^N I(w, N_i) + 1}}$$",
"_____no_output_____"
]
],
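[
[
"The formulas above can be checked by hand on a tiny corpus. The sketch below is only illustrative (a made-up three-document corpus and throwaway names); it computes the smoothed IDF and the resulting TF-IDF scores.\n\n```python\nimport math\n\ndocs = [['machine', 'learning'], ['deep', 'learning'], ['text', 'mining']]\nN = len(docs)\n\ndef smoothed_idf(word):\n    # documents containing the word, with +1 smoothing on both counts\n    df = sum(1 for d in docs if word in d)\n    return math.log((N + 1) / (df + 1))\n\ndef tf(word, doc):\n    # term count normalized by document length\n    return doc.count(word) / len(doc)\n\nfor w in docs[0]:\n    print(w, tf(w, docs[0]) * smoothed_idf(w))\n# 'learning' appears in 2 of the 3 documents, so it scores lower than 'machine'\n```",
"_____no_output_____"
]
],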
[
[
"class KeywordsExtractor:\n \n def __init__(self, stopwords_file=None):\n self.stopwords = self._load_stopwords(stopwords_file) if stopwords_file else None\n \n def _load_stopwords(self, file):\n words = set()\n if not os.path.exists(file):\n print('File %s does not exist.' % file)\n return words\n with open(file, mode='rt', encoding='utf8') as fin:\n for lin in fin:\n line = line.strip('\\n').strip()\n if not line:\n continue\n words.add(line)\n return words\n \n def extract_keywords(self, document, *args, **kwargs):\n raise NotImplementedError()",
"_____no_output_____"
],
[
"class TFIDFKeywordsExtractor(KeywordsExtractor):\n \n def __init__(self, idf_file, stopwords_file=None):\n super().__init__(stopwords_file=stopwords_file)\n self.idfmap = self._load_idf(idf_file) if idf_file else dict()\n self.median_idf = sorted(self.idfmap.values())[len(self.idfmap)//2]\n \n def _load_idf(self, file):\n m = dict()\n if not os.path.exists(file):\n print('File %s does not exist.' % file)\n return m\n with open(file, mode='rt', encoding='utf8') as fin:\n for line in fin:\n line = line.strip('\\n').strip()\n parts = line.split(' ')\n if len(parts) != 2:\n continue\n m[parts[0]] = float(parts[1])\n return m\n \n def extract_keywords(self, document, topk=20):\n freq = {}\n for word in jieba.cut(document):\n word = word.strip()\n if len(word) < 2:\n continue\n if word in self.stopwords:\n continue\n freq[word] = freq.get(word, 0) + 1\n \n total_freq = sum(freq.values())\n idf = {}\n for k in freq.keys():\n idf[k] = freq[k] * self.idfmap.get(k, self.median_idf)\n return sorted(idf.items(), key=lambda x:x[1], reverse=True)\n ",
"_____no_output_____"
]
],
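[
[
"A minimal usage sketch for the extractor above. The file paths are placeholders (an IDF file with one `word idf` pair per line and a stopword list), so substitute real files before running:\n\n```python\n# placeholder paths - replace with real files\nextractor = TFIDFKeywordsExtractor(idf_file='idf.txt', stopwords_file='stopwords.txt')\nfor word, score in extractor.extract_keywords('java开发工程师', topk=5):\n    print(word, score)\n```",
"_____no_output_____"
]
],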
[
[
"idf一般需要大量的数据统计得到。\n\npyspark提供了教程:\n\n* [ml-features](https://spark.apache.org/docs/latest/ml-features)\n* [tf-idf](https://spark.apache.org/docs/latest/ml-features#tf-idf)\n\n以下是一个使用spark在Hadoop统计idf的代码:",
"_____no_output_____"
]
],
[
[
"!pip install -q -i https://mirrors.aliyun.com/pypi/simple pyspark numpy",
"_____no_output_____"
],
[
"import re\nimport logging\n\nimport jieba\n\nfrom pyspark import SparkConf, SparkContext\nfrom pyspark.sql import HiveContext, SparkSession\nfrom pyspark import Row\nfrom pyspark.ml.feature import IDF, HashingTF, Tokenizer\n\n\njieba.initialize()\n\n\ndef get_spark(master='local[*]', app_name='idf'):\n spark = SparkSession.builder \\\n .appName(app_name) \\\n .master(master) \\\n .config('spark.executor.memory', '8g') \\\n .config('spark.executor.cores', '8') \\\n .config('spark.cores.max', '8') \\\n .config('spark.driver.memory', '8g') \\\n .getOrCreate()\n return spark\n\n\ndef _collect_documents(x):\n segs = x.split('\\t')\n if len(segs) != 9:\n return []\n jd_json = segs[8]\n jd_json = re.sub(r'\\n\\t', '', jd_json)\n jd_json = re.sub(r'\\\\s+', ' ', jd_json)\n jd_json = jd_json.lower()\n return [jd_json] # 整个JD作为一个document\n\n\ndef _tokenize(x):\n words = []\n for w in jieba.cut(x):\n w = w.strip()\n if not w:\n continue\n words.append(w)\n return words\n\n\ndef _idf_flat_map(x):\n items = []\n for w, _id, tf, idf in zip(x.words, x.tf.indices, x.tf.values, x.idf.values):\n items.append((w, idf))\n return items\n\n\ndef _debug(x):\n print(type(x))\n return x\n\n\ndef _filter_idf(x):\n w, v = x[0], x[1]\n if len(w) <= 1:\n return False\n if re.match(r'^[0-9]+$', x):\n return False\n if re.match(r'[0-9]{6,}', x):\n return False\n if re.match(r'^[0-9]+.[0-9]+$', w):\n return False\n return True\n\n\ndef calculate(input_path, output_path, parts=16):\n spark = get_spark()\n sc = spark.sparkContext\n\n rdd = sc.textFile(input_path)\n rdd = rdd.filter(lambda x: len(x.split('\\t')) == 9)\n rdd = rdd.flatMap(_collect_documents).filter(lambda x: x)\n rdd = rdd.map(_tokenize).filter(lambda x: x).map(lambda x: Row(words=x))\n # rdd = rdd.map(_debug)\n\n df = rdd.toDF()\n # numFeatures即hash桶数\n hashingTF = HashingTF(inputCol='words', outputCol='tf', numFeatures=2 << 20)\n featuredData = hashingTF.transform(df)\n\n idf = IDF(inputCol='tf', outputCol='idf')\n idfModel = idf.fit(featuredData)\n res = idfModel.transform(featuredData)\n\n rdd = res.rdd.flatMap(_idf_flat_map).reduceByKey(lambda a, b: a).sortBy(lambda x: x[0], ascending=True)\n rdd = rdd.filter(_filter_idf)\n rdd = rdd.map(lambda x: x[0] + '\\t' + str(x[1]))\n rdd.repartition(parts).saveAsTextFile(output_path)\n\n\nif __name__ == \"__main__\":\n input_file = 'hdfs:///basic_data/tob/tob_ats/recruit_step_v3/part-00099-8d87777f-34ee-431a-be5d-8a6f0b92fea9-c000.txt'\n output_file = 'hdfs:///user/kdd_luozhouyang/idf/jd/20200509'\n calculate(input_file, output_file, parts=1)\n",
"_____no_output_____"
]
],
[
[
"## TextRank\n\n",
"_____no_output_____"
]
],
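[
[
"TextRank treats candidate words as nodes in a graph, weights the edges with some word-to-word similarity, and ranks the nodes with PageRank; the top-ranked words become keyword candidates. A toy sketch of that idea (the edges and weights below are made up):\n\n```python\nimport networkx as nx\n\ng = nx.Graph()\ng.add_weighted_edges_from([('java', 'developer', 2.0),\n                           ('developer', 'engineer', 1.0),\n                           ('java', 'engineer', 1.0)])\nscores = nx.pagerank(g, weight='weight')\nprint(sorted(scores, key=scores.get, reverse=True))\n```",
"_____no_output_____"
]
],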
[
[
"!pip install -q -i https://mirrors.aliyun.com/pypi/simple networkx",
"_____no_output_____"
],
[
"import itertools\n\nimport networkx as nx\nimport jieba.posseg as jp",
"_____no_output_____"
],
[
"class TextRankKeywordsExtractor(KeywordsExtractor):\n \n def _unique_tokens(self, all_words):\n words = []\n for k in all_words:\n if k in words:\n continue\n words.append(k)\n return words\n \n def _edit_distance(self, a, b):\n m, n = len(a)-1, len(b)-1\n dp = [[0]*(n+1) for _ in range(m+1)] # (m+1)*(n+1)\n for i in range(m+1):\n dp[i][0] = i\n for j in range(n+1):\n dp[0][j] = j\n for i in range(1, m+1):\n for j in range(1, n+1):\n if a[i-1] == b[j-1]:\n dp[i][j] = dp[i-1][j-1]\n else:\n dp[i][j] = 1 + max(dp[i-1][j], dp[i][j-1])\n return dp[m][n]\n \n def _build_graph(self, words):\n g = nx.Graph()\n g.add_nodes_from(words)\n pairs = list(itertools.combinations(words, 2))\n \n for p in pairs:\n first, second = p[0], p[1]\n # 使用编辑距离来作为词语的相似度衡量,可以使用其他方式\n ed = self._edit_distance(first, second)\n g.add_edge(first, second, weight=ed)\n \n return g\n \n def extract_keywords(self, document):\n words = [w.strip() for w in jieba.cut(document) if w.strip()]\n unique_words = self._unique_tokens(words)\n \n graph = self._build_graph(unique_words)\n textrank = nx.pagerank(graph, weight='weight')\n print(textrank)\n # 所有的节点\n keyphrase = sorted(textrank, key=textrank.get, reverse=True)\n # 取1/3\n keyphrase = keyphrase[0:len(unique_words)//3 + 1]\n print(keyphrase)\n \n # 相邻的词合并成短语\n res, dealt = set(), set()\n i, j = 0, 0\n while j < len(words):\n a, b = words[i], words[j]\n if a in keyphrase and b in keyphrase:\n res.add(a + ' ' + b)\n dealt.add(a)\n dealt.add(b)\n else:\n if a in keyphrase and a not in dealt:\n res.add(a)\n if j == len(words)-1 and b in keyphrase and b not in dealt:\n res.add(b)\n i += 1\n j += 1\n return res\n",
"_____no_output_____"
],
[
"textrank = TextRankKeywordsExtractor()",
"_____no_output_____"
],
[
"res = textrank.extract_keywords('java开发工程师')\nprint(res)",
"{'java': 0.37078347266331135, '开发': 0.2962171231279811, '工程师': 0.3329994042087073}\n['java', '工程师']\n{'java java', '工程师 工程师'}\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ec93fbdecfa2ac5d2c1d2ef32aa7786e58823036 | 344,010 | ipynb | Jupyter Notebook | 08-machine_learning_jupyter/seaborn_demo.ipynb | iproduct/coulse-ml | 65577fd4202630d3d5cb6333ddc51cede750fb5a | [
"Apache-2.0"
] | 1 | 2020-10-02T15:48:42.000Z | 2020-10-02T15:48:42.000Z | 08-machine_learning_jupyter/seaborn_demo.ipynb | iproduct/coulse-ml | 65577fd4202630d3d5cb6333ddc51cede750fb5a | [
"Apache-2.0"
] | null | null | null | 08-machine_learning_jupyter/seaborn_demo.ipynb | iproduct/coulse-ml | 65577fd4202630d3d5cb6333ddc51cede750fb5a | [
"Apache-2.0"
] | null | null | null | 6,035.263158 | 342,927 | 0.965021 | [
[
[
"import numpy as np\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\n\nif __name__ == \"__main__\":\n df = sns.load_dataset(\"penguins\")\n sns.pairplot(df, hue=\"species\")\n plt.show()\n\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
ec93fd0b9688f7e4f9537991404b39cef5d27673 | 32,921 | ipynb | Jupyter Notebook | s3-backfill.ipynb | marshmellow77/sm-feature-store-backfill | 3959188580dcd3f254a9ba26580b26605f817947 | [
"MIT"
] | 2 | 2021-06-16T22:33:14.000Z | 2021-07-15T16:29:53.000Z | s3-backfill.ipynb | marshmellow77/sm-feature-store-backfill | 3959188580dcd3f254a9ba26580b26605f817947 | [
"MIT"
] | null | null | null | s3-backfill.ipynb | marshmellow77/sm-feature-store-backfill | 3959188580dcd3f254a9ba26580b26605f817947 | [
"MIT"
] | null | null | null | 35.628788 | 161 | 0.4341 | [
[
[
"import pandas as pd\nimport random, string\nfrom time import gmtime, strftime, sleep\nimport boto3\nfrom sagemaker.session import Session\nfrom sagemaker.feature_store.feature_group import FeatureGroup\nfrom sagemaker import get_execution_role",
"_____no_output_____"
],
[
"df = pd.read_csv('s3://sagemaker-sample-files/datasets/tabular/fraud_detection/synthethic_fraud_detection_SA/sampled_transactions.csv')",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"gm_time = gmtime()\nfg_timestamp = strftime(\"%Y-%m-%d'T'%H:%M:%SZ\", gm_time)\ndf['EventTime'] = fg_timestamp",
"_____no_output_____"
],
[
"def cast_object_to_string(data_frame):\n for label in data_frame.columns:\n if data_frame.dtypes[label] == 'object':\n data_frame[label] = data_frame[label].astype(\"str\").astype(\"string\")",
"_____no_output_____"
],
[
"cast_object_to_string(df)",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"role = get_execution_role()\nregion = boto3.Session().region_name\nboto_session = boto3.Session(region_name=region)\nsagemaker_client = boto_session.client(service_name='sagemaker', region_name=region)\nfeaturestore_runtime = boto_session.client(service_name='sagemaker-featurestore-runtime', region_name=region)\nfeaturegroup_name = 'transactions-fg-manual-ingest'\naccount_id = boto3.client('sts').get_caller_identity()[\"Account\"]\n\nfeature_store_session = Session(\n boto_session=boto_session,\n sagemaker_client=sagemaker_client,\n sagemaker_featurestore_runtime_client=featurestore_runtime\n)",
"_____no_output_____"
],
[
"feature_group = FeatureGroup(name=featuregroup_name, sagemaker_session=feature_store_session)\nfeature_group.load_feature_definitions(data_frame=df)",
"_____no_output_____"
],
[
"record_identifier_feature_name = \"TransactionID\"\nevent_time_feature_name = \"EventTime\"\n\nbucket = feature_store_session.default_bucket()\ns3_folder = 'feature-store-manual-ingestion10'\n\ndef wait_for_feature_group_creation_complete(feature_group):\n status = feature_group.describe().get(\"FeatureGroupStatus\")\n while status == \"Creating\":\n print(\"Waiting for Feature Group Creation\")\n sleep(5)\n status = feature_group.describe().get(\"FeatureGroupStatus\")\n if status != \"Created\":\n raise RuntimeError(f\"Failed to create feature group {feature_group.name}\")\n print(f\"FeatureGroup {feature_group.name} successfully created.\")\n\nfeature_group.create(\n s3_uri=f\"s3://{bucket}/{s3_folder}\",\n record_identifier_name=record_identifier_feature_name,\n event_time_feature_name=event_time_feature_name,\n role_arn=role,\n enable_online_store=False\n)\n\nwait_for_feature_group_creation_complete(feature_group=feature_group)",
"Waiting for Feature Group Creation\nWaiting for Feature Group Creation\nFeatureGroup transactions-fg-manual-ingest successfully created.\n"
],
[
"query = feature_group.athena_query()\nfg_table = query.table_name",
"_____no_output_____"
],
[
"year, month, day, hour = strftime('%Y-%m-%d-%H', gm_time).split('-')",
"_____no_output_____"
],
[
"df['write_time'] = df['api_invocation_time'] = pd.to_datetime(fg_timestamp)\ndf['is_deleted'] = False",
"_____no_output_____"
],
[
"filepath = f\"s3://{bucket}/{s3_folder}/{account_id}/sagemaker/{region}/offline-store/{fg_table}/data/year={year}/month={month}/day={day}/hour={hour}/\"\nfilename = strftime(\"%Y%m%dT%H%M%SZ_\", gm_time)\nfilename += ''.join(random.choice(string.ascii_uppercase + string.ascii_lowercase + string.digits) for _ in range(16))\nfilename += '.parquet'",
"_____no_output_____"
],
[
"df.to_parquet(filepath + filename)",
"_____no_output_____"
],
[
"query_string = f'SELECT * FROM \"{fg_table}\"'\n\nquery.run(query_string=query_string, output_location=f's3://{bucket}/{s3_folder}/query_results/')\nquery.wait()\ndataset = query.as_dataframe()\n\ndataset.head()",
"_____no_output_____"
],
[
"feature_group.delete()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9420e6dfa1ad00a43335ab6111b9bd2c918922 | 92,933 | ipynb | Jupyter Notebook | labs/12_Regression For Prediction and Data Splitting/12_Regression_solutions.ipynb | jdmarshl/Legal-123-Sp20 | 11de39c916ae1d385b1cc675dee2a984ecb3931d | [
"BSD-3-Clause"
] | 3 | 2021-01-20T19:08:40.000Z | 2022-01-19T18:27:00.000Z | labs/12_Regression For Prediction and Data Splitting/12_Regression_solutions.ipynb | jdmarshl/Legal-123-Sp20 | 11de39c916ae1d385b1cc675dee2a984ecb3931d | [
"BSD-3-Clause"
] | null | null | null | labs/12_Regression For Prediction and Data Splitting/12_Regression_solutions.ipynb | jdmarshl/Legal-123-Sp20 | 11de39c916ae1d385b1cc675dee2a984ecb3931d | [
"BSD-3-Clause"
] | 4 | 2021-03-08T09:54:36.000Z | 2022-02-01T03:44:51.000Z | 110.766389 | 22,348 | 0.850527 | [
[
[
"# [LEGALST-123] Lab 12: Regression for Prediction and Data Splitting",
"_____no_output_____"
],
[
"# Intro to scikit-learn\n\n<img src=\"https://www.cityofberkeley.info/uploadedImages/Public_Works/Level_3_-_Transportation/DSC_0637.JPG\" style=\"width: 500px; height: 275px;\" />\n---\n\n** Regression** is useful for predicting a value that varies on a continuous scale from a bunch of features. This lab will introduce the regression methods available in the scikit-learn extension to scipy, focusing on ordinary least squares linear regression, LASSO, and Ridge regression.\n\n*Estimated Time: 45 minutes*\n\n---\n\n\n### Table of Contents\n\n\n1 - [The Test-Train-Validation Split](#section 1)<br>\n\n2 - [Linear Regression](#section 2)<br>\n\n3 - [LASSO Regression](#section 3)<br>\n\n4 - [Ridge Regression](#section 4)<br>\n\n5 - [Choosing a Model](#section 5)<br>\n\n\n\n**Dependencies:**",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport datetime as dt\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import Ridge, Lasso, LinearRegression\nfrom sklearn.model_selection import KFold",
"_____no_output_____"
]
],
[
[
"## The Data: Bike Sharing",
"_____no_output_____"
],
[
"In your time at Cal, you've probably passed by one of the many bike sharing station around campus. Bike sharing systems have become more and more popular as traffic and concerns about global warming rise. This lab's data describes one such bike sharing system in Washington D.C., from [UC Irvine's Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Bike+Sharing+Dataset).",
"_____no_output_____"
]
],
[
[
"bike = pd.read_csv('data/Bike-Sharing-Dataset/day.csv')\n\n# reformat the date column to integers representing the day of the year, 001-366\nbike['dteday'] = pd.to_datetime(np.array(bike['dteday'])).strftime('%j')\n\n# get rid of the index column\nbike = bike.drop(0)\n\nbike.head(4)",
"_____no_output_____"
]
],
[
[
"Take a moment to get familiar with the data set. In data science, you'll often hear rows referred to as **records** and columns as **features**. Before you continue, make sure you can answer the following:\n\n- How many records are in this data set?\n- What does each record represent?\n- What are the different features?\n- How is each feature represented? What values does it take, and what are the data types of each value?\n\nExplore the dataset and answer these questions.",
"_____no_output_____"
]
],
[
[
"# explore the data set here",
"_____no_output_____"
]
],
[
[
"---\n## 1. The Test-Train-Validation Split <a id='section 1'></a>",
"_____no_output_____"
],
[
"When we train a model on a data set, we run the risk of [**over-fitting**](http://scikit-learn.org/stable/auto_examples/model_selection/plot_underfitting_overfitting.html). Over-fitting happens when a model becomes so complex that it makes very accurate predictions for the data it was trained on, but it can't generalize to make good predictions on new data.\n\nWe can reduce the risk of overfitting by using a **test-train split**. \n\n1. Randomly divide our data set into two smaller sets: one for training and one for testing\n2. Train the data on the training set, changing our model along the way to increase accuracy\n3. Test the data's predictions using the test set.\n\nScikit-learn's [`test_train_split`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function will help here. First, separate your data into two parts: a dataframe containing the features used to make our prediction, and an array of the true values. To start, let's predict the *total number of riders* (y) using *every feature that isn't a rider count* (X).\n\nStandardization is important for Ridge and LASSO because the penalty term is applied uniformly across the features. Having features on different scales unevely penalizes the coefficients. ",
"_____no_output_____"
]
],
[
[
"# the features used to predict riders\nX = bike.drop(['casual', 'registered', 'cnt'], axis=1)\n\n# standardize the features so that they have zero mean and unit variance \nscaler = StandardScaler()\nX = pd.DataFrame(scaler.fit_transform(X.values), columns=X.columns, index=X.index)\n\n# the number of riders\ny = bike['cnt']",
"_____no_output_____"
]
],
[
[
"Next, set the random seed using `np.random.seed(...)`. This will affect the way numpy pseudo-randomly generates the numbers it uses to decide how to split the data into training and test sets. Any seed number is fine- the important thing is to document the number you used in case we need to recreate this pseudorandom split in the future.\n\nThen, call `train_test_split` on your X and y. Also set the parameters `train_size=` and `test_size=` to set aside 80% of the data for training and 20% for testing.",
"_____no_output_____"
]
],
[
[
"# set the random seed\nnp.random.seed(10)\n\n# split the data\n# train_test_split returns 4 values: X_train, X_test, y_train, y_test\n\nX_train, X_test, y_train, y_test = train_test_split(X, y,\n train_size=0.80, test_size=0.20)",
"_____no_output_____"
]
],
[
[
"### The Validation Set\n\nOur test data should only be used once: after our model has been selected, trained, and tweaked. Unfortunately, it's possible that in the process of tweaking our model, we could still overfit it to the training data and only find out when we return a poor test data score. What then?\n\nA **validation set** can help here. By trying your trained models on a validation set, you can (hopefully) weed out models that don't generalize well.\n\nCall `train_test_split` again, this time on your X_train and y_train. We want to set aside 25% of the data to go to our validation set, and keep the remaining 75% for our training set.\n\nNote: This means that out of the original data, 20% is for testing, 20% is for validation, and 60% is for training.",
"_____no_output_____"
]
],
[
[
"# split the data\n# Returns 4 values: X_train, X_validate, y_train, y_validate\n\nX_train, X_validate, y_train, y_validate = train_test_split(X_train, y_train,\n train_size=0.75, test_size=0.25)",
"_____no_output_____"
]
],
[
[
"## 2. Linear Regression (Ordinary Least Squares) <a id='section 2'></a>",
"_____no_output_____"
],
[
"Now, we're ready to start training models and making predictions. We'll start with a **linear regression** model.\n\n[Scikit-learn's linear regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.score) is built around scipy's ordinary least squares, which you used in the last lab. The syntax for each scikit-learn model is very similar:\n1. Create a model by calling its constructor function. For example, `LinearRegression()` makes a linear regression model.\n2. Train the model on your training data by calling `.fit(train_X, train_y)` on the model\n\nCreate a linear regression model in the cell below.",
"_____no_output_____"
]
],
[
[
"# create a model\nlin_reg = LinearRegression()\n\n# fit the model\nlin_model = lin_reg.fit(X_train, y_train)\n",
"_____no_output_____"
]
],
[
[
"With the model fit, you can look at the best-fit slope for each feature using `.coef_`, and you can get the intercept of the regression line with `.intercept_`.",
"_____no_output_____"
]
],
[
[
"print(lin_model.coef_)\nprint(lin_model.intercept_)",
"[ 327.98962368 -651.18963625 567.64760386 754.69960592 304.30001338\n -93.97148786 103.61114174 30.06390558 -251.99701864 -384.56412529\n 1368.52685666 -195.51060318 -191.60780902]\n4487.977904767134\n"
]
],
[
[
"Now, let's get a sense of how good our model is. We can do this by looking at the difference between the predicted values and the actual values, also called the error.\n\nWe can see this graphically using a scatter plot.\n\n- Call `.predict(X)` on your linear regression model, using your training X and training y, to return a list of predicted number of riders per hour. Save it to a variable `lin_pred`.\n- Using a scatter plot (`plt.scatter(...)`), plot the predicted values against the actual values (`y_train`)",
"_____no_output_____"
]
],
[
[
"# predict the number of riders\nlin_pred = lin_model.predict(X_train)\n\n# plot the residuals on a scatter plot\nplt.scatter(y_train, lin_pred)\nplt.title('Linear Model (OLS)')\nplt.xlabel('actual value')\nplt.ylabel('predicted value')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Question: what should our scatter plot look like if our model was 100% accurate?",
"_____no_output_____"
],
[
"**ANSWER:** All points (i.e. errors) would fall on a line with a slope of one: the predicted value would always equal the actual value.",
"_____no_output_____"
],
[
"We can also get a sense of how well our model is doing by calculating the **root mean squared error**. The root mean squared error (RMSE) represents the average difference between the predicted and the actual values.\n\nTo get the RMSE:\n- subtract each predicted value from its corresponding actual value (the errors)\n- square each error (this prevents negative errors from cancelling positive errors)\n- average the squared errors\n- take the square root of the average (this gets the error back in the original units)\n\nWrite a function `rmse` that calculates the mean squared error of a predicted set of values.",
"_____no_output_____"
]
],
[
[
"def rmse(pred, actual):\n return np.sqrt(np.mean((pred - actual) ** 2))",
"_____no_output_____"
]
],
[
[
"Now calculate the mean squared error for your linear model.",
"_____no_output_____"
]
],
[
[
"rmse(lin_pred, y_train)",
"_____no_output_____"
]
],
[
[
"## 3. Ridge Regression <a id='section 3'></a>",
"_____no_output_____"
],
[
"Now that you've gone through the process for OLS linear regression, it's easy to do the same for [**Ridge Regression**](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html). In this case, the constructor function that makes the model is `Ridge()`.",
"_____no_output_____"
]
],
[
[
"# make and fit a Ridge regression model\nridge_reg = Ridge() \nridge_model = ridge_reg.fit(X_train, y_train)",
"_____no_output_____"
],
[
"# use the model to make predictions\nridge_pred = ridge_model.predict(X_train)\n\n# plot the predictions\nplt.scatter(y_train, ridge_pred)\nplt.title('Ridge Model')\nplt.xlabel('actual values')\nplt.ylabel('predicted values')\nplt.show()",
"_____no_output_____"
],
[
"# calculate the rmse for the Ridge model\nrmse(ridge_pred, y_train)",
"_____no_output_____"
]
],
[
[
"Note: the documentation for Ridge regression shows it has lots of **hyperparameters**: values we can choose when the model is made. Now that we've tried it using the defaults, look at the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) and try changing some parameters to see if you can get a lower RMSE (`alpha` might be a good one to try).",
"_____no_output_____"
],
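[
"One simple way to explore `alpha` is a small sweep over candidate values, keeping whichever gives the lowest error (ideally judged on validation data rather than the training set). A sketch, assuming the variables defined above are in scope; the candidate values are arbitrary:\n\n```python\nfor alpha in [0.01, 0.1, 1, 10, 100]:\n    model = Ridge(alpha=alpha).fit(X_train, y_train)\n    print(alpha, rmse(model.predict(X_train), y_train))\n```",
"_____no_output_____"
],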
[
"## 4. LASSO Regression <a id='section 4'></a>",
"_____no_output_____"
],
[
"Finally, we'll try using [LASSO regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html). The constructor function to make the model is `Lasso()`. \n\nYou may get a warning message saying the objective did not converge. The model will still work, but to get convergence try increasing the number of iterations (`max_iter=`) when you construct the model.\n",
"_____no_output_____"
]
],
[
[
"# create and fit the model\nlasso_reg = Lasso(max_iter=10000) \n\nlasso_model = lasso_reg.fit(X_train, y_train)",
"_____no_output_____"
],
[
"# use the model to make predictions\nlasso_pred = lasso_model.predict(X_train)\n\n# plot the predictions\nplt.scatter(y_train, lasso_pred)\nplt.title('LASSO Model')\nplt.xlabel('actual values')\nplt.ylabel('predicted values')\nplt.show()",
"_____no_output_____"
],
[
"# calculate the rmse for the LASSO model\nrmse(lasso_pred, y_train)",
"_____no_output_____"
]
],
[
[
"Note: LASSO regression also has many tweakable hyperparameters. See how changing them affects the accuracy!\n\nQuestion: How do these three models compare on performance? What sorts of things could we do to improve performance?",
"_____no_output_____"
],
[
"**ANSWER:** All three models have very similar accuracy, around 900 RMSE for each.\n\nWe could try changing which features we use or adjust the hyperparameters.",
"_____no_output_____"
],
[
"---\n## 5. Choosing a model <a id='section 5'></a>\n### Validation",
"_____no_output_____"
],
[
"Once you've tweaked your models' hyperparameters to get the best possible accuracy on your training sets, we can compare your models on your validation set. Make predictions on `X_validate` with each one of your models, then calculate the RMSE for each set of predictions.",
"_____no_output_____"
]
],
[
[
"# make predictions for each model\nlin_vpred = lin_model.predict(X_validate)\nridge_vpred = ridge_model.predict(X_validate)\nlasso_vpred = lasso_model.predict(X_validate)",
"_____no_output_____"
],
[
"# calculate RMSE for each set of validation predictions\nprint(\"linear model rmse: \", rmse(lin_vpred, y_validate))\nprint(\"Ridge rmse: \", rmse(ridge_vpred, y_validate))\nprint(\"LASSO rmse: \", rmse(lasso_vpred, y_validate))",
"linear model rmse: 849.43818117197\nRidge rmse: 852.6259258782582\nLASSO rmse: 853.7166945845377\n"
]
],
[
[
"How do the RMSEs for the validation data compare to those for the training data? Why?\n\nDid the model that performed best on the training set also do best on the validation set?",
"_____no_output_____"
],
[
"**YOUR ANSWER:** The RMSE for the validation set tends to be larger than for the training set, simply because the models were fit to the training data.",
"_____no_output_____"
],
[
"### Predicting the Test Set",
"_____no_output_____"
],
[
"Finally, select one final model to make predictions for your test set. This is often the model that performed best on the validation data.",
"_____no_output_____"
]
],
[
[
"# make predictions for the test set using one model of your choice\nfinal_pred = lin_model.predict(X_test)\n# calculate the rmse for the final predictions\nprint('Test set rmse: ', rmse(final_pred, y_test))",
"Test set rmse: 891.3006893945142\n"
]
],
[
[
"Coming up this semester: how to select your models, model parameters, and features to get the best performance.",
"_____no_output_____"
],
[
"---\nNotebook developed by: Keeley Takimoto\n\nData Science Modules: http://data.berkeley.edu/education/modules\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
ec9422746a27dd42ec004f0b56297270760901f2 | 4,334 | ipynb | Jupyter Notebook | examples/tutorial/1.0_usage.ipynb | brightgems/spacy-ann-linker | 4b51fe5d47c9ce214480bd48d2c0836e534f3c62 | [
"MIT"
] | 1 | 2022-02-07T07:44:12.000Z | 2022-02-07T07:44:12.000Z | examples/tutorial/1.0_usage.ipynb | brightgems/spacy-ann-linker | 4b51fe5d47c9ce214480bd48d2c0836e534f3c62 | [
"MIT"
] | null | null | null | examples/tutorial/1.0_usage.ipynb | brightgems/spacy-ann-linker | 4b51fe5d47c9ce214480bd48d2c0836e534f3c62 | [
"MIT"
] | null | null | null | 22.572917 | 130 | 0.52815 | [
[
[
"import spacy\nfrom spacy_ann import AnnLinker\n\n# Load the spaCy model from the output_dir you used from the create_index command\nmodel_dir = \"models/ann_linker/\"\nnlp = spacy.load(model_dir)\n\n# The NER component of the en_core_web_md model doesn't actually recognize the aliases as entities\n# so we'll add a spaCy EntityRuler component for now to extract them.\nruler=nlp.add_pipe('entity_ruler', before=\"ann_linker\")\npatterns = [{\"label\": \"SKILL\", \"pattern\": alias} for alias in nlp.get_pipe('ann_linker').kb.get_alias_strings()]+\\\n [{'label': 'SKILL', 'pattern': 'AI2'}]\nruler.add_patterns(patterns)",
"_____no_output_____"
],
[
"doc = nlp(\"NLP is a highly researched subset of AI2 learn.\")\n[(e.text, e.label_, e.kb_id_) for e in doc.ents]",
"_____no_output_____"
],
[
"import srsly\nimport numpy as np\nentities = list(srsly.read_jsonl('data/entities.jsonl'))\nnatl_doc = nlp.make_doc(entities[2]['description'])\nneur_doc = nlp.make_doc(entities[3]['description']) ",
"_____no_output_____"
],
[
"entity_encodings = np.asarray([natl_doc.vector, neur_doc.vector])\nentity_norm = np.linalg.norm(entity_encodings, axis=1)\nentity_norm",
"_____no_output_____"
],
[
"sims = np.dot(entity_encodings, doc.vector.T) / (doc.vector_norm * entity_norm)\nsims.argmax()",
"_____no_output_____"
],
[
"patterns = [\n {\"label\": \"SKILL\", \"pattern\": alias}\n for alias in nlp.get_pipe('ann_linker').kb.get_alias_strings()\n]",
"_____no_output_____"
],
[
"print([(e.text, e.label_, e.kb_id_) for e in doc.ents])",
"[('NLP', 'ORG', 'a3'), ('AI2', 'SKILL', '')]\n"
],
[
"nlp(\"More text about nlpe\")",
"_____no_output_____"
],
[
"ent = list(doc.ents)[0]",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec94480968ad13eac514350d490a74aaaae63a02 | 18,861 | ipynb | Jupyter Notebook | Training.ipynb | MasazI/SageMaker-MLWorkflow-XGBoost-sdkv2 | 68c2e883793881aad743bd7ce9a538182141b41a | [
"MIT"
] | null | null | null | Training.ipynb | MasazI/SageMaker-MLWorkflow-XGBoost-sdkv2 | 68c2e883793881aad743bd7ce9a538182141b41a | [
"MIT"
] | null | null | null | Training.ipynb | MasazI/SageMaker-MLWorkflow-XGBoost-sdkv2 | 68c2e883793881aad743bd7ce9a538182141b41a | [
"MIT"
] | null | null | null | 27.859675 | 345 | 0.583479 | [
[
[
"### Training\nそれでは学習を始めましょう。まず、XGBoost のコンテナの場所を取得します。コンテナ自体は SageMaker 側で用意されているので、場所を指定すれば利用可能です。\n\nまずは、データの前処理でS3に保存したファイルのパスを取得します。",
"_____no_output_____"
]
],
[
[
"!aws s3 ls s3://###sagemaker default bucket###/xgboost-churn-stepfunctions/xgboost-churn",
"_____no_output_____"
],
[
"import sagemaker\nfrom sagemaker import get_execution_role\nrole = get_execution_role()\nsess = sagemaker.Session()\nbucket = sess.default_bucket()\nprefix = 'xgboost-churn-stepfunctions/xgboost-churn'\nsagemaker.__version__",
"_____no_output_____"
]
],
[
[
"上記セルを実行して、SageMaker Python SDK Version が 1.xx.x の場合、以下のセルのコメントアウトを解除してから実行してください。実行が完了したら、上にあるメニューから [Kernel] -> [Restart kernel] を選択してカーネルを再起動してください。\n\n再起動が完了したら、このノートブックの一番上のセルから再度実行してください。その場合、以下のセルを実行する必要はありません。",
"_____no_output_____"
]
],
[
[
"# !pip install -U --quiet \"sagemaker==2.16.1\"",
"_____no_output_____"
],
[
"# 前処理データをダウンロード\n# s3 = boto3.resource('s3')\n# s3.Bucket(bucket).download_file('{}/{}'.format(prefix, 'train.csv'), 'train.csv')\n# s3.Bucket(bucket).download_file('{}/{}'.format(prefix, 'validation.csv'), 'validation.csv')\n# s3.Bucket(bucket).download_file('{}/{}'.format(prefix, 'test.csv'), 'test.csv')",
"_____no_output_____"
]
],
[
[
"開発時に学習で利用する場所にデータをアップロードします。",
"_____no_output_____"
]
],
[
[
"# 学習用データとしてアップロード\ninput_train = sess.upload_data(path='train.csv', key_prefix='xgboost-churn-stepfunctions/xgboost-churn-input')\ninput_validation = sess.upload_data(path='validation.csv', key_prefix='xgboost-churn-stepfunctions/xgboost-churn-input')\ninput_test = sess.upload_data(path='validation.csv', key_prefix='xgboost-churn-stepfunctions/xgboost-churn-input')",
"_____no_output_____"
],
[
"input_train",
"_____no_output_____"
],
[
"# from sagemaker.session import s3_input\nfrom sagemaker.inputs import TrainingInput\n\ninput_train_prefix = 's3://{}/{}/train'.format(bucket, 'xgboost-churn-stepfunctions/xgboost-churn-input')\ninput_validation_prefix = 's3://{}/{}/validation'.format(bucket, 'xgboost-churn-stepfunctions/xgboost-churn-input')\n\ncontent_type='text/csv'\ns3_input_train = TrainingInput(input_train_prefix, content_type=content_type)\ns3_input_validation = TrainingInput(input_validation_prefix, content_type=content_type)",
"_____no_output_____"
]
],
[
[
"### 学習の実行",
"_____no_output_____"
]
],
[
[
"import boto3\ncontainer = sagemaker.image_uris.retrieve(\"xgboost\", boto3.Session().region_name, \"1.2-1\")",
"_____no_output_____"
],
[
"container",
"_____no_output_____"
]
],
[
[
"学習のためにハイパーパラメータを指定したり、学習のインスタンスの数やタイプを指定することができます。XGBoost における主要なハイパーパラメータは以下のとおりです。\n\n- max_depth アルゴリズムが構築する木の深さをコントロールします。深い木はより学習データに適合しますが、計算も多く必要で、overfiting になる可能性があります。たくさんの浅い木を利用するか、少数の深い木を利用するか、モデルの性能という面ではトレードオフがあります。\n- subsample 学習データのサンプリングをコントロールします。これは overfitting のリスクを減らしますが、小さすぎるとモデルのデータが不足してしまいます。\n- num_round ブースティングを行う回数をコントロールします。以前のイテレーションで学習したときの残差を、以降のモデルにどこまで利用するかどうかを決定します。多くの回数を指定すると学習データに適合しますが、計算も多く必要で、overfiting になる可能性があります。\n- eta 各ブースティングの影響の大きさを表します。大きい値は保守的なブースティングを行います。\n- gamma ツリーの成長の度合いをコントロールします。大きい値はより保守的なモデルを生成します。\n\nXGBoostのhyperparameterに関する詳細は github もチェックしてください。",
"_____no_output_____"
]
],
[
[
"hyperparameters = {\"max_depth\":\"5\",\n \"eta\":\"0.2\",\n \"gamma\":\"4\",\n \"min_child_weight\":\"6\",\n \"subsample\":\"0.8\",\n \"objective\":\"binary:logistic\",\n \"num_round\":\"100\"}\n\nxgb = sagemaker.estimator.Estimator(container,\n role, \n hyperparameters=hyperparameters,\n instance_count=1, \n instance_type='ml.m4.xlarge',\n sagemaker_session=sess\n )",
"_____no_output_____"
],
[
"xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})",
"_____no_output_____"
]
],
[
[
"### Evaluation",
"_____no_output_____"
],
[
"#### SageMaker Endpointを利用して評価",
"_____no_output_____"
]
],
[
[
"xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')",
"_____no_output_____"
]
],
[
[
"現在、エンドポイントをホストしている状態で、これを利用して簡単に予測を行うことができます。予測は http の POST の request を送るだけです。 ここではデータを numpy の array の形式で送って、予測を得られるようにしたいと思います。しかし、endpoint は numpy の array を受け取ることはできません。\n\nこのために、csv_serializer を利用して、csv 形式に変換して送ることができます。",
"_____no_output_____"
]
],
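[
    [
        "To make the point that a prediction is just an HTTP POST more concrete, here is a minimal sketch of calling the same hosted endpoint through the low-level boto3 runtime client instead of the SDK predictor. It assumes the endpoint deployed above is still running and that test.csv keeps the label in its first column, as in the cells below.\n\n```python\nimport boto3\nimport pandas as pd\n\nruntime = boto3.client('sagemaker-runtime')\n\n# one comma-separated feature row from the test set (label column dropped)\nrow = pd.read_csv('test.csv').values[0, 1:]\npayload = ','.join(str(v) for v in row)\n\nresponse = runtime.invoke_endpoint(\n    EndpointName=xgb_predictor.endpoint_name,  # the endpoint deployed above\n    ContentType='text/csv',\n    Body=payload)\n\n# the XGBoost container returns the churn probability as a CSV string\nprint(response['Body'].read().decode('utf-8'))\n```",
        "_____no_output_____"
    ]
],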
[
[
"xgb_predictor.serializer = sagemaker.serializers.CSVSerializer()",
"_____no_output_____"
],
[
"import pandas as pd\nimport numpy as np\n\ndef predict(data, rows=500):\n split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))\n predictions = ''\n for array in split_array:\n predictions = ','.join([predictions, xgb_predictor.predict(array).decode('utf-8')])\n\n return np.fromstring(predictions[1:], sep=',')\n\ntest_data = pd.read_csv('test.csv')\ndtest = test_data.values\npredictions = []\npredictions.append(predict(dtest[:, 1:]))\npredictions = np.array(predictions).squeeze()",
"_____no_output_____"
],
[
"predictions",
"_____no_output_____"
]
],
[
[
"機械学習の性能を比較評価する方法はいくつかありますが、単純に、予測値と実際の値を比較しましょう。今回は、顧客が離反する 1 と離反しない 0 を予測しますので、この混同行列を作成します。",
"_____no_output_____"
]
],
[
[
"pd.crosstab(index=test_data.iloc[:, 0], columns=np.round(predictions), rownames=['actual'], colnames=['predictions'])",
"_____no_output_____"
]
],
[
[
"※ 注意点, アルゴリズムにはランダムな要素があるので結果は必ずしも一致しません.\n\n48人の離反者がいて、それらの39名 (true positives) を正しく予測できました。そして、4名の顧客は離反すると予測しましたが、離反していません (false positives)。9名の顧客は離反しないと予測したにもかかわらず離反してしまいました (false negatives)。\n\n重要な点として、離反するかどうかを np.round() という関数で、しきい値0.5で判断しています。xgboost が出力する値は0から1までの連続値で、それらを離反する 1 と 離反しない 0 に分類します。しかし、その連続値 (離反する確率) が示すよりも、顧客の離反というのは損害の大きい問題です。つまり離反する確率が低い顧客も、しきい値を0.5から下げて、離反するとみなす必要があるかもしれません。もちろんこては、false positives (離反すると予測したけど離反しなかった)を増やすと思いますが、 true positives (離反すると予測して離反した) を増やし、false negatives (離反しないと予測して離反した)を減らせます。\n\n直感的な理解のため、予測結果の連続値をみてみましょう。",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nplt.hist(predictions)\nplt.show()",
"_____no_output_____"
]
],
[
[
"連続値は0から1まで歪んでいますが、0.1から0.9までの間で、しきい値を調整するにはちょうど良さそうです。",
"_____no_output_____"
],
[
"\n例えば、しきい値を0.5から0.3まで減らしてみたとき、true positives は 1 つ、false positives は 3 つ増え、false negatives は 1 つ減りました。全体からみると小さな値ですが、全体の6-10%の顧客が、しきい値の変更で、予測結果が変わりました。ここで5名にインセンティブを与えることによって、インセンティブのコストが掛かりますが、3名の顧客を引き止めることができるかもしれません。 つまり、最適な閾値を決めることは、実世界の問題を機械学習で解く上で重要なのです。これについてもう少し広く議論し、仮説的なソリューションを考えたいと思います。",
"_____no_output_____"
],
[
"#### 推論エラーのコストをビジネスモデルから定義",
"_____no_output_____"
],
[
"2値分類の問題においては、しきい値に注意しなければならないという、似たような状況に直面することが多いです。それ自体は問題ではありません。もし、出力の連続値が2クラスで完全に別れていれば、MLを使うことなく単純なルールで解くことができると考えられます。\n\n重要なこととして、MLモデルを正版環境に導入する際、モデルが false positives と false negatives に誤って入れたときのコストがあげられます。しきい値の選択は4つの指標に影響を与えます。4つの指標に対して、ビジネス上の相対的なコストを考える必要があるでしょう。\n\n",
"_____no_output_____"
],
[
"携帯電話会社の離反の問題において、コストとはなんでしょうか?コストはビジネスでとるべきアクションに結びついています。いくつかの仮定をおいてみましょう。\n\nまず、true negatives のコストとして 0USD を割り当てます。満足しているお客様を正しく認識できていれば何も実施しません。\n\nfalse negatives が一番問題で、なぜなら、離反していく顧客を正しく予測できないからです。顧客を失えば、再獲得するまでに多くのコストを払う必要もあり、例えば逸失利益、広告コスト、管理コスト、販売管理コスト、電話の購入補助金などがあります。インターネットを簡単に検索してみると、そのようなコストは数百ドルとも言われ、ここでは 500USD としましょう。これが false negatives に対するコストです。\n\n最後に、離反していくと予測された顧客に 100USD のインセンティブを与えることを考えましょう。 携帯電話会社がそういったインセンティブを提供するなら、2回くらいは離反の前に考え直すかもしれません。これは true positive と false negative のコストになります。false positives の場合 (顧客は満足していて、モデルが誤って離反しそうと予測した場合)、 100USD のインセンティブは捨てることになります。その 100USD を効率よく消費してしまうかもしれませんが、優良顧客へのロイヤリティを増やすという意味では悪くないかもしれません。",
"_____no_output_____"
],
[
"#### コストの計算式",
"_____no_output_____"
],
[
"alse negatives が false positives よりもコストが高いことは説明しました。そこで、顧客の数ではなく、コストを最小化するように、しきい値を最適化することを考えましょう。コストの関数は以下のようなものになります。\n\ntxt\n500USD * FN(C) + 0USD * TN(C) + 100USD * FP(C) + 100USD * TP(C)\nFN(C) は false negative の割合で、しきい値Cの関数です。同様にTN, FP, TP も用意します。この関数の値が最小となるようなしきい値Cを探します。 最も単純な方法は、候補となる閾値で何度もシミュレーションをすることです。以下では100個の値に対してループで計算を行います。",
"_____no_output_____"
]
],
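[
    [
        "As a quick sketch of the threshold discussion above, the confusion matrix can be re-computed at a 0.3 cutoff instead of the implicit 0.5 used by np.round, reusing the test_data and predictions objects created earlier:\n\n```python\n# same crosstab as before, but with a 0.3 decision threshold instead of 0.5\npd.crosstab(index=test_data.iloc[:, 0],\n            columns=np.where(predictions > 0.3, 1, 0),\n            rownames=['actual'], colnames=['predictions'])\n```",
        "_____no_output_____"
    ]
],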
[
[
"cutoffs = np.arange(0.01, 1, 0.01)\ncosts = []\n\nfor c in cutoffs:\n _predictions = pd.Categorical(np.where(predictions > c, 1, 0), categories=[0, 1])\n matrix_a = np.array([[0, 100], [500, 100]])\n matrix_b = pd.crosstab(index=test_data.iloc[:, 0], columns=_predictions, dropna=False)\n costs.append(np.sum(np.sum(matrix_a * matrix_b)))\n\ncosts = np.array(costs)\nplt.plot(cutoffs, costs)\nplt.show()\nprint('Cost is minimized near a cutoff of:', cutoffs[np.argmin(costs)], 'for a cost of:', np.min(costs))",
"_____no_output_____"
]
],
[
[
"#### エンドポイント の削除",
"_____no_output_____"
]
],
[
[
"xgb_predictor.delete_endpoint()",
"_____no_output_____"
]
],
[
[
"#### マニュアルで評価",
"_____no_output_____"
]
],
[
[
"!pip install xgboost",
"_____no_output_____"
],
[
"model_path = xgb.model_data\nprint(model_path)",
"_____no_output_____"
],
[
"sagemaker.s3.S3Downloader.download(model_path, './')",
"_____no_output_____"
],
[
"!tar xvzf model.tar.gz",
"_____no_output_____"
],
[
"import pickle\nmodel = pickle.load(open('xgboost-model', 'rb'))",
"_____no_output_____"
],
[
"import xgboost\nxgboost.plot_importance(model)",
"_____no_output_____"
],
[
"test_dm = xgboost.DMatrix(test_data.values[:, 1:])\npredictions_xgb = model.predict(test_dm)",
"_____no_output_____"
],
[
"pd.crosstab(index=test_data.iloc[:, 0], columns=np.round(predictions_xgb), rownames=['actual'], colnames=['predictions'])",
"_____no_output_____"
],
[
"from sklearn import metrics\nmetrics.accuracy_score(test_data.iloc[:, 0].values, np.round(predictions_xgb))",
"_____no_output_____"
]
],
[
[
"#### evaluationスクリプトの作成",
"_____no_output_____"
],
[
"```python\n%%writefile evaluation.py\nimport os\nimport tarfile\nimport pickle\nimport numpy as np\nimport xgboost\nimport pandas as pd\nfrom sklearn import metrics\n\nif __name__ == \"__main__\":\n model_path = os.path.join(\"/opt/ml/processing/model\", \"model.tar.gz\")\n print(\"Extracting model from path: {}\".format(model_path))\n with tarfile.open(model_path) as tar:\n tar.extractall(path=\".\")\n print(\"Loading model\")\n model = pickle.load(open('xgboost-model', 'rb'))\n\n print(\"Loading test input data\")\n test_data = pd.read_csv('test.csv')\n dtest = test_data.values\n \n print(\"Evaluating\")\n test_dm = xgboost.DMatrix(test_data.values[:, 1:])\n predictions_xgb = model.predict(test_dm)\n \n score = metrics.accuracy_score(test_data.iloc[:, 0].values, np.round(predictions_xgb))\n print(score)\n```",
"_____no_output_____"
],
[
"### Evalutaion jobの作成",
"_____no_output_____"
],
[
"#### processing docker imageの作成",
"_____no_output_____"
]
],
[
[
"import boto3\n# boto3の機能を使ってリポジトリ名に必要な情報を取得する\naccount_id = boto3.client('sts').get_caller_identity().get('Account')\nregion = boto3.session.Session().region_name\nprint(region)\nprint(account_id)\necr_repository = 'xgboost-churn-evaluation'\ntag = ':latest'\nrepository_uri = '{}.dkr.ecr.{}.amazonaws.com/{}'.format(account_id, region, ecr_repository + tag)\n\n!$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email)\n# リポジトリの作成\n# すでにある場合はこのコマンドは必要ありません。\n!aws ecr create-repository --repository-name $ecr_repository",
"_____no_output_____"
],
[
"!docker build -f Dockerfile-evaluation -t xgboost-churn-evaluation .",
"_____no_output_____"
],
[
"# docker imageをecrにpush\n!docker tag {ecr_repository + tag} $repository_uri\n!docker push $repository_uri",
"_____no_output_____"
]
],
[
[
"#### local からprocessingを実行",
"_____no_output_____"
]
],
[
[
"from sagemaker import get_execution_role\nfrom sagemaker.processing import ScriptProcessor, ProcessingInput, ProcessingOutput\nrole = get_execution_role()\n\nscript_processor = ScriptProcessor(\n image_uri='%s.dkr.ecr.ap-northeast-1.amazonaws.com/%s:latest' % (account_id, ecr_repository),\n role=role,\n command=['python3'],\n instance_count=1,\n instance_type='ml.m5.xlarge')",
"_____no_output_____"
],
[
"model_data_s3_uri = model_path\n\nscript_processor.run(code='evaluation.py',\n inputs=[ProcessingInput(\n source='test.csv',\n destination='/opt/ml/processing/input',\n input_name='input-1'),\n ProcessingInput(\n source=model_data_s3_uri,\n destination='/opt/ml/processing/model',\n input_name='input-2')],\n outputs=[ProcessingOutput(\n source=\"/opt/ml/processing/evaluation\",\n output_name=\"evaluation\",\n )],\n)",
"_____no_output_____"
]
],
[
[
"ここまでで、SageMaker環境でコンテナを活用した前処理、学習、検証が実行できました。",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
ec944eb211c6977b47f63b2a83f282d9f4a20287 | 10,243 | ipynb | Jupyter Notebook | Modulo3/Ejercicios/Problemas Diversos.ipynb | EddieRodriguezRojas/CURSOPYTHOND | 7293551a3c3d8dd8fff2f1bac36d70840ea7c1ad | [
"Apache-2.0"
] | null | null | null | Modulo3/Ejercicios/Problemas Diversos.ipynb | EddieRodriguezRojas/CURSOPYTHOND | 7293551a3c3d8dd8fff2f1bac36d70840ea7c1ad | [
"Apache-2.0"
] | null | null | null | Modulo3/Ejercicios/Problemas Diversos.ipynb | EddieRodriguezRojas/CURSOPYTHOND | 7293551a3c3d8dd8fff2f1bac36d70840ea7c1ad | [
"Apache-2.0"
] | null | null | null | 30.945619 | 235 | 0.492336 | [
[
[
"# PROBLEMAS DIVERSOS",
"_____no_output_____"
],
[
"<h3>1.</h3>\nRealizar una función que permita la carga de n alumnos. Por cada alumno se deberá preguntar el nombre completo y permitir el ingreso de 3 notas. Las notas deben estar comprendidas entre 0 y 10. Devolver el listado de alumnos.",
"_____no_output_____"
]
],
[
[
"import re #Librería que utilizaré para realizar una búsqueda en el ejercicio 4\n\nlistado_alumnos=[]\n\nclass Alumno:\n #La clase alumno solo tendrá el atributo num que indicará la cantidad de alumnos que se cargarán\n def __init__(self, num):\n self.num = num\n \n # 1. Función para cargar alumnos en una lista de acuerdo a la cantidad de alumnos establecido como valor en el constructor\n def cargaralumnos(self, listado_alumnos):\n notas = []\n for i in range(self.num):\n nombre=input(f\"Ingrese el nombre completo del alumno {i+1}: \")\n for n in range(3):\n while True:\n try:\n nota = float(input(f\"Ingrese la nota {n+1} del alumno: \"))\n if nota >= 0 and nota <= 10:\n notas.append(nota)\n break\n else:\n print(\"La nota debe estar comprendida entre 0 y 10\")\n except:\n print(\"Ingrese un número valido\")\n alumno = {'nombre' : nombre, 'notas' : [notas[0], notas[1], notas[2]]}\n notas.clear()\n listado_alumnos.append(alumno)\n \n # 2. Función para contar cuántos alumnos están aprobados y desaprobados. Adicionalmente, almaceno el promedio de notas\n # y el estado del alumno en el curso, el cual puede ser aprobado o desaprobado\n def evaluar(self, listado_alumnos):\n aprobados = 0\n desaprobados = 0\n for alumno in listado_alumnos:\n promedio = 0\n for notas in alumno['notas']:\n promedio = promedio + notas\n promedio = promedio/3\n alumno['promedio'] = promedio\n if promedio >= 4:\n alumno['estado'] = 'Aprobado'\n aprobados = aprobados + 1\n else:\n alumno['estado'] = 'Desaprobado'\n desaprobados = desaprobados + 1\n print(f\"La cantidad de alumnos aprobados es de {aprobados}\")\n print(f\"La cantidad de alumnos desaprobados es de {desaprobados}\")\n \n # 3. Función para mostrar el promedio total del curso\n def promedio_curso(self, listado_alumnos):\n promedio = 0\n for alumno in listado_alumnos:\n for notas in alumno['notas']:\n promedio = promedio + notas\n promedio = promedio / (3 * self.num)\n print(f\"El promedio total del curso es de: {promedio}\")\n \n # 4. Función para definir quién es el alumno con el promedio más alto y más bajo\n def promedio_alto_bajo (self, listado_alumnos):\n alto = 0\n bajo = 10\n alumno_alto = ''\n alumno_bajo = ''\n \n for alumno in listado_alumnos:\n promedio = 0\n for notas in alumno['notas']:\n promedio = promedio + notas\n promedio = promedio/3\n \n if promedio > alto:\n alto = promedio\n alumno_alto = alumno['nombre']\n if promedio < bajo:\n bajo = promedio\n alumno_bajo = alumno['nombre']\n \n print(f\"El alumno con el promedio más alto es: {alumno_alto}\")\n print(f\"El alumno con el promedio más bajo es: {alumno_bajo}\")\n \n # 5. Buscar datos de alumnos por nombre parcial o completo\n def buscar(self, listado_alumnos, nombre):\n for alumno in listado_alumnos:\n if re.search(nombre, alumno['nombre']):\n print(f\"Nombre: {alumno['nombre']}\")\n i = 1\n for nota in alumno['notas']:\n print(f\" nota {i}: {nota}\")\n i = i + 1\n print(f\" Promedio: {alumno['promedio']}\")\n print(f\" Estado: {alumno['estado']}\")",
"_____no_output_____"
],
[
"#Se solicita la cantidad de alumnos\nwhile True:\n try:\n num= int(input(\"Ingrese el número de alumnos: \"))\n if num <= 0:\n print(\"Por favor, ingrese un número mayor a 0\")\n else:\n alumno = Alumno(num)\n break\n except:\n print(\"Por favor, ingrese un número válido de alumnos\")",
"Ingrese el número de alumnos: 3\n"
],
[
"alumno.cargaralumnos(listado_alumnos)",
"Ingrese el nombre completo del alumno 1: Eddie\nIngrese la nota 1 del alumno: 10\nIngrese la nota 2 del alumno: 9\nIngrese la nota 3 del alumno: 8\nIngrese el nombre completo del alumno 2: Edgard\nIngrese la nota 1 del alumno: 7\nIngrese la nota 2 del alumno: 6\nIngrese la nota 3 del alumno: 5\nIngrese el nombre completo del alumno 3: Raúl\nIngrese la nota 1 del alumno: 4\nIngrese la nota 2 del alumno: 3\nIngrese la nota 3 del alumno: 2\n"
],
[
"listado_alumnos",
"_____no_output_____"
]
],
[
[
"### 2.\nDefinir una función que dado un listado de alumnos evalúe cuántos aprobaron y cuántos desaprobaron, teniendo en cuenta que se aprueba con 4. La nota será el promedio de las 3 notas para cada alumno.",
"_____no_output_____"
]
],
[
[
"alumno.evaluar(listado_alumnos)",
"La cantidad de alumnos aprobados es de 2\nLa cantidad de alumnos desaprobados es de 1\n"
]
],
[
[
"### 3.\nInformar el promedio de nota del curso total.",
"_____no_output_____"
]
],
[
[
"alumno.promedio_curso(listado_alumnos)",
"El promedio total del curso es de: 6.0\n"
]
],
[
[
"### 4.\nRealizar una función que indique quién tuvo el promedio más alto y quién tuvo la nota promedio más baja.",
"_____no_output_____"
]
],
[
[
"alumno.promedio_alto_bajo(listado_alumnos)",
"El alumno con el promedio más alto es: Eddie\nEl alumno con el promedio más bajo es: Raúl\n"
]
],
[
[
"### 5.\nRealizar una función que permita buscar un alumno por nombre, siendo el nombre completo o parcial, y devuelva una lista con los n alumnos que concuerden con ese nombre junto con todos sus datos, incluido el promedio de sus notas.",
"_____no_output_____"
]
],
[
[
"alumno.buscar(listado_alumnos, \"Ed\")",
"Nombre: Eddie\n nota 1: 10.0\n nota 2: 9.0\n nota 3: 8.0\n Promedio: 9.0\n Estado: Aprobado\nNombre: Edgard\n nota 1: 7.0\n nota 2: 6.0\n nota 3: 5.0\n Promedio: 6.0\n Estado: Aprobado\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec945292c36b925c473912369d0fc3aaaebddee6 | 32,713 | ipynb | Jupyter Notebook | visualization_2.ipynb | pancetta/python-hpc-performance | fc4c0fcd87d5a0fde78a0d6f284d1c89a31fbb03 | [
"BSD-2-Clause"
] | 1 | 2020-10-29T06:04:43.000Z | 2020-10-29T06:04:43.000Z | visualization_2.ipynb | pancetta/python-hpc-performance | fc4c0fcd87d5a0fde78a0d6f284d1c89a31fbb03 | [
"BSD-2-Clause"
] | null | null | null | visualization_2.ipynb | pancetta/python-hpc-performance | fc4c0fcd87d5a0fde78a0d6f284d1c89a31fbb03 | [
"BSD-2-Clause"
] | null | null | null | 265.95935 | 15,719 | 0.928958 | [
[
[
"This is a first visualization test!",
"_____no_output_____"
]
],
[
[
"import glob\nimport pandas as pd",
"_____no_output_____"
],
[
"result_files = glob.glob('data/' + 'results*.json')",
"_____no_output_____"
],
[
"for file in result_files:\n sysname = file.split('_')[-1].split('.')[0]\n\n df = pd.read_json(file)\n\n # for name in df['name'].unique():\n # print(name, df[df['name'].eq(name)]['MPI_size'].unique())\n\n df = df[df['name'].isin(['mpi_broadcast']) & df['params_n'].isin([10000])]\n\n sizes = df['MPI_size'].unique()\n\n ax = None\n for i, size in enumerate(sizes):\n df_extract = df[df['MPI_size'].eq(size)]\n ax = df_extract.plot(ax=ax, logy=True, title=f'mpi_broadcast @ {sysname}', label=f'MPI_Size: {size}', ylabel='Time (sec.)', legend=True, x='MPI_rank', y='mean_duration', yerr=[df_extract['mean_duration']- df_extract['min_duration'], df_extract['max_duration'] - df_extract['mean_duration']], capsize=4, fmt='o', markersize=8)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ec9454de4b982918424ff90fc12dc478a47a8eea | 139,590 | ipynb | Jupyter Notebook | time_estimates.ipynb | agostontorok/TaskCompletionTimeEstimation | 55a82fbd3926d995031af1b8090f44f31b4c327e | [
"MIT"
] | null | null | null | time_estimates.ipynb | agostontorok/TaskCompletionTimeEstimation | 55a82fbd3926d995031af1b8090f44f31b4c327e | [
"MIT"
] | null | null | null | time_estimates.ipynb | agostontorok/TaskCompletionTimeEstimation | 55a82fbd3926d995031af1b8090f44f31b4c327e | [
"MIT"
] | null | null | null | 742.5 | 126,316 | 0.938885 | [
[
[
"# Why overdue tasks take still a long time to finish?\n\nThe prototypical situation that has been puzzling me for a while is the following: \n_- We estimated 3 days for a task \n- Bob has been working on the task for 3 days already \n- On the daily scrum, he says that he just needs to take care of a couple more things and the task will be finished \n- The task is not finished the next day_\n\nSometimes such tasks are not finished even after 5 days of work. Since they are typically “more research-less development” tasks, many people suggest accepting this kind of uncertainty as inherent to research. While I agree to a point, I don’t want to give up predictability so easily. A year ago, Erik Bernhardsson shared a [fascinating blog post on time estimations](https://erikbern.com/2019/04/15/why-software-projects-take-longer-than-you-think-a-statistical-model.html), concluding that the reason why development tasks take longer than expected is that the blowup factor (actual time/expected time) follows a lognormal distribution and our estimates are accurate for the Median but not so much for the Mean task completion time. I really liked his post and re-read it several times last year because it finally helped me to understand why is it so difficult to play scrum poker. I’m taking his model as the basis for my work here, to see if we need to finetune our estimate for task completion.\n\n#### Estimates are smaller than expected finishing times\n\nWhen in the above example, Bob was 3 days into the task, it was intuitive to believe that he was really about to finish it. After all, the estimated time need for the task was 3 days. However, statistically speaking, our estimate of 3 days was an estimate for the Median of the underlying distribution. The Median of the distribution does not say too much about the scale of the values, it just says plain and simple that 50% of the time the task would be completed faster, and 50% of the time it would take longer than 3 days. A problem with the Median is that it is **not** sensitive to the uncertainty in the task. However, when you want to calculate the development time of a feature, you do want to take into account the uncertainty. Therefore, it is better to calculate the expected value for the task completion time for which we have pretty much all the information we need. Let’s see what this expected value would look like.\n\nThe expected value for the lognormal distribution is:\n\n$$ \\operatorname{E} = \\exp(\\mu + \\frac{\\sigma^2}{2}) $$\n\nTo calculate this, we need to know two parameters μ and σ. Luckily, we have an unbiased estimate for the Median, and since that is:\n\n$$ Median = \\exp(\\mu) $$\n\nWe have μ already. For σ, the case is a bit more tricky but not hopeless. Erik’s idea was to use the gut feeling of risk. My proposal, in addition, is to use the spread of the scrum poker estimates as an uncalibrated estimate for σ. To be sure, this is an uncalibrated estimate because — although it should be correlated with some sort of uncertainty — it still doesn’t reflect any particular scaling factor. Probably, this is also team and environment-dependent, so it’s best to estimate it based on the actual data collected from the sprints. Basically, you need the individual estimates and the actual time to be able to fit a simple model and get σ for your team. One addition: likely, the uncertainty around a task can be better estimated with a model that also factors in who the assignee is (no offence meant to anyone ;). 
Some people are better at streamlining, while others are much more conscious of details.\n\n#### The Bermuda triangle of the “Doing” column\n\nSo let’s get back to the daily scrum where Bob said that he was about to finish the task that we estimated to take 3 days of work (just as a remark, Bob is a fictive person). The question is whether we should just accept that the mean completion time is bigger than our estimate (Median) anyway, or if there is more to this story than meets the eye.\n\nLet’s say when we did the scrum poker we voted as follows: Me: 2; John: 2, Bob: 3, Sarah: 3, Linda: 5, Mary: 5. Based on this we had an estimate of 3 days. Now the three days passed, so the question is whether we should still stick to the same estimate? Actually, it turned out that both John and I were wrong in gauging the difficulty of the task, so one can already see intuitively that the Median of the remaining votes (discounting our lousy votes) can be considered to be higher (it is now 4 days!). More generally speaking, when we made our estimate before the task was started we took into account all kinds of outcomes, amongst them the case when the task could have been completed in a few minutes (perhaps if Bob had realized that the same feature already existed someplace but with a different name), to the absurd case of a very difficult implementation process (perhaps if the feature had been more complicated than we imagined). Statistically, we calculate the expected value by integrating over the entire distribution. Now for any time point _t_ > 0 it is evident that we can’t consider times between 0 and _t_ in the integration and have to instead calculate the expected value as follows:\n\n$$ \\operatorname{E}[X] = \\int_{t}^{\\infty} x f(x)\\, dx. $$\n\nWhere _f(x)_ is the conditional probability of _x_ given that we consider points from _t_ to _∞_. So we are not considering the cases which we already know did not materialize and are considering only cases when the task will take at least a time of _t_ to be completed.\n\nSo although beyond the peak of the distribution, points right after _t_ have a relatively higher probability than points farther away, there are much more points farther away and the curvature is also changing as it moves away from the peak, so the expected value is actually blowing up. Let’s demonstrate this through an example. Let’s take three cases to illustrate, (1) where the task is in “todo” phase and we have not started it yet, (2) where the task has already been worked on for the initially estimated time (blowup factor of 1), and (3) where the task has been worked on for double the initially estimated time (blowup factor of 2).",
"_____no_output_____"
]
],
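[
    [
        "As a quick worked example of the two formulas above, using the same parametrization as the code in the next cell: with $\\mu = 0$ the median blowup factor is $\\exp(0) = 1$, i.e. the estimate is right half the time, while with $\\sigma = 1$ the unconditional mean is $\\exp(0 + 1^2/2) \\approx 1.65$. So even before any work has started, the expected actual time is roughly 65% above the estimate, and conditioning on the task still being open at some time _t_ > 0 can only push that number higher.",
        "_____no_output_____"
    ]
],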
[
[
"%pylab inline\nfrom scipy.stats import lognorm, rv_continuous\nmatplotlib.style.use('ggplot')\n\n# parameters of the distribution of the blowup factor\n\nmu = 0 # given that the median is an unbiased estimate of the actual time\nsigma = 1 # the uncertainty\n\n# statistics of the distribution\n\ntheoretic_median = exp(mu)\ntheoretic_exp_val = exp(mu + sigma ** 2 / 2)\n\n# samples from the distribution\n\nx = np.linspace(0, 12, 1000)\ny = lognorm.pdf(x, s=sigma, scale=exp(mu))\n\n# testing\n\nintegrated_exp_val = rv_continuous.expect(lognorm, args=(sigma, ),\n lb=0, ub=inf, conditional=True)\nnp.testing.assert_approx_equal(integrated_exp_val, theoretic_exp_val)\nalready_started_task_exp_val = rv_continuous.expect(lognorm,\n args=(sigma, ), lb=0.001, ub=inf, conditional=True)\nassert already_started_task_exp_val > integrated_exp_val, \\\n 'we expect that the E for a task in process is higher than when it was in to do'\n\n# ploting\n\n(fig, axs) = plt.subplots(3, sharex=True, sharey=True,\n gridspec_kw={'hspace': 0.3}, figsize=(15, 10))\n\nfor (i, time_spent) in enumerate([0, 1, 1.5]):\n x_fill = np.linspace(time_spent, 12, 1000)\n y_fill = lognorm.pdf(x_fill, s=sigma, scale=exp(mu))\n exp_val = rv_continuous.expect(lognorm, args=(sigma, ),\n lb=time_spent, ub=inf,\n conditional=True)\n\n axs[i].scatter(time_spent, 0, s=200,\n label='Current time factor: %.2f' % time_spent)\n axs[i].plot(x, y, 'k-', lw=5, alpha=0.6)\n axs[i].fill_between(\n x_fill,\n y_fill * 0,\n y_fill,\n color='C0',\n alpha=0.3,\n label='Possible outcomes from current time',\n )\n axs[i].axvline(x=theoretic_median, color='C1', lw=3,\n label='Original estimate : %.2f' % theoretic_median)\n axs[i].axvline(x=exp_val, color='C2', lw=3,\n label='Expected value : %.2f' % exp_val)\n axs[i].legend()\n axs[i].set_title('When current factor is {0}, the expected remaining time is\\n{1:0.2f} times the original estimate'.format(\n time_spent,\n exp_val - time_spent))\n\n# aesthetics\n\naxs[2].set_xlabel('Blowup factor (actual/estimated)')\naxs[1].set_ylabel('Probability distribution')",
"Populating the interactive namespace from numpy and matplotlib\n"
]
],
[
[
"Now you see a paradoxical thing: the remaining time, operationalized by E_t — t, is not shrinking as we proceed but it is growing. This, of course, does not mean that we cannot finish tasks; it just helps to make increasingly better estimates for the remaining work by factoring the elapsed time into the equation. We can use this knowledge in multiple scenarios. First, it can help teams make better decisions on when to cut or restructure tasks, and in general to understand when they’re like to be finished (i.e. as a rule of thumb reject the notion of finishing in the next hour when the blowup factor is already 2).\n\nSecond, this knowledge is also critical to recognize tasks that are becoming impossible to finish in time. We have seen several tasks, which seemed tractable at first sight and then became the bogeyman of the project. It is essential to detect these as early as possible and rethink deliverables, handle expectations, and/or come up with alternative solutions.\n\nAlso, the assumption in this exercise is that during the execution of a task, there are no “feedback effects of inspection”. In reality, the feedback of the team during the daily scrum or of the stakeholders during a review may change the approach (with the distribution) and hence the expected time to finish too. In fact, if you look at the distribution of blowup factors in the [SiP dataset](https://github.com/Derek-Jones/SiP_dataset) (which Erik also looked at) the right tail is not as heavy as one would expect from a standard lognormal distribution, my hypothesis is that it is exactly those feedback, control and restructuring mechanisms — which kick into action when the blowup factor becomes large — that are responsible for this. So, my main suggestion would be for teams to pay close attention to what plays out in the daily scrums to help avoid story completion time blowup.\n\nCode is available on [github](https://github.com/agostontorok/TaskCompletionTimeEstimation)\n\nRemarks:\n\n- Derek M. Jones put together [an interesting paper](https://arxiv.org/pdf/1901.01621.pdf) based on his analysis of the SiP dataset and an interview with Stephen Cullum, founder of SiP. His analysis also suggests that estimates are not only growing as estimators are becoming more accurate but also decreasing as some task types are repeated more and more.\n- In the SiP dataset estimates were made by single developers and not as a joint effort, also most of the estimates there are in the sub 2 days range.\n\nThanks for the comments of Adam Csapo on the first draft",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ec9469ecfeac51d1dd08e3c8945e243299c42e7c | 1,790 | ipynb | Jupyter Notebook | notebooks.ipynb | StupidPotc/laba1 | 2d5625f9296a8b7c2ce4d2d1a495430067c9b9f7 | [
"MIT"
] | null | null | null | notebooks.ipynb | StupidPotc/laba1 | 2d5625f9296a8b7c2ce4d2d1a495430067c9b9f7 | [
"MIT"
] | null | null | null | notebooks.ipynb | StupidPotc/laba1 | 2d5625f9296a8b7c2ce4d2d1a495430067c9b9f7 | [
"MIT"
] | null | null | null | 15.701754 | 36 | 0.417318 | [
[
[
"18+81",
"_____no_output_____"
],
[
"a = 25\nb = 52\nprint(a+b)",
"77\n"
],
[
"n = 5\nfor i in range(n):\n print(i*10)",
"0\n10\n20\n30\n40\n"
],
[
"i = 0\nwhile True:\n i += 1\n if i > 5:\n break\n print(\"Test while\")",
"Test while\nTest while\nTest while\nTest while\nTest while\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
ec9473bd3480221d7c0d71e9da20cea441342bde | 21,988 | ipynb | Jupyter Notebook | Fig3.ipynb | juancolonna/BCI-Bioacoustic_Complexity_Index | 62a3ce29c91e97672649df53c90f47ab706a13f0 | [
"MIT"
] | 3 | 2020-07-13T22:28:38.000Z | 2020-11-02T19:33:28.000Z | Fig3.ipynb | juancolonna/BCI-Bioacoustic_Complexity_Index | 62a3ce29c91e97672649df53c90f47ab706a13f0 | [
"MIT"
] | null | null | null | Fig3.ipynb | juancolonna/BCI-Bioacoustic_Complexity_Index | 62a3ce29c91e97672649df53c90f47ab706a13f0 | [
"MIT"
] | 1 | 2020-11-02T19:33:35.000Z | 2020-11-02T19:33:35.000Z | 81.136531 | 14,000 | 0.776014 | [
[
[
"import pandas as pd\nimport numpy as np\nimport scipy.stats as st\n\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Baseline Dataset",
"_____no_output_____"
]
],
[
[
"lag = 512\n\nbase = pd.read_pickle('./pkl_datasets/baseline_dataset_ACF_' + str(lag) + '.gzip')\nbase.head()",
"_____no_output_____"
],
[
"labels = []\nfor index, row in base.iterrows():\n labels.append('$s_{'+(row['ID'].split('.'))[-2].split('0')[-1]+'}$')",
"_____no_output_____"
],
[
"plt.figure(figsize=(18,3))\nplt.rc('font', size=16)\nplt.rc('axes', titlesize=16)\n\nplt.subplot(1,4,1)\nplt.bar(range(0,base.shape[0]),base['H'])\nplt.xticks(range(0,base.shape[0]),labels)\nplt.title('H')\n\nplt.subplot(1,4,2)\nplt.bar(range(0,base.shape[0]),base['C'])\nplt.xticks(range(0,base.shape[0]),labels)\nplt.title('EGCI')\n\nplt.subplot(1,4,3)\nplt.bar(range(base.shape[0]),base['AEI'])\nplt.xticks(range(0,base.shape[0]),labels)\nplt.title(r'$H_a$')\n\nplt.subplot(1,4,4)\nplt.bar(range(base.shape[0]),base['ACI'])\nplt.xticks(range(0,base.shape[0]),labels)\nplt.title('ACI')\n\n# plt.savefig('./figures/Fig3.eps', format=\"eps\", bbox_inches='tight')\n\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ec9495a622f0a0d4af4dec132b9cdde347e7cb31 | 71,149 | ipynb | Jupyter Notebook | notebooks/Chinese-Growth.ipynb | thedatalass/DemARK | bdf67063964c430d48690eaa1dbc521fff28f262 | [
"Apache-2.0"
] | null | null | null | notebooks/Chinese-Growth.ipynb | thedatalass/DemARK | bdf67063964c430d48690eaa1dbc521fff28f262 | [
"Apache-2.0"
] | null | null | null | notebooks/Chinese-Growth.ipynb | thedatalass/DemARK | bdf67063964c430d48690eaa1dbc521fff28f262 | [
"Apache-2.0"
] | null | null | null | 122.249141 | 43,616 | 0.840195 | [
[
[
"# Initial imports and notebook setup, click arrow to show\n%matplotlib inline\n# The first step is to be able to bring things in from different directories\nimport sys \nimport os\nsys.path.insert(0, os.path.abspath('../lib'))\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom copy import deepcopy\nfrom util import log_progress\nimport HARK # Prevents import error from Demos repo",
"_____no_output_____"
]
],
[
[
"# Do Precautionary Motives Explain China's High Saving Rate?\n\n[](https://mybinder.org/v2/gh/econ-ark/DemArk/master?filepath=%2Fnotebooks%2FChinese-Growth.ipynb)\n\nThe notebook [Nondurables-During-Great-Recession](http://econ-ark.org/notebooks/) shows that the collapse in consumer spending in the U.S. during the Great Recession could easily have been caused by a moderate and plausible increase in the degree of uncertainty.\n\nBut that exercise might make you worry that invoking difficult-to-measure \"uncertainty\" can explain anything (e.g. \"the stock market fell today because the risk aversion of the representative agent increased\").\n\nThe next exercise is designed to show that there are limits to the phenomena that can be explained by invoking plausible changes in uncertainty.\n\nThe specific question is whether a high degree of uncertainty can explain China's very high saving rate (approximately 25 percent), as some papers have proposed. Specifically, we ask \"what beliefs about uncertainty would Chinese consumers need to hold in order to generate a saving rate of 25 percent, given the rapid pace of Chinese growth?\"\n\n### The Thought Experiment\n\nIn more detail, our consumers will initially live in a stationary, low-growth environment (intended to approximate China before 1978). Then, unexpectedly, income growth will surge at the same time that income uncertainty increases (intended to approximate the effect of economic reforms in China since 1978.) Consumers believe the high-growth, high-uncertainty state is highly persistent, but that ultimately growth will slow to a \"normal\" pace matching that of other advanced countries.\n",
"_____no_output_____"
],
[
"### The Baseline Model\n\nWe want the model to have these elements:\n1. \"Standard\" infinite horizon consumption/savings model, with mortality and permanent and temporary shocks to income\n0. The capacity to provide a reasonable match to the distribution of wealth inequality in advanced economies\n0. Ex-ante heterogeneity in consumers' discount factors (to capture wealth inequality)\n\nAll of these are features of the model in the paper [\"The Distribution of Wealth and the Marginal Propensity to Consume\" by Carroll, Slacalek, Tokuoka, and White (2017)](http://econ.jhu.edu/people/ccarroll/papers/cstwMPC), for which all of the computational results were produced using the HARK toolkit. The results for that paper are available in the $\\texttt{cstwMPC}$ directory.\n\n### But With A Different ConsumerType\n\nOne feature that was not present in that model is important here: \n- A Markov state that represents the state of the Chinese economy (to be detailed later)\n\nHARK's $\\texttt{MarkovConsumerType}$ is the right tool for this experiment. So we need to prepare the parameters to create that ConsumerType, and then create it.",
"_____no_output_____"
]
],
[
[
"# Initialize the cstwMPC parameters\ninit_China_parameters = {\n \"CRRA\":1.0, # Coefficient of relative risk aversion \n \"Rfree\":1.01/(1.0 - 1.0/160.0), # Survival probability,\n \"PermGroFac\":[1.000**0.25], # Permanent income growth factor (no perm growth),\n \"PermGroFacAgg\":1.0,\n \"BoroCnstArt\":0.0,\n \"CubicBool\":False,\n \"vFuncBool\":False,\n \"PermShkStd\":[(0.01*4/11)**0.5], # Standard deviation of permanent shocks to income\n \"PermShkCount\":5, # Number of points in permanent income shock grid\n \"TranShkStd\":[(0.01*4)**0.5], # Standard deviation of transitory shocks to income,\n \"TranShkCount\":5, # Number of points in transitory income shock grid\n \"UnempPrb\":0.07, # Probability of unemployment while working\n \"IncUnemp\":0.15, # Unemployment benefit replacement rate\n \"UnempPrbRet\":None,\n \"IncUnempRet\":None,\n \"aXtraMin\":0.00001, # Minimum end-of-period assets in grid\n \"aXtraMax\":20, # Maximum end-of-period assets in grid\n \"aXtraCount\":20, # Number of points in assets grid,\n \"aXtraExtra\":[None],\n \"aXtraNestFac\":3, # Number of times to 'exponentially nest' when constructing assets grid\n \"LivPrb\":[1.0 - 1.0/160.0], # Survival probability\n \"DiscFac\":0.97, # Default intertemporal discount factor, # dummy value, will be overwritten\n \"cycles\":0,\n \"T_cycle\":1,\n \"T_retire\":0,\n 'T_sim':1200, # Number of periods to simulate (idiosyncratic shocks model, perpetual youth)\n 'T_age': 400,\n 'IndL': 10.0/9.0, # Labor supply per individual (constant),\n 'aNrmInitMean':np.log(0.00001),\n 'aNrmInitStd':0.0,\n 'pLvlInitMean':0.0,\n 'pLvlInitStd':0.0,\n 'AgentCount':0, # will be overwritten by parameter distributor\n}",
"_____no_output_____"
]
],
[
[
"### Set Up the Growth Process\n\nFor a Markov model, we need a Markov transition array. Here, we create that array.\nRemember, for this simple example, we just have a low-growth state, and a high-growth state",
"_____no_output_____"
]
],
[
[
"StateCount = 2 #number of Markov states\nProbGrowthEnds = (1./160.) #probability agents assign to the high-growth state ending\nMrkvArray = np.array([[1.,0.],[ProbGrowthEnds,1.-ProbGrowthEnds]]) #Markov array\ninit_China_parameters['MrkvArray'] = [MrkvArray] #assign the Markov array as a parameter",
"_____no_output_____"
]
],
[
[
"One other parameter needs to change: the number of agents in simulation. We want to increase this, because later on when we vastly increase the variance of the permanent income shock, things get wonky. (We need to change this value here, before we have used the parameters to initialize the $\\texttt{MarkovConsumerType}$, because this parameter is used during initialization.)\n\nOther parameters that are not used during initialization can also be assigned here, by changing the appropriate value in the $\\texttt{init_China_parameters_dictionary}$; however, they can also be changed later, by altering the appropriate attribute of the initialized $\\texttt{MarkovConsumerType}$.",
"_____no_output_____"
]
],
[
[
"init_China_parameters['AgentCount'] = 10000",
"_____no_output_____"
]
],
[
[
"### Import and initialize the Agents\n\nHere, we bring in an agent making a consumption/savings decision every period, subject to transitory and permanent income shocks, AND a Markov shock",
"_____no_output_____"
]
],
[
[
"from HARK.ConsumptionSaving.ConsMarkovModel import MarkovConsumerType\nChinaExample = MarkovConsumerType(**init_China_parameters)",
"_____no_output_____"
]
],
[
[
"Currently, Markov states can differ in their interest factor, permanent growth factor, survival probability, and income distribution. Each of these needs to be specifically set.\n\nDo that here, except shock distribution, which will be done later (because we want to examine the consequences of different shock distributions).",
"_____no_output_____"
]
],
[
[
"GrowthFastAnn = 1.06 # Six percent annual growth \nGrowthSlowAnn = 1.00 # Stagnation\nChinaExample.assignParameters(PermGroFac = [np.array([GrowthSlowAnn,GrowthFastAnn ** (.25)])], #needs to be a list, with 0th element of shape of shape (StateCount,)\n Rfree = np.array(StateCount*[init_China_parameters['Rfree']]), #needs to be an array, of shape (StateCount,)\n LivPrb = [np.array(StateCount*[init_China_parameters['LivPrb']][0])], #needs to be a list, with 0th element of shape of shape (StateCount,)\n cycles = 0)\n\nChinaExample.track_vars = ['aNrmNow','cNrmNow','pLvlNow'] # Names of variables to be tracked",
"_____no_output_____"
]
],
[
[
"Now, add in ex-ante heterogeneity in consumers' discount factors.\n\nThe cstwMPC parameters do not define a single discount factor; instead, there is ex-ante heterogeneity in the discount factor. To prepare to create this ex-ante heterogeneity, first create the desired number of consumer types:\n",
"_____no_output_____"
]
],
[
[
"num_consumer_types = 7 # declare the number of types we want\nChineseConsumerTypes = [] # initialize an empty list\n\nfor nn in range(num_consumer_types):\n # Now create the types, and append them to the list ChineseConsumerTypes\n newType = deepcopy(ChinaExample) \n ChineseConsumerTypes.append(newType)",
"_____no_output_____"
]
],
[
[
"\nNow, generate the desired ex-ante heterogeneity, by giving the different consumer types each their own discount factor.\n\nFirst, decide the discount factors to assign:",
"_____no_output_____"
]
],
[
[
"from HARK.utilities import approxUniform\n\nbottomDiscFac = 0.9800\ntopDiscFac = 0.9934 \nDiscFac_list = approxUniform(N=num_consumer_types,bot=bottomDiscFac,top=topDiscFac)[1]\n\n# Now, assign the discount factors we want to the ChineseConsumerTypes\nfor j in range(num_consumer_types):\n ChineseConsumerTypes[j].DiscFac = DiscFac_list[j]",
"_____no_output_____"
]
],
[
[
"## Setting Up the Experiment\n\nThe experiment is performed by a function we will now write.\n\nRecall that all parameters have been assigned appropriately, except for the income process.\n\nThis is because we want to see how much uncertainty needs to accompany the high-growth state to generate the desired high savings rate.\n\nTherefore, among other things, this function will have to initialize and assign the appropriate income process.",
"_____no_output_____"
]
],
[
[
"# First create the income distribution in the low-growth state, which we will not change\nfrom HARK.ConsumptionSaving.ConsIndShockModel import constructLognormalIncomeProcessUnemployment\nimport HARK.ConsumptionSaving.ConsumerParameters as IncomeParams\n\nLowGrowthIncomeDstn = constructLognormalIncomeProcessUnemployment(IncomeParams)[0][0]\n\n# Remember the standard deviation of the permanent income shock in the low-growth state for later\nLowGrowth_PermShkStd = IncomeParams.PermShkStd\n\n\n\ndef calcNatlSavingRate(PrmShkVar_multiplier,RNG_seed = 0):\n \"\"\"\n This function actually performs the experiment we want.\n \n Remember this experiment is: get consumers into the steady-state associated with the low-growth\n regime. Then, give them an unanticipated shock that increases the income growth rate\n and permanent income uncertainty at the same time. What happens to the path for \n the national saving rate? Can an increase in permanent income uncertainty\n explain the high Chinese saving rate since economic reforms began?\n \n The inputs are:\n * PrmShkVar_multiplier, the number by which we want to multiply the variance\n of the permanent shock in the low-growth state to get the variance of the\n permanent shock in the high-growth state\n * RNG_seed, an integer to seed the random number generator for simulations. This useful\n because we are going to run this function for different values of PrmShkVar_multiplier,\n and we may not necessarily want the simulated agents in each run to experience\n the same (normalized) shocks.\n \"\"\"\n\n # First, make a deepcopy of the ChineseConsumerTypes (each with their own discount factor), \n # because we are going to alter them\n ChineseConsumerTypesNew = deepcopy(ChineseConsumerTypes)\n\n # Set the uncertainty in the high-growth state to the desired amount, keeping in mind\n # that PermShkStd is a list of length 1\n PrmShkStd_multiplier = PrmShkVar_multiplier ** .5\n IncomeParams.PermShkStd = [LowGrowth_PermShkStd[0] * PrmShkStd_multiplier] \n\n # Construct the appropriate income distributions\n HighGrowthIncomeDstn = constructLognormalIncomeProcessUnemployment(IncomeParams)[0][0]\n\n # To calculate the national saving rate, we need national income and national consumption\n # To get those, we are going to start national income and consumption at 0, and then\n # loop through each agent type and see how much they contribute to income and consumption.\n NatlIncome = 0.\n NatlCons = 0.\n\n for ChineseConsumerTypeNew in ChineseConsumerTypesNew:\n ### For each consumer type (i.e. each discount factor), calculate total income \n ### and consumption\n\n # First give each ConsumerType their own random number seed\n RNG_seed += 19\n ChineseConsumerTypeNew.seed = RNG_seed\n \n # Set the income distribution in each Markov state appropriately \n ChineseConsumerTypeNew.IncomeDstn = [[LowGrowthIncomeDstn,HighGrowthIncomeDstn]]\n\n # Solve the problem for this ChineseConsumerTypeNew\n ChineseConsumerTypeNew.solve()\n\n \"\"\"\n Now we are ready to simulate.\n \n This case will be a bit different than most, because agents' *perceptions* of the probability\n of changes in the Chinese economy will differ from the actual probability of changes. \n Specifically, agents think there is a 0% chance of moving out of the low-growth state, and \n that there is a (1./160) chance of moving out of the high-growth state. In reality, we \n want the Chinese economy to reach the low growth steady state, and then move into the \n high growth state with probability 1. 
Then we want it to persist in the high growth \n state for 40 years. \n \"\"\"\n \n ## Now, simulate 500 quarters to get to steady state, then 40 years of high growth\n ChineseConsumerTypeNew.T_sim = 660 \n \n # Ordinarily, the simulate method for a MarkovConsumerType randomly draws Markov states\n # according to the transition probabilities in MrkvArray *independently* for each simulated\n # agent. In this case, however, we want the discrete state to be *perfectly coordinated*\n # across agents-- it represents a macroeconomic state, not a microeconomic one! In fact,\n # we don't want a random history at all, but rather a specific, predetermined history: 125\n # years of low growth, followed by 40 years of high growth.\n \n # To do this, we're going to \"hack\" our consumer type a bit. First, we set the attribute\n # MrkvPrbsInit so that all of the initial Markov states are in the low growth state. Then\n # we initialize the simulation and run it for 500 quarters. However, as we do not\n # want the Markov state to change during this time, we change its MrkvArray to always be in\n # the low growth state with probability 1.\n \n ChineseConsumerTypeNew.MrkvPrbsInit = np.array([1.0,0.0]) # All consumers born in low growth state\n ChineseConsumerTypeNew.MrkvArray[0] = np.array([[1.0,0.0],[1.0,0.0]]) # Stay in low growth state\n ChineseConsumerTypeNew.initializeSim() # Clear the history and make all newborn agents\n ChineseConsumerTypeNew.simulate(500) # Simulate 500 quarders of data\n \n # Now we want the high growth state to occur for the next 160 periods. We change the initial\n # Markov probabilities so that any agents born during this time (to replace an agent who\n # died) is born in the high growth state. Moreover, we change the MrkvArray to *always* be\n # in the high growth state with probability 1. Then we simulate 160 more quarters.\n \n ChineseConsumerTypeNew.MrkvPrbsInit = np.array([0.0,1.0]) # All consumers born in low growth state\n ChineseConsumerTypeNew.MrkvArray[0] = np.array([[0.0,1.0],[0.0,1.0]]) # Stay in low growth state\n ChineseConsumerTypeNew.simulate(160) # Simulate 160 quarders of data\n \n # Now, get the aggregate income and consumption of this ConsumerType over time\n IncomeOfThisConsumerType = np.sum((ChineseConsumerTypeNew.aNrmNow_hist*ChineseConsumerTypeNew.pLvlNow_hist*\n (ChineseConsumerTypeNew.Rfree[0] - 1.)) +\n ChineseConsumerTypeNew.pLvlNow_hist, axis=1)\n \n ConsOfThisConsumerType = np.sum(ChineseConsumerTypeNew.cNrmNow_hist*ChineseConsumerTypeNew.pLvlNow_hist,axis=1)\n \n # Add the income and consumption of this ConsumerType to national income and consumption\n NatlIncome += IncomeOfThisConsumerType\n NatlCons += ConsOfThisConsumerType\n\n \n # After looping through all the ConsumerTypes, calculate and return the path of the national \n # saving rate\n NatlSavingRate = (NatlIncome - NatlCons)/NatlIncome\n\n return NatlSavingRate",
"_____no_output_____"
]
],
[
[
"Now we can use the function we just defined to calculate the path of the national saving rate following the economic reforms, for a given value of the increase to the variance of permanent income accompanying the reforms. We are going to graph this path for various values for this increase.\n\nRemember, we want to see if a plausible value for this increase in uncertainty can explain the high Chinese saving rate.",
"_____no_output_____"
]
],
[
[
"# Declare the number of periods before the reforms to plot in the graph\nquarters_before_reform_to_plot = 5\n\n# Declare the quarters we want to plot results for\nquarters_to_plot = np.arange(-quarters_before_reform_to_plot ,160,1)\n\n# Create a list to hold the paths of the national saving rate\nNatlSavingsRates = []\n\n# Create a list of floats to multiply the variance of the permanent shock to income by\nPermShkVarMultipliers = (1.,2.,4.,8.,11.)\n\n# Loop through the desired multipliers, then get the path of the national saving rate\n# following economic reforms, assuming that the variance of the permanent income shock\n# was multiplied by the given multiplier\nindex = 0\nfor PermShkVarMultiplier in log_progress(PermShkVarMultipliers, every=1):\n NatlSavingsRates.append(calcNatlSavingRate(PermShkVarMultiplier,RNG_seed = index)[-160 - quarters_before_reform_to_plot :])\n index +=1",
"_____no_output_____"
]
],
[
[
"We've calculated the path of the national saving rate as we wanted. All that's left is to graph the results!",
"_____no_output_____"
]
],
[
[
"plt.ylabel('Natl Saving Rate')\nplt.xlabel('Quarters Since Economic Reforms')\nplt.plot(quarters_to_plot,NatlSavingsRates[0],label=str(PermShkVarMultipliers[0]) + ' x variance')\nplt.plot(quarters_to_plot,NatlSavingsRates[1],label=str(PermShkVarMultipliers[1]) + ' x variance')\nplt.plot(quarters_to_plot,NatlSavingsRates[2],label=str(PermShkVarMultipliers[2]) + ' x variance')\nplt.plot(quarters_to_plot,NatlSavingsRates[3],label=str(PermShkVarMultipliers[3]) + ' x variance')\nplt.plot(quarters_to_plot,NatlSavingsRates[4],label=str(PermShkVarMultipliers[4]) + ' x variance')\nplt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,\nncol=2, mode=\"expand\", borderaxespad=0.) #put the legend on top\nplt.show(block=False)",
"_____no_output_____"
]
],
[
[
"The figure shows that, if the rate of growth increases the way Chinese growth did, but is not accompanied by any change in the degree of uncertainty, the model's predicted saving rate declines drastically, from an initial (calibrated) value of about 0.1 (ten percent) to close to zero. For this model to have any hope of predicting an increase in the saving rate, it is clear that the increase in uncertainty that accompanies the increase in growth will have to be substantial. \n\nThe red line shows that a mere doubling of uncertainty from its baseline value is not enough: The steady state saving rate is still below its slow-growth value.\n\nWhen we assume that the degree of uncertainty quadruples, the model does finally predict that the new steady-state saving rate will be higher than before, but not much higher, and not remotely approaching 25 percent.\n\nOnly when the degree of uncertainty increases by a factor of 8 is the model capable of producing a new equilbrium saving rate in the ballpark of the Chinese value. \n\nBut this is getting close to a point where the model starts to break down (for both numerical and conceptual reasons), as shown by the erratic path of the saving rate when we multiply the initial variance by 11. \n\nWe do not have historical data on the magnitude of permanent income shocks in China in the pre-1978 period; it would be remarkable if the degree of uncertainty increased by such a large amount, but in the absence of good data it is hard to know for sure. \n\nWhat the experiment does demonstrate, though, is that it is _not_ the case that \"it is easy to explain anything by invoking some plausible but unmeasurable change in uncertainty.\" Substantial differences in the degree of permanent (or highly persistent) income uncertainty across countries, across periods, and across people have been measured in the literature, and those differences could in principle be compared to differences in saving rates to get a firmer fix on the quantitative importance of the \"precautionary saving\" explanation in the Chinese context.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ec94af649198ffd94ee3225206b5719157e4d5b0 | 456,975 | ipynb | Jupyter Notebook | 00-intro.ipynb | langmm/CiS2021-hackathon | 02bdc6511d577325cc6cc3967681ffe3c2b7c35a | [
"BSD-3-Clause"
] | null | null | null | 00-intro.ipynb | langmm/CiS2021-hackathon | 02bdc6511d577325cc6cc3967681ffe3c2b7c35a | [
"BSD-3-Clause"
] | null | null | null | 00-intro.ipynb | langmm/CiS2021-hackathon | 02bdc6511d577325cc6cc3967681ffe3c2b7c35a | [
"BSD-3-Clause"
] | null | null | null | 993.423913 | 141,420 | 0.959729 | [
[
[
"# Introduction\n\n(NOTE: This notebook is intended for use with the slides found [here](https://github.com/cropsinsilico/CiS2021-hackathon/blob/main/slides.pdf)).\n\nThis is a Jupyter notebook. It allows us to run code (in this case Python) alongside text in different \"cells\". This cell is a markdown cell that can display text and html, the next cell is a code cell.\n\nIn the code cells (prefixed by `In [ ]:`), you can assign variables, perform calculations or call external functions/classes. You can run code cells by selecting the cell (so that a blue or green box appears around it) and then clicking the run button (located at the top of the page) or pressing `Shift+Enter` together. Then a number will appear inside the brackets indicating the order of when the cell was executed. \n\nOutput from the cell will be displayed below it with the `Out[#]:` prefix where the number in the brackets indicates the input cell that generated it.",
"_____no_output_____"
]
],
[
[
"x = 1\ny = 3\nz = (x + y)**3\nz",
"_____no_output_____"
]
],
[
[
"Any Python code can be used, and we can import external packages as well just like in Python scripts. Cells can also use any variables created in any previously executed cell. The cell below imports some tools that will be used in the rest of this notebook.",
"_____no_output_____"
]
],
[
[
"from yggdrasil import tools # Displaying syntax highlighted source code\nfrom yggdrasil.runner import run # Running integrations\nimport trimesh # Load & display 3D meshes",
"_____no_output_____"
]
],
[
[
"The notebook can also display plots, 3D graphics, and interactive widgets. \n\nThe cell below uses `trimesh` to load and display a 3D mesh. You can drag the image to rotate the object and zoom in/out by scrolling over the image. This type of display will be used in some the examples today to display output.",
"_____no_output_____"
]
],
[
[
"fname = 'meshes/plants-2.obj'\nmesh = trimesh.load_mesh(fname)\nmesh.show()",
"_____no_output_____"
]
],
[
[
"# Integrating Models as Functions\n\nyggdrasil provides interfaces in several languages that can be used to open connections with other models, but in many cases, making a model work in integrations can be done by allowing yggdrasil to wrap a function that executes the model calculations.\n\nFor example, the model displayed by the cell below calculates (albeit poorly) the intensity of light for a given day of the year and height from the ground. It is written as a Python function that takes `doy` (day of the year) and `height` as inputs and returns the intensity as output.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"tools.display_source('models/light_v0.py', number_lines=True)",
"_____no_output_____"
]
],
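The rendered source of `models/light_v0.py` is not captured in this dump (the cell above shows no output here). As a purely illustrative sketch, in which the function name, formula, and units are assumptions rather than the actual model, a yggdrasil-wrappable model of this kind can be nothing more than a plain Python function of `doy` and `height`:

```python
import numpy as np

# Hypothetical stand-in for models/light_v0.py; the real formula is not shown in this dump.
def calculate_light_intensity(doy, height):
    # crude seasonal cycle peaking around day 172 (late June)
    seasonal = 0.5 * (1.0 + np.cos(2.0 * np.pi * (doy - 172.0) / 365.0))
    # crude attenuation for heights below an assumed 10 m canopy top
    attenuation = np.exp(-0.1 * max(0.0, 10.0 - height))
    return 1000.0 * seasonal * attenuation  # arbitrary intensity scale

print(calculate_light_intensity(doy=180, height=2.0))
```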
[
[
"To use this model with yggdrasil, no modification of the code is necessary. The only new material required is a YAML configuration file. The cell below displays the YAML for the model above.",
"_____no_output_____"
]
],
[
[
"tools.display_source('yamls/light_v0_python.yml', number_lines=True)",
"_____no_output_____"
]
],
[
[
"This alone will not allow yggdrasil to run as the model inputs & output are not connected to anything. The YAML file below declares the connections that should be made to get input from the tab-delimited table `input/light_v0.txt` (also shown below) and direct output to the file `output/light_v0.txt`.\n\nSince the input/output to the light model is not explicitly defined in the model YAML above, yggdrasil assumes that all inputs will come from the same channel named `<model_name>:input` and names the output channel `<model_name>:output`.\n\n",
"_____no_output_____"
]
],
[
[
"tools.display_source('yamls/connections_v0.yml', number_lines=True)",
"_____no_output_____"
]
],
[
[
"To run the light-to-file integration defined in these two YAML files, the `run` function is called with the paths to the YAML files as input. This is equivalent to calling `yggrun yamls/light_v0_python.yml yamls/connections_v0.yml --production-run` from the command line.\n\nThe output from the yggdrasil integration will include output from the models themselves, some information about what stage each model is in, and the duration of different stages in the integration take to complete. In the case of running multiple models, the output from different models will often be interwoven and does not necessary indicate the order that models are executed in.",
"_____no_output_____"
]
],
[
[
"run(['yamls/light_v0_python.yml', 'yamls/connections_v0.yml'], production_run=True)",
"_____no_output_____"
]
],
[
[
"The cell below will display the contents of the output file `output/light_v0.txt` following the run.",
"_____no_output_____"
]
],
[
[
"tools.display_source('output/light_v0.txt', number_lines=True)",
"_____no_output_____"
]
],
[
[
"## Command Line Interface\nThe examples above have been using yggdrasil's Python interface to run the integration, but that is not necessary. yggdrasil has a command line utility for running integration `yggrun` which takes YAML paths as inputs.\n\n## The `production_run` Keyword\nYou may have noticed that we passed the `production_run` keyword to the `run` API function with a value of `True`. When set to `True`, yggdrasil turns of several safe guards that increase run-time. These include things like checking data formats and validating inputs/outputs to/from framework components. It is highly recommended, that `production_run` is only set to `True` when you are done testing an integration and are ready for a \"production run\" that requires higher performance. The `production_run` flag can also be passed to the command line interface `yggrun` as `--production-run`.\n\n## Similarly in Other Languages\n\nNOTE: Units must be explicitly added via a `datatype` entry in the model yaml for the compiled languages (i.e. C, C++, & Fortran)\n\n### C++ Version",
"_____no_output_____"
]
],
[
[
"tools.display_source('models/light_v0.cpp', number_lines=True)\ntools.display_source('yamls/light_v0_cpp.yml', number_lines=True)\nrun(['yamls/light_v0_cpp.yml', 'yamls/connections_v0.yml'], production_run=True)\ntools.display_source('output/light_v0.txt', number_lines=True)",
"_____no_output_____"
]
],
[
[
"### Fortran Version",
"_____no_output_____"
]
],
[
[
"tools.display_source('models/light_v0.f90', number_lines=True)\ntools.display_source('yamls/light_v0_fortran.yml', number_lines=True)\nrun(['yamls/light_v0_fortran.yml', 'yamls/connections_v0.yml'], production_run=True)\ntools.display_source('output/light_v0.txt', number_lines=True)",
"_____no_output_____"
]
],
[
[
"### R Version",
"_____no_output_____"
]
],
[
[
"tools.display_source('models/light_v0.R', number_lines=True)\ntools.display_source('yamls/light_v0_R.yml', number_lines=True)\nrun(['yamls/light_v0_R.yml', 'yamls/connections_v0.yml'], production_run=True)\ntools.display_source('output/light_v0.txt', number_lines=True)",
"_____no_output_____"
]
],
[
[
"# Integrating Models via Interface\n\nThe function wrapping method of yggdrasil works in many cases, but not all. When a model must send or receive data to/from another model mid-calculation or the model algorithm is written such that writing it as a function would be unwieldy, the yggdrasil interface can be used directly.\n\nFor example, the model below simulates growth of a 3D shoot structure over time and is executed via the command line with parameters controlling the how long the simulations runs and what the initial mesh looks like.",
"_____no_output_____"
]
],
[
[
"tools.display_source('models/shoot_v0.py', number_lines=True)",
"_____no_output_____"
]
],
[
[
"\n\nWe can run this model via the command line or via yggdrasil using the YAML displayed below which runs the model for 48 hrs with a time step of 6 hrs and does not handle any input or output.",
"_____no_output_____"
]
],
[
[
"tools.display_source('yamls/shoot_v0.yml', number_lines=True)\nrun('yamls/shoot_v0.yml', production_run=True)",
"_____no_output_____"
]
],
[
[
"The final mesh from this simulation is displayed by the cell below using the `trimesh` package.",
"_____no_output_____"
]
],
[
[
"mesh = trimesh.load_mesh('output/mesh_008.obj')\nmesh.show()",
"_____no_output_____"
]
],
[
[
"If we want to determine the light intensity at the top of the plant at each timestep, re-writing this model as a function and allowing yggdrasil to wrap it requres a lot of modification to the original code. Instead we can create an output channel via the yggdrasil Python interface with minimal modification to the code.\n\nThe cell below shows the diff for an updated version of this model that does this.\n1. Checks if the yggdrasil version of the code should be run\n1. Imports the relevant yggdrasil modules and functions and opens an output channel with the name `height`\n1. Sends the time and maximum height of the mesh to the `height` output channel with units\n",
"_____no_output_____"
]
],
[
[
"tools.display_source_diff('models/shoot_v0.py', 'models/shoot_v1.py', number_lines=True)",
"_____no_output_____"
]
],
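As mentioned above, here is a minimal sketch of the send-mid-simulation pattern. The import path and the `send()` call follow yggdrasil's documented Python interface, and they, together with the `--with-yggdrasil` flag, should be treated as assumptions here, since the actual diff of `models/shoot_v1.py` is only rendered at run time:

```python
import sys

# Stand-in for the model's "should we talk to yggdrasil?" check (an assumption,
# not the mechanism used in the real shoot model).
if '--with-yggdrasil' in sys.argv:
    # Import path and send() signature are taken from yggdrasil's documented
    # Python examples and should be treated as assumptions here.
    from yggdrasil.interface.YggInterface import YggOutput
    height_channel = YggOutput('height')  # must match the channel name in the YAML
else:
    height_channel = None

for step in range(9):                               # stand-in for the model's time loop
    t_hrs, max_height = 6.0 * step, 0.1 * step      # placeholder time and mesh height
    if height_channel is not None:
        # send() returns a success flag rather than raising on failure
        if not height_channel.send(t_hrs, max_height):
            raise RuntimeError('send on the height channel failed')
```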
[
[
"The above model can be run in the exact same manner as the original without yggdrasil. The YAML diff displayed in the cell below shows the changes necessary to connect the `height` output to a table file `output/height.txt` in the absence of any other connection.",
"_____no_output_____"
]
],
[
[
"tools.display_source_diff('yamls/shoot_v0.yml', 'yamls/shoot_v1.yml', number_lines=True)",
"_____no_output_____"
]
],
[
[
"\n\nThe cell below runs the 1-model integration defined in the YAML above.",
"_____no_output_____"
]
],
[
[
"run(['yamls/shoot_v1.yml'], production_run=True)",
"_____no_output_____"
]
],
[
[
"The resulting mesh and `output/height.txt` file is displayed by the next two cells.",
"_____no_output_____"
]
],
[
[
"mesh = trimesh.load_mesh('output/mesh_008.obj')\nmesh.show()",
"_____no_output_____"
],
[
"tools.display_source('output/height.txt')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec94d639e3a5e1d7a368b9abe4eb308c6b96ac43 | 162,940 | ipynb | Jupyter Notebook | src/ImageClassificationDetectorCNN.ipynb | Septianleonardo/Image-Classification-Detector | 1460a6e96e1e8187426a13620640ecf1bef90156 | [
"MIT"
] | null | null | null | src/ImageClassificationDetectorCNN.ipynb | Septianleonardo/Image-Classification-Detector | 1460a6e96e1e8187426a13620640ecf1bef90156 | [
"MIT"
] | null | null | null | src/ImageClassificationDetectorCNN.ipynb | Septianleonardo/Image-Classification-Detector | 1460a6e96e1e8187426a13620640ecf1bef90156 | [
"MIT"
] | null | null | null | 175.204301 | 126,737 | 0.874248 | [
[
[
"<h2 style='color:blue' align='center'>Pengklasifikasian Gambar hewan atau kendaraan berukuran Kecil Menggunakan Convolutional Neural Network (CNN)</h2>",
"_____no_output_____"
],
[
"Disusun oleh:\n* 140810190022 Muhammad Diva Eka Andriana\n* 140810190028 Robby Sobari\n* 140810190030 Azhar Jauharul Umam\n* 140810190038 Leonardo Septian Dwigantoro\n\nPercobaan pengklasifikasian gambar kecil dari dataset cifar10 dari dataset TensorFlow Keras dengan total 10 kelas seperti yang ditunjukkan di bawah ini, dalam percobaan kami menggunakan CNN untuk pengklasifikasian serta akan ada perbandingan antara ANN dengan CNN dalam keakuratan pengklasifikasian gambar",
"_____no_output_____"
],
[
"\n\n",
"_____no_output_____"
],
[
"Menginstall library Tensorflow",
"_____no_output_____"
]
],
[
[
"pip install tensorflow",
"_____no_output_____"
]
],
[
[
"Mengimport library Tensorflow, Matplotlib, Numpy, serta mengimport datasets, layers dan models dari Keras",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nfrom tensorflow.keras import datasets, layers, models\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"Mengimport dataset CIFAR10 Dataset dari Keras, dan mengecek ukuran dari X_train yang berisi 50000 training sampel dengan setiap training sample berisi 32 x 32 image dan 3 rgb channel dari setiap image yang dipakai\n",
"_____no_output_____"
]
],
[
[
"(X_train, y_train), (X_test,y_test) = datasets.cifar10.load_data()\nX_train.shape",
"_____no_output_____"
]
],
[
[
"Mengecek ukuran dari X_test yang berisi 10000 training sampel dengan setiap training sample berisi 32 x 32 image dan 3 rgb channel dari setiap image yang dipakai",
"_____no_output_____"
]
],
[
[
"X_test.shape",
"_____no_output_____"
]
],
[
[
"Setelah kita mempunyai 50000 training image dan 10000 test image, kita lakukan pengecekan y_train yang berisi 50000 sampel\n",
"_____no_output_____"
]
],
[
[
"y_train.shape",
"_____no_output_____"
]
],
[
[
"y_train merupakan array 2D, dan untuk percobaan pengklasifikasian yang kita lakukan cukup membutuhkan array 1D karena kategori yang diminta langsung direct ke tujuan, jadi kita akan mengubahnya menjadi array 1D sekarang",
"_____no_output_____"
]
],
[
[
"y_train[:5]",
"_____no_output_____"
]
],
[
[
"Dengan menggunakan fungsi reshape dan parameter kedua dikosongkan maka y_train akan berubah menjadi 1 dimensi lalu kita cek apakah sudah sesuai.",
"_____no_output_____"
]
],
[
[
"y_train = y_train.reshape(-1,)\ny_train[:5]",
"_____no_output_____"
],
[
"y_test = y_test.reshape(-1,)",
"_____no_output_____"
]
],
[
[
"Daftar 10 kelas yang digunakan untuk klasifikasi",
"_____no_output_____"
]
],
[
[
"classes = [\"airplane\",\"automobile\",\"bird\",\"cat\",\"deer\",\"dog\",\"frog\",\"horse\",\"ship\",\"truck\"]",
"_____no_output_____"
]
],
[
[
"Percobaan fungsi plot_sample untuk menampilkan image yang dari dataset Cifar-10",
"_____no_output_____"
]
],
[
[
"def plot_sample(X, y, index):\n plt.figure(figsize = (15,2))\n plt.imshow(X[index])\n plt.xlabel(classes[y[index]])",
"_____no_output_____"
]
],
[
[
"Sebagai contoh dilakukan percobaan untuk menampilkan gambar",
"_____no_output_____"
]
],
[
[
"plot_sample(X_train, y_train, 1)",
"_____no_output_____"
]
],
[
[
"<h4 style=\"color:purple\">Normalisasi training data</h4>",
"_____no_output_____"
],
[
"Normalisasi value gambar dengan range 0 hingga 1. Gambar memiliki 3 value (R, G, B) dan setiap nilai dapat berkisar dari 0 hingga 255. Oleh karena itu untuk menormalkan dalam rentang 0 hingga 1, kita perlu membagi dengan 255 dengan library numpy",
"_____no_output_____"
]
],
[
[
"X_train = X_train / 255.0\nX_test = X_test / 255.0",
"_____no_output_____"
]
],
[
[
"<h4 style=\"color:purple\">Membangun jaringan saraf tiruan sederhana (artificial neural network ANN) untuk klasifikasi gambar</h4>",
"_____no_output_____"
],
[
"Membangun model dan training",
"_____no_output_____"
]
],
[
[
"ann = models.Sequential([\n \n\n layers.Flatten(input_shape=(32,32,3)),\n layers.Dense(3000, activation='relu'),\n layers.Dense(1000, activation='relu'),\n layers.Dense(10, activation='sigmoid') \n ])\n\nann.compile(optimizer='SGD',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\nann.fit(X_train, y_train, epochs=5)",
"Epoch 1/5\n1563/1563 [==============================] - 93s 59ms/step - loss: 1.8121 - accuracy: 0.3524\nEpoch 2/5\n1563/1563 [==============================] - 92s 59ms/step - loss: 1.6241 - accuracy: 0.4274\nEpoch 3/5\n1563/1563 [==============================] - 92s 59ms/step - loss: 1.5418 - accuracy: 0.4581\nEpoch 4/5\n1563/1563 [==============================] - 92s 59ms/step - loss: 1.4811 - accuracy: 0.4786\nEpoch 5/5\n1563/1563 [==============================] - 92s 59ms/step - loss: 1.4332 - accuracy: 0.4981\n"
]
],
[
[
"Pada epoch 5, akurasi berada di angka 49,81% dapat dikategorikan cukup buruk untuk dilakukan pengklasifikasian gambar",
"_____no_output_____"
],
[
"Hasil report klasifikasi dari 10 kelas dengan mengunakan ANN, sebagai contoh kelas truck pada nomer 9 memiliki presisi di angka 0.57, recall 0.51 dan f1-score 0.54 serta didapat rata-rata weight adalah 0.53",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import confusion_matrix , classification_report\nimport numpy as np\ny_pred = ann.predict(X_test)\ny_pred_classes = [np.argmax(element) for element in y_pred]\n\nprint(\"Classification Report: \\n\", classification_report(y_test, y_pred_classes))",
"Classification Report: \n precision recall f1-score support\n\n 0 0.49 0.52 0.50 1000\n 1 0.68 0.53 0.59 1000\n 2 0.55 0.12 0.19 1000\n 3 0.32 0.49 0.39 1000\n 4 0.36 0.54 0.43 1000\n 5 0.59 0.12 0.20 1000\n 6 0.44 0.68 0.53 1000\n 7 0.59 0.47 0.52 1000\n 8 0.45 0.78 0.57 1000\n 9 0.65 0.40 0.50 1000\n\n accuracy 0.46 10000\n macro avg 0.51 0.46 0.44 10000\nweighted avg 0.51 0.46 0.44 10000\n\n"
]
],
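To make the report above easier to read: precision is the fraction of predictions for a class that are correct, recall is the fraction of true members of the class that are found, and the f1-score is their harmonic mean. A tiny self-contained check, using made-up labels rather than the notebook's data, is:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

# Made-up labels for three classes, only to illustrate how the report is derived.
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 2, 2, 0, 2])

p = precision_score(y_true, y_pred, average=None)
r = recall_score(y_true, y_pred, average=None)
f1_manual = 2 * p * r / (p + r)                 # harmonic mean of precision and recall

print(f1_manual)
print(f1_score(y_true, y_pred, average=None))   # matches the manual computation
```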
[
[
"<h4 style=\"color:purple\">Percobaan membangun jaringan saraf konvolusional (convolutional neural network CNN)</h4>",
"_____no_output_____"
],
[
"Dilakukan pembuatan kernel konvolusi dengan Keras Conv2D yaitu input lapisan yang membantu menghasilkan tensor output dan MaxPooling2D untuk data spasial 2D .Pada filter 32 dan 64 digunakan ukuran kernel (3,3) tipe aktivasi ‘relu’ serta ukuran MaxPooling2D yang dipakai untuk layers pada filter 32 dibawah adalah (2,2) \n",
"_____no_output_____"
]
],
[
[
"cnn = models.Sequential([\n \n #cnn\n layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=(32, 32, 3)),\n layers.MaxPooling2D((2, 2)),\n \n layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),\n layers.MaxPooling2D((2, 2)),\n \n #dense\n layers.Flatten(),\n layers.Dense(64, activation='relu'),\n layers.Dense(10, activation='softmax')\n])",
"_____no_output_____"
]
],
[
[
"Meng-compile dengan optimizer 'adam' dan loss dengan 'sparse_categorical_crossentropy' dengan tujuan ",
"_____no_output_____"
]
],
[
[
"cnn.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])",
"_____no_output_____"
]
],
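A short aside on the loss chosen above: `sparse_categorical_crossentropy` works directly with integer class ids (as in `y_train` here), whereas `categorical_crossentropy` expects one-hot vectors; on matching inputs the two give identical values:

```python
import numpy as np
import tensorflow as tf

y_int = np.array([3, 0, 9])                                    # integer labels
y_onehot = tf.keras.utils.to_categorical(y_int, num_classes=10)
probs = tf.constant(np.full((3, 10), 0.1), dtype=tf.float32)   # dummy softmax output

sparse = tf.keras.losses.sparse_categorical_crossentropy(y_int, probs)
dense = tf.keras.losses.categorical_crossentropy(y_onehot, probs)
print(sparse.numpy(), dense.numpy())  # identical per-sample losses
```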
[
[
"Melatih model CNN dengan 50.000 training data pada 10 epochs",
"_____no_output_____"
]
],
[
[
"cnn.fit(X_train, y_train, epochs=10)",
"Epoch 1/10\n1563/1563 [==============================] - 58s 37ms/step - loss: 1.5087 - accuracy: 0.4630\nEpoch 2/10\n1563/1563 [==============================] - 57s 37ms/step - loss: 1.1307 - accuracy: 0.6019\nEpoch 3/10\n1563/1563 [==============================] - 57s 37ms/step - loss: 1.0008 - accuracy: 0.6507\nEpoch 4/10\n1563/1563 [==============================] - 57s 36ms/step - loss: 0.9199 - accuracy: 0.6801\nEpoch 5/10\n1563/1563 [==============================] - 57s 37ms/step - loss: 0.8530 - accuracy: 0.7034\nEpoch 6/10\n1563/1563 [==============================] - 57s 36ms/step - loss: 0.7961 - accuracy: 0.7234\nEpoch 7/10\n1563/1563 [==============================] - 56s 36ms/step - loss: 0.7461 - accuracy: 0.7421\nEpoch 8/10\n1563/1563 [==============================] - 57s 36ms/step - loss: 0.7015 - accuracy: 0.7552\nEpoch 9/10\n1563/1563 [==============================] - 57s 36ms/step - loss: 0.6608 - accuracy: 0.7717\nEpoch 10/10\n1563/1563 [==============================] - 57s 36ms/step - loss: 0.6266 - accuracy: 0.7801\n"
]
],
[
[
"Dengan CNN, pada akhir 5 epoch, akurasi berada di sekitar 70.34% yang merupakan peningkatan signifikan dibandingkan ANN. CNN paling baik untuk klasifikasi gambar dan memberikan akurasi yang luar biasa. Selain itu, komputasi jauh lebih sedikit dibandingkan dengan ANN sederhana karena penggabungan maksimal mengurangi dimensi gambar sambil tetap mempertahankan fitur-fiturnya",
"_____no_output_____"
],
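As a quick check of the claim above that max pooling shrinks the spatial dimensions while keeping the feature maps, a (2, 2) pooling window halves the height and width of a dummy batch and leaves the channel count unchanged:

```python
import numpy as np
import tensorflow as tf

x = tf.constant(np.random.rand(1, 32, 32, 3), dtype=tf.float32)  # one 32x32 RGB image
pooled = tf.keras.layers.MaxPooling2D((2, 2))(x)
print(x.shape, '->', pooled.shape)  # (1, 32, 32, 3) -> (1, 16, 16, 3)
```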
[
"Mengevaluasi hasil dari CNN, didapat bahwa akurasi diangka 0.6909 atau hampir menyentuh 70% yang berarti akurasi dapat dikatakan cukup baik",
"_____no_output_____"
]
],
[
[
"cnn.evaluate(X_test,y_test)",
"313/313 [==============================] - 4s 11ms/step - loss: 0.9332 - accuracy: 0.6909\n"
]
],
[
[
"Mengkonversi menjadi 1D array dan ",
"_____no_output_____"
]
],
[
[
"y_pred = cnn.predict(X_test)\ny_pred[:5]",
"_____no_output_____"
],
[
"y_classes = [np.argmax(element) for element in y_pred]\ny_classes[:10]",
"_____no_output_____"
]
],
[
[
"Array sudah terkonversi menjadi 1D array dan dapat digunakan",
"_____no_output_____"
]
],
[
[
"y_test[:10]",
"_____no_output_____"
]
],
[
[
"Mengambil plot sample pada kelas ke-1 yakni ship",
"_____no_output_____"
]
],
[
[
"plot_sample(X_test, y_test,6)",
"_____no_output_____"
],
[
"classes[y_classes[2]]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec94ee0e1627b325190f508581200eb55a1d9b06 | 153,799 | ipynb | Jupyter Notebook | Coursera/Applied Data Science with Python Specialization/Applied Plotting, Charting & Data Representation in Python/Advanced plotting with interaction.ipynb | ejgarcia1991/Courses-and-other-non-professional-projects | 94794dd1d6cf626de174330311e3fde4d10cd460 | [
"MIT"
] | 1 | 2021-02-19T22:33:55.000Z | 2021-02-19T22:33:55.000Z | Coursera/Applied Data Science with Python Specialization/Applied Plotting, Charting & Data Representation in Python/Advanced plotting with interaction.ipynb | ejgarcia1991/Courses-and-other-non-professional-projects | 94794dd1d6cf626de174330311e3fde4d10cd460 | [
"MIT"
] | null | null | null | Coursera/Applied Data Science with Python Specialization/Applied Plotting, Charting & Data Representation in Python/Advanced plotting with interaction.ipynb | ejgarcia1991/Courses-and-other-non-professional-projects | 94794dd1d6cf626de174330311e3fde4d10cd460 | [
"MIT"
] | null | null | null | 75.651254 | 33,175 | 0.6849 | [
[
[
"# Assignment 3 - Building a Custom Visualization\n\n---\n\nIn this assignment you must choose one of the options presented below and submit a visual as well as your source code for peer grading. The details of how you solve the assignment are up to you, although your assignment must use matplotlib so that your peers can evaluate your work. The options differ in challenge level, but there are no grades associated with the challenge level you chose. However, your peers will be asked to ensure you at least met a minimum quality for a given technique in order to pass. Implement the technique fully (or exceed it!) and you should be able to earn full grades for the assignment.\n\n\n Ferreira, N., Fisher, D., & Konig, A. C. (2014, April). [Sample-oriented task-driven visualizations: allowing users to make better, more confident decisions.](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/Ferreira_Fisher_Sample_Oriented_Tasks.pdf) \n In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 571-580). ACM. ([video](https://www.youtube.com/watch?v=BI7GAs-va-Q))\n\n\nIn this [paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/Ferreira_Fisher_Sample_Oriented_Tasks.pdf) the authors describe the challenges users face when trying to make judgements about probabilistic data generated through samples. As an example, they look at a bar chart of four years of data (replicated below in Figure 1). Each year has a y-axis value, which is derived from a sample of a larger dataset. For instance, the first value might be the number votes in a given district or riding for 1992, with the average being around 33,000. On top of this is plotted the 95% confidence interval for the mean (see the boxplot lectures for more information, and the yerr parameter of barcharts).\n\n<br>\n<img src=\"readonly/Assignment3Fig1.png\" alt=\"Figure 1\" style=\"width: 400px;\"/>\n<h4 style=\"text-align: center;\" markdown=\"1\"> Figure 1 from (Ferreira et al, 2014).</h4>\n\n<br>\n\nA challenge that users face is that, for a given y-axis value (e.g. 42,000), it is difficult to know which x-axis values are most likely to be representative, because the confidence levels overlap and their distributions are different (the lengths of the confidence interval bars are unequal). One of the solutions the authors propose for this problem (Figure 2c) is to allow users to indicate the y-axis value of interest (e.g. 42,000) and then draw a horizontal line and color bars based on this value. So bars might be colored red if they are definitely above this value (given the confidence interval), blue if they are definitely below this value, or white if they contain this value.\n\n\n<br>\n<img src=\"readonly/Assignment3Fig2c.png\" alt=\"Figure 1\" style=\"width: 400px;\"/>\n<h4 style=\"text-align: center;\" markdown=\"1\"> Figure 2c from (Ferreira et al. 2014). Note that the colorbar legend at the bottom as well as the arrows are not required in the assignment descriptions below.</h4>\n\n<br>\n<br>\n\n**Easiest option:** Implement the bar coloring as described above - a color scale with only three colors, (e.g. blue, white, and red). Assume the user provides the y axis value of interest as a parameter or variable.\n\n\n**Harder option:** Implement the bar coloring as described in the paper, where the color of the bar is actually based on the amount of data covered (e.g. 
a gradient ranging from dark blue for the distribution being certainly below this y-axis, to white if the value is certainly contained, to dark red if the value is certainly not contained as the distribution is above the axis).\n\n**Even Harder option:** Add interactivity to the above, which allows the user to click on the y axis to set the value of interest. The bar colors should change with respect to what value the user has selected.\n\n**Hardest option:** Allow the user to interactively set a range of y values they are interested in, and recolor based on this (e.g. a y-axis band, see the paper for more details).\n\n---\n\n*Note: The data given for this assignment is not the same as the data used in the article and as a result the visualizations may look a little different.*",
"_____no_output_____"
]
],
[
[
"# Use the following data for this assignment:\n\nimport pandas as pd\nimport numpy as np\n\nnp.random.seed(12345)\n\ndf = pd.DataFrame([np.random.normal(32000,200000,3650), \n np.random.normal(43000,100000,3650), \n np.random.normal(43500,140000,3650), \n np.random.normal(48000,70000,3650)], \n index=[1992,1993,1994,1995])\ndf",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport scipy.stats as st\n%matplotlib notebook\n\n\ndef plotMeanBarCharteasy(y,df):\n dfMean=df.mean(axis=1) #get the mean over columns\n #Get the confidence interval using Scipy, note that we need to get the sem over each row so we use apply\n confInterval=st.t.interval(0.95, len(df.columns)-1, loc=dfMean, scale=df.apply(st.sem, axis=1)) \n confIntervalDown=dfMean.values-confInterval[0] #The results are absolute value, but we need to pass the distance to the mean\n confIntervalUp=confInterval[1]-dfMean.values \n #Set colors depending on the interval range using RGBA\n color=[]\n for low,high in zip(confInterval[0],confInterval[1]):\n if(high < y):\n color.append([0,0,1,1]) #Blue\n elif(low>y):\n color.append([1,0,0,1]) #Red\n else:\n color.append([0.8,0.8,0.8,1]) #Gray\n #Plotting\n plt.figure()\n plt.bar(df.index,dfMean,yerr=[confIntervalDown,confIntervalUp],color=color, capsize=12) #Plot barchart\n plt.xticks(df.index, ('1992', '1993', '1994', '1995')); #set fixed X ticks\n plt.axhline(y=y, color='black', linewidth=2, linestyle='--'); #set horizontal line equivalent to y\nplotMeanBarCharteasy(41500,df)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport scipy.stats as st\n%matplotlib notebook\ndef plotMeanBarChartHardest(ylow,yhigh,df):\n dfMean=df.mean(axis=1) #get the mean over columns\n #Get the confidence interval using Scipy, note that we need to get the sem over each row so we use apply\n confInterval=st.t.interval(0.95, len(df.columns)-1, loc=dfMean, scale=df.apply(st.sem, axis=1)) \n confIntervalDown=dfMean.values-confInterval[0] #The results are absolute value, but we need to pass the distance to the mean\n confIntervalUp=confInterval[1]-dfMean.values \n #Set colors depending on the interval range using RGBA\n def calculateColors(ylow,yhigh):\n color=[]\n for low,high in zip(confInterval[0],confInterval[1]):\n if(high<ylow):\n color.append([0,0,1,1]) #Blue for below bars\n elif(low>yhigh):\n color.append([1,0,0,1]) #Red for above bars\n elif(low<ylow and high>yhigh):\n color.append([0.8,0.8,0.8,0.3]) #Gray for range contained in bars\n elif(low>ylow and high<yhigh):\n color.append([0.1,0.1,0.1,1]) #Black for bars contained in range\n #Both of these Elif deal with the overlapping of ranges and calculate a gradient based on the degree of overlapping \n #as well as the location of the ranges for blue or red colors\n elif(low<ylow and high>ylow):\n overlap=(high-ylow)/(yhigh-ylow)\n color.append([1-overlap,1-overlap,1,1]) \n elif(low>ylow and high>yhigh):\n overlap=(yhigh-low)/(yhigh-ylow)\n color.append([1,1-overlap,1-overlap,1])\n return color\n \n colors=calculateColors(ylow,yhigh) \n #Plotting\n plt.figure()\n bars=plt.bar(df.index,dfMean,yerr=[confIntervalDown,confIntervalUp],color=colors, capsize=12) #Plot barchart\n plt.xticks(df.index, ('1992', '1993', '1994', '1995')); #set fixed X ticks\n yplotmin=plt.axhline(y=ylow, color='black',alpha=1, linewidth=2, linestyle='--'); #set horizontal line equivalent to y\n yplotmax=plt.axhline(y=yhigh, color='black',alpha=1, linewidth=2, linestyle='--'); #set horizontal line equivalent to y\n fill=plt.fill([1991,1991,1996,1996],[ylow,yhigh,yhigh,ylow],color='gray',alpha=0.2) #a gray polygon indicating the range\n plt.xlim([1991, 1996]) #limit axes for better visibility\n \n ymaxtext=plt.text(1996.1, yhigh,s='%d' %yhigh,bbox=dict(facecolor='white', alpha=0.5)) # A couple of text on the side for precision\n ymintext=plt.text(1996.1, ylow,s='%d' %ylow,bbox=dict(facecolor='white', alpha=0.5))\n yplotmin.set_ydata(ylow) #small hack to remove the double array values into a single value\n yplotmax.set_ydata(yhigh)\n \n \n def mouse_press(event):\n dymax=float(abs((event.ydata-yplotmax.get_ydata())/yplotmax.get_ydata())) #get dy to top line\n dymin=float(abs((event.ydata-yplotmin.get_ydata())/yplotmin.get_ydata())) #get dy to bot line\n #if it'sa left click and delta is small enough and the line is selected... 
this line is selected is using the alpha value\n #This is a less than ideal practice, but there is no easy workaround for this in python, once the canvas is\n #interactive mode you're limited to using variables inside the canvas.\n if(event.button==1 and dymax<=0.01 and yplotmax.get_alpha()==1):\n yplotmax.set_color('Red')\n yplotmax.set_alpha(0.99)\n return\n elif(event.button==1 and dymax<=0.01 and yplotmax.get_alpha()==0.99):\n yplotmax.set_color('Black')\n yplotmax.set_alpha(1)\n return\n elif(event.button==1 and dymin<=0.01 and yplotmin.get_alpha()==1):\n yplotmin.set_color('Red')\n yplotmin.set_alpha(0.99)\n return\n elif(event.button==1 and dymin<=0.01 and yplotmin.get_alpha()==0.99):\n yplotmin.set_color('Black')\n yplotmin.set_alpha(1)\n return\n \n \n def mouse_move(event):\n #Going from the same logic above, we only want to move the line if it's selected, AKA if the alpha is 0.99\n yMaxSelected=yplotmax.get_alpha()==0.99\n yMinSelected=yplotmin.get_alpha()==0.99\n if(yMaxSelected):\n line=yplotmax\n text=ymaxtext\n elif(yMinSelected):\n line=yplotmin\n text=ymintext\n line.set_ydata(event.ydata) #move the line\n text.set_text('%d' %event.ydata) #update the text \n text.set_position((1996.1, event.ydata))#and the position\n #this one is also a bit clunky in matplotlib. A fill is made up of polygon objects, so to access the fill rectangle you\n #need to iterate over the polygons (only one in this case) and update the 5 points of the vertex, 4 vertex + start point to close polygon\n for x in fill:\n x.set_xy([[1991.,yplotmin.get_ydata()],[1991.,yplotmax.get_ydata()],[1996.,yplotmax.get_ydata()],[1996.,yplotmin.get_ydata()],[1991.,yplotmin.get_ydata()]])\n \n #This is just to ensure you're drawing colors based on the location of the lines and not the actual original lines\n botline=min(yplotmin.get_ydata(),yplotmax.get_ydata()) \n topline=max(yplotmin.get_ydata(),yplotmax.get_ydata())\n colors=calculateColors(botline,topline)\n for i in range(4):\n bars[i].set_color(colors[i]) \n \n #Add the events\n plt.gcf().canvas.mpl_connect('motion_notify_event', mouse_move)\n plt.gcf().canvas.mpl_connect('button_press_event', mouse_press)\n return plt\n\npl=plotMeanBarChartHardest(37000,43000,df) #Enjoy!\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ec94f2294bc1e3fa542ebca9c7c464b0535ea8f6 | 61,860 | ipynb | Jupyter Notebook | datacamp_ml/ml_classification_regression/multiclass_logistic_regression.ipynb | issagaliyeva/machine_learning | 63f4d39a95147cdac4ef760cb47dffc318793a99 | [
"MIT"
] | null | null | null | datacamp_ml/ml_classification_regression/multiclass_logistic_regression.ipynb | issagaliyeva/machine_learning | 63f4d39a95147cdac4ef760cb47dffc318793a99 | [
"MIT"
] | null | null | null | datacamp_ml/ml_classification_regression/multiclass_logistic_regression.ipynb | issagaliyeva/machine_learning | 63f4d39a95147cdac4ef760cb47dffc318793a99 | [
"MIT"
] | null | null | null | 127.022587 | 24,463 | 0.82753 | [
[
[
"### Multiclass Logistic Regression\n",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n%matplotlib inline\nsns.set()\n",
"_____no_output_____"
],
[
"# one-vs-all\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.datasets import make_classification\n\nX, y = make_classification(n_samples=200, n_features=2, n_classes=3, n_informative=2,\n n_redundant=0, n_clusters_per_class=1, class_sep=2.0, random_state=101)\nplt.scatter(X[:, 0], X[:, 1], marker='o', c=y, linewidths=0, edgecolors=None)\nplt.show()\n",
"_____no_output_____"
],
[
"from sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import classification_report\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)\nclf = LogisticRegression().fit(X_train, y_train)\ny_pred = clf.predict(X_test)\n\nprint(classification_report(y_test, y_pred))",
" precision recall f1-score support\n\n 0 1.00 1.00 1.00 20\n 1 1.00 1.00 1.00 20\n 2 1.00 1.00 1.00 20\n\n accuracy 1.00 60\n macro avg 1.00 1.00 1.00 60\nweighted avg 1.00 1.00 1.00 60\n\n"
],
[
"# predict one variable & look at results (PROBABILITIES)\nprint(X_test[0])\nprint(y_test[0])\nprint(y_pred[0])\n\nprint(clf.predict_proba(X_test[0].reshape(1, -1)))",
"[-3.26744968 1.19639333]\n0\n0\n[[0.85566647 0.13526263 0.0090709 ]]\n"
],
[
"# hardcore version w/ statsmodels\n\nX, y = make_classification(n_samples=10000, n_features=10, n_informative=10, n_redundant=0, random_state=101)",
"_____no_output_____"
],
[
"import statsmodels.api as sm\n\nXc = sm.add_constant(X)\nlogistic_regression = sm.Logit(y, Xc)\nfitted_model = logistic_regression.fit()\n",
"Optimization terminated successfully.\n Current function value: 0.438685\n Iterations 7\n"
],
[
"fitted_model.summary()",
"_____no_output_____"
]
],
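Before the summary fields are described below, note that the same quantities are exposed programmatically on the fitted results object. A small self-contained check on a fresh synthetic fit (mirroring `fitted_model` above) shows how the LLR statistic and its p-value relate to the log-likelihoods:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats
from sklearn.datasets import make_classification

# Fresh synthetic data so the snippet runs on its own.
X_demo, y_demo = make_classification(n_samples=500, n_features=5, n_informative=5,
                                     n_redundant=0, random_state=101)
res = sm.Logit(y_demo, sm.add_constant(X_demo)).fit(disp=0)

llr_manual = 2 * (res.llf - res.llnull)        # likelihood-ratio statistic
print(np.isclose(llr_manual, res.llr))         # True
print(res.llr_pvalue, stats.chi2.sf(res.llr, res.df_model))  # same chi-squared tail probability
```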
[
[
"- Converged: whether the model reached conversion. Use this model iff it's true\n- LL-Null: LogLikelihood when only the intercept is used as a predictor\n- LLR p-value: chi-squared probability of getting log-likelihood ratio statistically greater\nthan LLR.\n\nTHE BELOW CODE PRODUCES THE SAME RESULTS AS STATSMODELS!!!!!!!!!!!!!!!",
"_____no_output_____"
]
],
[
[
"# Stochastic Gradient Descent\nfrom sklearn.preprocessing import StandardScaler\n\nobservations = len(X)\nvariables = ['VAR' + str(i + 1) for i in range(10)]\n\ndef random_w(p):\n return np.array([np.random.normal() for j in range(p)])\n\ndef sigmoid(X, w):\n return 1. / (1. + np.exp(-np.dot(X, w)))\n\ndef hypothesis(X, w):\n return np.dot(X, w)\n\ndef loss(X, w, y):\n return hypothesis(X, w) - y\n\ndef logit_loss(X, w, y):\n return sigmoid(X, w) - y\n\ndef squared_loss(X, w, y):\n return loss(X, w, y)**2\n\ndef gradient(X, w, y, loss_type=squared_loss):\n gradients = list()\n n = float(len(y))\n\n for j in range(len(w)):\n gradients.append(np.sum(loss_type(X, w, y) * X[:, j]) / n)\n return gradients\n\ndef update(X, w, y, alpha=0.01, loss_type=squared_loss):\n return [t - alpha * g for t, g in zip(w, gradient(X, w, y, loss_type))]\n\ndef optimize(X, y, alpha=0.01, eta=10**-12, loss_type=squared_loss, iterations=1000):\n standardize = StandardScaler()\n Xst = standardize.fit_transform(X)\n orig_means, orig_stds = standardize.mean_, np.sqrt(standardize.var_)\n Xst = np.column_stack((Xst, np.ones(observations)))\n w = random_w(Xst.shape[1])\n path = list()\n for k in range(iterations):\n SSL = np.sum(squared_loss(Xst, w, y))\n new_w = update(Xst, w, y, alpha=alpha, loss_type=logit_loss)\n new_SSL = np.sum(squared_loss(Xst, new_w, y))\n w = new_w\n\n if k >= 5 and (-eta <= new_SSL - SSL <= eta):\n path.append(new_SSL)\n break\n if k % (iterations / 20) == 0:\n path.append(new_SSL)\n unstandardized_beta = w[:-1] / orig_stds\n unstandardized_bias = w[-1] - np.sum((orig_means / orig_stds) * w[:-1])\n return np.insert(unstandardized_beta, 0, unstandardized_bias), path, k\n\nalpha = 1\nw, path, iterations = optimize(X, y, alpha, eta=10**-5, loss_type=logit_loss, iterations=10000)\nprint (\"These are our final standardized coefficients: %s\" % w)\nprint (\"Reached after %i iterations\" % (iterations+1))\n",
"These are our final standardized coefficients: [ 0.42991408 0.0670771 -0.78279578 0.12208733 0.28410285 0.14689341\n -0.34143436 0.05031078 -0.1393206 0.11267402 -0.47916908]\nReached after 431 iterations\n"
]
],
[
[
"## Finally, there are two versions to implement Logistic Regression with sklearn\n### First, we will set the parameter extremely high (unregularized, since C is high) & the stopping parameter is extremely low\n",
"_____no_output_____"
]
],
[
[
"clf = LogisticRegression(C=1e4, tol=1e-25, random_state=101, verbose=1).fit(X, y)\ncoeffs = [clf.intercept_[0]]\ncoeffs.extend(clf.coef_[0])\ncoeffs",
"[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s finished\n"
]
],
[
[
"As the last model, we try the Scikit-learn implementation of the SGD. Getting the same\nweights is really tricky, since the model is really complex, and the parameters should be\noptimized for performance, not for obtaining the same result as for the closed form\napproach. So, use this example to understand the coefficients in the model, but not for\ntraining a real-world model:\n",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import SGDClassifier\nclf = SGDClassifier(loss='log', alpha=1e-4, n_iter_no_change=1e2, random_state=101, verbose=0).fit(X, y)\ncoeffs = [clf.intercept_[0]]\ncoeffs.extend(clf.coef_[0])\ncoeffs",
"_____no_output_____"
],
[
"from sklearn.svm import SVC\nfrom sklearn.feature_selection import RFECV\nfrom sklearn.model_selection import StratifiedKFold\n\nsvc = SVC(kernel='linear')\nmin_features = 1\nrfecv = RFECV(estimator=svc, step=1, cv=StratifiedKFold(5),\n scoring='accuracy', min_features_to_select=min_features).fit(X, y)\n\noptimal_features = rfecv.n_features_\nplt.title(f'Optimal number of features: {optimal_features}', fontsize=16)\nplt.plot(range(min_features, len(rfecv.grid_scores_) + min_features), rfecv.grid_scores_)\nplt.xlabel('Number of features')\nplt.ylabel('Score')\nplt.show()\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec95019e768f7d784df73c34634f7cb9598e2838 | 3,005 | ipynb | Jupyter Notebook | 108-diophantine-reciprocals-i.ipynb | arkeros/projecteuler | c95db97583034af8fc61d5786692d82eabe50c12 | [
"MIT"
] | 2 | 2017-02-19T12:37:13.000Z | 2021-01-19T04:58:09.000Z | 108-diophantine-reciprocals-i.ipynb | arkeros/projecteuler | c95db97583034af8fc61d5786692d82eabe50c12 | [
"MIT"
] | null | null | null | 108-diophantine-reciprocals-i.ipynb | arkeros/projecteuler | c95db97583034af8fc61d5786692d82eabe50c12 | [
"MIT"
] | 4 | 2018-01-05T14:29:09.000Z | 2020-01-27T13:37:40.000Z | 40.608108 | 168 | 0.540433 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ec95050f2b808270228e77f5671ddf22fb7d5532 | 13,433 | ipynb | Jupyter Notebook | Model Results/Sentiment Analysis Model - Option C.ipynb | keenanbernard/NLP-Data | ae3460f02ac913e5482c7b3e63d8760d5d41dbfc | [
"Apache-2.0"
] | null | null | null | Model Results/Sentiment Analysis Model - Option C.ipynb | keenanbernard/NLP-Data | ae3460f02ac913e5482c7b3e63d8760d5d41dbfc | [
"Apache-2.0"
] | null | null | null | Model Results/Sentiment Analysis Model - Option C.ipynb | keenanbernard/NLP-Data | ae3460f02ac913e5482c7b3e63d8760d5d41dbfc | [
"Apache-2.0"
] | null | null | null | 37.00551 | 128 | 0.484404 | [
[
[
"import re\nimport string\nimport numpy as np\nimport pandas as pd\n\n# importing data set into dataframes with two columns: Text and Class\ntestData = pd.read_csv(\"/home/jovyan/binder/test.csv\", names=[\"Review\", \"Class\"], delimiter=\",\", header=None)\ntrainData = pd.read_csv(\"/home/jovyan/binder/train.csv\", names=[\"Review\", \"Class\"], delimiter=\",\", header=None)\nvalData = pd.read_csv(\"/home/jovyan/binder/val.csv\", names=[\"Review\", \"Class\"], delimiter=\",\", header=None)",
"_____no_output_____"
],
[
"print(testData.head())\nprint(\"\")\nprint(trainData.head())\nprint(\"\")\nprint(valData.head())\nprint(\"\")\nprint(\"Test Samples per class: {}\".format(np.bincount(testData.Class)))\nprint(\"Train Samples per class: {}\".format(np.bincount(trainData.Class)))\nprint(\"Val Samples per class: {}\".format(np.bincount(valData.Class)))\n# Count of samples in each data set",
" Review Class\n0 wild things is a suspenseful thriller starring... 1\n1 i know it already opened in december , but i f... 1\n2 what's shocking about \" carlito's way \" is how... 1\n3 uncompromising french director robert bresson'... 1\n4 aggressive , bleak , and unrelenting film abou... 1\n\n Review Class\n0 note : some may consider portions of the follo... 1\n1 note : some may consider portions of the follo... 1\n2 every once in a while you see a film that is s... 1\n3 when i was growing up in 1970s , boys in my sc... 1\n4 the muppet movie is the first , and the best m... 1\n\n Review Class\n0 if he doesn=92t watch out , mel gibson is in d... 1\n1 wong kar-wei's \" fallen angels \" is , on a pur... 1\n2 there is nothing like american history x in th... 1\n3 an unhappy italian housewife , a lonely waiter... 1\n4 when people are talking about good old times ,... 1\n\nTest Samples per class: [200 200]\nTrain Samples per class: [700 700]\nVal Samples per class: [100 100]\n"
],
[
"conda install -c anaconda nltk",
"Collecting package metadata (current_repodata.json): done\nSolving environment: done\n\n\n==> WARNING: A newer version of conda exists. <==\n current version: 4.9.2\n latest version: 4.11.0\n\nPlease update conda by running\n\n $ conda update -n base conda\n\n\n\n## Package Plan ##\n\n environment location: /srv/conda/envs/notebook\n\n added / updated specs:\n - nltk\n\n\nThe following packages will be downloaded:\n\n package | build\n ---------------------------|-----------------\n ca-certificates-2020.10.14 | 0 128 KB anaconda\n certifi-2020.6.20 | py36_0 160 KB anaconda\n click-7.1.2 | py_0 67 KB anaconda\n nltk-3.5 | py_0 1.1 MB anaconda\n regex-2020.10.15 | py36h7b6447c_0 361 KB anaconda\n ------------------------------------------------------------\n Total: 1.8 MB\n\nThe following NEW packages will be INSTALLED:\n\n click anaconda/noarch::click-7.1.2-py_0\n nltk anaconda/noarch::nltk-3.5-py_0\n regex anaconda/linux-64::regex-2020.10.15-py36h7b6447c_0\n\nThe following packages will be SUPERSEDED by a higher-priority channel:\n\n ca-certificates conda-forge::ca-certificates-2021.5.3~ --> anaconda::ca-certificates-2020.10.14-0\n certifi conda-forge::certifi-2021.5.30-py36h5~ --> anaconda::certifi-2020.6.20-py36_0\n\n\n\nDownloading and Extracting Packages\ncertifi-2020.6.20 | 160 KB | ##################################### | 100% \nca-certificates-2020 | 128 KB | ##################################### | 100% \nregex-2020.10.15 | 361 KB | ##################################### | 100% \nnltk-3.5 | 1.1 MB | ##################################### | 100% \nclick-7.1.2 | 67 KB | ##################################### | 100% \nPreparing transaction: done\nVerifying transaction: done\nExecuting transaction: done\n\nNote: you may need to restart the kernel to use updated packages.\n"
],
[
"import nltk\nfrom nltk.corpus import stopwords\nfrom nltk.tokenize import word_tokenize\nnltk.download('punkt')\nnltk.download('stopwords')",
"[nltk_data] Downloading package punkt to /home/jovyan/nltk_data...\n[nltk_data] Unzipping tokenizers/punkt.zip.\n[nltk_data] Downloading package stopwords to /home/jovyan/nltk_data...\n[nltk_data] Unzipping corpora/stopwords.zip.\n"
],
[
"# function used for text cleaning of input data\ndef clean(df):\n corpus = list() # define empty list for corpus\n lines = df[\"Review\"].values.tolist() # apply text values from \"Review\" column to the data frame\n for text in lines: \n text = text.lower() \n text = re.sub(r\"[,.\\\"!$%^&*(){}?/;`~:<>+=-]\", \"\", text) # regexp used to remove all special characters\n tokens = word_tokenize(text) # splitting text\n table = str.maketrans('', '', string.punctuation) \n stripped = [w.translate(table) for w in tokens]\n words = [word for word in stripped if word.isalpha()]\n stop_words = set(stopwords.words(\"english\"))\n stop_words.discard(\"not\")\n words = ' '.join(words) # joining tokenize words together\n corpus.append(words) # amends cleaned text to corpus\n return corpus",
"_____no_output_____"
],
[
"# applying clean function to data sets\nclTest = clean(testData)\nclTrain = clean(trainData)\nclVal = clean(valData)",
"_____no_output_____"
],
[
"# loading TF-IDF class for feature extraction\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nTF = TfidfVectorizer(min_df=15) \nxTrain = TF.fit_transform(clTrain).toarray() \nyTrain = trainData[['Class']].values\nxTest = TF.transform(clTest).toarray()\nyTest = testData[['Class']].values\nxVal = TF.transform(clVal).toarray()\nyVal = valData[['Class']].values",
"_____no_output_____"
],
[
"# loading Multinomial Naive Bayes model for text classification\nfrom sklearn.naive_bayes import MultinomialNB\nmNB = MultinomialNB()\nmNB.fit(xTrain, np.ravel(yTrain)) \ny_pred_ts = mNB.predict(xTest)\ny_pred_tr = mNB.predict(xTrain)\ny_pred_va = mNB.predict(xVal)",
"_____no_output_____"
],
[
"# classification report used to evaluate perfomance (Accuracy) of ML model on test and val datasets\nfrom sklearn.metrics import classification_report, confusion_matrix\nprint(\"Test Set Metrics:\\n{}\".format(classification_report(yTest, y_pred_ts)))\nprint(\"\")\nprint(\"Confusion Matrix:\\n{}\".format(confusion_matrix(yTest, y_pred_ts)))\nprint(\"\")\nprint(\"Sentiment Analysis on Test Set:\\n{}\".format(mNB.predict(TF.transform(clTest).toarray())))\nprint(\"\")\nprint(\"Validation Set Metrics:\\n{}\".format(classification_report(yVal, y_pred_va)))\nprint(\"\")\nprint(\"Confusion Matrix:\\n{}\".format(confusion_matrix(yVal, y_pred_va)))\nprint(\"\")\nprint(\"Sentiment Analysis on Validation Set:\\n{}\".format(mNB.predict(TF.transform(clVal).toarray())))",
"Test Set Metrics:\n precision recall f1-score support\n\n 0 0.83 0.88 0.86 200\n 1 0.87 0.82 0.85 200\n\n accuracy 0.85 400\n macro avg 0.85 0.85 0.85 400\nweighted avg 0.85 0.85 0.85 400\n\n\nConfusion Matrix:\n[[176 24]\n [ 35 165]]\n\nSentiment Analysis on Test Set:\n[1 1 1 1 1 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1\n 0 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 0 1 1 0 1 1 1 1\n 1 0 1 1 1 1 1 1 1 1 0 1 1 0 1 1 1 1 1 1 0 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 0\n 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 0 1 1 0 1 0 1 0 1 1 0 1 1 1 1 1 1\n 1 1 1 0 1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 0 1 0 1 0 1 0 1 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0\n 0 0 1 1 0 0 0 0 0 1 1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0\n 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0\n 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0\n 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0]\n\nValidation Set Metrics:\n precision recall f1-score support\n\n 0 0.80 0.81 0.81 100\n 1 0.81 0.80 0.80 100\n\n accuracy 0.81 200\n macro avg 0.81 0.81 0.80 200\nweighted avg 0.81 0.81 0.80 200\n\n\nConfusion Matrix:\n[[81 19]\n [20 80]]\n\nSentiment Analysis on Validation Set:\n[1 1 1 1 1 1 1 1 1 0 1 1 0 1 1 1 0 1 1 1 1 1 1 1 0 1 1 1 1 1 0 1 1 1 0 1 1\n 1 1 1 0 1 0 0 1 1 1 0 0 1 1 1 0 1 0 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1\n 0 1 1 1 1 1 1 0 0 1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1 0 0 0 0 0 0 0 0 0\n 0 0 1 0 0 0 1 0 1 1 0 0 0 0 1 0 0 1 1 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0\n 1 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0\n 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec950740481a8fcabf726a80d6aedb8925ac6928 | 563,697 | ipynb | Jupyter Notebook | projects/project-1-cnn-with-tensorflow-keras-for-fashion-mnist.ipynb | cj-asimov12/ai_neural_networks | a89e200822d9136e66ee39da6d84b7c5ded0b7e2 | [
"MIT"
] | null | null | null | projects/project-1-cnn-with-tensorflow-keras-for-fashion-mnist.ipynb | cj-asimov12/ai_neural_networks | a89e200822d9136e66ee39da6d84b7c5ded0b7e2 | [
"MIT"
] | null | null | null | projects/project-1-cnn-with-tensorflow-keras-for-fashion-mnist.ipynb | cj-asimov12/ai_neural_networks | a89e200822d9136e66ee39da6d84b7c5ded0b7e2 | [
"MIT"
] | null | null | null | 169.941815 | 103,112 | 0.851218 | [
[
[
"# <a id=\"4\">Loading required packages</a>",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import classification_report\nfrom tensorflow import keras\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Flatten, Conv2D, Dropout, MaxPooling2D\nfrom IPython.display import SVG\nfrom tensorflow.keras.utils import model_to_dot\nfrom tensorflow.keras.utils import plot_model\nfrom tensorflow.keras.utils import to_categorical\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline \nimport plotly.graph_objs as go\nimport plotly.figure_factory as ff\nfrom plotly import tools\nfrom plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\ninit_notebook_mode(connected=True)\nimport warnings\nwarnings.simplefilter(action='ignore', category=FutureWarning)",
"_____no_output_____"
]
],
[
[
"# <a id=\"4\">Parameters</a>",
"_____no_output_____"
]
],
[
[
"IMG_ROWS = 28\nIMG_COLS = 28\nNUM_CLASSES = 10\nTEST_SIZE = 0.2\nRANDOM_STATE = 2018\n#Model\nNO_EPOCHS = 50\nBATCH_SIZE = 128\n\nIS_LOCAL = False\n\nimport os\n\nif(IS_LOCAL):\n PATH=\"C:\\\\Users\\\\black\\\\Desktop\\\\ai_py\\\\code_files\\\\input\\\\fashionmnist\"\nelse:\n PATH=\"C:\\\\Users\\\\black\\\\Desktop\\\\ai_py\\\\code_files\\\\input\\\\\"\nprint(os.listdir(PATH))",
"['fashion-mnist_test', 'fashion-mnist_test.csv', 'fashion-mnist_test.csv.zip', 'fashion-mnist_train', 'fashion-mnist_train.csv', 'fashion-mnist_train.csv.zip', 'fashionmnist', 't10k-images-idx3-ubyte.zip', 't10k-labels-idx1-ubyte', 'train-images-idx3-ubyte.zip', 'train-labels-idx1-ubyte']\n"
]
],
[
[
"# <a id=\"3\">Read the data</a>\n\nThere are 10 different classes of images, as following: \n\n* **0**: **T-shirt/top**; \n* **1**: **Trouser**; \n* **2**: **Pullover**; \n* **3**: **Dress**;\n* **4**: **Coat**;\n* **5**: **Sandal**;\n* **6**: **Shirt**;\n* **7**: **Sneaker**;\n* **8**: **Bag**;\n* **9**: **Ankle boot**.\n\nImage dimmensions are **28**x**28**. \n\nThe train set and test set are given in two separate datasets.\n",
"_____no_output_____"
]
],
[
[
"train_file = PATH+\"fashion-mnist_train.csv\"\ntest_file = PATH+\"fashion-mnist_test.csv\"\n\ntrain_data = pd.read_csv(train_file)\ntest_data = pd.read_csv(test_file)",
"_____no_output_____"
]
],
[
[
"# <a id=\"4\">Data exploration</a>",
"_____no_output_____"
],
[
"The dimmension of the original train, test set are as following:",
"_____no_output_____"
]
],
[
[
"print(\"Fashion MNIST train - rows:\",train_data.shape[0],\" columns:\", train_data.shape[1])\nprint(\"Fashion MNIST test - rows:\",test_data.shape[0],\" columns:\", test_data.shape[1])",
"Fashion MNIST train - rows: 60000 columns: 785\nFashion MNIST test - rows: 10000 columns: 785\n"
]
],
[
[
"## <a id=\"41\">Class distribution</a>\n\nLet's see how many number of images are in each class. We start with the train set.\n\n### Train set images class distribution",
"_____no_output_____"
]
],
[
[
"# Create a dictionary for each type of label \nlabels = {0 : \"T-shirt/top\", 1: \"Trouser\", 2: \"Pullover\", 3: \"Dress\", 4: \"Coat\",\n 5: \"Sandal\", 6: \"Shirt\", 7: \"Sneaker\", 8: \"Bag\", 9: \"Ankle Boot\"}\n\ndef get_classes_distribution(data):\n # Get the count for each label\n label_counts = data[\"label\"].value_counts()\n\n # Get total number of samples\n total_samples = len(data)\n\n\n # Count the number of items in each class\n for i in range(len(label_counts)):\n label = labels[label_counts.index[i]]\n count = label_counts.values[i]\n percent = (count / total_samples) * 100\n print(\"{:<20s}: {} or {}%\".format(label, count, percent))\n\nget_classes_distribution(train_data)",
"Pullover : 6000 or 10.0%\nAnkle Boot : 6000 or 10.0%\nShirt : 6000 or 10.0%\nT-shirt/top : 6000 or 10.0%\nDress : 6000 or 10.0%\nCoat : 6000 or 10.0%\nSandal : 6000 or 10.0%\nBag : 6000 or 10.0%\nSneaker : 6000 or 10.0%\nTrouser : 6000 or 10.0%\n"
]
],
[
[
"The classes are equaly distributed in the train set (10% each). Let's check the same for the test set. \nLet's also plot the class distribution.\n\n",
"_____no_output_____"
]
],
[
[
"def plot_label_per_class(data):\n f, ax = plt.subplots(1,1, figsize=(12,4))\n g = sns.countplot(data.label, order = data[\"label\"].value_counts().index)\n g.set_title(\"Number of labels for each class\")\n\n for p, label in zip(g.patches, data[\"label\"].value_counts().index):\n g.annotate(labels[label], (p.get_x(), p.get_height()+0.1))\n plt.show() \n \nplot_label_per_class(train_data)",
"_____no_output_____"
]
],
[
[
"### Test set images class distribution",
"_____no_output_____"
]
],
[
[
"get_classes_distribution(test_data)",
"T-shirt/top : 1000 or 10.0%\nTrouser : 1000 or 10.0%\nPullover : 1000 or 10.0%\nDress : 1000 or 10.0%\nBag : 1000 or 10.0%\nShirt : 1000 or 10.0%\nSandal : 1000 or 10.0%\nCoat : 1000 or 10.0%\nSneaker : 1000 or 10.0%\nAnkle Boot : 1000 or 10.0%\n"
]
],
[
[
"Also in the test set the 10 classes are equaly distributed (10% each). \n\nLets' also plot the class distribution.",
"_____no_output_____"
]
],
[
[
"plot_label_per_class(test_data)",
"_____no_output_____"
]
],
[
[
"## <a id=\"42\">Sample images</a>\n\n### Train set images\n\nLet's plot some samples for the images. \nWe add labels to the train set images, with the corresponding fashion item category. ",
"_____no_output_____"
]
],
[
[
"def sample_images_data(data):\n # An empty list to collect some samples\n sample_images = []\n sample_labels = []\n\n # Iterate over the keys of the labels dictionary defined in the above cell\n for k in labels.keys():\n # Get four samples for each category\n samples = data[data[\"label\"] == k].head(4)\n # Append the samples to the samples list\n for j, s in enumerate(samples.values):\n # First column contain labels, hence index should start from 1\n img = np.array(samples.iloc[j, 1:]).reshape(IMG_ROWS,IMG_COLS)\n sample_images.append(img)\n sample_labels.append(samples.iloc[j, 0])\n\n print(\"Total number of sample images to plot: \", len(sample_images))\n return sample_images, sample_labels\n\ntrain_sample_images, train_sample_labels = sample_images_data(train_data)",
"Total number of sample images to plot: 40\n"
]
],
[
[
"Let's now plot the images. \nThe labels are shown above each image.",
"_____no_output_____"
]
],
[
[
"def plot_sample_images(data_sample_images,data_sample_labels,cmap=\"Blues\"):\n # Plot the sample images now\n f, ax = plt.subplots(5,8, figsize=(16,10))\n\n for i, img in enumerate(data_sample_images):\n ax[i//8, i%8].imshow(img, cmap=cmap)\n ax[i//8, i%8].axis('off')\n ax[i//8, i%8].set_title(labels[data_sample_labels[i]])\n plt.show() \n \nplot_sample_images(train_sample_images,train_sample_labels, \"Greens\")",
"_____no_output_____"
]
],
[
[
"### Test set images\n\nLet's plot now a selection of the test set images. \nLabels are as well added (they are known). ",
"_____no_output_____"
]
],
[
[
"test_sample_images, test_sample_labels = sample_images_data(test_data)\nplot_sample_images(test_sample_images,test_sample_labels)",
"Total number of sample images to plot: 40\n"
]
],
[
[
"# <a id=\"5\">Model</a>\n\nWe start with preparing the model.",
"_____no_output_____"
],
[
"## <a id=\"51\">Prepare the model</a>\n\n## Data preprocessing\n\nFirst we will do a data preprocessing to prepare for the model.\n\nWe reshape the columns from (784) to (28,28,1). We also save label (target) feature as a separate vector.",
"_____no_output_____"
]
],
[
[
"# data preprocessing\ndef data_preprocessing(raw):\n out_y = to_categorical(raw.label, NUM_CLASSES)\n num_images = raw.shape[0]\n x_as_array = raw.values[:,1:]\n x_shaped_array = x_as_array.reshape(num_images, IMG_ROWS, IMG_COLS, 1)\n out_x = x_shaped_array / 255\n return out_x, out_y",
"_____no_output_____"
]
],
[
[
"We process both the train_data and the test_data",
"_____no_output_____"
]
],
[
[
"# prepare the data\nX, y = data_preprocessing(train_data)\nX_test, y_test = data_preprocessing(test_data)",
"_____no_output_____"
]
],
[
[
"## Split train in train and validation set\n\nWe further split the train set in train and validation set. The validation set will be 20% from the original train set, therefore the split will be train/validation of 0.8/0.2.",
"_____no_output_____"
]
],
[
[
"X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=TEST_SIZE, random_state=RANDOM_STATE)",
"_____no_output_____"
]
],
[
[
"The dimmension of the processed train, validation and test set are as following:",
"_____no_output_____"
]
],
[
[
"print(\"Fashion MNIST train - rows:\",X_train.shape[0],\" columns:\", X_train.shape[1:4])\nprint(\"Fashion MNIST valid - rows:\",X_val.shape[0],\" columns:\", X_val.shape[1:4])\nprint(\"Fashion MNIST test - rows:\",X_test.shape[0],\" columns:\", X_test.shape[1:4])",
"Fashion MNIST train - rows: 48000 columns: (28, 28, 1)\nFashion MNIST valid - rows: 12000 columns: (28, 28, 1)\nFashion MNIST test - rows: 10000 columns: (28, 28, 1)\n"
]
],
[
[
"Let's check the class inbalance for the rsulted training set.",
"_____no_output_____"
]
],
[
[
"def plot_count_per_class(yd):\n ydf = pd.DataFrame(yd)\n f, ax = plt.subplots(1,1, figsize=(12,4))\n g = sns.countplot(ydf[0], order = np.arange(0,10))\n g.set_title(\"Number of items for each class\")\n g.set_xlabel(\"Category\")\n \n for p, label in zip(g.patches, np.arange(0,10)):\n g.annotate(labels[label], (p.get_x(), p.get_height()+0.1))\n \n plt.show() \n\ndef get_count_per_class(yd):\n ydf = pd.DataFrame(yd)\n # Get the count for each label\n label_counts = ydf[0].value_counts()\n\n # Get total number of samples\n total_samples = len(yd)\n\n\n # Count the number of items in each class\n for i in range(len(label_counts)):\n label = labels[label_counts.index[i]]\n count = label_counts.values[i]\n percent = (count / total_samples) * 100\n print(\"{:<20s}: {} or {}%\".format(label, count, percent))\n \nplot_count_per_class(np.argmax(y_train,axis=1))\nget_count_per_class(np.argmax(y_train,axis=1))",
"_____no_output_____"
]
],
[
[
"And, as well, for the validation set.",
"_____no_output_____"
]
],
[
[
"plot_count_per_class(np.argmax(y_val,axis=1))\nget_count_per_class(np.argmax(y_val,axis=1))",
"_____no_output_____"
]
],
[
[
"Both the train and validation set are unbalanced with respect of distribution of classes. ",
"_____no_output_____"
]
],
[
[
"# Model\nmodel = Sequential()\n# Add convolution 2D\nmodel.add(Conv2D(32, kernel_size=(3, 3),\n activation='relu',\n kernel_initializer='he_normal',\n input_shape=(IMG_ROWS, IMG_COLS, 1)))\nmodel.add(MaxPooling2D((2, 2)))\nmodel.add(Conv2D(64, \n kernel_size=(3, 3), \n activation='relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Conv2D(128, (3, 3), activation='relu'))\nmodel.add(Flatten())\nmodel.add(Dense(128, activation='relu'))\nmodel.add(Dense(NUM_CLASSES, activation='softmax'))\n\n\nmodel.compile(loss=keras.losses.categorical_crossentropy,\n optimizer='adam',\n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"### Inspect the model\n\nLet's check the model we initialized.",
"_____no_output_____"
]
],
[
[
"model.summary()",
"Model: \"sequential\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n conv2d (Conv2D) (None, 26, 26, 32) 320 \n \n max_pooling2d (MaxPooling2D (None, 13, 13, 32) 0 \n ) \n \n conv2d_1 (Conv2D) (None, 11, 11, 64) 18496 \n \n max_pooling2d_1 (MaxPooling (None, 5, 5, 64) 0 \n 2D) \n \n conv2d_2 (Conv2D) (None, 3, 3, 128) 73856 \n \n flatten (Flatten) (None, 1152) 0 \n \n dense (Dense) (None, 128) 147584 \n \n dense_1 (Dense) (None, 10) 1290 \n \n=================================================================\nTotal params: 241,546\nTrainable params: 241,546\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"### Run the model\n\nWe run the model with the training set. We are also using the validation set (a subset from the orginal training set) for validation.",
"_____no_output_____"
]
],
[
[
"train_model = model.fit(X_train, y_train,\n batch_size=BATCH_SIZE,\n epochs=NO_EPOCHS,\n verbose=1,\n validation_data=(X_val, y_val))",
"Epoch 1/50\n375/375 [==============================] - 22s 58ms/step - loss: 0.5318 - accuracy: 0.8040 - val_loss: 0.4068 - val_accuracy: 0.8542\nEpoch 2/50\n375/375 [==============================] - 22s 58ms/step - loss: 0.3403 - accuracy: 0.8770 - val_loss: 0.3466 - val_accuracy: 0.8753\nEpoch 3/50\n375/375 [==============================] - 22s 58ms/step - loss: 0.2910 - accuracy: 0.8943 - val_loss: 0.3022 - val_accuracy: 0.8910\nEpoch 4/50\n375/375 [==============================] - 22s 58ms/step - loss: 0.2618 - accuracy: 0.9042 - val_loss: 0.2826 - val_accuracy: 0.8970\nEpoch 5/50\n375/375 [==============================] - 21s 57ms/step - loss: 0.2303 - accuracy: 0.9150 - val_loss: 0.2666 - val_accuracy: 0.9044\nEpoch 6/50\n375/375 [==============================] - 21s 57ms/step - loss: 0.2130 - accuracy: 0.9226 - val_loss: 0.2684 - val_accuracy: 0.9028\nEpoch 7/50\n375/375 [==============================] - 22s 57ms/step - loss: 0.1936 - accuracy: 0.9285 - val_loss: 0.2598 - val_accuracy: 0.9087\nEpoch 8/50\n375/375 [==============================] - 21s 57ms/step - loss: 0.1739 - accuracy: 0.9360 - val_loss: 0.2667 - val_accuracy: 0.9106\nEpoch 9/50\n375/375 [==============================] - 21s 56ms/step - loss: 0.1558 - accuracy: 0.9423 - val_loss: 0.2740 - val_accuracy: 0.9070\nEpoch 10/50\n375/375 [==============================] - 22s 57ms/step - loss: 0.1430 - accuracy: 0.9449 - val_loss: 0.2644 - val_accuracy: 0.9089\nEpoch 11/50\n375/375 [==============================] - 22s 58ms/step - loss: 0.1260 - accuracy: 0.9535 - val_loss: 0.2770 - val_accuracy: 0.9129\nEpoch 12/50\n375/375 [==============================] - 21s 57ms/step - loss: 0.1140 - accuracy: 0.9573 - val_loss: 0.2875 - val_accuracy: 0.9127\nEpoch 13/50\n375/375 [==============================] - 22s 60ms/step - loss: 0.1040 - accuracy: 0.9620 - val_loss: 0.3049 - val_accuracy: 0.9104\nEpoch 14/50\n375/375 [==============================] - 22s 60ms/step - loss: 0.0883 - accuracy: 0.9669 - val_loss: 0.3139 - val_accuracy: 0.9106\nEpoch 15/50\n375/375 [==============================] - 20s 53ms/step - loss: 0.0788 - accuracy: 0.9701 - val_loss: 0.3297 - val_accuracy: 0.9113\nEpoch 16/50\n375/375 [==============================] - 19s 50ms/step - loss: 0.0717 - accuracy: 0.9737 - val_loss: 0.3523 - val_accuracy: 0.9101\nEpoch 17/50\n375/375 [==============================] - 19s 51ms/step - loss: 0.0661 - accuracy: 0.9755 - val_loss: 0.3694 - val_accuracy: 0.9087\nEpoch 18/50\n375/375 [==============================] - 17s 47ms/step - loss: 0.0588 - accuracy: 0.9788 - val_loss: 0.3770 - val_accuracy: 0.9106\nEpoch 19/50\n375/375 [==============================] - 18s 47ms/step - loss: 0.0501 - accuracy: 0.9811 - val_loss: 0.3911 - val_accuracy: 0.9119\nEpoch 20/50\n375/375 [==============================] - 17s 47ms/step - loss: 0.0468 - accuracy: 0.9825 - val_loss: 0.4208 - val_accuracy: 0.9139\nEpoch 21/50\n375/375 [==============================] - 18s 47ms/step - loss: 0.0416 - accuracy: 0.9846 - val_loss: 0.4334 - val_accuracy: 0.9122\nEpoch 22/50\n375/375 [==============================] - 18s 49ms/step - loss: 0.0407 - accuracy: 0.9848 - val_loss: 0.4679 - val_accuracy: 0.9027\nEpoch 23/50\n375/375 [==============================] - 19s 50ms/step - loss: 0.0333 - accuracy: 0.9880 - val_loss: 0.4995 - val_accuracy: 0.9064\nEpoch 24/50\n375/375 [==============================] - 19s 51ms/step - loss: 0.0341 - accuracy: 0.9881 - val_loss: 0.5023 - val_accuracy: 0.9097\nEpoch 25/50\n375/375 
[==============================] - 18s 49ms/step - loss: 0.0395 - accuracy: 0.9858 - val_loss: 0.5038 - val_accuracy: 0.9063\nEpoch 26/50\n375/375 [==============================] - 19s 50ms/step - loss: 0.0249 - accuracy: 0.9910 - val_loss: 0.5409 - val_accuracy: 0.9137\nEpoch 27/50\n375/375 [==============================] - 18s 49ms/step - loss: 0.0230 - accuracy: 0.9919 - val_loss: 0.5157 - val_accuracy: 0.9147\nEpoch 28/50\n375/375 [==============================] - 18s 48ms/step - loss: 0.0313 - accuracy: 0.9881 - val_loss: 0.5542 - val_accuracy: 0.9058\nEpoch 29/50\n375/375 [==============================] - 18s 48ms/step - loss: 0.0285 - accuracy: 0.9896 - val_loss: 0.5736 - val_accuracy: 0.9022\nEpoch 30/50\n375/375 [==============================] - 21s 56ms/step - loss: 0.0300 - accuracy: 0.9893 - val_loss: 0.5539 - val_accuracy: 0.9073\nEpoch 31/50\n375/375 [==============================] - 24s 64ms/step - loss: 0.0211 - accuracy: 0.9930 - val_loss: 0.5714 - val_accuracy: 0.9100\nEpoch 32/50\n375/375 [==============================] - 22s 60ms/step - loss: 0.0225 - accuracy: 0.9917 - val_loss: 0.5692 - val_accuracy: 0.9108\nEpoch 33/50\n375/375 [==============================] - 21s 57ms/step - loss: 0.0209 - accuracy: 0.9924 - val_loss: 0.6140 - val_accuracy: 0.9054\nEpoch 34/50\n375/375 [==============================] - 21s 56ms/step - loss: 0.0247 - accuracy: 0.9911 - val_loss: 0.5741 - val_accuracy: 0.9082\nEpoch 35/50\n375/375 [==============================] - 20s 54ms/step - loss: 0.0182 - accuracy: 0.9937 - val_loss: 0.6306 - val_accuracy: 0.9057\nEpoch 36/50\n375/375 [==============================] - 21s 56ms/step - loss: 0.0212 - accuracy: 0.9924 - val_loss: 0.6207 - val_accuracy: 0.9057\nEpoch 37/50\n375/375 [==============================] - 21s 57ms/step - loss: 0.0162 - accuracy: 0.9945 - val_loss: 0.6423 - val_accuracy: 0.9050\nEpoch 38/50\n375/375 [==============================] - 21s 56ms/step - loss: 0.0175 - accuracy: 0.9937 - val_loss: 0.6500 - val_accuracy: 0.9099\nEpoch 39/50\n375/375 [==============================] - 21s 56ms/step - loss: 0.0169 - accuracy: 0.9937 - val_loss: 0.6839 - val_accuracy: 0.9059\nEpoch 40/50\n375/375 [==============================] - 21s 55ms/step - loss: 0.0226 - accuracy: 0.9924 - val_loss: 0.7038 - val_accuracy: 0.9063\nEpoch 41/50\n375/375 [==============================] - 20s 54ms/step - loss: 0.0203 - accuracy: 0.9929 - val_loss: 0.6748 - val_accuracy: 0.9114\nEpoch 42/50\n375/375 [==============================] - 20s 54ms/step - loss: 0.0143 - accuracy: 0.9950 - val_loss: 0.6896 - val_accuracy: 0.9083\nEpoch 43/50\n375/375 [==============================] - 20s 54ms/step - loss: 0.0202 - accuracy: 0.9926 - val_loss: 0.6354 - val_accuracy: 0.9059\nEpoch 44/50\n375/375 [==============================] - 21s 55ms/step - loss: 0.0117 - accuracy: 0.9957 - val_loss: 0.6781 - val_accuracy: 0.9119\nEpoch 45/50\n375/375 [==============================] - 20s 54ms/step - loss: 0.0121 - accuracy: 0.9956 - val_loss: 0.7480 - val_accuracy: 0.9087\nEpoch 46/50\n375/375 [==============================] - 21s 56ms/step - loss: 0.0197 - accuracy: 0.9935 - val_loss: 0.7091 - val_accuracy: 0.9066\nEpoch 47/50\n375/375 [==============================] - 21s 55ms/step - loss: 0.0187 - accuracy: 0.9933 - val_loss: 0.7679 - val_accuracy: 0.9062\nEpoch 48/50\n375/375 [==============================] - 20s 54ms/step - loss: 0.0165 - accuracy: 0.9942 - val_loss: 0.7678 - val_accuracy: 0.9088\nEpoch 49/50\n375/375 
[==============================] - 20s 55ms/step - loss: 0.0137 - accuracy: 0.9955 - val_loss: 0.7794 - val_accuracy: 0.9097\nEpoch 50/50\n375/375 [==============================] - 21s 55ms/step - loss: 0.0167 - accuracy: 0.9940 - val_loss: 0.8102 - val_accuracy: 0.9041\n"
]
],
[
[
"## <a id=\"53\">Test prediction accuracy</a>\n\nWe calculate the test loss and accuracy.",
"_____no_output_____"
]
],
[
[
"score = model.evaluate(X_test, y_test, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])",
"Test loss: 0.7389450073242188\nTest accuracy: 0.907800018787384\n"
]
],
[
[
"Test accuracy is around 0.91.\n\nWe evaluated the model accuracy based on the predicted values for the test set. Let's check the validation value during training.\n\n",
"_____no_output_____"
],
[
"## <a id=\"53\">Validation accuracy and loss</a>\n\nLet's plot the train and validation accuracy and loss, from the train history.",
"_____no_output_____"
]
],
[
[
"def create_trace(x,y,ylabel,color):\n trace = go.Scatter(\n x = x,y = y,\n name=ylabel,\n marker=dict(color=color),\n mode = \"markers+lines\",\n text=x\n )\n return trace\n \ndef plot_accuracy_and_loss(train_model):\n hist = train_model.history\n acc = hist['acc']\n val_acc = hist['val_acc']\n loss = hist['loss']\n val_loss = hist['val_loss']\n epochs = list(range(1,len(acc)+1))\n \n trace_ta = create_trace(epochs,acc,\"Training accuracy\", \"Green\")\n trace_va = create_trace(epochs,val_acc,\"Validation accuracy\", \"Red\")\n trace_tl = create_trace(epochs,loss,\"Training loss\", \"Blue\")\n trace_vl = create_trace(epochs,val_loss,\"Validation loss\", \"Magenta\")\n \n fig = tools.make_subplots(rows=1,cols=2, subplot_titles=('Training and validation accuracy',\n 'Training and validation loss'))\n fig.append_trace(trace_ta,1,1)\n fig.append_trace(trace_va,1,1)\n fig.append_trace(trace_tl,1,2)\n fig.append_trace(trace_vl,1,2)\n fig['layout']['xaxis'].update(title = 'Epoch')\n fig['layout']['xaxis2'].update(title = 'Epoch')\n fig['layout']['yaxis'].update(title = 'Accuracy', range=[0,1])\n fig['layout']['yaxis2'].update(title = 'Loss', range=[0,1])\n\n \n iplot(fig, filename='accuracy-loss')\n\nplot_accuracy_and_loss(train_model)",
"This is the format of your plot grid:\n[ (1,1) x1,y1 ] [ (1,2) x2,y2 ]\n\n"
]
],
[
[
"The validation accuracy does not improve after few epochs and the validation loss is increasing after few epochs. This confirms our assumption that the model is overfitted. We will try to improve the model by adding Dropout layers.",
"_____no_output_____"
],
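[
"As a quick numeric complement to the plots (an added illustration, not part of the original kernel), we can also look directly at the gap between training and validation accuracy stored in the Keras history object; a gap that keeps widening while validation accuracy stalls is a simple sign of overfitting.",
"_____no_output_____"
],
[
"# Added illustration (not in the original kernel): inspect the train/validation accuracy gap.\n# History key names differ across Keras versions ('acc'/'val_acc' vs 'accuracy'/'val_accuracy').\nhist = train_model.history\nacc_key, val_key = ('acc', 'val_acc') if 'acc' in hist else ('accuracy', 'val_accuracy')\ngap = [round(a - v, 3) for a, v in zip(hist[acc_key], hist[val_key])]\nprint('train/val accuracy gap, first 5 epochs:', gap[:5])\nprint('train/val accuracy gap, last 5 epochs: ', gap[-5:])",
"_____no_output_____"
],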
[
"## <a id=\"55\">Add Dropout layers to the model</a>\n\nWe add several Dropout layers to the model, to help avoiding overfitting. \nDropout is helping avoid overfitting in several ways, as explained in <a href='#8'>[6]</a> and <a href='#8'>[7]</a>. \n",
"_____no_output_____"
]
],
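[
[
"Before adding Dropout to the model, here is a tiny added sketch (not part of the original kernel, and assuming a TensorFlow 2 / tf.keras environment) of what a Dropout layer does: during training it zeroes a random fraction of its inputs and rescales the remaining ones, while at inference time it acts as the identity.",
"_____no_output_____"
],
[
"# Added sketch (assumes TensorFlow 2 / tf.keras is available in this environment)\nimport numpy as np\nimport tensorflow as tf\n\ndrop = tf.keras.layers.Dropout(0.5)\nx = np.ones((1, 8), dtype='float32')\nprint(drop(x, training=True).numpy())   # a random ~half of the entries are zeroed, the rest scaled by 1/(1-0.5)\nprint(drop(x, training=False).numpy())  # unchanged at inference time",
"_____no_output_____"
]
],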
[
[
"# Model\nmodel = Sequential()\n# Add convolution 2D\nmodel.add(Conv2D(32, kernel_size=(3, 3),\n activation='relu',\n kernel_initializer='he_normal',\n input_shape=(IMG_ROWS, IMG_COLS, 1)))\nmodel.add(MaxPooling2D((2, 2)))\n# Add dropouts to the model\nmodel.add(Dropout(0.25))\nmodel.add(Conv2D(64, \n kernel_size=(3, 3), \n activation='relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\n# Add dropouts to the model\nmodel.add(Dropout(0.25))\nmodel.add(Conv2D(128, (3, 3), activation='relu'))\n# Add dropouts to the model\nmodel.add(Dropout(0.4))\nmodel.add(Flatten())\nmodel.add(Dense(128, activation='relu'))\n# Add dropouts to the model\nmodel.add(Dropout(0.3))\nmodel.add(Dense(NUM_CLASSES, activation='softmax'))\n\n\nmodel.compile(loss=keras.losses.categorical_crossentropy,\n optimizer='adam',\n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"## <a id=\"56\">Re-train the model</a>\n\nLet's inspect first the model.",
"_____no_output_____"
]
],
[
[
"model.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_3 (Conv2D) (None, 26, 26, 32) 320 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 13, 13, 32) 0 \n_________________________________________________________________\ndropout (Dropout) (None, 13, 13, 32) 0 \n_________________________________________________________________\nconv2d_4 (Conv2D) (None, 11, 11, 64) 18496 \n_________________________________________________________________\nmax_pooling2d_3 (MaxPooling2 (None, 5, 5, 64) 0 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 5, 5, 64) 0 \n_________________________________________________________________\nconv2d_5 (Conv2D) (None, 3, 3, 128) 73856 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 3, 3, 128) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 1152) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 128) 147584 \n_________________________________________________________________\ndropout_3 (Dropout) (None, 128) 0 \n_________________________________________________________________\ndense_3 (Dense) (None, 10) 1290 \n=================================================================\nTotal params: 241,546\nTrainable params: 241,546\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"Let's also plot the model.",
"_____no_output_____"
]
],
[
[
"plot_model(model, to_file='model.png')\nSVG(model_to_dot(model).create(prog='dot', format='svg'))",
"_____no_output_____"
]
],
[
[
"And now let's run the new model.",
"_____no_output_____"
]
],
[
[
"train_model = model.fit(X_train, y_train,\n batch_size=BATCH_SIZE,\n epochs=NO_EPOCHS,\n verbose=1,\n validation_data=(X_val, y_val))",
"Train on 48000 samples, validate on 12000 samples\nEpoch 1/50\n48000/48000 [==============================] - 49s 1ms/step - loss: 0.7589 - acc: 0.7135 - val_loss: 0.4801 - val_acc: 0.8189\nEpoch 2/50\n48000/48000 [==============================] - 49s 1ms/step - loss: 0.4995 - acc: 0.8131 - val_loss: 0.3845 - val_acc: 0.8597\nEpoch 3/50\n48000/48000 [==============================] - 49s 1ms/step - loss: 0.4176 - acc: 0.8469 - val_loss: 0.3389 - val_acc: 0.8745\nEpoch 4/50\n48000/48000 [==============================] - 49s 1ms/step - loss: 0.3764 - acc: 0.8624 - val_loss: 0.3258 - val_acc: 0.8849\nEpoch 5/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.3527 - acc: 0.8707 - val_loss: 0.2969 - val_acc: 0.8939\nEpoch 6/50\n48000/48000 [==============================] - 49s 1ms/step - loss: 0.3271 - acc: 0.8812 - val_loss: 0.2959 - val_acc: 0.8943\nEpoch 7/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.3141 - acc: 0.8850 - val_loss: 0.2704 - val_acc: 0.9018\nEpoch 8/50\n48000/48000 [==============================] - 49s 1ms/step - loss: 0.3011 - acc: 0.8878 - val_loss: 0.2642 - val_acc: 0.9020\nEpoch 9/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.2894 - acc: 0.8945 - val_loss: 0.2580 - val_acc: 0.9051\nEpoch 10/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.2843 - acc: 0.8953 - val_loss: 0.2547 - val_acc: 0.9077\nEpoch 11/50\n48000/48000 [==============================] - 49s 1ms/step - loss: 0.2753 - acc: 0.8982 - val_loss: 0.2481 - val_acc: 0.9102\nEpoch 12/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.2693 - acc: 0.9008 - val_loss: 0.2441 - val_acc: 0.9102\nEpoch 13/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.2602 - acc: 0.9034 - val_loss: 0.2423 - val_acc: 0.9107\nEpoch 14/50\n48000/48000 [==============================] - 48s 995us/step - loss: 0.2575 - acc: 0.9046 - val_loss: 0.2390 - val_acc: 0.9121\nEpoch 15/50\n48000/48000 [==============================] - 48s 992us/step - loss: 0.2551 - acc: 0.9047 - val_loss: 0.2402 - val_acc: 0.9118\nEpoch 16/50\n48000/48000 [==============================] - 48s 999us/step - loss: 0.2463 - acc: 0.9088 - val_loss: 0.2318 - val_acc: 0.9149\nEpoch 17/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.2472 - acc: 0.9083 - val_loss: 0.2345 - val_acc: 0.9144\nEpoch 18/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.2401 - acc: 0.9120 - val_loss: 0.2360 - val_acc: 0.9139\nEpoch 19/50\n48000/48000 [==============================] - 49s 1ms/step - loss: 0.2383 - acc: 0.9111 - val_loss: 0.2289 - val_acc: 0.9163\nEpoch 20/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.2300 - acc: 0.9134 - val_loss: 0.2384 - val_acc: 0.9132\nEpoch 21/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.2303 - acc: 0.9128 - val_loss: 0.2254 - val_acc: 0.9186\nEpoch 22/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.2270 - acc: 0.9165 - val_loss: 0.2275 - val_acc: 0.9170\nEpoch 23/50\n48000/48000 [==============================] - 49s 1ms/step - loss: 0.2229 - acc: 0.9158 - val_loss: 0.2280 - val_acc: 0.9161\nEpoch 24/50\n48000/48000 [==============================] - 49s 1ms/step - loss: 0.2244 - acc: 0.9155 - val_loss: 0.2263 - val_acc: 0.9181\nEpoch 25/50\n48000/48000 [==============================] - 49s 1ms/step - loss: 0.2202 - acc: 0.9173 - val_loss: 0.2233 - val_acc: 0.9197\nEpoch 
26/50\n48000/48000 [==============================] - 49s 1ms/step - loss: 0.2166 - acc: 0.9177 - val_loss: 0.2294 - val_acc: 0.9175\nEpoch 27/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.2152 - acc: 0.9197 - val_loss: 0.2313 - val_acc: 0.9184\nEpoch 28/50\n48000/48000 [==============================] - 49s 1ms/step - loss: 0.2149 - acc: 0.9200 - val_loss: 0.2250 - val_acc: 0.9208\nEpoch 29/50\n48000/48000 [==============================] - 49s 1ms/step - loss: 0.2108 - acc: 0.9195 - val_loss: 0.2217 - val_acc: 0.9210\nEpoch 30/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.2112 - acc: 0.9203 - val_loss: 0.2277 - val_acc: 0.9178\nEpoch 31/50\n48000/48000 [==============================] - 49s 1ms/step - loss: 0.2065 - acc: 0.9223 - val_loss: 0.2185 - val_acc: 0.9208\nEpoch 32/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.2048 - acc: 0.9214 - val_loss: 0.2206 - val_acc: 0.9224\nEpoch 33/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.2094 - acc: 0.9208 - val_loss: 0.2237 - val_acc: 0.9203\nEpoch 34/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.2033 - acc: 0.9240 - val_loss: 0.2340 - val_acc: 0.9156\nEpoch 35/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.2029 - acc: 0.9245 - val_loss: 0.2202 - val_acc: 0.9200\nEpoch 36/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.1990 - acc: 0.9240 - val_loss: 0.2168 - val_acc: 0.9215\nEpoch 37/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.1999 - acc: 0.9250 - val_loss: 0.2246 - val_acc: 0.9197\nEpoch 38/50\n48000/48000 [==============================] - 48s 1ms/step - loss: 0.1997 - acc: 0.9249 - val_loss: 0.2214 - val_acc: 0.9204\nEpoch 39/50\n29056/48000 [=================>............] - ETA: 17s - loss: 0.1987 - acc: 0.9255"
]
],
[
[
"## <a id=\"57\">Prediction accuracy with the new model</a>\n\nLet's re-evaluate the prediction accuracy with the new model.",
"_____no_output_____"
]
],
[
[
"plot_accuracy_and_loss(train_model)",
"This is the format of your plot grid:\n[ (1,1) x1,y1 ] [ (1,2) x2,y2 ]\n\n"
]
],
[
[
"After adding the Dropout layers, the validation accuracy and validation loss are much better. Let's check now the prediction for the test set.\n\n\n## <a id=\"58\">Prediction accuracy with the new model</a>\n\nLet's re-evaluate the test prediction accuracy with the new model.",
"_____no_output_____"
]
],
[
[
"score = model.evaluate(X_test, y_test, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])",
"Test loss: 0.20247195200026036\nTest accuracy: 0.9294\n"
]
],
[
[
"Also the test accuracy improved. The test accuracy is now approximately 0.93.",
"_____no_output_____"
]
],
[
[
"#get the predictions for the test data\npredicted_classes = model.predict_classes(X_test)\n#get the indices to be plotted\ny_true = test_data.iloc[:, 0]",
"_____no_output_____"
],
[
"p = predicted_classes[:10000]\ny = y_true[:10000]\ncorrect = np.nonzero(p==y)[0]\nincorrect = np.nonzero(p!=y)[0]",
"_____no_output_____"
],
[
"print(\"Correct predicted classes:\",correct.shape[0])\nprint(\"Incorrect predicted classes:\",incorrect.shape[0])",
"Correct predicted classes: 9294\nIncorrect predicted classes: 706\n"
],
[
"target_names = [\"Class {} ({}) :\".format(i,labels[i]) for i in range(NUM_CLASSES)]\nprint(classification_report(y_true, predicted_classes, target_names=target_names))",
" precision recall f1-score support\n\nClass 0 (T-shirt/top) : 0.88 0.88 0.88 1000\n Class 1 (Trouser) : 0.99 0.99 0.99 1000\n Class 2 (Pullover) : 0.91 0.88 0.89 1000\n Class 3 (Dress) : 0.93 0.93 0.93 1000\n Class 4 (Coat) : 0.88 0.90 0.89 1000\n Class 5 (Sandal) : 0.99 0.98 0.98 1000\n Class 6 (Shirt) : 0.79 0.80 0.80 1000\n Class 7 (Sneaker) : 0.97 0.96 0.96 1000\n Class 8 (Bag) : 0.99 0.99 0.99 1000\n Class 9 (Ankle Boot) : 0.96 0.98 0.97 1000\n\n avg / total 0.93 0.93 0.93 10000\n\n"
]
],
[
[
"\nThe best accuracy is obtained for Class 1, Class 5, Class 8, Class 9 and Class 7. Worst accuracy is for Class 6. \n\nThe recall is highest for Class 8, Class 5 and smallest for Class 6 and Class 4. \n\nf1-score is highest for Class 1, Class 5 and Class 8 and smallest for Class 6 followed by Class 4 and Class 2. \n\nLet's also inspect some of the images. We created two subsets of the predicted images set, correctly and incorrectly classified.",
"_____no_output_____"
],
[
"# <a id=\"6\">Visualize classified images</a>\n\n## <a id=\"61\">Correctly classified images</a>\n\n\nWe visualize few images correctly classified.",
"_____no_output_____"
]
],
[
[
"def plot_images(data_index,cmap=\"Blues\"):\n # Plot the sample images now\n f, ax = plt.subplots(4,4, figsize=(15,15))\n\n for i, indx in enumerate(data_index[:16]):\n ax[i//4, i%4].imshow(X_test[indx].reshape(IMG_ROWS,IMG_COLS), cmap=cmap)\n ax[i//4, i%4].axis('off')\n ax[i//4, i%4].set_title(\"True:{} Pred:{}\".format(labels[y_true[indx]],labels[predicted_classes[indx]]))\n plt.show() \n \nplot_images(correct, \"Greens\")",
"_____no_output_____"
]
],
[
[
"## <a id=\"62\">Incorrectly classified images</a>\n\nLet's see also few images incorrectly classified.",
"_____no_output_____"
]
],
[
[
"plot_images(incorrect, \"Reds\")",
"_____no_output_____"
]
],
[
[
"# <a id=\"7\">Conclusions</a>\n\nWith a complex sequential model with multiple convolution layers and 50 epochs for the training, we obtained an accuracy ~0.91 for test prediction.\nAfter investigating the validation accuracy and loss, we understood that the model is overfitting. \nWe retrained the model with Dropout layers to the model to reduce overfitting. \nWe confirmed the model improvement and with the same number of epochs for the training we obtained with the new model an accuracy of ~0.93 for test prediction. Only few classes are not correctly classified all the time, especially Class 6 (Shirt) and Class 2 (Pullover).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ec951813935272dd6dd77901eba60c778bd6a68b | 15,191 | ipynb | Jupyter Notebook | sourcecode/python/notebooks/performance_tests.ipynb | magic-lantern/faker-prototype | 71015a8fce465a0dd554e464eeba27c52353eafa | [
"Apache-2.0"
] | 13 | 2018-06-05T17:04:07.000Z | 2021-09-03T16:38:41.000Z | sourcecode/python/notebooks/performance_tests.ipynb | magic-lantern/faker-prototype | 71015a8fce465a0dd554e464eeba27c52353eafa | [
"Apache-2.0"
] | 6 | 2018-10-15T15:58:00.000Z | 2019-06-13T21:46:00.000Z | sourcecode/python/notebooks/performance_tests.ipynb | magic-lantern/faker-prototype | 71015a8fce465a0dd554e464eeba27c52353eafa | [
"Apache-2.0"
] | 2 | 2018-08-09T04:58:29.000Z | 2018-09-10T21:07:18.000Z | 39.354922 | 278 | 0.514054 | [
[
[
"**How to monitor memory usage while this performance test runs**\n\nAs this process takes \n\nFirst install psrecord:\n\n`pip install psrecord`\n\nNext, find your process PID and substitue into the following command (replace PID with the actual integer value):\n\n`psrecord PID --interval 10 --plot plot1.png`\n\nThe above command will monitor the designated PID every 10 seconds until Ctrl-C is pressed.",
"_____no_output_____"
]
],
[
[
"import sqlite3\nimport pandas as pd\nimport numpy as np\nimport scipy as sp\nimport scipy.stats as stats\nimport pylab as plt\nfrom collections import Counter\nimport datetime\n\n# files and kungfauxpandas loading require reference from one directory level up\nimport os\nos.chdir('..')\n\n# while not currently plotting, would like to add this feature\n%matplotlib notebook\npd.set_option('display.width', 110)\n\n# flag to control where data is loaded to\nmode = 'psycopg2'\n\n# how many times to run each test for tracking mean/std dev\n\n# sqlite stuff\nif mode == 'sqlite3':\n import sqlite3\n conn = sqlite3.connect(\"../../data/sample_data.db\")\n cursor = conn.cursor()\nelif mode == 'psycopg2': # alternatively use postgresql\n import psycopg2\n connect_str = \"dbname='sepsis' user='sepsis' host='localhost' \" + \\\n \"password='sepsis'\"\n conn = psycopg2.connect(connect_str)\n cursor = conn.cursor()\n\nqlog_conn = sqlite3.connect('../../data/kfp_log.db')\nq_cursor = qlog_conn.cursor()\n\nstart = datetime.datetime.now()\n# because names are created as case sensistive in postgres, must be quoted...\n# should probably fix that...\nsql = '''\nSELECT d.\"SubjectId\",\n d.\"EncounterId\",\n d.\"Source\",\n -- d.StartDate,\n d.\"Code\",\n d.\"Type\",\n MAX(\"FlowsheetValue\") AS MaxScore,\n -- AVG(\"FlowsheetValue\") AS MeanScore,\n MIN(\"FlowsheetValue\") AS MinScore,\n COUNT(\"FlowsheetValue\") AS NumLoggedScores\n FROM diagnoses d\n LEFT JOIN flowsheet f\n ON d.\"EncounterId\" = f.\"EncounterId\"\n GROUP BY d.\"SubjectId\", d.\"EncounterId\", d.\"Source\", d.\"Code\", d.\"Type\"\n ORDER BY NumLoggedScores DESC\n limit\n'''\n# timing this query on databases\n\n#start = datetime.datetime.now()\n#df = pd.read_sql(sql,conn)\n#print((datetime.datetime.now() - start).total_seconds())\n# w/no limit - medium sepsis database\n# sqlite - 80 to 160 seconds\n# postgres - 30 seconds\n\n#sql = 'SELECT subjectid, encounterid, source, code, type FROM \"diagnoses\" limit 100'\n",
"_____no_output_____"
],
[
"# query cache\nstore = {}\n\ndef prefetch_query(n):\n if n not in store:\n store[n] = pd.read_sql(sql + n, conn) \n return store[n]",
"_____no_output_____"
],
[
"# sizes of patient population to evaluate\npatient_population = ['10', '100', '1000', '10000', '100000']\n# how many times to run test to calculate mean/std dev\ndefault_repetitions = 1\n\ndef show_timings(df):\n q = pd.read_sql(\"SELECT * FROM kfp_log order by fauxify_end\",qlog_conn)\n print('Method used :', q.tail(1)['faux_method'].iloc[0])\n print('Time for query :', (pd.to_datetime(q.tail(1)['query_end']) - pd.to_datetime(q.tail(1)['query_start'])).iloc[0].total_seconds())\n print('Time for fauxify:', (pd.to_datetime(q.tail(1)['fauxify_end']) - pd.to_datetime(q.tail(1)['fauxify_start'])).iloc[0].total_seconds())\n print('Size of dataset :', len(df), 'rows')\n\n# rerun_query option doesn't time fauxify method... need to fix that\ndef time_method(kfpd, repetitions = default_repetitions, verbose = True, rerun_query = True):\n for n in patient_population:\n fdf = None\n # track each run for calculations\n query_timings = []\n fauxify_timings = []\n for i in range(1, repetitions + 1):\n # if dataframe provided, don't need to re-run query\n if rerun_query:\n fdf=kfpd.read_sql(sql + n,conn)\n q = pd.read_sql(\"SELECT * FROM kfp_log order by fauxify_end\",qlog_conn)\n query_timings.append((pd.to_datetime(q.tail(1)['query_end']) - pd.to_datetime(q.tail(1)['query_start'])).iloc[0].total_seconds())\n fauxify_timings.append((pd.to_datetime(q.tail(1)['fauxify_end']) - pd.to_datetime(q.tail(1)['fauxify_start'])).iloc[0].total_seconds())\n else:\n df = prefetch_query(n)\n start = datetime.datetime.now()\n fdf=kfpd.plugin.fauxify(df)\n fauxify_timings.append((datetime.datetime.now() - start).total_seconds())\n if verbose:\n print('Iteration ', i, 'of ', repetitions)\n print('Method used :', type(kfpd.plugin).__name__)\n print('Size of dataset returned :', len(fdf), 'rows')\n if rerun_query:\n print('Time for query :', query_timings[-1])\n print('Time for fauxify :', fauxify_timings[-1])\n print('Method used :', type(kfpd.plugin).__name__)\n print('Size of dataset returned:', len(fdf), 'rows')\n print(' Fauxify Mean :', np.mean(fauxify_timings))\n print(' Fauxify Std Dev:', np.std(fauxify_timings))\n if rerun_query:\n print(' Query Mean :', np.mean(query_timings))\n print(' Query Std Dev:', np.std(query_timings))\n else:\n print(' See previous run for query timings')\n return fdf",
"_____no_output_____"
],
[
"from importlib import reload\nfrom kungfauxpandas import KungFauxPandas, TrivialPlugin, DataSynthesizerPlugin, KDEPlugin, KFP_DataDescriber\nkfpd = KungFauxPandas()",
"_____no_output_____"
],
[
"#kfpd.plugin = TrivialPlugin()\n#fdf = time_method(kfpd, verbose = False, repetitions = 10)\n#fdf.head()",
"_____no_output_____"
],
[
"kfpd.plugin = TrivialPlugin()\nfdf = time_method(kfpd, verbose = False, rerun_query = False, repetitions = 10)\nfdf.head()",
"_____no_output_____"
]
],
[
[
"### Kernel Density Estimator Plugin testing",
"_____no_output_____"
]
],
[
[
"kfpd.plugin = KDEPlugin(verbose = False, mode='independent_attribute_mode')\nfdf = time_method(kfpd, verbose = False, rerun_query = False, repetitions = 10)\nfdf.head()",
"_____no_output_____"
],
[
"kfpd.plugin = KDEPlugin(verbose = False, mode='correlated_attribute_mode')\nfdf = time_method(kfpd, verbose = False, rerun_query = False, repetitions = 10)\nfdf.head()",
"_____no_output_____"
]
],
[
[
"### DataSynthesizer, two different methods with no configuration",
"_____no_output_____"
]
],
[
[
"#kfpd.plugin = DataSynthesizerPlugin(mode='correlated_attribute_mode')\n#for n in ['10', '100', '1000', '10000', '100000']:\n# fdf=kfpd.read_sql(sql + n,conn)\n# show_timings(fdf)\n\nkfpd.plugin = DataSynthesizerPlugin(mode='correlated_attribute_mode')\nfdf = time_method(kfpd, verbose = False, rerun_query = False, repetitions = 10)\nfdf.head()",
"_____no_output_____"
],
[
"#kfpd.plugin = DataSynthesizerPlugin(mode='independent_attribute_mode')\n#for n in ['10', '100', '1000', '10000', '100000']:\n# fdf=kfpd.read_sql(sql + n,conn)\n# show_timings(fdf)\n\nkfpd.plugin = DataSynthesizerPlugin(mode='independent_attribute_mode')\nfdf = time_method(kfpd, verbose = False, rerun_query = False, repetitions = 10)\nfdf.head()",
"_____no_output_____"
]
],
[
[
"### Now try DataSynthesizerPlugin with some manual configuration",
"_____no_output_____"
]
],
[
[
"kfpd.plugin = DataSynthesizerPlugin(mode='correlated_attribute_mode',\n candidate_keys = {'SubjectId': True, 'EncounterId': True},\n categorical_attributes = {'Source': True,\n 'Code': True,\n 'Type': True,\n 'MaxScore': False,\n 'MinScore': False,\n 'NumLoggedScores': False}\n )\nfdf = time_method(kfpd, verbose = False, rerun_query = False, repetitions = 10)\nfdf.head()",
"_____no_output_____"
],
[
"kfpd.plugin = DataSynthesizerPlugin(mode='independent_attribute_mode',\n candidate_keys = {'SubjectId': True, 'EncounterId': True},\n categorical_attributes = {'Source': True,\n 'Code': True,\n 'Type': True,\n 'MaxScore': False,\n 'MinScore': False,\n 'NumLoggedScores': False})\nfdf = time_method(kfpd, verbose = False, rerun_query = False, repetitions = 10)\nfdf.head()",
"_____no_output_____"
],
[
"# testing changes to degree_of_bayesian_network\nkfpd.plugin = DataSynthesizerPlugin(mode='correlated_attribute_mode',\n candidate_keys = {'SubjectId': True, 'EncounterId': True},\n categorical_attributes = {'Source': True,\n 'Code': True,\n 'Type': True,\n 'MaxScore': False,\n 'MinScore': False,\n 'NumLoggedScores': False},\n degree_of_bayesian_network = 3) # default is 2\nfdf = time_method(kfpd, verbose = False, rerun_query = False, repetitions = 10)\nfdf.head()",
"_____no_output_____"
],
[
"test_df = pd.DataFrame({'unique_id': [40552133, 83299697, 96360391, 43551783, 92110570, 87411981, 26772988, 87390284, 34538374, 13208258],\n #'datetime': ['2017-11-09 02:26:13', '2017-07-20 20:35:41', '2017-12-23 22:48:30', '2017-10-04 05:19:36', '2017-10-15 04:03:31', '2017-08-12 11:35:34', '2017-08-07 12:57:29', '2017-09-20 12:17:48', '2017-08-23 12:39:54', '2017-06-29 07:59:25'],\n 'alpha_numeric_code': ['A4152', 'A414', 'A400', 'A392', 'A4151', 'A392', 'A4181', 'P369', 'B377', 'R6521'],\n 'constant': ['constant_value', 'constant_value', 'constant_value', 'constant_value', 'constant_value', 'constant_value', 'constant_value', 'constant_value', 'constant_value', 'constant_value'],\n 'categorical' : ['category1', 'category2', 'category1', 'category1', 'category2', 'category1', 'category2', 'category1', 'category2', 'category3'],\n #'float_score': [30.80887770115334, 31.647178703213896, 33.23121156661242, 33.64713140102367, 33.07404123596502, 34.206309535666364, 34.90974444556692, 39.06948372169004, 35.94952085309618, 29.5140595543271],\n 'int_score': [294, 286, 278, 272, 256, 242, 216, 210, 208, 190]})\n\nkfpd.plugin = TrivialPlugin()\nfdf=kfpd.plugin.fauxify(test_df)\nprint(fdf.head())\n\nkfpd.plugin = KDEPlugin(verbose = False)\nfdf=kfpd.plugin.fauxify(test_df)\nprint(fdf.head())\n\nkfpd.plugin = DataSynthesizerPlugin(mode=\"independent_attribute_mode\")\nfdf=kfpd.plugin.fauxify(test_df)\nprint(fdf.head())\n\ntest_df.to_csv('sample_data_no_dates.csv', index=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ec951b4022c90e7fe78320fa28feaeecfa4f8058 | 5,684 | ipynb | Jupyter Notebook | notebooks/SSSP.ipynb | jim22k/dask-grblas | f0090b61b220db06fa6b54b3448297c0e9ce8ca9 | [
"Apache-2.0"
] | null | null | null | notebooks/SSSP.ipynb | jim22k/dask-grblas | f0090b61b220db06fa6b54b3448297c0e9ce8ca9 | [
"Apache-2.0"
] | null | null | null | notebooks/SSSP.ipynb | jim22k/dask-grblas | f0090b61b220db06fa6b54b3448297c0e9ce8ca9 | [
"Apache-2.0"
] | null | null | null | 24.929825 | 86 | 0.517769 | [
[
[
"# SSSP Example",
"_____no_output_____"
]
],
[
[
"import dask\nimport numpy as np\nimport grblas as gb\nimport dask_grblas as dgb\nfrom grblas import op",
"_____no_output_____"
],
[
"from dask.distributed import Client\n\nclient = Client()\nclient",
"_____no_output_____"
],
[
"# Create random data\nN = 1000\nnum_chunks = 4\nr = np.random.rand(N, N) < 0.001\nr = r | r.T # symmetric\nr = r & ~np.diag(np.ones(N, dtype=bool)) # no self edges",
"_____no_output_____"
],
[
"# Option 1: create distributed Matrix from local data\ndef to_matrix(chunk):\n rows, cols = np.nonzero(chunk)\n values = np.random.rand(rows.size)\n return dgb.Matrix.from_values(\n rows, cols, values, nrows=chunk.shape[0], ncols=chunk.shape[1]\n )\n\n\nchunks = np.array_split(r, num_chunks, axis=0)\ndelayed_chunks = [to_matrix(chunk) for chunk in chunks]\nA = dgb.row_stack(delayed_chunks)\nsources = dgb.Vector.from_values(np.random.randint(N), 0, size=N, dtype=A.dtype)",
"_____no_output_____"
],
[
"# Option 2: create distributed Matrix from distributed (delayed) data\nchunks = np.array_split(r, num_chunks, axis=0)\nncols = chunks[0].shape[1]\nrow_lengths = np.array([chunk.shape[0] for chunk in chunks])\nrow_offsets = np.roll(row_lengths.cumsum(), 1)\nrow_offsets[0] = 0\n\nchunked_rows = []\nchunked_cols = []\nchunked_vals = []\nfor chunk, row_offset in zip(chunks, row_offsets):\n rows, cols = np.nonzero(chunk)\n chunked_rows.append(rows + row_offset)\n chunked_cols.append(cols)\n chunked_vals.append(np.random.rand(rows.size))\n\ndelayed_rows = [dask.delayed(rows) for rows in chunked_rows]\ndelayed_cols = [dask.delayed(cols) for cols in chunked_cols]\ndelayed_vals = [dask.delayed(cols) for cols in chunked_vals]\n\n\[email protected]\ndef to_matrix(rows, cols, vals, nrows, ncols):\n # Can also use e.g. gb.Matrix.ss.import_csr\n return gb.Matrix.from_values(rows, cols, vals, nrows=nrows, ncols=ncols)\n\n\ndelayed_matrices = [\n to_matrix(\n delayed_rows[i] - row_offsets[i],\n delayed_cols[i],\n delayed_vals[i],\n row_lengths[i],\n ncols,\n )\n for i in range(num_chunks)\n]\n\ndelayed_chunks = [\n dgb.Matrix.from_delayed(\n delayed_matrices[i],\n gb.dtypes.FP64,\n row_lengths[i],\n ncols,\n )\n for i in range(num_chunks)\n]\n\nA = dgb.row_stack(delayed_chunks)\nsources = dgb.Vector.from_values(np.random.randint(N), 0, size=N, dtype=A.dtype)",
"_____no_output_____"
],
[
"# Calculate expected with grblas\nB = A.compute()\nv = sources.dup().compute()\nv_dup = gb.Vector.new(v.dtype, size=N)\ni = 0\nwhile True:\n i += 1\n v_dup << v\n v(op.min) << B.mxv(v, op.min_plus)\n if v.isequal(v_dup):\n break\nexpected = v\ni",
"_____no_output_____"
],
[
"# Calculate with dask-grblas\ni = 0\nv = sources.dup()\nwhile True:\n i += 1\n v_dup = v.dup()\n v(op.min) << A.mxv(v, op.min_plus)\n # persist so we don't recompute every iteration\n v._delayed = v._delayed.persist() # scheduler='synchronous')\n if v.isequal(v_dup):\n break\ni",
"_____no_output_____"
],
[
"assert expected.isequal(v.compute())",
"_____no_output_____"
],
[
"expected",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9522ee19f9bdcbf8cedd72fa6ef39bd188be66 | 8,306 | ipynb | Jupyter Notebook | Recommender-Systems/Content Based recommendations.ipynb | keshav-b/ML-DL-stuff | 0edacb97f2c62a86205edb9563576ed5267e3881 | [
"MIT"
] | null | null | null | Recommender-Systems/Content Based recommendations.ipynb | keshav-b/ML-DL-stuff | 0edacb97f2c62a86205edb9563576ed5267e3881 | [
"MIT"
] | null | null | null | Recommender-Systems/Content Based recommendations.ipynb | keshav-b/ML-DL-stuff | 0edacb97f2c62a86205edb9563576ed5267e3881 | [
"MIT"
] | null | null | null | 25.478528 | 105 | 0.377318 | [
[
[
"import numpy as np\nimport pandas as pd\n\nimport sklearn\nfrom sklearn.neighbors import NearestNeighbors",
"_____no_output_____"
],
[
"data = pd.read_csv('mtcars.csv')",
"_____no_output_____"
],
[
"data.columns",
"_____no_output_____"
],
[
"data.columns = ['car_names', 'mpg', 'cyl', 'disp', 'hp', 'drat', 'wt', 'qsec', 'vs',\n 'am', 'gear', 'carb']",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
],
[
"# [miles per gallon, cubic inches, horse power, weight]\n\nt = [15, 300, 160, 3.2]",
"_____no_output_____"
],
[
"x = data.ix[:,(1,3,4,6)].values\nx[0:5]",
"C:\\Users\\balac\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: DeprecationWarning: \n.ix is deprecated. Please use\n.loc for label based indexing or\n.iloc for positional indexing\n\nSee the documentation here:\nhttp://pandas.pydata.org/pandas-docs/stable/indexing.html#ix-indexer-is-deprecated\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"knn = NearestNeighbors(n_neighbors=1)\n\nknn.fit(x)",
"_____no_output_____"
],
[
"knn.kneighbors([t])",
"_____no_output_____"
]
],
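[
[
"To see which car the nearest-neighbor query actually points at, we can map the returned index back into the original DataFrame (an added illustration, not part of the original notebook; the column names follow the renaming above).",
"_____no_output_____"
],
[
"# Added illustration (not in the original notebook): kneighbors returns (distances, indices),\n# so the recommended car can be looked up by position in the mtcars DataFrame.\ndistances, indices = knn.kneighbors([t])\nprint(data.loc[indices[0][0], ['car_names', 'mpg', 'disp', 'hp', 'wt']])\nprint('distance:', distances[0][0])",
"_____no_output_____"
]
],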
[
[
"Car #22 is the most similiar to the requirements, and hence recommended to the user",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ec95348a0cfa678b4984d5444ef8d717a5864aef | 43,772 | ipynb | Jupyter Notebook | 05-25-works-rating.ipynb | amecreate/ao3-data-vis | 1bc9f0e8de2691db4328d381b0189810ea7858fc | [
"MIT"
] | null | null | null | 05-25-works-rating.ipynb | amecreate/ao3-data-vis | 1bc9f0e8de2691db4328d381b0189810ea7858fc | [
"MIT"
] | null | null | null | 05-25-works-rating.ipynb | amecreate/ao3-data-vis | 1bc9f0e8de2691db4328d381b0189810ea7858fc | [
"MIT"
] | null | null | null | 33.748651 | 283 | 0.384561 | [
[
[
"---\nlayout: post\ntitle: \"Rating Tags in Works Part I\"\ndate: 2021-05-25\ncategory: data_cleaning\ntags: Python Pandas \n---",
"_____no_output_____"
],
[
"In part I, we focus on how to find the rating tags in Tags file and how to add a new rating column in Works file.\n\n* Table of Contents\n{:toc}",
"_____no_output_____"
],
[
"# Loading File",
"_____no_output_____"
]
],
[
[
"# Load python libraries\nimport pandas as pd",
"_____no_output_____"
],
[
"# Load works file\nworks= pd.read_csv(\"/home/pi/Downloads/works-20210226.csv\")",
"_____no_output_____"
],
[
"# Load entire tags file\ntags = pd.read_csv(\"/home/pi/Downloads/tags-20210226.csv\")",
"_____no_output_____"
],
[
"# preview file\ntags",
"_____no_output_____"
],
[
"# preview file\nworks",
"_____no_output_____"
]
],
[
[
"From the preview, we see that:\n\n- The tags column in **works** contains tag ids for each work, separated by plus sign\n- **Tags** file has information about tag ids, types, names, etc\n\nFrom previous post, we've found what tag **type** looks like:\n\n- Media\n- Rating\n- ArchiveWarning\n- Category\n- Character\n- Fandom\n- Relationship\n- Freeform\n- UnsortedTag\n\nIn this post, we want to find more information about **Rating** tags.",
"_____no_output_____"
],
[
"# Rating Tags",
"_____no_output_____"
]
],
[
[
"# Find rating tags in tags file\nrating = tags[tags['type'] == 'Rating']\nrating",
"_____no_output_____"
],
[
"# Save rating tags in a csv file\nrating.to_csv('rating.csv', index=False)",
"_____no_output_____"
]
],
[
[
"There are 5 types of ratings on AO3. The last tag \"Teen & Up Audiences\" is a duplicate of \"Teen And Up Audiences\". Because it has a low cached_count compared to others, we discard it in our analysis.\n\nTo simplify the data cleaning process in **Rating Tags in Works Part II**, we export the rating DataFrame to a local csv file.",
"_____no_output_____"
],
[
"# Tags Column in Works\n\nThe tags column in **works** is a long string containing tag ids separated by plus sign. From observation, we find that the first id in the string is most likely a rating id. \n\nTo extract the rating id from the tags column in **works**, we're going to:\n\n- Create a new column named \"rating\" in **works**\n- Extract the first id (which is also the smallest number) from the tags column, add the id to rating column",
"_____no_output_____"
]
],
[
[
"# Check the type of first row in tags column \n# The first row in tags column is a string\n\nprint(works['tags'].iloc[0])\ntype(works['tags'].iloc[0])",
"10+414093+1001939+4577144+1499536+110+4682892+21+16\n"
],
[
"# Check if every row in tags column is string\n# Result shows there're NA values in tags column\n\nworks[works['tags'].apply(lambda x: isinstance(x,str)) == False]",
"_____no_output_____"
],
[
"# The NA values do not interfere with our analysis\n# Drop NA value in tags column\n\nworks = works.dropna(subset = ['tags'])",
"_____no_output_____"
]
],
[
[
"To extract the smallest number from a string, we first split the string into a list using .split() method; then we iterate the list in order to change the object type from string to interger; lastly, we're able to select the minimum number from the list with min() function.\n\nTo apply the above steps on a Series (the tags column in **works**), we use [pandas.Series.apply](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.apply.html) function.",
"_____no_output_____"
]
],
[
[
"# Function to find the mimnimum value in the string, and return that value\n# First we split the string into a list by the plus sign\n# Then we iterate the list, change the object type from string to integer\n# Finally we find the minimum value of the list\n\ndef find_rating(x):\n return min([int(n) for n in x.split('+')])",
"_____no_output_____"
],
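[
"As a quick sanity check (an added illustration, not part of the original post), we can run `find_rating` on the first tags string printed earlier; its smallest id is 10, so the function should return 10.",
"_____no_output_____"
],
[
"# Added sanity check (not in the original post): the first tags string shown above starts\n# with id 10, which is also its smallest id, so find_rating should return 10.\nexample_tags = works['tags'].iloc[0]\nprint(example_tags)\nprint(find_rating(example_tags))",
"_____no_output_____"
],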
[
"# Create a new column named 'rating'\n# Use apply() to apply a function to each row\n# The function returns minimum value in tags column\n# The minimun value should be rating id, from observation\n\nworks['rating'] = works['tags'].apply(lambda x: find_rating(x))\nworks",
"_____no_output_____"
]
],
[
[
"We assumed that the first id in the tags string is a rating id, however, extra steps should be taken to check if our assumption is correct. \n\nFrom **tags** file, we extracted all correct rating ids. If any row in rating column in **works** falls outside, we'll know there're outliers.",
"_____no_output_____"
]
],
[
[
"# Check if rating column is indeed rating id\nworks['rating'].isin(rating['id']).all()",
"_____no_output_____"
],
[
"# Find the row in rating column that is not rating id\n# ~ is negative operator\nworks[~works['rating'].isin(rating['id'])]",
"_____no_output_____"
]
],
[
[
"There are 488 works with no rating. This is actually a [know issue](https://otwarchive.atlassian.net/browse/AO3-6065) that the volunteers are actively working on behind-the-curtain. Thus, we simply drop these works from our data set for now. ",
"_____no_output_____"
]
],
[
[
"# Drop works with no rating\nworks = works[works['rating'].isin(rating['id'])]",
"_____no_output_____"
]
],
[
[
"Now we have a file that has all the rating information extracted to a single column. We'll create graphs based on the information in the next post.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ec953aae93f74941f74bc3fc6635288193dd7094 | 118,428 | ipynb | Jupyter Notebook | trial/color_choice.ipynb | jphacks/F_2007 | e18deaa4ca7779ce6826289d6502cacf8af5a755 | [
"MIT"
] | 4 | 2020-10-31T06:16:22.000Z | 2020-11-05T11:21:06.000Z | trial/color_choice.ipynb | jphacks/F_2007 | e18deaa4ca7779ce6826289d6502cacf8af5a755 | [
"MIT"
] | null | null | null | trial/color_choice.ipynb | jphacks/F_2007 | e18deaa4ca7779ce6826289d6502cacf8af5a755 | [
"MIT"
] | null | null | null | 148.035 | 28,704 | 0.901552 | [
[
[
"from transformers import AutoModel, AutoTokenizer",
"_____no_output_____"
],
[
"# tokenizer = AutoTokenizer.from_pretrained(\"bert-base-japanese-whole-word-masking\")\ntokenizer = AutoTokenizer.from_pretrained(\"cl-tohoku/bert-base-japanese-whole-word-masking\")",
"_____no_output_____"
],
[
"model = AutoModel.from_pretrained(\"/Users/kubota/Sandbox/JPHACKS_2020/DistilBERT-base-jp\")",
"_____no_output_____"
],
[
"import torch\ndef get_embedding(model, tokenizer, text):\n tokenized_text = tokenizer.tokenize(text)\n tokenized_text.insert(0, '[CLS]')\n tokenized_text.append('[SEP]')\n tokens = tokenizer.convert_tokens_to_ids(tokenized_text)\n tokens_tensor = torch.tensor([tokens])\n model.eval()\n with torch.no_grad():\n layers, _ = model(tokens_tensor)\n target_layer = -2\n embedding = layers[0][target_layer].numpy()\n return embedding",
"_____no_output_____"
],
[
"import numpy as np\nembedding_list = []\nf = open('./input.txt')\nsentens = f.readlines()\nf.close()",
"_____no_output_____"
],
[
"for s in sentens:\n mbedding = get_embedding(model, tokenizer, s.strip())\n embedding_list.append(mbedding)",
"_____no_output_____"
],
[
"len(mbedding)",
"_____no_output_____"
],
[
"len(embedding_list)",
"_____no_output_____"
],
[
"from sklearn.decomposition import PCA",
"_____no_output_____"
],
[
"pca = PCA(n_components = 2)",
"_____no_output_____"
],
[
"res = pca.fit_transform(X = embedding_list)",
"_____no_output_____"
],
[
"res",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport japanize_matplotlib",
"_____no_output_____"
],
[
"plt.scatter(res[:,0], res[:, 1])\nfor i in range(len(res)):\n plt.text(x = res[i,0], y = res[i, 1], s = sentens[i])",
"_____no_output_____"
],
[
"embedding_list = []\nf = open('./colorlist.txt')\nsentens = f.readlines()\nf.close()\nfor s in sentens:\n mbedding = get_embedding(model, tokenizer, s.strip())\n embedding_list.append(mbedding)",
"_____no_output_____"
],
[
"pca = PCA(n_components = 2)\nres = pca.fit_transform(X = embedding_list)\nplt.figure(figsize = (15,5))\nplt.scatter(res[:,0], res[:, 1])\nfor i in range(len(res)):\n plt.text(x = res[i,0], y = res[i, 1], s = sentens[i])\n \nplt.savefig(\"color_pca.png\")",
"_____no_output_____"
],
[
"f = open('./colorlist.txt')\nsentens = f.readlines()\nf.close()\n\ntest_text = \"光\"\nsentens.append(test_text)\n\nembedding_list = []\nfor s in sentens:\n mbedding = get_embedding(model, tokenizer, s.strip())\n embedding_list.append(mbedding)",
"_____no_output_____"
],
[
"pca = PCA(n_components = 2)\nres = pca.fit_transform(X = embedding_list)\nplt.figure(figsize = (15,5))\nplt.scatter(res[:,0], res[:, 1])\nfor i in range(len(sentens)):\n plt.text(x = res[i,0], y = res[i, 1], s = sentens[i])",
"_____no_output_____"
],
[
"f = open('./colorlist.txt')\nsentens = f.readlines()\nf.close()\n\ncolor_embedding_list = []\nfor s in sentens:\n mbedding = get_embedding(model, tokenizer, s.strip())\n color_embedding_list.append(mbedding)",
"_____no_output_____"
],
[
"test_text = \"\"\"まっかだな まっかだな\nつたの 葉っぱが まっかだな\nもみじの 葉っぱも まっかだな\n沈む 夕日に てらされて\nまっかなほっぺたの 君と僕\nまっかな 秋に かこまれて いる\n\nまっかだな まっかだな\nからすうりって まっかだな\nとんぼのせなかも まっかだな\n夕焼雲(ゆうやけぐも)を ゆびさして\nまっかなほっぺたの 君と僕\nまっかな 秋に よびかけて いる\n\nまっかだな まっかだな\nひがん花って まっかだな\n遠くの たき火も まっかだな\nお宮の 鳥居(とりい)を くぐりぬけ\nまっかなほっぺたの 君と僕\nまっかな 秋を たずねて まわる\"\"\"",
"_____no_output_____"
],
[
"res = get_embedding(model, tokenizer, test_text.strip())",
"_____no_output_____"
],
[
"embedding_list = color_embedding_list[:]",
"_____no_output_____"
],
[
"embedding_list.append(res)",
"_____no_output_____"
],
[
"len(embedding_list)",
"_____no_output_____"
],
[
"len(color_embedding_list)",
"_____no_output_____"
],
[
"pca = PCA(n_components = 2)\npca_res = pca.fit_transform(X = embedding_list)\nplt.figure(figsize = (15,5))\nplt.scatter(pca_res[:,0], pca_res[:, 1])\nfor i in range(len(sentens)):\n plt.text(x = pca_res[i,0], y = pca_res[i, 1], s = sentens[i])",
"_____no_output_____"
],
[
"len(v)",
"_____no_output_____"
],
[
"prod = []\nfor v in color_embedding_list:\n prod.append(np.linalg.norm(v-res, ord = 2))",
"_____no_output_____"
],
[
"prod",
"_____no_output_____"
],
[
"sentens[np.array(prod).argmin()]",
"_____no_output_____"
],
[
"def choose_color(s):\n res = get_embedding(model, tokenizer, s.strip())\n prod = []\n for v in color_embedding_list:\n prod.append(np.linalg.norm(v-res, ord = 2))\n return sentens[np.array(prod).argmin()].strip()",
"_____no_output_____"
],
[
"choose_color(\"みかん\")",
"_____no_output_____"
],
[
"choose_color(\"カツオ\")",
"_____no_output_____"
],
[
"choose_color(\"焼肉\")",
"_____no_output_____"
],
[
"s = \"\"\"\nともだちができた すいかの名産地\nなかよしこよし すいかの名産地\nすいかの名産地 すてきなところよ\nきれいなあの娘の晴れ姿 すいかの名産地\n\n五月のある日 すいかの名産地\n結婚式をあげよう すいかの名産地\nすいかの名産地 すてきなところよ\nきれいなあの娘の晴れ姿 すいかの名産地\n\nとんもろこしの花婿 すいかの名産地\n小麦の花嫁 すいかの名産地\nすいかの名産地 すてきなところよ\nきれいなあの娘の晴れ姿 すいかの名産地\n\"\"\"\nchoose_color(s)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9540f60c8ef856d6b638c2b9aead099ddf32ef | 326,193 | ipynb | Jupyter Notebook | _notebooks/2022-05-20-first-map-exercise4.ipynb | KANGSUIN01/suin_blog | 15c3aafa412f82cb63b0d2febc03e5839c55b9ea | [
"Apache-2.0"
] | null | null | null | _notebooks/2022-05-20-first-map-exercise4.ipynb | KANGSUIN01/suin_blog | 15c3aafa412f82cb63b0d2febc03e5839c55b9ea | [
"Apache-2.0"
] | null | null | null | _notebooks/2022-05-20-first-map-exercise4.ipynb | KANGSUIN01/suin_blog | 15c3aafa412f82cb63b0d2febc03e5839c55b9ea | [
"Apache-2.0"
] | null | null | null | 45.041839 | 511 | 0.426533 | [
[
[
"# \"kaggle - Geospatial Analysis04\"",
"_____no_output_____"
],
[
"# 4. Exercise: Manipulating Geospatial Data",
"_____no_output_____"
],
[
"소개\n당신은 Starbucks Reserve Roastery의 다음 매장을 찾고 있는 Starbucks 빅 데이터 분석가입니다. 이 로스터리는 일반적인 스타벅스 매장보다 훨씬 크며 고급 라운지 공간과 함께 다양한 음식과 와인 옵션을 비롯한 몇 가지 추가 기능을 갖추고 있습니다. 캘리포니아 주에 있는 여러 카운티의 인구 통계를 조사하여 잠재적으로 적합한 위치를 결정합니다.",
"_____no_output_____"
]
],
[
[
"import math\nimport pandas as pd\nimport geopandas as gpd\nfrom geopy.geocoders import Nominatim # What you'd normally run\n#from learntools.geospatial.tools import Nominatim # Just for this exercise\n\nimport folium \nfrom folium import Marker\nfrom folium.plugins import MarkerCluster",
"_____no_output_____"
]
],
[
[
"이전 연습의 embed_map() 함수를 사용하여 지도를 시각화합니다.",
"_____no_output_____"
]
],
[
[
"def embed_map(m, file_name):\n from IPython.display import IFrame\n m.save(file_name)\n return IFrame(file_name, width='100%', height='500px')",
"_____no_output_____"
]
],
[
[
"#### 4.1 누락된 위치를 지오코딩합니다.\n다음 코드 셀을 실행하여 캘리포니아 주에 있는 Starbucks 위치를 포함하는 DataFrame 스타벅스를 만듭니다.",
"_____no_output_____"
]
],
[
[
"# Load and preview Starbucks locations in California\nstarbucks = pd.read_csv(\"C:/Users/Kangdaeyong/Desktop/datamining/kaggle_geospatial_analysis/archive/starbucks_locations.csv\")\nstarbucks.head()",
"_____no_output_____"
]
],
[
[
"대부분의 상점은 (위도, 경도) 위치를 알고 있습니다. 그러나 버클리시의 모든 위치가 누락되었습니다.",
"_____no_output_____"
]
],
[
[
"# How many rows in each column have missing values?\nprint(starbucks.isnull().sum())\n\n# View rows with missing locations\nrows_with_missing = starbucks[starbucks[\"City\"]==\"Berkeley\"]\nrows_with_missing",
"Store Number 0\nStore Name 0\nAddress 0\nCity 0\nLongitude 5\nLatitude 5\ndtype: int64\n"
]
],
[
[
"아래 코드 셀을 사용하여 Nominatim 지오코더로 이 값을 채우십시오.\n\n튜토리얼에서 우리는 값을 지오코딩하기 위해 Nominatim()(geopy.geocoders에서)을 사용했으며 이것은 이 과정 이외의 자체 프로젝트에서 사용할 수 있는 것입니다.\n\n이 연습에서는 약간 다른 함수 Nominatim()을 사용합니다(learntools.geospatial.tools에서). 이 기능은 노트북 상단에서 가져온 것으로 GeoPandas의 기능과 동일하게 작동합니다.\n\n즉,노트북 상단의 import 문을 변경하지 않고\n아래 코드 셀에서 지오코딩 함수를 geocode()로 호출합니다.\n코드가 의도한 대로 작동합니다!",
"_____no_output_____"
]
],
[
[
"# Create the geocoder\ngeolocator = Nominatim(user_agent=\"kaggle_learn\")\n\ndef my_geocoder(row):\n point = geolocator.geocode(row).point\n return pd.Series({'Latitude': point.latitude, 'Longitude': point.longitude})\n\nberkeley_locations = rows_with_missing.apply(lambda x: my_geocoder(x['Address']), axis=1)\nstarbucks.update(berkeley_locations)",
"_____no_output_____"
]
],
[
[
"#### 4.2. 버클리 위치 보기¶\n방금 찾은 위치를 살펴보겠습니다. OpenStreetMap 스타일로 버클리의 (위도, 경도) 위치를 시각화합니다.",
"_____no_output_____"
]
],
[
[
"# Create a base map\nm_2 = folium.Map(location=[37.88,-122.26], zoom_start=13)\n\n# Your code here: Add a marker for each Berkeley location\n# Add a marker for each Berkeley location\nfor idx, row in starbucks[starbucks[\"City\"]=='Berkeley'].iterrows():\n Marker([row['Latitude'], row['Longitude']]).add_to(m_2)\n\n\nm_2",
"_____no_output_____"
]
],
[
[
"버클리의 5개 위치만 고려할 때 (위도, 경도) 위치가 잠재적으로 정확해 보이는 위치(올바른 도시에 위치)는 몇 개입니까?\n\n=> 5개 모두 다 맞는 것 같습니다.",
"_____no_output_____"
],
[
"#### 4.3. 데이터를 통합합니다.\n아래 코드를 실행하여 캘리포니아 주의 각 카운티에 대한 이름, 면적(제곱 킬로미터) 및 고유 ID(\"GEOID\" 열에 있음)를 포함하는 GeoDataFrame CA_counties를 로드합니다. \"형상\" 열에는 카운티 경계가 있는 다각형이 포함되어 있습니다.",
"_____no_output_____"
]
],
[
[
"CA_counties = gpd.read_file(\"C:/Users/Kangdaeyong/Desktop/datamining/kaggle_geospatial_analysis/archive/CA_county_boundaries/CA_county_boundaries/CA_county_boundaries.shp\")\nCA_counties.head()",
"_____no_output_____"
]
],
[
[
"다음으로 3개의 DataFrame을 생성합니다.\n\nCA_pop에는 각 카운티의 인구 추정치가 포함됩니다.\nCA_high_earners에는 연간 소득이 $150,000 이상인 가구 수가 포함됩니다.\nCA_median_age에는 각 카운티의 중간 연령이 포함됩니다.",
"_____no_output_____"
]
],
[
[
"CA_pop = pd.read_csv(\"C:/Users/Kangdaeyong/Desktop/datamining/kaggle_geospatial_analysis/archive/CA_county_population.csv\", index_col=\"GEOID\")\nCA_high_earners = pd.read_csv(\"C:/Users/Kangdaeyong/Desktop/datamining/kaggle_geospatial_analysis/archive/CA_county_high_earners.csv\", index_col=\"GEOID\")\nCA_median_age = pd.read_csv(\"C:/Users/Kangdaeyong/Desktop/datamining/kaggle_geospatial_analysis/archive/CA_county_median_age.csv\", index_col=\"GEOID\")",
"_____no_output_____"
]
],
[
[
"다음 코드 셀을 사용하여 CA_counties GeoDataFrame을 CA_pop, CA_high_earners 및 CA_median_age와 결합합니다.\n\n결과 GeoDataFrame CA_stats의 이름을 지정하고 \"GEOID\", \"name\", \"area_sqkm\", \"geometry\", \"population\", \"high_earners\" 및 \"median_age\"의 8개 열이 있는지 확인합니다. 또한 CRS가 {'init': 'epsg:4326'}으로 설정되어 있는지 확인합니다.",
"_____no_output_____"
]
],
[
[
"cols_to_add = CA_pop.join([CA_high_earners, CA_median_age]).reset_index()\nCA_stats = CA_counties.merge(cols_to_add, on=\"GEOID\")\nCA_stats.crs = {'init': 'epsg:4326'}",
"c:\\Users\\Kangdaeyong\\anaconda3\\lib\\site-packages\\pyproj\\crs\\crs.py:130: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6\n in_crs_string = _prepare_from_proj_string(in_crs_string)\n"
]
],
[
[
"\n이제 모든 데이터가 한 곳에 있으므로 열 조합을 사용하는 통계를 훨씬 쉽게 계산할 수 있습니다. 다음 코드 셀을 실행하여 인구 밀도가 있는 \"밀도\" 열을 만듭니다.",
"_____no_output_____"
]
],
[
[
"CA_stats[\"density\"] = CA_stats[\"population\"] / CA_stats[\"area_sqkm\"]",
"_____no_output_____"
]
],
[
[
"#### 4.4. 어느 카운티가 유망해 보입니까?\n모든 정보를 단일 GeoDataFrame으로 축소하면 특정 기준을 충족하는 카운티를 훨씬 더 쉽게 선택할 수 있습니다.\n\n다음 코드 셀을 사용하여 CA_stats GeoDataFrame에서 행(및 모든 열)의 하위 집합을 포함하는 GeoDataFrame sel_counties를 만듭니다. 특히 다음과 같은 카운티를 선택해야 합니다.\n- 연간 $150,000를 버는 적어도 100,000 가구가 있고,\n- 중위 연령이 38.5세 미만이고,\n- 주민 밀도는 최소 285명(제곱 킬로미터당)입니다.\n- 또한 선택한 카운티는 다음 기준 중 하나 이상을 충족해야 합니다.\n- 연간 $150,000를 버는 적어도 500,000 가구가 있고,\n- 중위 연령이 35.5세 미만이거나\n- 주민 밀도는 최소 1400명(제곱 킬로미터당)입니다.",
"_____no_output_____"
]
],
[
[
"# Your code here\nsel_counties = sel_counties = CA_stats[((CA_stats.high_earners > 100000) &\n (CA_stats.median_age < 38.5) &\n (CA_stats.density > 285) &\n ((CA_stats.median_age < 35.5) |\n (CA_stats.density > 1400) |\n (CA_stats.high_earners > 500000)))]\n\nsel_counties",
"_____no_output_____"
]
],
[
[
"#### 4.5. 몇 개의 매장을 식별했습니까?\n다음 Starbucks Reserve Roastery 위치를 찾을 때 선택한 카운티 내의 모든 매장을 고려하고 싶습니다. 그렇다면 선택한 카운티 내에 몇 개의 매장이 있습니까?\n\n이 질문에 답할 준비를 하려면 다음 코드 셀을 실행하여 모든 스타벅스 위치가 포함된 GeoDataFrame starbucks_gdf를 만듭니다.",
"_____no_output_____"
]
],
[
[
"starbucks_gdf = gpd.GeoDataFrame(starbucks, geometry=gpd.points_from_xy(starbucks.Longitude, starbucks.Latitude))\nstarbucks_gdf.crs = {'init': 'epsg:4326'}",
"c:\\Users\\Kangdaeyong\\anaconda3\\lib\\site-packages\\pyproj\\crs\\crs.py:130: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6\n in_crs_string = _prepare_from_proj_string(in_crs_string)\n"
]
],
[
[
"그렇다면 선택한 카운티에는 몇 개의 매장이 있습니까?",
"_____no_output_____"
]
],
[
[
"num_stores = locations_of_interest = gpd.sjoin(starbucks_gdf, sel_counties)\nnum_stores = len(locations_of_interest)\nnum_stores",
"_____no_output_____"
]
],
[
[
"#### 4.6. 매장 위치를 시각화합니다.\n이전 질문에서 식별한 상점의 위치를 보여주는 지도를 만드십시오.",
"_____no_output_____"
]
],
[
[
"# Create a base map\nm_6 = folium.Map(location=[37,-120], zoom_start=6)\n\n# Show selected store locations\nmc = MarkerCluster()\n\nlocations_of_interest = gpd.sjoin(starbucks_gdf, sel_counties)\nfor idx, row in locations_of_interest.iterrows():\n if not math.isnan(row['Longitude']) and not math.isnan(row['Latitude']):\n mc.add_child(folium.Marker([row['Latitude'], row['Longitude']]))\n\nm_6.add_child(mc)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec954912f2538e066d875d20f38f64d8f17492f6 | 229,045 | ipynb | Jupyter Notebook | Courses/IadMl/IntroToDeepLearning/seminars/sem04/sem04_solution.ipynb | searayeah/sublime-snippets | deff53a06948691cd5e5d7dcfa85515ddd8fab0b | [
"MIT"
] | null | null | null | Courses/IadMl/IntroToDeepLearning/seminars/sem04/sem04_solution.ipynb | searayeah/sublime-snippets | deff53a06948691cd5e5d7dcfa85515ddd8fab0b | [
"MIT"
] | null | null | null | Courses/IadMl/IntroToDeepLearning/seminars/sem04/sem04_solution.ipynb | searayeah/sublime-snippets | deff53a06948691cd5e5d7dcfa85515ddd8fab0b | [
"MIT"
] | null | null | null | 58.910751 | 30,164 | 0.759807 | [
[
[
"Заполненный ноутбук в колабе: https://colab.research.google.com/drive/1dqq5e-c_yMrpiKpXGn4NzFQ0_3OOSCwO?usp=sharing",
"_____no_output_____"
]
],
[
[
"import math\nimport os\nimport random\nimport sys\nimport warnings\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\nfrom tqdm.auto import tqdm\n\n\n%matplotlib inline\nwarnings.filterwarnings(\"ignore\")",
"_____no_output_____"
]
],
[
[
"## Методы оптимизации\nКак вам показали на лекции, большинство методов машинного обучения сводятся к простому поиску параметров, который бы минимизировал ошибку на тренировочной выборке:\n$$\n\\min_{\\theta} \\sum_{x \\in X_{test}} L(p_{\\theta}(x), y)\n$$\nЗдесь:\n* $L$ - некоторый лосс,\n* $p_{\\theta}$ - нейронная сеть с параметрами $\\theta$,\n* $X$ - данные для обучения,\n* $y$ - ответы\n\nНапишем алгоритм для поиска минимума некоторой функции\n$$\nf(x) = x^{3} + 2x^{2} + 2\n$$",
"_____no_output_____"
]
],
[
[
"# Наша функция f(x)\nfunc = lambda x: x ** 3 - 2 * x ** 2 + 2\n\n# Производная функции f(x)\nd_func = lambda x: 3 * x ** 2 - 4 * x\n\n# Сделаем массив из 1000 элементов от -3 до 3\nx = np.linspace(-3, 3, 1000)\n\n# Определим границы по y для графика\nplt.ylim([-1, 3])\nplt.plot(x, func(x))\nplt.show()",
"_____no_output_____"
]
],
[
[
"Определим функцию для оптимизации $f(x)$, которая должна принимать на вход learning rate, максимальное количество итераций",
"_____no_output_____"
]
],
[
[
"def find_minimum_first_order(\n learning_rate=0.01,\n eps=1e-4,\n max_iterations=1000,\n anneal_learning_rate=None\n):\n i = 0\n x_old, x_new = 0, 2\n # Будем сохранятся обновлённые значения x и y\n x_list, y_list = [x_old], [func(x_old)]\n if not anneal_learning_rate:\n anneal_learning_rate = lambda lr, step: lr\n # TODO:\n # Your code here\n # --------------\n while abs(x_new - x_old) > eps and i < max_iterations:\n # Получим learning rate для текущей итерации\n learning_rate = anneal_learning_rate(learning_rate, step=i)\n # Обновим x_old\n x_old = x_new\n # Сделаем один шаг gradient descent\n x_new = x_old - learning_rate * d_func(x_old)\n # Добавим новые значения для визуализации сходимости\n x_list.append(x_new)\n y_list.append(func(x_new))\n i += 1\n # --------------\n print(\"Найденный локальный минимум:\", x_new)\n print(\"Количество шагов:\", len(x_list))\n # Визуализируем сходимость\n plt.figure(figsize=[6, 4])\n plt.ylim([-3, 8])\n plt.scatter(x_list, y_list, c=\"r\", edgecolors='k')\n plt.plot(x_list, y_list, c=\"r\")\n plt.plot(x, func(x), c=\"b\")\n plt.title(\"Descent trajectory\")\n plt.show()",
"_____no_output_____"
]
],
[
[
"Попробуем различные learning rate и посмотрим на поведение оптимизации",
"_____no_output_____"
]
],
[
[
"find_minimum_first_order(0.001)",
"Найденный локальный минимум: 1.3577577123861129\nКоличество шагов: 729\n"
]
],
[
[
"Слишком мало, будем очень долго идти к локальному минимуму",
"_____no_output_____"
]
],
[
[
"find_minimum_first_order(0.01)",
"Найденный локальный минимум: 1.3356881625009205\nКоличество шагов: 129\n"
]
],
[
[
"Уже лучше",
"_____no_output_____"
]
],
[
[
"find_minimum_first_order(0.3)",
"Найденный локальный минимум: 1.3333495713163788\nКоличество шагов: 8\n"
],
[
"find_minimum_first_order(0.6)",
"_____no_output_____"
]
],
[
[
"Ууупс, получили Overflow. Значит learning rate слишком большой. Хотя большой learning rate опасен возможностью overflow у него есть ряд преимуществ. Чем больше темп обучения, тем большие расстояния мы преодолеваем за один шаг и тем выше вероятность быстрее найти хорошее пространство локальных минимумов.\n\nХорошая стратегия — начинать с достаточно большого шага (чтобы хорошо попутешествовать по функции), а потом постепенно его уменьшать, чтобы стабилизировать процесс обучения в каком-то локальном минимуме.",
"_____no_output_____"
]
],
[
[
"find_minimum_first_order(0.6, anneal_learning_rate=lambda lr, step: 0.3 * lr)",
"Найденный локальный минимум: 1.294744839667743\nКоличество шагов: 7\n"
]
],
[
[
"# Описание алгоритмов градиентного спуска\n\n### SGD\nSGD - этот же самый gradient descent, что мы рассматривали ранее, вот только подсчёт градиентов производится не по всему множеству данных, а по отдельно взятому сэмплу. Очевидно, такая оптимизация будет очень шумной, что усложнит обучение модели. Поэтому обычно используют MiniBatch-SGD, где вместо одного сэмпла мы берём $k$ семплов. У такого подхода есть несколько плюсов:\n\n* ниже variance в сравнении с обычным SGD, что приводит к более стабильному процессу оптимизации\n* хорошо работает с DL библиотеками, так как теперь мы работаем с матрицами\n\n$$\n\\begin{eqnarray}\ng &=& \\frac{1}{m}\\nabla_w \\sum_i L(f(x_{i};w), y_{i}) \\\\\nw &=& w - \\eta \\times g\n\\end{eqnarray}\n$$\n\n### SGD with Momentum\n\n\n\nПопытаемся добавить SGD эффект инерции. Теперь, вместо того чтобы двигаться строго в направлении градиента в каждой точке, мы стараемся продолжить движение в том же направлении, в котором двигались ранее. То есть у нашей точки, которая спускается по многомерной поверхности, появляется импульс (momentum), который контролируется при помощи параметра $\\alpha$. Он определяет какую часть прошлого градиента мы хотим использовать на текущем шаге.\n$$\n\\begin{eqnarray}\ng_{t} &=& \\alpha g_{t-1} + \\eta \\frac{1}{m}\\nabla_w \\sum_i L(f(x_{i};w), y_{i}) \\\\\nw &=& w - \\eta \\times g\n\\end{eqnarray}\n$$\n\n## Адаптивные варианты градиентного спуска\nВо всех предыдущих алгоритмах у нас был фиксированный learning rate. Начиная с Adagrad у нас будет идти алгоритмы, которые подстраивают learning rate в зависимости от обучения. Они называются адаптивными вариантами градиентного спуска.\n\nАдаптивные варианты градиентного спуска подстраивает темп обучения таким образом, чтобы делать большие или маленькие обновления отдельных параметров. Например, может так сложиться, что некоторые веса близки к своим локальным минимумам, тогда по этим координатам нужно двигаться медленнее, а другие веса ещё только в середине, значит их можно менять гораздо быстрее. Подобные методы часты приводят к более обоснованной модели и сходятся гораздо быстрее.\n\n### Adagrad\n$$\n\\begin{eqnarray}\ng &=& \\frac{1}{m}\\nabla_w \\sum_i L(f(x_{i};w), y_{i}) \\\\\ns &=& s + diag(gg^{T}) \\\\\nw &=& w - \\frac{\\eta}{\\sqrt{s+eps}} \\odot g\n\\end{eqnarray}\n$$\nТеперь нам не нужно сильно волноваться о правильном подборе $\\eta$, так как $s$ контролирует скорость обучения для каждого параметра.\n\n### RMSprop\nУ Adagrad есть сильный минус. $s$ - всегда положительна и постоянно растёт во время обучения, что приводит к ситуации, когда у нас learning rate становится слишком маленький, и мы перестаём учиться. RMSprop исправляет эту проблему при помощи экспоненциального сглаживания\n$$\n\\begin{eqnarray}\ng &=& \\frac{1}{m}\\nabla_w \\sum_i L(f(x_{i};w), y_{i}) \\\\\ns &=& \\rho s + (1 - \\rho) diag(gg^{T}) \\\\\nw &=& w - \\frac{\\eta}{\\sqrt{s+eps}} \\odot g\n\\end{eqnarray}\n$$\n\n### Adam\nДобавим не только моменты второго порядка, но и первого при обновлении параметров\n$$\n\\begin{eqnarray}\ng &=& \\frac{1}{m}\\nabla_w \\sum_i L(f(x_{i};w), y_{i}) \\\\\nm &=& \\beta_1 m + (1 - \\beta_1) g \\\\\nv &=& \\beta_2 v + (1 - \\beta_2) diag(gg^{T}) \\\\\n\\hat{m} &=& \\frac{m}{1 - \\beta_1^{t}} \\\\\n\\hat{v} &=& \\frac{v}{1 - \\beta_2^{t}} \\\\\nw &=& w - \\frac{\\eta}{\\sqrt{\\hat{v} + \\epsilon}} \\odot \\hat{m}\n\\end{eqnarray}\n$$\n\n### Схема\n<div>\n<img src=\"Modifications.png\" width=\"300\"/>\n</div>",
"_____no_output_____"
],
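[
"To make the update rules above concrete, here is a minimal NumPy sketch of a single parameter update for SGD with momentum and for Adam. This is an illustration only: the function and state names are ours, not part of the seminar code, and `grad` stands for the minibatch gradient.\n\n```python\nimport numpy as np\n\n\ndef momentum_step(w, grad, state, lr=0.01, alpha=0.9):\n    # g_t = alpha * g_{t-1} + lr * grad;  w = w - g_t\n    state['g'] = alpha * state.get('g', np.zeros_like(w)) + lr * grad\n    return w - state['g'], state\n\n\ndef adam_step(w, grad, state, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):\n    t = state.get('t', 0) + 1\n    m = beta1 * state.get('m', np.zeros_like(w)) + (1 - beta1) * grad\n    v = beta2 * state.get('v', np.zeros_like(w)) + (1 - beta2) * grad ** 2\n    m_hat = m / (1 - beta1 ** t)  # bias correction\n    v_hat = v / (1 - beta2 ** t)\n    state.update(t=t, m=m, v=v)\n    return w - lr * m_hat / np.sqrt(v_hat + eps), state\n```",
"_____no_output_____"
],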
[
"# PyTorch Optimizer\nОчевидно, что для своих нейронных сетей не нужно каждый раз писать свой алгоритм и за вас уже сделаны все самые популярные методы. Их можно найти в **torch.optim**.",
"_____no_output_____"
]
],
[
[
"[elem for elem in dir(torch.optim) if not elem.startswith(\"_\")]",
"_____no_output_____"
]
],
[
[
"Основные функции PyTorch Optimizer:\n* __step__ - обновление весов модели\n* __zero_grad__ - занулить веса модели (по умолчанию градиенты в PyTorch аккумулируются) ~ `for each param in params: param.grad = None`\n* __state_dict__ - получить текущее состояние Optimizer. Для адаптивных методов тут будут храниться аккумулированные квадраты градиентов",
"_____no_output_____"
],
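[
"As a rough usage sketch (not code from this notebook), the three functions are typically combined in a single training step like this, assuming `model`, `loss_fn`, `x` and `y` already exist:\n\n```python\noptimizer = torch.optim.SGD(model.parameters(), lr=0.01)\n\nloss = loss_fn(model(x), y)\nloss.backward()        # compute gradients\noptimizer.step()       # update the weights\noptimizer.zero_grad()  # reset the accumulated gradients\nprint(optimizer.state_dict().keys())  # inspect the optimizer state\n```",
"_____no_output_____"
],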
[
"## Как сделать instance PyTorch Optimizer?\nДостаточно передать параметры модели (их можно получить при помощи функции `parameters()`) и гипер-параметоры для метода оптимизации.\n\nПример:",
"_____no_output_____"
]
],
[
[
"?torch.optim.SGD",
"_____no_output_____"
],
[
"model = torch.nn.Linear(1, 1)\nlist(model.parameters()), torch.optim.SGD(model.parameters(), lr=0.01)",
"_____no_output_____"
]
],
[
[
"Или же вот так",
"_____no_output_____"
]
],
[
[
"# Зададим PyTorch модули в качестве словаря\nmodel = torch.nn.ModuleDict({\n \"linear_1\": torch.nn.Linear(1, 1),\n \"linear_2\": torch.nn.Linear(2, 2)\n})\ntorch.optim.SGD([\n {\"params\": model[\"linear_1\"].parameters(), \"lr\": 0.3},\n {\"params\": model[\"linear_2\"].parameters()}\n], lr=0.5)",
"_____no_output_____"
]
],
[
[
"Последнее очень полезно для Transfer Learning, когда мы хотим, чтобы предобученная модель тренировалась с другим learning rate",
"_____no_output_____"
],
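[
"For example, a transfer-learning style setup might look like the sketch below. This is illustrative only: `backbone` and `head` are placeholder modules, not objects defined in this notebook.\n\n```python\nbackbone = torch.nn.Linear(10, 10)  # stand-in for a pretrained feature extractor\nhead = torch.nn.Linear(10, 2)       # freshly initialized classifier head\n\noptimizer = torch.optim.SGD([\n    {'params': backbone.parameters(), 'lr': 1e-4},  # small lr for pretrained weights\n    {'params': head.parameters(), 'lr': 1e-2},      # larger lr for the new head\n], lr=1e-3, momentum=0.9)\n```",
"_____no_output_____"
],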
[
"## Делаем свой Optimizer\nДля того чтобы сделать свой Optimizer, не нужно писать свою имплементацию каждой из основных функций. Достаточно переопределить только одну из них - **step**.\n\nПопробуем реализовать несколько своих Optimizer. В качестве данных для модели воспользуемся `make_regression` из `sklearn`.",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import make_regression\n\n\ndef seed_everything(seed):\n # Зафиксировать seed.\n # Это понадобится, чтобы убедиться\n # в правильности работы нашего Optimizer\n random.seed(seed)\n os.environ[\"PYTHONHASHSEED\"] = str(seed)\n np.random.seed(seed)\n torch.manual_seed(seed)\n torch.cuda.manual_seed(seed)\n torch.backends.cudnn.deterministic = True\n\n\n# make_regression возвращает 2 переменные: данные и таргет для них\n# так как они возвращаётся как np.array,\n# вызовем для каждого из них команду torch.from_numpy\nX, y = map(\n lambda x: torch.from_numpy(x).float(),\n make_regression(n_samples=200, n_features=2)\n)\n\n\ndef get_model():\n # Таким образом, мы при каждом вызове будем получить\n # модель с одной и той же инициализацией весов\n seed_everything(13)\n return torch.nn.Sequential(\n torch.nn.Linear(2, 10),\n torch.nn.Linear(10, 1)\n )",
"_____no_output_____"
]
],
[
[
"Как мы заметили ранее Optimizer работает с группами параметров. Поэтому нам необходимо делать отдельно update для каждой группы параметров (-> ещё один for loop)",
"_____no_output_____"
]
],
[
[
"from torch.optim import Optimizer\n\n\nclass InClassOptimizer(Optimizer):\n def step(self):\n \"\"\"Perform a single optimization step.\"\"\"\n with torch.no_grad(): # выключим градиенты\n for group in self.param_groups:\n self._group_step(group)\n\n def _group_step(self, group):\n # group ~ dict[str, ...]\n \"\"\"\n Private helper function to perform\n single optimization step on model parameters.\n \"\"\"\n raise NotImplementedError()",
"_____no_output_____"
],
[
"class Adagrad(InClassOptimizer):\n def __init__(self, params, lr=0.001, eps=1e-13):\n defaults = dict(lr=lr, eps=eps)\n super().__init__(params, defaults)\n\n def _group_step(self, group):\n # One group contains information about values passed in init\n # and model parameters to update\n lr = group[\"lr\"]\n eps = group[\"eps\"]\n for param in filter(lambda x: x.grad is not None, group[\"params\"]):\n # TODO:\n # Your code here\n # --------------\n self._init_adagrad_buffer(param)\n d_param = param.grad\n buffer = self._get_adagrad_buffer(param)\n buffer.add_(d_param ** 2)\n d_param /= torch.sqrt(buffer + eps)\n # Inplace update of params multiplied by -lr\n param.add_(d_param, alpha=-lr)\n # --------------\n\n def _get_adagrad_buffer(self, param):\n \"\"\"\n Get accumulated gradients for Adagrad.\n\n Parameters\n ----------\n param : `torch.Tensor`, required\n Model parameter to get accumulated gradeints for Adagrad.\n\n Returns\n -------\n Accumulated Adagrad gradients for parameter.\n \"\"\"\n param_state = self.state[param]\n \n return param_state[\"adagrad_buffer\"]\n\n def _init_adagrad_buffer(self, param):\n \"\"\"\n Initialize accumulated gradeints for SGD momentum.\n\n Parameters\n ----------\n param : `torch.Tensor`, required\n Model parameter to get accumulated gradeints for Adagrad.\n \"\"\"\n param_state = self.state[param]\n if \"adagrad_buffer\" not in param_state:\n param_state[\"adagrad_buffer\"] = torch.zeros_like(param)",
"_____no_output_____"
],
[
"def check_optimizer(model, optim, num_iter):\n loss = torch.nn.MSELoss()\n for i in range(num_iter):\n output = loss(model(X), y.unsqueeze(-1))\n output.backward()\n optim.step()\n optim.zero_grad()\n if i % 100 == 0:\n print(f\"Iteration {i} loss: {output.item()}\")",
"_____no_output_____"
]
],
[
[
"Проверим, что написанный Optimizer работает корректно",
"_____no_output_____"
]
],
[
[
"model = get_model()\noptim = Adagrad(model.parameters(), lr=0.001)\ncheck_optimizer(model, optim, num_iter=1000)",
"Iteration 0 loss: 2803.320556640625\nIteration 100 loss: 2789.825927734375\nIteration 200 loss: 2783.95849609375\nIteration 300 loss: 2779.49853515625\nIteration 400 loss: 2775.7607421875\nIteration 500 loss: 2772.4775390625\nIteration 600 loss: 2769.51318359375\nIteration 700 loss: 2766.786865234375\nIteration 800 loss: 2764.246337890625\nIteration 900 loss: 2761.857177734375\n"
],
[
"model = get_model()\noptim = torch.optim.Adagrad(model.parameters(), lr=0.001)\ncheck_optimizer(model, optim, num_iter=1000)",
"Iteration 0 loss: 2803.320556640625\nIteration 100 loss: 2789.825927734375\nIteration 200 loss: 2783.95849609375\nIteration 300 loss: 2779.49853515625\nIteration 400 loss: 2775.7607421875\nIteration 500 loss: 2772.4775390625\nIteration 600 loss: 2769.51318359375\nIteration 700 loss: 2766.786865234375\nIteration 800 loss: 2764.246337890625\nIteration 900 loss: 2761.857177734375\n"
]
],
[
[
"Почему такой большой лосс?\n\nЕсли посмотреть на optim.state, то сразу становится ясно, что квадраты градиентов очень большие, следовательно, апдейт будет совсем небольшой.\n\nПовысим learning rate и посмотрим на поведение модели.",
"_____no_output_____"
]
],
[
[
"model = get_model()\noptim = Adagrad(model.parameters(), lr=0.1)\ncheck_optimizer(model, optim, num_iter=1000)",
"Iteration 0 loss: 2803.320556640625\nIteration 100 loss: 48.26877975463867\nIteration 200 loss: 0.12364918738603592\nIteration 300 loss: 0.00025058610481210053\nIteration 400 loss: 5.303460284267203e-07\nIteration 500 loss: 7.903008025778036e-09\nIteration 600 loss: 6.416297715361452e-09\nIteration 700 loss: 6.416297715361452e-09\nIteration 800 loss: 6.416297715361452e-09\nIteration 900 loss: 6.416297715361452e-09\n"
]
],
[
[
"`Какая мораль?`\n\nДаже если у вас есть методы с адаптивным градиентом спуском, полностью забывать о настройке learning rate не стоит.",
"_____no_output_____"
],
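[
"One common way to act on this is to attach a learning-rate scheduler to the optimizer. A minimal sketch (not used in the experiments below; `train_one_epoch` and `num_epochs` are placeholders):\n\n```python\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\nscheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.3)\n\nfor epoch in range(num_epochs):\n    train_one_epoch(model, optimizer)  # hypothetical training function\n    scheduler.step()                   # decay the learning rate every 5 epochs\n```",
"_____no_output_____"
],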
[
"## Сравнение методов оптимизации",
"_____no_output_____"
]
],
[
[
"import torchvision\nimport torchvision.transforms as transforms\nfrom torch.utils.data import DataLoader\n\n\n# Train data\nfashion_mnist_train = torchvision.datasets.FashionMNIST(\n \"./data\",\n download=True,\n transform=transforms.Compose([transforms.ToTensor()])\n)\ntrain_dataloader = DataLoader(\n fashion_mnist_train, \n batch_size=128, \n shuffle=True, \n num_workers=2\n)\n\n# Validation data\nfashion_mnist_eval = torchvision.datasets.FashionMNIST(\n \"./data\",\n train=False,\n download=True,\n transform=transforms.Compose([transforms.ToTensor()])\n)\neval_dataloader = DataLoader(\n fashion_mnist_eval, \n batch_size=128, \n num_workers=2\n)",
"_____no_output_____"
],
[
"from collections import defaultdict\n\n\nidx_to_label = defaultdict(lambda: None, {\n 0: \"T-shirt/Top\",\n 1: \"Trouser\",\n 2: \"Pullover\",\n 3: \"Dress\",\n 4: \"Coat\",\n 5: \"Sandal\",\n 6: \"Shirt\",\n 7: \"Sneaker\",\n 8: \"Bag\",\n 9: \"Ankle Boot\"\n})",
"_____no_output_____"
],
[
"class Accuracy:\n def __init__(self):\n self._all_predictions = torch.LongTensor()\n self._all_labels = torch.LongTensor()\n\n def __call__(self, predictions, labels):\n # predictions ~ (batch size)\n # labels ~ (batch size)\n self._all_predictions = torch.cat([\n self._all_predictions,\n predictions\n ], dim=0)\n self._all_labels = torch.cat([\n self._all_labels,\n labels\n ], dim=0)\n\n def get_metric(self, reset=False):\n correct = (self._all_predictions == self._all_labels).long()\n accuracy = correct.sum().float() / self._all_labels.size(0)\n if reset:\n self.reset()\n return accuracy\n\n def reset(self):\n self._all_predictions = torch.LongTensor()\n self._all_labels = torch.LongTensor()",
"_____no_output_____"
]
],
[
[
"# Модель\n\n1. BatchNorm\n2. Conv(out=32, kernel=3) -> ReLu -> MaxPool(kernel=2)\n3. Conv(out=64, kernel=3) -> ReLu -> MaxPool(kernel=2)\n4. Flatten\n5. Linear(out=128)\n6. ReLu\n7. Dropout\n8. Linear(out=64)\n9. ReLu\n10. Linear(out=10)\n\n",
"_____no_output_____"
]
],
[
[
"class SimpleNetEncoder(torch.nn.Module):\n def __init__(self, dropout=0.4):\n super().__init__()\n # TODO:\n # Your code here:\n # --------------\n self.batch_norm = torch.nn.BatchNorm2d(1)\n self.conv1 = torch.nn.Sequential(\n torch.nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3),\n torch.nn.ReLU(),\n torch.nn.MaxPool2d(kernel_size=2),\n )\n self.conv2 = torch.nn.Sequential(\n torch.nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3),\n torch.nn.ReLU(),\n torch.nn.MaxPool2d(kernel_size=2),\n )\n self.linear1 = torch.nn.Linear(in_features=64 * 5 * 5, out_features=128)\n self.linear2 = torch.nn.Linear(in_features=128, out_features=64)\n self.output = torch.nn.Linear(in_features=64, out_features=10)\n self.dropout = torch.nn.Dropout(p=dropout)\n # --------------\n\n def forward(self, x):\n # TODO:\n # Your code here:\n # --------------\n x = self.batch_norm(x)\n x = self.conv1(x)\n x = self.conv2(x)\n x = x.view(x.size(0), -1)\n x = F.relu(self.linear1(x))\n x = self.dropout(x)\n x = F.relu(self.linear2(x))\n \n return self.output(x)\n # --------------\n\n\nclass SimpleNet(torch.nn.Module):\n def __init__(self, encoder):\n super().__init__()\n self._encoder = encoder\n self._accuracy = Accuracy()\n\n def forward(self, images, target=None):\n # images ~ (batch size, num channels, height, width)\n # target ~ (batch size)\n # output ~ (batch size, num classes)\n output = self._encoder(images)\n output_dict = {\"logits\": output, \"probs\": torch.softmax(output, dim=-1)}\n output_dict[\"preds\"] = torch.argmax(output_dict[\"probs\"], dim=-1)\n if target is not None:\n # CrossEntropy Loss\n log_softmax = torch.log_softmax(output, dim=-1)\n output_dict[\"loss\"] = F.nll_loss(log_softmax, target)\n self._accuracy(\n output_dict[\"preds\"].cpu(),\n target.cpu()\n )\n return output_dict\n\n def decode(self, output_dict):\n # output_dict ~ dict with torch.Tensors (output_dict from forward)\n return [idx_to_label[int(x)] for x in output_dict[\"preds\"]]\n\n def get_metrics(self, reset=False):\n return {\"accuracy\": self._accuracy.get_metric(reset)}",
"_____no_output_____"
],
[
"def train_epoch(\n model,\n data_loader,\n optimizer,\n return_losses=False,\n device=\"cuda:0\",\n):\n model = model.train()\n total_loss = 0\n num_batches = 0\n all_losses = []\n with tqdm(total=len(data_loader), file=sys.stdout) as prbar:\n for batch in data_loader:\n # Move Batch to GPU\n batch = [x.to(device=device) for x in batch]\n output_dict = model(*batch)\n loss = output_dict[\"loss\"]\n # Update weights\n loss.backward()\n optimizer.step()\n optimizer.zero_grad()\n # Update descirption for tqdm\n metrics = model.get_metrics()\n prbar.set_description(\n f\"Loss: {round(loss.item(), 4)} \"\n f\"Accuracy: {round(metrics['accuracy'].item() * 100, 4)}\"\n )\n prbar.update(1)\n total_loss += loss.item()\n num_batches += 1\n all_losses.append(loss.detach().item())\n metrics = {\"loss\": total_loss / num_batches}\n metrics.update(model.get_metrics(reset=True))\n if return_losses:\n return metrics, all_losses\n else:\n return metrics\n\n\ndef validate(model, data_loader, device=\"cuda:0\"):\n model = model.eval()\n total_loss = 0\n num_batches = 0\n with tqdm(total=len(data_loader), file=sys.stdout) as prbar:\n for batch in data_loader:\n batch = [x.to(device=device, non_blocking=True) for x in batch]\n output_dict = model(*batch)\n loss = output_dict['loss']\n metrics = model.get_metrics()\n prbar.set_description(\n f\"Loss: {round(loss.item(), 4)} \"\n f\"Accuracy: {round(metrics['accuracy'].item() * 100, 4)}\"\n )\n prbar.update(1)\n total_loss += loss.item()\n num_batches += 1\n metrics = {\"loss\": total_loss / num_batches}\n metrics.update(model.get_metrics(reset=True))\n return metrics",
"_____no_output_____"
],
[
"from collections import namedtuple\n\n\nLossInfo = namedtuple(\n \"LossInfo\", \n [\"full_train_losses\", \"train_epoch_losses\", \"eval_epoch_losses\"]\n)\n\n\nEPOCHS = 7\nLR = 0.001",
"_____no_output_____"
],
[
"def fit(\n model,\n epochs,\n train_data_loader,\n validation_data_loader,\n optimizer,\n device\n):\n all_train_losses = []\n epoch_train_losses = []\n epoch_eval_losses = []\n for epoch in range(epochs):\n # Construct iterators\n train_iterator = iter(train_data_loader)\n validation_iterator = iter(validation_data_loader)\n # Train step\n print(f\"Train Epoch: {epoch}\")\n train_metrics, one_epoch_train_losses = train_epoch(\n model=model,\n data_loader=train_iterator,\n optimizer=optimizer,\n return_losses=True,\n device=device\n )\n # Save Train losses\n all_train_losses.extend(one_epoch_train_losses)\n epoch_train_losses.append(train_metrics[\"loss\"])\n # Eval step\n print(f\"Validation Epoch: {epoch}\")\n with torch.no_grad():\n validation_metrics = validate(\n model=model,\n data_loader=validation_iterator,\n device=device\n )\n # Save eval losses\n epoch_eval_losses.append(validation_metrics[\"loss\"])\n return LossInfo(all_train_losses, epoch_train_losses, epoch_eval_losses)",
"_____no_output_____"
]
],
[
[
"SGD",
"_____no_output_____"
]
],
[
[
"device = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n\nmodel = SimpleNet(encoder=SimpleNetEncoder()).to(device=device)\nsgd_loss_info = fit(\n model=model,\n epochs=EPOCHS,\n train_data_loader=train_dataloader,\n validation_data_loader=eval_dataloader,\n optimizer=torch.optim.SGD(model.parameters(), lr=LR),\n device=device\n)",
"Train Epoch: 0\n"
]
],
[
[
"\nSGD with Momentum",
"_____no_output_____"
]
],
[
[
"model = SimpleNet(encoder=SimpleNetEncoder()).to(device=device)\nsgd_momentum_loss_info = fit(\n model=model,\n epochs=EPOCHS,\n train_data_loader=train_dataloader,\n validation_data_loader=eval_dataloader,\n optimizer=torch.optim.SGD(model.parameters(), momentum=0.9, lr=LR),\n device=device\n)",
"Train Epoch: 0\n"
]
],
[
[
"RMSprop",
"_____no_output_____"
]
],
[
[
"model = SimpleNet(encoder=SimpleNetEncoder()).to(device=device)\nrmsprop_loss_info = fit(\n model=model,\n epochs=EPOCHS,\n train_data_loader=train_dataloader,\n validation_data_loader=eval_dataloader,\n optimizer=torch.optim.RMSprop(model.parameters(), lr=LR),\n device=device\n)",
"Train Epoch: 0\n"
]
],
[
[
"Adam",
"_____no_output_____"
]
],
[
[
"model = SimpleNet(encoder=SimpleNetEncoder()).to(device=device)\nadam_loss_info = fit(\n model=model,\n epochs=EPOCHS,\n train_data_loader=train_dataloader,\n validation_data_loader=eval_dataloader,\n optimizer=torch.optim.Adam(model.parameters(), lr=LR),\n device=device\n)",
"Train Epoch: 0\n"
],
[
"plt.plot(\n np.arange(len(train_dataloader) * EPOCHS),\n sgd_loss_info.full_train_losses,\n label=\"SGD\", c=\"grey\"\n)\nplt.plot(\n np.arange(len(train_dataloader) * EPOCHS),\n sgd_momentum_loss_info.full_train_losses,\n label=\"SGD Momentum\", c=\"blue\"\n)\nplt.plot(\n np.arange(len(train_dataloader) * EPOCHS),\n rmsprop_loss_info.full_train_losses,\n label=\"RMSProp\", c=\"green\"\n)\nplt.plot(\n np.arange(len(train_dataloader) * EPOCHS),\n adam_loss_info.full_train_losses,\n label=\"Adam\", c=\"red\"\n)\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(\n np.arange(EPOCHS), sgd_loss_info.eval_epoch_losses,\n label=\"SGD\", c=\"grey\"\n)\nplt.plot(\n np.arange(EPOCHS), sgd_momentum_loss_info.eval_epoch_losses,\n label=\"SGD Momentum\", c=\"blue\"\n)\nplt.plot(\n np.arange(EPOCHS), rmsprop_loss_info.eval_epoch_losses,\n label=\"RMSprop\", c=\"green\"\n)\nplt.plot(\n np.arange(EPOCHS), adam_loss_info.eval_epoch_losses,\n label=\"Adam\", c=\"red\"\n)\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Оптимизация второго порядка\nТеперь вернёмся немного назад к функции $f(x)$ и рассмотрим оптимизацию второго порядка [методом Ньютона](https://streletzcoder.ru/nahozhdenit-lokalnyih-ekstremumov-funktsiy-s-pomoshhyu-metoda-nyutona/). Вместо того чтобы приближать функцию в текущей точке линейно можно это делать при помощи квадратов.",
"_____no_output_____"
]
],
[
[
"d_2_func = lambda x: 6 * x - 4",
"_____no_output_____"
],
[
"def find_minimum_second_order(eps=1e-4, max_iterations=1000):\n i = 0\n x_old, x_new = 0, 2\n x_list, y_list = [x_old], [func(x_old)]\n while abs(x_new - x_old) > eps and i < max_iterations:\n # Обновим x_old\n x_old = x_new\n # Сделаем один шаг gradient descent со 2 порядком градиентов\n x_new = x_old - d_func(x_old) / d_2_func(x_old)\n # Сохраним значения для визуализации\n x_list.append(x_new)\n y_list.append(func(x_new))\n i += 1\n print(\"Найденный локальный минимум:\", x_new)\n print(\"Количество шагов:\", len(x_list))\n # Визуализируем сходимость\n plt.figure(figsize=[6, 4])\n plt.ylim([-3, 8])\n plt.scatter(x_list, y_list, c=\"r\", edgecolors=\"k\")\n plt.plot(x_list, y_list, c=\"r\")\n plt.plot(x, func(x), c=\"b\")\n plt.title(\"Descent trajectory\")\n plt.show()",
"_____no_output_____"
],
[
"find_minimum_second_order()",
"Найденный локальный минимум: 1.333333333333334\nКоличество шагов: 6\n"
]
],
[
[
"В итоге мы пришли к минимуму гораздо быстрее. И если же методы второго порядка такие крутые и быстрые, то почему их не используют в нейронных сетях? Для ответа на этот вопрос сначала рассмотрим плюсы и минусы данного подхода.\n\nПлюсы методов второго порядка:\n* Быстрее, чем методы оптимизации первого порядка\n* Нет необходимости настраивать learning_rate\n\nМожете ли вы предположить минусы методов оптимизации второго порядка или же просто методов Ньютона?\n\nОтвет:\n* Сложность вычисления градиента второго порядка\n* В многомерном случае необходимо хранить матрицу размерности N x N\n\nПроблема с памятью наиболее острая, так как современные нейронные сети имеют миллионы параметров и хранить матрицу миллион на миллион очень сложно.",
"_____no_output_____"
],
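[
"As a small side illustration, here is a toy 1-D comparison (a sketch reusing `d_func` and `d_2_func` defined above) of the true Newton step against an Adagrad-style step that rescales the gradient by the root of the accumulated squared gradients; in 1-D, diag(gg^T) is simply g^2, which is the connection discussed next.\n\n```python\nx, s, lr = 2.0, 0.0, 0.5\nfor _ in range(5):\n    g = d_func(x)\n    s += g ** 2                                  # accumulated diag(gg^T)\n    adaptive_step = lr * g / (s + 1e-13) ** 0.5  # Adagrad-style scaling\n    newton_step = g / d_2_func(x)                # second-order step\n    x -= adaptive_step\n    print(round(adaptive_step, 4), round(newton_step, 4))\n```",
"_____no_output_____"
],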
[
"## Зачем мы вообще начали разговор о 2 порядке?\nОтвет в том, что методы с адаптивными градиентным являются аппроксимацией методов 2 порядка. Отсюда становится понятно, почему мы делим на матрицу квадратов в Adagrad и других его модификациях\n$$\n\\mathbb{E}[gg^{T}] \\sim \\mathbb{E}[H(x)]\n$$\nЗдесь:\n* $gg^{T}$ - квадратная матрица квадратов градиентов\n* $\\mathbb{E}[H(x)]$ - ожидаемое значение Гессиана (матрица градиентов 2 порядка). В адаптивном градиенте разница лишь в том, что мы берём $\\sqrt{diag(gg^{T})}$, так как $gg^{T}$ занимает слишком много места.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
ec954aeb7ff464c0d513d935627b43afffdaf81a | 15,796 | ipynb | Jupyter Notebook | 210124_kfold_AirQual_RandomForest.ipynb | MartinTschendel/AirQualityBeijing | f0da2b9013bdeb32449bae241b3e1eac730c3c27 | [
"MIT"
] | 1 | 2021-02-15T02:55:23.000Z | 2021-02-15T02:55:23.000Z | 210124_kfold_AirQual_RandomForest.ipynb | MartinTschendel/AirQualityBeijing | f0da2b9013bdeb32449bae241b3e1eac730c3c27 | [
"MIT"
] | null | null | null | 210124_kfold_AirQual_RandomForest.ipynb | MartinTschendel/AirQualityBeijing | f0da2b9013bdeb32449bae241b3e1eac730c3c27 | [
"MIT"
] | null | null | null | 34.264642 | 319 | 0.338124 | [
[
[
"import numpy as np\r\nimport pandas as pd\r\ndf = pd.read_csv('dataset_AirQual.csv')\r\n\r\n#use fillna() method to replace missing values with mean value\r\ndf['pm2.5'].fillna(df['pm2.5'].mean(), inplace = True)\r\n\r\n#one hot encoding\r\ncols = df.columns.tolist()\r\ndf_new = pd.get_dummies(df[cols])\r\n\r\n#put column pm2.5 at the end of the df\r\n#avoid one of the column rearrangement steps\r\ncols = df_new.columns.tolist()\r\ncols_new = cols[:5] + cols[6:] + cols[5:6]\r\ndf_new = df_new[cols_new]\r\ndf_new.head()",
"_____no_output_____"
]
],
[
[
"Before I start to build, train and validate the model, I want to check the correlation between the indepependent variables and the dependent variable pm2.5. The higher the cumulated wind speed (lws) and the more the wind is blowin from north west (cbwd_NW), the lower the concentration of pm2.5. <br>\r\nThe more the wind is blowing from south west (cbwd_cv) and the higher the dew point (DEWP), the higher the concentration of pm2.5 in the air. The dew point indicates the absolute humidity. During times with high humidity, more pm2.5 particles can connect themselves with water droplets, that hover in the air.",
"_____no_output_____"
]
],
[
[
"indep_var = cols_new[:-1]",
"_____no_output_____"
],
[
"df_new[indep_var].corrwith(df_new['pm2.5']).sort_values()",
"_____no_output_____"
],
[
"#get matrix arrays of dependent and independent variables\r\nX = df_new.iloc[:, :-1].values\r\ny = df_new.iloc[:, -1].values",
"_____no_output_____"
],
[
"#train random forest regression model\r\n\r\nfrom sklearn.preprocessing import StandardScaler\r\nfrom sklearn.linear_model import LinearRegression\r\nfrom sklearn.ensemble import RandomForestRegressor\r\n\r\n#training the model\r\ndef train(X_train, y):\r\n #scale the training set data\r\n sc = StandardScaler()\r\n X_train_trans = sc.fit_transform(X_train)\r\n regressor = RandomForestRegressor(n_estimators = 10, random_state=1)\r\n regressor.fit(X_train_trans, y)\r\n\r\n return regressor",
"_____no_output_____"
],
[
"from sklearn.preprocessing import StandardScaler\r\n\r\n#make predictions (apply model to new data)\r\ndef predict(X_val, regressor):\r\n #scale the new data\r\n sc = StandardScaler()\r\n X_val_trans = sc.fit_transform(X_val)\r\n y_pred = regressor.predict(X_val_trans)\r\n\r\n return y_pred",
"_____no_output_____"
],
[
"from sklearn.metrics import mean_squared_error\r\n\r\n#do k-fold cross-validation\r\nfrom sklearn.model_selection import KFold\r\nkfold = KFold(n_splits=10, shuffle=True, random_state=1)\r\nmse_list = []\r\n\r\n\r\nfor train_idx, val_idx in kfold.split(X):\r\n #split data in train & val sets\r\n X_train = X[train_idx]\r\n X_val = X[val_idx]\r\n y_train = y[train_idx]\r\n y_val = y[val_idx]\r\n #train model and make predictions\r\n model = train(X_train, y_train)\r\n y_pred = predict(X_val, model)\r\n #evaluate\r\n mse = mean_squared_error(y_val, y_pred)\r\n mse_list.append(mse) ",
"_____no_output_____"
],
[
"print('mse = %0.3f ± %0.3f' % (np.mean(mse_list), np.std(mse_list)))",
"mse = 2293.779 ± 135.446\n"
],
[
"#compare predicted values with real ones\r\nnp.set_printoptions(precision=2)\r\nconc_vec = np.concatenate((y_pred.reshape(len(y_pred),1), y_val.reshape(len(y_val),1)), 1)\r\nconc_vec[50:100]",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec955161c090d382ea5d16014d8557543ba4eb90 | 13,249 | ipynb | Jupyter Notebook | expirements.ipynb | YacineAll/ViT-pytorch_lightning | 98b9d70349942780fc16b54364a84557ae91fc67 | [
"Apache-2.0"
] | 3 | 2021-02-24T09:44:36.000Z | 2021-12-01T15:58:28.000Z | expirements.ipynb | YacineAll/ViT-pytorch_lightning | 98b9d70349942780fc16b54364a84557ae91fc67 | [
"Apache-2.0"
] | null | null | null | expirements.ipynb | YacineAll/ViT-pytorch_lightning | 98b9d70349942780fc16b54364a84557ae91fc67 | [
"Apache-2.0"
] | null | null | null | 26.02947 | 163 | 0.495358 | [
[
[
"import sys \nimport os\nimport argparse\nsys.path.append('./project/')\nsys.path.append(f'/users/Etu2/3701222/.local/lib/python3.7/site-packages')\n\n\nimport pytorch_lightning as pl\nfrom vision_transformer_org import VisionTransformer\nfrom lightning_modules import CIFAR10DataModule, LitClassifierModel, load_from_checkpoint\nfrom pytorch_lightning.callbacks import ModelCheckpoint, LearningRateMonitor\n",
"_____no_output_____"
],
[
"! mkdir -p /tempory/vit",
"_____no_output_____"
],
[
"directory = \"/tempory/vit\"\n\"\"\"\nargs = argparse.Namespace(\n ##########useful args\n fit=True,\n default_root_dir=f\"{directory}/model\",\n data_path=f\"{directory}/cifar10\",\n gpus=-1,\n ########Data Args##########\n image_size=224,\n num_classes=10,\n ########Training Args##########\n learning_rate=1e-4,\n val_size=0.2,\n batch_size=32,\n num_workers=14,\n #######Model args############\n patch_size=32,\n emb_dim=768,\n mlp_dim=3072,\n num_heads=12,\n num_layers=12,\n attn_dropout_rate=0.0,\n dropout_rate=0.1,\n ########Trainer Args##########\n progress_bar_refresh_rate=25,\n)\n\"\"\"\nargs = argparse.Namespace(\n ##########useful args\n fit=True,\n default_root_dir=f\"{directory}/model\",\n data_dir=f\"{directory}/cifar10\",\n gpus=-1,\n ########Data Args##########\n image_size=224,\n num_classes=10,\n ########Optimization Args##########\n learning_rate=1e-4,\n weight_decay=0.01,\n ########Training Args##########\n val_size=0.2,\n batch_size=16,\n num_workers=14,\n #######Model args############\n patch_size=16,\n emb_dim=768,\n mlp_dim=3072,\n num_heads=12,\n num_layers=12,\n attn_dropout_rate=0.0,\n dropout_rate=0.1,\n embedding_mode=\"linear\",\n ########Trainer Args##########\n progress_bar_refresh_rate=25,\n)\nload =True",
"_____no_output_____"
],
[
"datamodule = CIFAR10DataModule(**vars(args))\n\nvit_Backbone = VisionTransformer(**vars(args))\n\ncheckpoint_callback = ModelCheckpoint(\n monitor='val_acc',\n filename='vit-{epoch:02d}-{val_loss:.2f}-{val_acc:.2f}',\n mode='max',\n)\nlr_monitor = LearningRateMonitor(logging_interval='step')\n",
"_____no_output_____"
],
[
"trainer = pl.Trainer.from_argparse_args(args, callbacks=[checkpoint_callback, lr_monitor])",
"GPU available: True, used: True\nGPU available: True, used: True\nTPU available: None, using: 0 TPU cores\nTPU available: None, using: 0 TPU cores\nLOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]\nLOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]\n"
],
[
"if not load:\n model = LitClassifierModel(vit_Backbone, **vars(args))\nelse:\n model = LitClassifierModel(load_from_checkpoint(\n VisionTransformer, \n LitClassifierModel, \n hparams_file=\"/tempory/vit/model/lightning_logs/version_12/hparams.yaml\",\n checkpoint_file=\"/tempory/vit/model/lightning_logs/version_12/checkpoints/vit-epoch=16-val_loss=1.11-val_acc=0.74.ckpt\",\n ).backbone,\n **vars(args)\n ) ",
"_____no_output_____"
],
[
"trainer.fit(model, datamodule)",
"Files already downloaded and verified\nFiles already downloaded and verified\n"
],
[
"trainer.test(model, datamodule=datamodule)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec955f466721d9777f87d0909e2d81a23ae81f09 | 76,020 | ipynb | Jupyter Notebook | network_classification/relation_class_NSk.ipynb | HaTT2018/NET_louvain_DAN | f77ac0e846c3274535dff1928a0b2ce3915ff573 | [
"MIT"
] | 3 | 2021-11-19T08:07:33.000Z | 2022-01-06T08:30:59.000Z | network_classification/relation_class_NSk.ipynb | HaTT2018/NET_louvain_DAN | f77ac0e846c3274535dff1928a0b2ce3915ff573 | [
"MIT"
] | null | null | null | network_classification/relation_class_NSk.ipynb | HaTT2018/NET_louvain_DAN | f77ac0e846c3274535dff1928a0b2ce3915ff573 | [
"MIT"
] | null | null | null | 152.344689 | 59,736 | 0.844843 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport os\n\nimport ipdb",
"_____no_output_____"
],
[
"dir_ = os.listdir('../res/data_rel_class_NSk/')",
"_____no_output_____"
],
[
"res_set = []\n\nfor i in dir_:\n if i.split('.')[-1] != 'csv':\n continue\n res_term = i.split('_')[1]\n res_ind = int(res_term[res_term.find('s')+1:])\n if not (res_ind in res_set):\n res_set.append(res_ind)",
"_____no_output_____"
],
[
"rel_table = pd.DataFrame([], columns=['randseed', 'resolution', 'num_class', 'NSk_mean', 'NSk_min', 'NSk_max', 'NSk_std', 'TV', 'CH'])",
"_____no_output_____"
],
[
"for i in range(len(dir_)):\n try:\n index_ind = dir_[i].split('_')[2]\n except:\n continue\n \n if index_ind == 'index.csv':\n res_term = dir_[i].split('_')[1]\n res_ind = int(res_term[res_term.find('s')+1:])\n \n data = pd.read_csv('../res/data_rel_class_NSk/'+dir_[i])\n rel_table.loc[i, 'resolution'] = res_ind\n rel_table.loc[i, ['NSk_mean', 'NSk_min', 'NSk_max', 'NSk_std', 'TV', 'CH']] = data.loc[0, ['NSk_mean', 'NSk_min', 'NSk_max', 'NSk_std', 'TV', 'CH']]\n \n data2 = pd.read_csv('../res/data_rel_class_NSk/'+dir_[i+1])\n rel_table.loc[i, 'num_class'] = data2.iloc[:, 0].drop_duplicates().shape[0]\n \n rel_table.loc[i, 'randseed'] = int(dir_[i].split('_')[0])\n ",
"_____no_output_____"
],
[
"rel_table.loc[(rel_table['num_class']==5) & (rel_table['NSk_mean']<0.89)].sort_values(by='CH')",
"_____no_output_____"
],
[
"class_set = list(rel_table['num_class'].drop_duplicates().sort_values().values)\nclass_NSk_min = [rel_table[rel_table['num_class']==i]['NSk_mean'].values.min() for i in class_set]\nclass_TV_min = [rel_table[rel_table['num_class']==i]['TV'].values.min() for i in class_set]\nclass_CH_min = [rel_table[rel_table['num_class']==i]['CH'].values.min() for i in class_set]\n\nfig = plt.figure(figsize=[15, 4], dpi=100)\nax1 = fig.add_subplot(131)\nax1.plot(class_set, class_NSk_min)\nax1.plot(class_set, class_NSk_min, 'ro')\nax1.set_xlabel('number of classes')\nax1.set_ylabel('NSk mean')\n\nax2 = fig.add_subplot(132)\nax2.plot(class_set, class_TV_min)\nax2.plot(class_set, class_TV_min, 'ro')\nax2.set_xlabel('number of classes')\nax2.set_ylabel('TV mean')\n\nax1 = fig.add_subplot(133)\nax1.plot(class_set, class_CH_min)\nax1.plot(class_set, class_CH_min, 'ro')\nax1.set_xlabel('number of classes')\nax1.set_ylabel('CH mean')",
"_____no_output_____"
],
[
"res_df = pd.DataFrame([], columns=[['NSk', 'TV', 'CH']], index=range(len(class_CH_min)))\nres_df['NSk'] = class_NSk_min\nres_df['TV'] = class_TV_min\nres_df['CH'] = class_CH_min\nres_df.to_csv('../res/data_rel_class_NSk/results.csv')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec957d7052bbb8290b6dbaf1e25e238bdb2f5910 | 6,928 | ipynb | Jupyter Notebook | l3/Untitled.ipynb | Iamsdt/mlcourse.ai-practice | 15f7ed0c3abf8950662b72a0de38c2fe129a1746 | [
"Apache-2.0"
] | null | null | null | l3/Untitled.ipynb | Iamsdt/mlcourse.ai-practice | 15f7ed0c3abf8950662b72a0de38c2fe129a1746 | [
"Apache-2.0"
] | null | null | null | l3/Untitled.ipynb | Iamsdt/mlcourse.ai-practice | 15f7ed0c3abf8950662b72a0de38c2fe129a1746 | [
"Apache-2.0"
] | 1 | 2019-09-11T16:41:31.000Z | 2019-09-11T16:41:31.000Z | 17.022113 | 60 | 0.467234 | [
[
[
"# Question 1",
"_____no_output_____"
],
[
"### Identifying a topic of a live-chat with a customer",
"_____no_output_____"
],
[
"# Question 2",
"_____no_output_____"
],
[
"## logN",
"_____no_output_____"
],
[
"# Question 3",
"_____no_output_____"
],
[
"# Calculate S1\n- blue 4\n- red 5",
"_____no_output_____"
]
],
[
[
"import math\nb = (4/9)\nr = (5/9)\n\ns1 = -b*math.log(b, 2) - r*math.log(r, 2)\ns1",
"_____no_output_____"
]
],
[
[
"# Calculate s2",
"_____no_output_____"
]
],
[
[
"b = (5/11)\nr = (6/11)\ns2 = -b*math.log(b, 2) - r*math.log(r, 2)\ns2",
"_____no_output_____"
]
],
[
[
"# Calculate S0",
"_____no_output_____"
]
],
[
[
"b = (9/20)\nr = (11/20)\ns0 = -b*math.log(b, 2) - r*math.log(r, 2)\ns0",
"_____no_output_____"
]
],
[
[
"# Calculate IG",
"_____no_output_____"
]
],
[
[
"s1 = b*s1\ns1",
"_____no_output_____"
],
[
"s2 = r*s2\ns2",
"_____no_output_____"
],
[
"s0 - s1 - s2",
"_____no_output_____"
]
],
[
[
"# Question 4",
"_____no_output_____"
],
[
"## answer: 2,3, 4",
"_____no_output_____"
],
[
"# Question 5",
"_____no_output_____"
],
[
"# answer: 7",
"_____no_output_____"
]
],
[
[
"# Question 6",
"_____no_output_____"
],
[
"# Question 7",
"_____no_output_____"
],
[
"# 3",
"_____no_output_____"
],
[
"# Question 8",
"_____no_output_____"
],
[
"# 1 true",
"_____no_output_____"
],
[
"# Question 9",
"_____no_output_____"
],
[
"# 2 NO",
"_____no_output_____"
],
[
"# Question 10",
"_____no_output_____"
],
[
"2, 3, 4",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec958941e7b93992acd1346f55cb2361c28c86d6 | 1,555 | ipynb | Jupyter Notebook | Solution/Day_12_Solution.ipynb | YenLinWu/DataScienceMarathon | 06ac92af91c02c5a2f9530fbfe68339df4fe177f | [
"MIT"
] | 8 | 2021-02-25T08:26:52.000Z | 2022-01-01T07:51:52.000Z | Solution/Day_12_Solution.ipynb | YenLinWu/DataScienceMarathon | 06ac92af91c02c5a2f9530fbfe68339df4fe177f | [
"MIT"
] | null | null | null | Solution/Day_12_Solution.ipynb | YenLinWu/DataScienceMarathon | 06ac92af91c02c5a2f9530fbfe68339df4fe177f | [
"MIT"
] | 6 | 2021-01-28T14:26:21.000Z | 2022-03-21T12:58:46.000Z | 1,555 | 1,555 | 0.668167 | [
[
[
"作業目標:<br>\r\n1. 靈活運用圖表在各種情況下\r\n2. 圖表的解讀",
"_____no_output_____"
],
[
"作業重點:<br>\r\n1. 依據需求畫出圖表<br>\r\n2. 在做圖表解釋時,須了解圖表中的含意",
"_____no_output_____"
],
[
"題目 : 將資料夾中boston.csv讀進來,並用圖表分析欄位。<br>\r\n1.畫出箱型圖,並判斷哪個欄位的中位數在300~400之間?<br>\r\n2.畫出散佈圖 x='NOX', y='DIS' ,並說明這兩欄位有什麼關係?\r\n",
"_____no_output_____"
]
],
[
[
"import pandas as pd\r\nimport numpy as np",
"_____no_output_____"
],
[
"#1.畫出箱型圖,並判斷哪個欄位的中位數在300~400之間?\r\n#欄位TAX\r\ndf = pd.read_csv(\"boston.csv\")\r\ndf.boxplot()",
"/content\n"
],
[
"#2. 畫出散佈圖 x='NOX', y='DIS' ,並說明這兩欄位有什麼關係?\r\n#成反比關係\r\ndf.plot.scatter(x='RM', y='LSTAT')",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
ec959e1d265ee80872da536a5b090d40a41854a2 | 52,621 | ipynb | Jupyter Notebook | lab.ipynb | SealedSaint/CarND-Term1-TensorFlow-notMNIST | 01b89dcf05107ed2361c46260f0ffd08d509ea37 | [
"MIT"
] | null | null | null | lab.ipynb | SealedSaint/CarND-Term1-TensorFlow-notMNIST | 01b89dcf05107ed2361c46260f0ffd08d509ea37 | [
"MIT"
] | null | null | null | lab.ipynb | SealedSaint/CarND-Term1-TensorFlow-notMNIST | 01b89dcf05107ed2361c46260f0ffd08d509ea37 | [
"MIT"
] | null | null | null | 60.903935 | 23,506 | 0.739933 | [
[
[
"<h1 align=\"center\">TensorFlow Neural Network Lab</h1>",
"_____no_output_____"
],
[
"<img src=\"image/notmnist.png\">\nIn this lab, you'll use all the tools you learned from *Introduction to TensorFlow* to label images of English letters! The data you are using, <a href=\"http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html\">notMNIST</a>, consists of images of a letter from A to J in differents font.\n\nThe above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!",
"_____no_output_____"
],
[
"To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print \"`All modules imported`\".",
"_____no_output_____"
]
],
[
[
"import hashlib\nimport os\nimport pickle\nfrom urllib.request import urlretrieve\n\nimport numpy as np\nfrom PIL import Image\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import LabelBinarizer\nfrom sklearn.utils import resample\nfrom tqdm import tqdm\nfrom zipfile import ZipFile\n\nprint('All modules imported.')",
"All modules imported.\n"
]
],
[
[
"The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).",
"_____no_output_____"
]
],
[
[
"def download(url, file):\n \"\"\"\n Download file from <url>\n :param url: URL to file\n :param file: Local file path\n \"\"\"\n if not os.path.isfile(file):\n print('Downloading ' + file + '...')\n urlretrieve(url, file)\n print('Download Finished')\n\n# Download the training and test dataset.\ndownload('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')\ndownload('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')\n\n# Make sure the files aren't corrupted\nassert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\\\n 'notMNIST_train.zip file is corrupted. Remove the file and try again.'\nassert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\\\n 'notMNIST_test.zip file is corrupted. Remove the file and try again.'\n\n# Wait until you see that all files have been downloaded.\nprint('All files downloaded.')",
"Downloading notMNIST_train.zip...\nDownload Finished\nDownloading notMNIST_test.zip...\nDownload Finished\nAll files downloaded.\n"
],
[
"def uncompress_features_labels(file):\n \"\"\"\n Uncompress features and labels from a zip file\n :param file: The zip file to extract the data from\n \"\"\"\n features = []\n labels = []\n\n with ZipFile(file) as zipf:\n # Progress Bar\n filenames_pbar = tqdm(zipf.namelist(), unit='files')\n \n # Get features and labels from all files\n for filename in filenames_pbar:\n # Check if the file is a directory\n if not filename.endswith('/'):\n with zipf.open(filename) as image_file:\n image = Image.open(image_file)\n image.load()\n # Load image data as 1 dimensional array\n # We're using float32 to save on memory space\n feature = np.array(image, dtype=np.float32).flatten()\n\n # Get the the letter from the filename. This is the letter of the image.\n label = os.path.split(filename)[1][0]\n\n features.append(feature)\n labels.append(label)\n return np.array(features), np.array(labels)\n\n# Get the features and labels from the zip files\ntrain_features, train_labels = uncompress_features_labels('notMNIST_train.zip')\ntest_features, test_labels = uncompress_features_labels('notMNIST_test.zip')\n\n# Limit the amount of data to work with a docker container\ndocker_size_limit = 150000\ntrain_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)\n\n# Set flags for feature engineering. This will prevent you from skipping an important step.\nis_features_normal = False\nis_labels_encod = False\n\n# Wait until you see that all features and labels have been uncompressed.\nprint('All features and labels uncompressed.')",
"100%|██████████| 210001/210001 [00:46<00:00, 4545.94files/s]\n100%|██████████| 10001/10001 [00:02<00:00, 4618.16files/s]\n"
]
],
[
[
"<img src=\"image/mean_variance.png\" style=\"height: 75%;width: 75%; position: relative; right: 5%\">\n## Problem 1\nThe first problem involves normalizing the features for your training and test data.\n\nImplement Min-Max scaling in the `normalize()` function to a range of `a=0.1` and `b=0.9`. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.\n\nSince the raw notMNIST image data is in [grayscale](https://en.wikipedia.org/wiki/Grayscale), the current values range from a min of 0 to a max of 255.\n\nMin-Max Scaling:\n$\nX'=a+{\\frac {\\left(X-X_{\\min }\\right)\\left(b-a\\right)}{X_{\\max }-X_{\\min }}}\n$\n\n*If you're having trouble solving problem 1, you can view the solution [here](https://github.com/udacity/CarND-TensorFlow-Lab/blob/master/solutions.ipynb).*",
"_____no_output_____"
]
],
[
[
"# Problem 1 - Implement Min-Max scaling for grayscale image data\ndef normalize_grayscale(image_data):\n \"\"\"\n Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]\n :param image_data: The image data to be normalized\n :return: Normalized image data\n \"\"\"\n # TODO: Implement Min-Max scaling for grayscale image data\n i_min, i_max = 0, 255\n f_min, f_max = 0.1, 0.9\n zero_one_scaled = image_data / 255\n return zero_one_scaled * (f_max - f_min) + f_min\n\n### DON'T MODIFY ANYTHING BELOW ###\n# Test Cases\nnp.testing.assert_array_almost_equal(\n normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),\n [0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,\n 0.125098039216, 0.128235294118, 0.13137254902, 0.9],\n decimal=3)\nnp.testing.assert_array_almost_equal(\n normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),\n [0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,\n 0.896862745098, 0.9])\n\nif not is_features_normal:\n train_features = normalize_grayscale(train_features)\n test_features = normalize_grayscale(test_features)\n is_features_normal = True\n\nprint('Tests Passed!')",
"Tests Passed!\n"
],
[
"if not is_labels_encod:\n # Turn labels into numbers and apply One-Hot Encoding\n encoder = LabelBinarizer()\n encoder.fit(train_labels)\n train_labels = encoder.transform(train_labels)\n test_labels = encoder.transform(test_labels)\n\n # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32\n train_labels = train_labels.astype(np.float32)\n test_labels = test_labels.astype(np.float32)\n is_labels_encod = True\n\nprint('Labels One-Hot Encoded')",
"Labels One-Hot Encoded\n"
],
[
"assert is_features_normal, 'You skipped the step to normalize the features'\nassert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'\n\n# Get randomized datasets for training and validation\ntrain_features, valid_features, train_labels, valid_labels = train_test_split(\n train_features,\n train_labels,\n test_size=0.05,\n random_state=832289)\n\nprint('Training features and labels randomized and split.')",
"Training features and labels randomized and split.\n"
],
[
"# Save the data for easy access\npickle_file = 'notMNIST.pickle'\nif not os.path.isfile(pickle_file):\n print('Saving data to pickle file...')\n try:\n with open('notMNIST.pickle', 'wb') as pfile:\n pickle.dump(\n {\n 'train_dataset': train_features,\n 'train_labels': train_labels,\n 'valid_dataset': valid_features,\n 'valid_labels': valid_labels,\n 'test_dataset': test_features,\n 'test_labels': test_labels,\n },\n pfile, pickle.HIGHEST_PROTOCOL)\n except Exception as e:\n print('Unable to save data to', pickle_file, ':', e)\n raise\n\nprint('Data cached in pickle file.')",
"Saving data to pickle file...\nData cached in pickle file.\n"
]
],
[
[
"# Checkpoint\nAll your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\n# Load the modules\nimport pickle\nimport math\n\nimport numpy as np\nimport tensorflow as tf\nfrom tqdm import tqdm\nimport matplotlib.pyplot as plt\n\n# Reload the data\npickle_file = 'notMNIST.pickle'\nwith open(pickle_file, 'rb') as f:\n pickle_data = pickle.load(f)\n train_features = pickle_data['train_dataset']\n train_labels = pickle_data['train_labels']\n valid_features = pickle_data['valid_dataset']\n valid_labels = pickle_data['valid_labels']\n test_features = pickle_data['test_dataset']\n test_labels = pickle_data['test_labels']\n del pickle_data # Free up memory\n\n\nprint('Data and modules loaded.')",
"Data and modules loaded.\n"
]
],
[
[
"<img src=\"image/weight_biases.png\" style=\"height: 60%;width: 60%; position: relative; right: 10%\">\n## Problem 2\nFor the neural network to train on your data, you need the following <a href=\"https://www.tensorflow.org/resources/dims_types.html#data-types\">float32</a> tensors:\n - `features`\n - Placeholder tensor for feature data (`train_features`/`valid_features`/`test_features`)\n - `labels`\n - Placeholder tensor for label data (`train_labels`/`valid_labels`/`test_labels`)\n - `weights`\n - Variable Tensor with random numbers from a truncated normal distribution.\n - See <a href=\"https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal\">`tf.truncated_normal()` documentation</a> for help.\n - `biases`\n - Variable Tensor with all zeros.\n - See <a href=\"https://www.tensorflow.org/api_docs/python/constant_op.html#zeros\"> `tf.zeros()` documentation</a> for help.\n\n*If you're having trouble solving problem 2, review \"TensorFlow Linear Function\" section of the class. If that doesn't help, the solution for this problem is available [here](https://github.com/udacity/CarND-TensorFlow-Lab/blob/master/solutions.ipynb).*",
"_____no_output_____"
]
],
[
[
"features_count = 784\nlabels_count = 10\n\n# TODO: Set the features and labels tensors\nfeatures = tf.placeholder(tf.float32, [None, features_count])\nlabels = tf.placeholder(tf.float32, [None, labels_count])\n\n# TODO: Set the weights and biases tensors\nweights = tf.Variable(tf.truncated_normal([features_count, labels_count]))\nbiases = tf.Variable(tf.zeros([labels_count]))\n\n\n\n### DON'T MODIFY ANYTHING BELOW ###\n\n#Test Cases\nfrom tensorflow.python.ops.variables import Variable\n\nassert features._op.name.startswith('Placeholder'), 'features must be a placeholder'\nassert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'\nassert isinstance(weights, Variable), 'weights must be a TensorFlow variable'\nassert isinstance(biases, Variable), 'biases must be a TensorFlow variable'\n\nassert features._shape == None or (\\\n features._shape.dims[0].value is None and\\\n features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'\nassert labels._shape == None or (\\\n labels._shape.dims[0].value is None and\\\n labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'\nassert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'\nassert biases._variable._shape == (10), 'The shape of biases is incorrect'\n\nassert features._dtype == tf.float32, 'features must be type float32'\nassert labels._dtype == tf.float32, 'labels must be type float32'\n\n# Feed dicts for training, validation, and test session\ntrain_feed_dict = {features: train_features, labels: train_labels}\nvalid_feed_dict = {features: valid_features, labels: valid_labels}\ntest_feed_dict = {features: test_features, labels: test_labels}\n\n# Linear Function WX + b\nlogits = tf.matmul(features, weights) + biases\n\nprediction = tf.nn.softmax(logits)\n\n# Cross entropy\ncross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)\n\n# Training loss\nloss = tf.reduce_mean(cross_entropy)\n\n# Create an operation that initializes all variables\ninit = tf.global_variables_initializer()\n\n# Test Cases\nwith tf.Session() as session:\n session.run(init)\n session.run(loss, feed_dict=train_feed_dict)\n session.run(loss, feed_dict=valid_feed_dict)\n session.run(loss, feed_dict=test_feed_dict)\n biases_data = session.run(biases)\n\nassert not np.count_nonzero(biases_data), 'biases must be zeros'\n\nprint('Tests Passed!')",
"Tests Passed!\n"
],
[
"# Determine if the predictions are correct\nis_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))\n# Calculate the accuracy of the predictions\naccuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))\n\nprint('Accuracy function created.')",
"Accuracy function created.\n"
]
],
[
[
"<img src=\"image/learn_rate_tune.png\" style=\"height: 60%;width: 60%\">\n## Problem 3\nBelow are 3 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best acccuracy.\n\nParameter configurations:\n\nConfiguration 1\n* **Epochs:** 1\n* **Batch Size:**\n * 2000\n * 1000\n * 500\n * 300\n * 50\n* **Learning Rate:** 0.01\n\nConfiguration 2\n* **Epochs:** 1\n* **Batch Size:** 100\n* **Learning Rate:**\n * 0.8\n * 0.5\n * 0.1\n * 0.05\n * 0.01\n\nConfiguration 3\n* **Epochs:**\n * 1\n * 2\n * 3\n * 4\n * 5\n* **Batch Size:** 100\n* **Learning Rate:** 0.2\n\nThe code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.\n\n*If you're having trouble solving problem 3, you can view the solution [here](https://github.com/udacity/CarND-TensorFlow-Lab/blob/master/solutions.ipynb).*",
"_____no_output_____"
]
],
[
[
"# TODO: Find the best parameters for each configuration\n# epochs = 1\n# batch_size = 50\n# learning_rate = .01\n\n# epochs = 1\n# batch_size = 100\n# learning_rate = .5\n\n# 5 = .78\nepochs = 1\nbatch_size = 100\nlearning_rate = .2\n\n\n### DON'T MODIFY ANYTHING BELOW ###\n# Gradient Descent\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) \n\n# The accuracy measured against the validation set\nvalidation_accuracy = 0.0\n\n# Measurements use for graphing loss and accuracy\nlog_batch_step = 50\nbatches = []\nloss_batch = []\ntrain_acc_batch = []\nvalid_acc_batch = []\n\nwith tf.Session() as session:\n session.run(init)\n batch_count = int(math.ceil(len(train_features)/batch_size))\n\n for epoch_i in range(epochs):\n \n # Progress bar\n batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')\n \n # The training cycle\n for batch_i in batches_pbar:\n # Get a batch of training features and labels\n batch_start = batch_i*batch_size\n batch_features = train_features[batch_start:batch_start + batch_size]\n batch_labels = train_labels[batch_start:batch_start + batch_size]\n\n # Run optimizer and get loss\n _, l = session.run(\n [optimizer, loss],\n feed_dict={features: batch_features, labels: batch_labels})\n\n # Log every 50 batches\n if not batch_i % log_batch_step:\n # Calculate Training and Validation accuracy\n training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)\n validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)\n\n # Log batches\n previous_batch = batches[-1] if batches else 0\n batches.append(log_batch_step + previous_batch)\n loss_batch.append(l)\n train_acc_batch.append(training_accuracy)\n valid_acc_batch.append(validation_accuracy)\n\n # Check accuracy against Validation data\n validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)\n\nloss_plot = plt.subplot(211)\nloss_plot.set_title('Loss')\nloss_plot.plot(batches, loss_batch, 'g')\nloss_plot.set_xlim([batches[0], batches[-1]])\nacc_plot = plt.subplot(212)\nacc_plot.set_title('Accuracy')\nacc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')\nacc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')\nacc_plot.set_ylim([0, 1.0])\nacc_plot.set_xlim([batches[0], batches[-1]])\nacc_plot.legend(loc=4)\nplt.tight_layout()\nplt.show()\n\nprint('Validation accuracy at {}'.format(validation_accuracy))",
"Epoch 1/1: 100%|██████████| 1425/1425 [00:16<00:00, 85.30batches/s] \n"
]
],
[
[
"## Test\nSet the epochs, batch_size, and learning_rate with the best learning parameters you discovered in problem 3. You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.",
"_____no_output_____"
]
],
[
[
"# TODO: Set the epochs, batch_size, and learning_rate with the best parameters from problem 3\nepochs = 5\nbatch_size = 50\nlearning_rate = .01\n\n\n\n### DON'T MODIFY ANYTHING BELOW ###\n# The accuracy measured against the test set\ntest_accuracy = 0.0\n\nwith tf.Session() as session:\n \n session.run(init)\n batch_count = int(math.ceil(len(train_features)/batch_size))\n\n for epoch_i in range(epochs):\n \n # Progress bar\n batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')\n \n # The training cycle\n for batch_i in batches_pbar:\n # Get a batch of training features and labels\n batch_start = batch_i*batch_size\n batch_features = train_features[batch_start:batch_start + batch_size]\n batch_labels = train_labels[batch_start:batch_start + batch_size]\n\n # Run optimizer\n _ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})\n\n # Check accuracy against Test data\n test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)\n\n\nassert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)\nprint('Nice Job! Test Accuracy is {}'.format(test_accuracy))",
"Epoch 1/5: 100%|██████████| 2850/2850 [00:02<00:00, 1113.80batches/s]\nEpoch 2/5: 100%|██████████| 2850/2850 [00:02<00:00, 1153.74batches/s]\nEpoch 3/5: 100%|██████████| 2850/2850 [00:02<00:00, 1145.07batches/s]\nEpoch 4/5: 100%|██████████| 2850/2850 [00:02<00:00, 1048.08batches/s]\nEpoch 5/5: 100%|██████████| 2850/2850 [00:02<00:00, 1119.99batches/s]"
]
],
[
[
"# Multiple layers\nGood job! You built a one layer TensorFlow network! However, you want to build more than one layer. This is deep learning after all! In the next section, you will start to satisfy your need for more layers.",
"_____no_output_____"
]
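,
[
"# A minimal sketch (not part of the original lab) of how a hidden layer could be added.\n# The hidden size of 256 and the tf.nn.relu activation are illustrative assumptions, not the lab's solution.\nhidden_size = 256\nweights_hidden = tf.Variable(tf.truncated_normal([features_count, hidden_size]))\nbiases_hidden = tf.Variable(tf.zeros([hidden_size]))\nweights_out = tf.Variable(tf.truncated_normal([hidden_size, labels_count]))\nbiases_out = tf.Variable(tf.zeros([labels_count]))\n\n# First layer: linear transform followed by a ReLU non-linearity\nhidden = tf.nn.relu(tf.matmul(features, weights_hidden) + biases_hidden)\n# Second layer: linear transform producing the class logits\nlogits_2layer = tf.matmul(hidden, weights_out) + biases_out",
"_____no_output_____"
]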
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ec95a315c77a53d71e9017eb1014fdc3034fabcc | 13,323 | ipynb | Jupyter Notebook | 6_Extending_the_Toolkit-Part2.ipynb | vicory/CourseInBiomedicalImageAnalysisVisualizationAndArtificialIntelligence | ed848e8d7cb781019adb453660535f184f2a15d6 | [
"Apache-2.0"
] | null | null | null | 6_Extending_the_Toolkit-Part2.ipynb | vicory/CourseInBiomedicalImageAnalysisVisualizationAndArtificialIntelligence | ed848e8d7cb781019adb453660535f184f2a15d6 | [
"Apache-2.0"
] | null | null | null | 6_Extending_the_Toolkit-Part2.ipynb | vicory/CourseInBiomedicalImageAnalysisVisualizationAndArtificialIntelligence | ed848e8d7cb781019adb453660535f184f2a15d6 | [
"Apache-2.0"
] | null | null | null | 27.024341 | 331 | 0.575246 | [
[
[
"# Extend ITK with your own module - Part 2",
"_____no_output_____"
],
[
"## Creation of a remote module: Overview\n\n1. The developer creates a new module containing new ITK filters.\n * The new module is its own independent GitHub project.\n * The new module can be easily be compiled and used in combination with ITK.\n2. The developer writes an Insight Journal article\n * The module is more visible to the community.\n * An option can be added to ITK to compile the remote module as part of ITK.",
"_____no_output_____"
],
[
"## Creation of a remote module: details\n\n* The template project source code is here: [https://github.com/InsightSoftwareConsortium/ITKModuleTemplate](https://github.com/InsightSoftwareConsortium/ITKModuleTemplate)",
"_____no_output_____"
],
[
"* Run the following commands:\n python -m pip install cookiecutter\n python -m cookiecutter gh:InsightSoftwareConsortium/ITKModuleTemplate",
"_____no_output_____"
],
[
"* Provide requested information.\n Answer the following questions (Pressing \"Enter\" will use the default option):\n full_name [Insight Software Consortium]:\n email [[email protected]]:\n github_username [itkrobot]:\n project_name [ITKModuleTemplate]: \n module_name [ModuleTemplate]: \n python_package_name [itk-moduletemplate]: \n download_url [https://github.com/InsightSoftwareConsortium/ITKModuleTemplate]: \n project_short_description [This is a template that serves as a starting point for a new module.]: \n project_long_description [ITK is an open-source, cross-platform library that provides developers with an extensive suite of software tools for image analysis. Developed through extreme programming methodologies, ITK employs leading-edge algorithms for registering and segmenting multidimensional scientific images.]: ",
"_____no_output_____"
],
[
"## New Module Content\n<pre>\n (itk) fbudin:ITKModuleTemplate/ $ tree -a\n .\n ├── appveyor.yml\n ├── .circleci\n │ └── config.yml\n ├── CMakeLists.txt\n ├── CTestConfig.cmake\n ├── include\n │ ├── itkMinimalStandardRandomVariateGenerator.h\n │ ├── itkMyFilter.h\n │ ├── itkMyFilter.hxx\n │ ├── itkNormalDistributionImageSource.h\n │ └── itkNormalDistributionImageSource.hxx\n ├── itk-module.cmake\n ├── LICENSE\n ├── README.rst\n ├── setup.py\n</pre>",
"_____no_output_____"
],
[
"<pre>\n ├── src\n │ ├── CMakeLists.txt\n │ └── itkMinimalStandardRandomVariateGenerator.cxx\n ├── test\n │ ├── Baseline\n │ │ ├── itkMyFilterTestOutput.mha.sha512\n │ │ └── itkNormalDistributionImageSourceTestOutput.mha.sha512\n │ ├── CMakeLists.txt\n │ ├── itkMinimalStandardRandomVariateGeneratorTest.cxx\n │ ├── itkMyFilterTest.cxx\n │ └── itkNormalDistributionImageSourceTest.cxx\n ├── .travis.yml\n └── wrapping\n ├── CMakeLists.txt\n ├── itkMinimalStandardRandomVariateGenerator.wrap\n └── itkNormalDistributionImageSource.wrap\n</pre>",
"_____no_output_____"
],
[
"## Directory structure\n\n* `src` and `include`: header files and source code\n* `test`: unit tests\n* `wrapping`: Required files to automatically create Python bindings.",
"_____no_output_____"
],
[
"## Filter code\n\n<pre>\ntemplate< typename TInputImage, typename TOutputImage >\nvoid\nMyFilter< TInputImage, TOutputImage >\n::DynamicThreadedGenerateData( const OutputRegionType & outputRegion)\n{\n OutputImageType * output = this->GetOutput();\n const InputImageType * input = this->GetInput();\n using InputRegionType = typename InputImageType::RegionType;\n InputRegionType inputRegion = InputRegionType(outputRegion.GetSize());\n\n itk::ImageRegionConstIterator<InputImageType> in(input, inputRegion);\n itk::ImageRegionIterator<OutputImageType> out(output, outputRegion);\n\n for (in.GoToBegin(), out.GoToBegin(); !in.IsAtEnd() && !out.IsAtEnd(); ++in, ++out)\n {\n out.Set( in.Get() );\n }\n}\n</pre>",
"_____no_output_____"
],
[
"## Continuous integration\n\n* Appveyor (Windows)\n* Travis (MacOS)\n* CircleCI (Linux)\n* Azure pipeline (Windows, Linux, MacOS)",
"_____no_output_____"
],
[
"## Python packages\n\n* Automatically generated with Azure Pipeline\n* Python Wheels automatically uploaded to [PyPI.org](https://pypi.org/search/?q=itk)",
"_____no_output_____"
],
[
"## Where to find more information:\n\n* ITK Software Guide\n * [Configuring and building ITK](https://itk.org/ITKSoftwareGuide/html/Book1/ITKSoftwareGuide-Book1ch2.html#x22-130002)\n * [Create a remote module](https://itk.org/ITKSoftwareGuide/html/Book1/ITKSoftwareGuide-Book1ch9.html#x55-1640009.7)\n * [How to write a filter](https://itk.org/ITKSoftwareGuide/html/Book1/ITKSoftwareGuide-Book1ch8.html#x54-1330008)\n * [Iterators](https://itk.org/ITKSoftwareGuide/html/Book1/ITKSoftwareGuide-Book1ch6.html#x44-1020006)\n * [Modules](https://itk.org/ITKSoftwareGuide/html/Book1/ITKSoftwareGuide-Book1ch9.html#x48-1480009)\n* [Discourse forum](https://discourse.itk.org/)",
"_____no_output_____"
],
[
"## Exercises",
"_____no_output_____"
],
[
"### Exercise 1: Create the skeleton of a remote module\n\n* Hint1: Open a Notebook terminal (File->Open, New->Terminal)\n* Hint2: You will need to add the argument '--no-input' to the command you are using. This is a limitation due to this notebook environment.",
"_____no_output_____"
]
],
[
[
"# %load solutions/6_Extending_the_toolkit_exercise1.py",
"_____no_output_____"
]
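,
[
"# One possible sketch of an answer (compare it with the loaded solution above): running cookiecutter\n# non-interactively from the notebook, using the --no-input flag mentioned in Hint 2 so the defaults are used.\n!python -m pip install cookiecutter\n!python -m cookiecutter gh:InsightSoftwareConsortium/ITKModuleTemplate --no-input",
"_____no_output_____"
]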
],
[
[
"### Exercise 2: Modify the filter\n\n* Add a constant value\n* Multiply by a constant factor\n",
"_____no_output_____"
]
],
[
[
"# %load solutions/6_Extending_the_toolkit_exercise2.py",
"_____no_output_____"
]
],
[
[
"## Github and Continuous Integration (CI)",
"_____no_output_____"
],
[
"Taking a look at an existing remote module: [ITKSplitComponents](https://github.com/InsightSoftwareConsortium/ITKSplitComponents)",
"_____no_output_____"
],
[
"* It is very similar to the module we created on our computer.\n* You can modify it directly in your browser.",
"_____no_output_____"
],
[
"[](https://github.com/InsightSoftwareConsortium/ITKSplitComponents)",
"_____no_output_____"
],
[
"[](https://github.com/InsightSoftwareConsortium/ITKSplitComponents/blob/master/README.rst)",
"_____no_output_____"
],
[
"[](https://github.com/InsightSoftwareConsortium/ITKSplitComponents/edit/master/README.rst)",
"_____no_output_____"
],
[
"[](https://github.com/InsightSoftwareConsortium/ITKSplitComponents)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"[](https://github.com/InsightSoftwareConsortium/ITKTwoProjectionRegistration/pull/11/checks?check_run_id=77068613)",
"_____no_output_____"
],
[
"[](https://dev.azure.com/InsightSoftwareConsortium/ITKModules/_build/results?buildId=217)",
"_____no_output_____"
],
[
"[](https://dev.azure.com/InsightSoftwareConsortium/ITKModules/_release?view=mine&definitionId=6)",
"_____no_output_____"
],
[
"[](https://dev.azure.com/InsightSoftwareConsortium/ITKModules/_releaseProgress?_a=release-pipeline-progress&releaseId=39)",
"_____no_output_____"
],
[
"[](https://dev.azure.com/InsightSoftwareConsortium/ITKModules/_releaseProgress?_a=release-pipeline-progress&releaseId=33)",
"_____no_output_____"
],
[
"[](https://pypi.org/project/itk-splitcomponents/#files)",
"_____no_output_____"
],
[
"### Enjoy ITK!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ec95a96f67a1c87b42bda293887a6836c5d57bf6 | 13,979 | ipynb | Jupyter Notebook | Generators/Generator functions.ipynb | cr62-b/dat129_ccac | 7fc9ba89e32aa4e3850ae1a2347a85eec60e9535 | [
"Apache-2.0"
] | 1 | 2020-12-15T16:24:42.000Z | 2020-12-15T16:24:42.000Z | Generators/Generator functions.ipynb | cr62-b/dat129_ccac | 7fc9ba89e32aa4e3850ae1a2347a85eec60e9535 | [
"Apache-2.0"
] | null | null | null | Generators/Generator functions.ipynb | cr62-b/dat129_ccac | 7fc9ba89e32aa4e3850ae1a2347a85eec60e9535 | [
"Apache-2.0"
] | null | null | null | 33.929612 | 1,417 | 0.572716 | [
[
[
"# Generators and Iterators",
"_____no_output_____"
],
[
"# dat129_ccac\nA collection of example code using generators with the build in filter method lambdas for dat129 Python 2.",
"_____no_output_____"
],
[
"## Iterable\nAn iterable object is an object that implements __iter__, which is expected to return an iterator object.\nA list, strings, tuple, dictionary, set and any custom object which either returns a value from their __iter__() method.\nSimply said it looped over or is iterable.\nReference for python iterators:\n[Python Iterator](https://wiki.python.org/moin/Iterator)",
"_____no_output_____"
]
],
[
[
"my_list = [1,2,3]\nprint(my_list)\n\nfor value in my_list:\n print(value) #looping over the list to display one valve at a time\n\nprint(\"-\"*127)\n#print the list of dunder methods associated with my-list list object\nprint(dir(my_list)) #if the dir function lists the __iter__ (dunder method iter) is is iterable and can be looped over",
"[1, 2, 3]\n1\n2\n3\n-------------------------------------------------------------------------------------------------------------------------------\n['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__reversed__', '__rmul__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', 'append', 'clear', 'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']\n"
]
],
[
[
"## Iterators\nAn iterator is an object that can be iterated upon, meaning that you can traverse through all the values. An iterator is an object which consist of the dunder methods \\__iter__() and \\__next__() .\nAn iterator is an object that implements next method, which is expected to return the next element of the iterable object (list, string, tuple, dictionary) that returned it, and raise a StopIteration exception when no more elements are available.\n\nReference for python iterators:\n[Python Iterator](https://wiki.python.org/moin/Iterator)",
"_____no_output_____"
]
],
[
[
"#Using a while loop with a try and exception to manually insert a StopIteration exception\nmy_list = [1,2,3] #numberic list\nmy_iter = iter(my_list) #calls the iter method in the background\n\nwhile True:\n try:\n item = next(my_iter)\n print(item)\n except StopIteration:\n break\n\nprint(\"-\"*127)\nprint(dir(my_iter))",
"1\n2\n3\n-------------------------------------------------------------------------------------------------------------------------------\n['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__length_hint__', '__lt__', '__ne__', '__new__', '__next__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setstate__', '__sizeof__', '__str__', '__subclasshook__']\n"
],
[
"#Using the iterator method and the next method to display the values in the list\nmy_list = [1,2,3] #numberic list\nmy_iter = iter(my_list) #calls the iter method in the background\n\nprint(my_iter) #displays the list iterator object and memory location\nprint(next(my_iter)) #prints the first value in the list object\nprint(next(my_iter)) #prints the second value in the list object using the next dunder method\nprint(next(my_iter)) #prints the third valve in the list object using the next dunder method\nprint(next(my_iter)) #will print the \"StopIteration\" exception because it has exhausted all of the values\n\nprint(\"-\"*127)\nprint(dir(my_iter)) #print the list of dunder methods associated with my-iter iterator object\n#Notice the StopIteration exception; the for loop handled the exception in the back ground and the while loop used an exception\n#Note: an iterator can never go backwards",
"<list_iterator object at 0x000001E68D6D0B48>\n1\n2\n3\n"
]
],
[
[
"## Generators\nGenerator functions allow you to declare a function that behaves like an iterator, i.e. it can be used in a for loop. Python generators are a simple way of creating iterators. ... Simply speaking, a generator is a function that returns an object (iterator) which we can iterate over (one value at a time).\nTherefore a generator is a special type of iterable which is able to generate data on demand rather than all the data existing at the time the iteration starts. This is expecially important in memory management; if and\nReference for python generators.\n[Python Generators](https://wiki.python.org/moin/Generators)",
"_____no_output_____"
]
],
[
[
"#Generator using the filter function\n#The gererator prints a list of all integers and filters out the strings.\n#This simplified version returns the isinstance of x that are integers\nmy_list = [1,\"x\",2,\"y\",\"3\",\"z\",3]\n\ndef my_int(x):\n #The isinstance() function returns True if the specified object is of the specified type, otherwise False.\n return isinstance(x, int)\n\nfilter_list = filter(my_int, my_list) #the filter function requires a function & iterable (my_int function and my_list iterable)\n\nprint(filter_list)\nprint(list(filter_list))",
"<filter object at 0x000001E68DA2E088>\n[1, 2, 3]\n"
]
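,
[
"# The same filter written with a lambda instead of a named function (a small illustrative addition;\n# the behaviour should match the my_int version above).\nmy_list = [1,\"x\",2,\"y\",\"3\",\"z\",3]\n\n# lambda x: isinstance(x, int) plays the role of my_int\nfilter_list = filter(lambda x: isinstance(x, int), my_list)\n\nprint(list(filter_list))",
"_____no_output_____"
]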
],
[
[
"### Generator function, list and list comprehension ",
"_____no_output_____"
]
],
[
[
"#Generator function to display the values 1, 2, 3 without using a list\ndef my_gen(start, end):\n current = start\n while current <= end:\n yield current\n current += 1\n\nmy_list = my_gen(1,3)\n\nprint(my_list)\nfor value in my_list: #The for loop uses the iter and next methods in the background\n print(value)\n \n",
"<generator object my_gen at 0x000001E68DA238C8>\n1\n2\n3\n"
],
[
"#The yield keyword makes this a generator\n#The generator does not hold all of the results in memory it yields the square of a number one result at a time.\ndef my_gen(squ_nums):\n for current in squ_nums:\n yield (current*current)\n \nmy_list = my_gen([1,2,3])\n\nprint(my_list) #displays the generator object and the memory location\n\nfor value in my_list:\n print(value)",
"<generator object my_gen at 0x000001E68DA1B048>\n1\n4\n9\n"
]
],
[
[
"#### List and list comprehensions",
"_____no_output_____"
]
],
[
[
"#Normal list stores all of the values to memory and processes the entire list of variables\nmy_list = []\nfor value in (1,2,3):\n my_list.append(value**2) #add to the list the square of each value in the tuple\n\nprint(my_list)\n",
"[1, 4, 9]\n"
],
[
"#List comprehension generator generates one value at a time\nmy_list = (x **2 for x in (1,2,3)) #building a list of the square of the each value in the tuple\n\nprint(my_list)\nfor value in my_list:\n print(value)",
"<generator object <genexpr> at 0x000001E68DA231C8>\n1\n4\n9\n"
],
[
"#A large amount of dat can be stored in memory as a list\nmy_list = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]\nprint(my_list)",
"[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]\n"
],
[
"#Or a list can be generated one execution at a time saving execution time and memory resources\n#Generator function to display the values 1, 2, 3,... 20 without using a list\ndef my_gen(start, end):\n current = start\n while current <= end:\n yield current\n current += 1\n\nmy_list = my_gen(1,20)\n\nprint(my_list)\nfor value in my_list: #The for loop uses the iter and next methods in the background\n print(value)",
"<generator object my_gen at 0x000001E68DA234C8>\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n"
]
],
[
[
"The biggest avantages of generators over list. A list stores all of the data in the list where the generator preforms on execution at a time conserving memory and execution time.\nNote: All generators are iterators but not all iterators are generators.",
"_____no_output_____"
]
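,
[
"# A small illustrative comparison (not from the original notebook) of the memory used by a list\n# versus a generator expression; exact byte counts vary by Python version and platform.\nimport sys\n\nnums_list = [x**2 for x in range(100000)]   # all 100,000 squares held in memory at once\nnums_gen = (x**2 for x in range(100000))    # values produced one at a time on demand\n\nprint(\"list size in bytes:\", sys.getsizeof(nums_list))\nprint(\"generator size in bytes:\", sys.getsizeof(nums_gen))",
"_____no_output_____"
]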
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ec95c5a5952daf3dae3868f99b01a8963dfc2403 | 4,527 | ipynb | Jupyter Notebook | KafkaSparkBILab/KafkaToCosmos/TelegrafToKafkaToCosmos.ipynb | liupeirong/Azure | 7066eede6d9edff54e787b748ac490939d153dd8 | [
"MIT"
] | 18 | 2015-02-21T14:15:04.000Z | 2020-01-31T14:52:02.000Z | KafkaSparkBILab/KafkaToCosmos/TelegrafToKafkaToCosmos.ipynb | nj94ray39/cloudera-deo | 7c4e49ef6c2a94c47e4c046fb4a51f0f9d74b1ba | [
"MIT"
] | 6 | 2015-04-01T01:39:52.000Z | 2018-06-13T09:09:09.000Z | KafkaSparkBILab/KafkaToCosmos/TelegrafToKafkaToCosmos.ipynb | nj94ray39/cloudera-deo | 7c4e49ef6c2a94c47e4c046fb4a51f0f9d74b1ba | [
"MIT"
] | 20 | 2015-02-13T14:17:52.000Z | 2020-01-31T14:52:07.000Z | 4,527 | 4,527 | 0.673294 | [
[
[
"# This notebook takes input from HDInsight Kafka, which in turn takes input from Telegraf, and sends the metrics to Cosmos DB\n\nNote that cosmosdb spark connector must be a uber jar located in HDFS as shown below, the one in Maven repo doesn't have all the dependencies.",
"_____no_output_____"
]
],
[
[
"%%configure\n{ \n \"executorCores\": 2, \n \"driverMemory\" : \"2G\", \n \"jars\": [\"/path/to/azure-cosmosdb-spark_2.3.0_2.11-1.2.2-uber.jar\"],\n \"conf\": {\"spark.jars.packages\": \"org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0\",\n \"spark.jars.excludes\": \"org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.11\"\n }\n}",
"_____no_output_____"
],
[
"val kafkaBrokers=\"host1:9092,host2:9092...\"\nval kafkaTopic=\"telegraf\"",
"_____no_output_____"
],
[
"import org.apache.spark.sql.types._\nimport org.apache.spark.sql.functions._\nimport spark.implicits._\n",
"_____no_output_____"
],
[
"/*\n{\n\"fields\":{\n \"usage_guest\":0,\n \"usage_guest_nice\":0,\n \"usage_idle\":97.28643216079983,\n \"usage_iowait\":1.4070351758792998,\n \"usage_irq\":0,\n \"usage_nice\":0,\n \"usage_softirq\":0,\n \"usage_steal\":0,\n \"usage_system\":0.40201005025121833,\n \"usage_user\":0.9045226130652948},\n\"name\":\"cpu\",\n\"tags\":{\n \"cpu\":\"cpu0\",\n \"host\":\"pliukafkawus2\"},\n\"timestamp\":1534985650\n}\n*/\n\nval payloadSchema = new StructType().\n add(\"fields\", StringType).\n add(\"name\", StringType).\n add(\"tags\",StringType).\n add(\"timestamp\",TimestampType)\n\nval df = spark.\n readStream.\n format(\"kafka\").\n option(\"kafka.bootstrap.servers\", kafkaBrokers).\n option(\"subscribe\", kafkaTopic).\n load\n\nval payloaddf = df.\n select(from_json($\"value\".cast(StringType), payloadSchema).alias(\"payload\")).\n select($\"payload.timestamp\".cast(StringType).alias(\"ts\"), //throws error if timestamp is not cast to string\n get_json_object($\"payload.fields\", \"$.usage_idle\").alias(\"usage_idle\"),\n get_json_object($\"payload.fields\", \"$.usage_iowait\").alias(\"usage_iowait\"),\n get_json_object($\"payload.fields\", \"$.usage_system\").alias(\"usage_system\"),\n get_json_object($\"payload.fields\", \"$.usage_user\").alias(\"usage_user\"))\n\n/*\nval query = payloaddf.\n writeStream.\n format(\"console\").\n start\n*/",
"_____no_output_____"
],
[
"import org.joda.time._\nimport org.joda.time.format._\nimport com.microsoft.azure.cosmosdb.spark.schema._\nimport com.microsoft.azure.cosmosdb.spark.streaming.CosmosDBSinkProvider\nimport com.microsoft.azure.cosmosdb.spark.config.Config",
"_____no_output_____"
],
[
"val cosmosdbEndpoint = \"https://{cosmosdb_account}.documents.azure.com:443/\"\nval cosmosdbMasterKey = \"{cosmosdb_account_key}\"\nval cosmosdbDatabase = \"metricdb\"\nval cosmosdbCollection = \"metriccollection\"",
"_____no_output_____"
],
[
"val configMap = Map(\n \"Endpoint\" -> cosmosdbEndpoint,\n \"Masterkey\" -> cosmosdbMasterKey,\n \"Database\" -> cosmosdbDatabase,\n \"Collection\" -> cosmosdbCollection)\n\nval query = payloaddf.\n writeStream.\n format(classOf[CosmosDBSinkProvider].getName).\n outputMode(\"append\").\n options(configMap).\n option(\"checkpointLocation\", \"/path/to/cosmoscheckpoint\").\n start\n",
"_____no_output_____"
],
[
"//for batch instead of streaming, not yet tested\nimport org.apache.spark.sql.{Row, SaveMode, SparkSession}\n\nval writeConfig = Config(configMap)\ndf.write.mode(SaveMode.Overwrite).cosmosDB(writeConfig)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec95c61b2434885d9da6e08d96228cc07adf47ab | 14,457 | ipynb | Jupyter Notebook | examples/tutorials/translations/japanese/Part 11 - Secure Deep Learning Classification.ipynb | NicoSerranoP/PySyft | 87fcd566c46fce4c16d363c94396dd26bd82a016 | [
"Apache-2.0"
] | 3 | 2020-11-24T05:15:57.000Z | 2020-12-07T09:52:45.000Z | examples/tutorials/translations/japanese/Part 11 - Secure Deep Learning Classification.ipynb | NicoSerranoP/PySyft | 87fcd566c46fce4c16d363c94396dd26bd82a016 | [
"Apache-2.0"
] | 2 | 2020-03-09T09:17:06.000Z | 2020-04-09T13:33:12.000Z | examples/tutorials/translations/japanese/Part 11 - Secure Deep Learning Classification.ipynb | NicoSerranoP/PySyft | 87fcd566c46fce4c16d363c94396dd26bd82a016 | [
"Apache-2.0"
] | 1 | 2021-01-31T15:16:34.000Z | 2021-01-31T15:16:34.000Z | 32.48764 | 599 | 0.595974 | [
[
[
"epochs = 10\nn_test_batches = 200",
"_____no_output_____"
]
],
[
[
"# Part 11 - プライバシーに配慮したディープラーニングで分類問題を解く\n\n\n\n## データの機密性は重要です。と同時に、モデルの機密性も重要です\n\nデータは機械学習の肝です。組織はデータを作成したり集めたりすることで、独自のモデルをトレーニングすることができ、それをサービス(MLaaS)として外部に公開できます。自分たちでモデルのトレーニングを行えない組織は、公開されたサービスを使って自分たちのデータを推論することができます。\n\nしかし、クラウド上のモデルにはプライバシーや知財の問題があります。外部の組織が使おうと思うと、推論したいデータをクラウドにアップロードするか、もしくはモデルをダウンロードする必要があります。入力データのアップロードにはプライバシーの問題がありますし、モデルのダウンロードはモデル所有者が知財を失ってしまうリスクがあります。\n\n\n## 暗号化されたデータを使ってのコンピューテーション\n\nこういった状況下における潜在的な解決策は、データとモデルの両方を暗号化し、お互いに知財を非公開とする事です。それを可能にする暗号化手法はいくつか存在します。その中でも、Secure Multi-Party Computation (SMPC)とHomomorphic Encryption (FHE/SHE) 、それに Functional Encryption (FE)はよく知られています。ここでは\"Secure Multi-Party Computation\" ([introduced in detail here in tutorial 5](https://github.com/OpenMined/PySyft/blob/dev/examples/tutorials/Part%205%20-%20Intro%20to%20Encrypted%20Programs.ipynb))について扱います。\"Secure Multi-Party Computation\"は`shares`を使って暗号化を行う手法でSecureNNやSPDZと呼ばれるライブラリを使用します。詳細は[こちらのブログ](https://mortendahl.github.io/2017/09/19/private-image-analysis-with-mpc/)にてご確認ください。\n\nこれらのプロトコルは、暗号化されたデータを使ってのコンピューテーションにおいて、目覚ましい成果を上げています。私たちはこれらのプロトコルを開発者が個々に実装することなく(場合によっては裏で動いている暗号技術を意識することもなく)使える仕組みを開発しています。それでは、始めましょう。\n\n## セットアップ\n\nこのチュートリアルに必要な設定は次の通りです。データは手元にあると仮定してください。まず、手元にあるデータを使ってプライバシーに配慮したディープラーニングの手法を使ってモデルの定義とトレーニングを行います。次に何らかのデータを保持していて、モデルを使いたいユーザーと連携します。ここではモデルをトレーニングして公開する主体をサーバー(このケースではあなた)、モデルを使いたいユーザーをクライアントと呼ぶことにします。\n\nサーバー(あなた)はモデルを暗号化し、クライアントはデータを暗号化します。あなたとクライアントはどちらも暗号化されたモデルとデータを使ってデータの分類を行います。その後、推論結果を暗号化された状態のままクライアントへ戻します。その際、サーバーはデータについて一切知ることはありません。(入力データ、推論結果のどちらに関してもです。)\n\n理想的には`client`も`server`も`shares`をもつべきですが、今回のケースでは簡単のため、`shares`はBobとAliceという2つのリモートワーカーに分配します。もし、aliceはクラアントに、Bobはサーバーに属すと仮定すれば、正にサーバーとクライアントで`shares`を分け合っている状態です。\n\nこの手法は、悪意の無い関係者間で、安全なコンピューテーションを実現できます。想定する環境は[many MPC frameworks](https://arxiv.org/pdf/1801.03239.pdf)にて標準化されています。\nここで言う悪意の無い関係者とは、データがそのまま(閲覧可能な状態で)送られてきたら見てしまうかもしれないけれど、基本的には正直で悪意のない関係者(サーバー、クライアント)という意味です。\n\n**準備はと問いました。早速見ていきましょう**\n\n\nAuthor:\n- Théo Ryffel - Twitter: [@theoryffel](https://twitter.com/theoryffel) · GitHub: [@LaRiffle](https://github.com/LaRiffle)\n",
"_____no_output_____"
],
[
"### ライブラリのインポート",
"_____no_output_____"
]
],
[
[
"import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torchvision import datasets, transforms",
"_____no_output_____"
]
],
[
[
"PySyft関連のライブラリをインポートします。何名かのリモートワーカー(ここでは `client`、 `bob`、それに `alice`の3名です)と暗号化技術のプリミティブを提供する`crypto_provider`を作成します。暗号化技術の基本データ型の詳細については[See our tutorial on SMPC for more details](https://github.com/OpenMined/PySyft/blob/master/examples/tutorials/Part%2009%20-%20Intro%20to%20Encrypted%20Programs.ipynb)を参照してください。\n",
"_____no_output_____"
]
],
[
[
"import syft as sy\nhook = sy.TorchHook(torch) \nclient = sy.VirtualWorker(hook, id=\"client\")\nbob = sy.VirtualWorker(hook, id=\"bob\")\nalice = sy.VirtualWorker(hook, id=\"alice\")\ncrypto_provider = sy.VirtualWorker(hook, id=\"crypto_provider\") ",
"_____no_output_____"
]
],
[
[
"ここで、トレーニングで使用するハイパーパラメータを定義します。",
"_____no_output_____"
]
],
[
[
"class Arguments():\n def __init__(self):\n self.batch_size = 64\n self.test_batch_size = 50\n self.epochs = epochs\n self.lr = 0.001\n self.log_interval = 100\n\nargs = Arguments()",
"_____no_output_____"
]
],
[
[
"### データの準備\n\n今回の設定では、サーバーがモデルと学習データを保持していると仮定しています。今回扱うデータはMNISTです。",
"_____no_output_____"
]
],
[
[
"train_loader = torch.utils.data.DataLoader(\n datasets.MNIST('../data', train=True, download=True,\n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])),\n batch_size=args.batch_size, shuffle=True)",
"_____no_output_____"
]
],
[
[
"次に、クライアントは、サーバーが提供するモデルを使って推論を行いたい、何らかのデータを持っていると仮定しているので、その準備をします。クライアントは`shares`を`alice` と `bob`に分割することでデータを暗号化します。\n\n> SMPCは整数で動く暗号化プロトコルを使います。PySyftのtensor拡張機能、`.fix_precision()`を使って不動小数から整数へ変換を行います。例えば、精度を2とすると、0.123は小数点第2位以下が丸められ、12になります。",
"_____no_output_____"
]
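,
[
"# A tiny illustration (not part of the original tutorial) of what .fix_precision() does,\n# assuming PySyft's default fixed-precision settings: the floats are turned into scaled integers,\n# and .float_precision() converts them back.\nx = torch.tensor([0.123, 2.5])\nx_fp = x.fix_precision()        # integer-backed fixed-precision tensor\nprint(x_fp)\nprint(x_fp.float_precision())   # back to ordinary floats",
"_____no_output_____"
]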
],
[
[
"test_loader = torch.utils.data.DataLoader(\n datasets.MNIST('../data', train=False,\n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])),\n batch_size=args.test_batch_size, shuffle=True)\n\nprivate_test_loader = []\nfor data, target in test_loader:\n private_test_loader.append((\n data.fix_precision().share(alice, bob, crypto_provider=crypto_provider),\n target.fix_precision().share(alice, bob, crypto_provider=crypto_provider)\n ))",
"_____no_output_____"
]
],
[
[
"### モデルの定義\n\"Feed Forward\"だけからなる基本的なモデルを定義します。このモデルはサーバーによって定義されます。",
"_____no_output_____"
]
],
[
[
"class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.fc1 = nn.Linear(784, 500)\n self.fc2 = nn.Linear(500, 10)\n\n def forward(self, x):\n x = x.view(-1, 784)\n x = self.fc1(x)\n x = F.relu(x)\n x = self.fc2(x)\n return x",
"_____no_output_____"
]
],
[
[
"### トレーニングループを定義\n\nこの学習はサーバーのローカル環境下で行われます。ごく普通のPyTorchのトレーニングです。",
"_____no_output_____"
]
],
[
[
"def train(args, model, train_loader, optimizer, epoch):\n model.train()\n for batch_idx, (data, target) in enumerate(train_loader):\n optimizer.zero_grad()\n output = model(data)\n output = F.log_softmax(output, dim=1)\n loss = F.nll_loss(output, target)\n loss.backward()\n optimizer.step()\n if batch_idx % args.log_interval == 0:\n print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n epoch, batch_idx * args.batch_size, len(train_loader) * args.batch_size,\n 100. * batch_idx / len(train_loader), loss.item()))",
"_____no_output_____"
],
[
"model = Net()\noptimizer = torch.optim.Adam(model.parameters(), lr=args.lr)\n\nfor epoch in range(1, args.epochs + 1):\n train(args, model, train_loader, optimizer, epoch)\n",
"_____no_output_____"
],
[
"def test(args, model, test_loader):\n model.eval()\n test_loss = 0\n correct = 0\n with torch.no_grad():\n for data, target in test_loader:\n output = model(data)\n output = F.log_softmax(output, dim=1)\n test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss\n pred = output.argmax(1, keepdim=True) # get the index of the max log-probability \n correct += pred.eq(target.view_as(pred)).sum().item()\n\n test_loss /= len(test_loader.dataset)\n\n print('\\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\\n'.format(\n test_loss, correct, len(test_loader.dataset),\n 100. * correct / len(test_loader.dataset)))",
"_____no_output_____"
],
[
"test(args, model, test_loader)",
"_____no_output_____"
]
],
[
[
"モデルの学習が完了しました。準備OKです。",
"_____no_output_____"
],
[
"## 暗号化されたデータとモデルを使っての評価",
"_____no_output_____"
],
[
"それでは、クライアントがクライアントのデータに対して推論を行えるよう、モデルをクライアントへ送りましょう。ですが、このモデルはデリケートな情報を含むため(トレーニングで時間と労力がかかっています!)、そのウェイトは非公開にしたいですよね。ここまでのチュートリアルで暗号化されたデータをリモートワーカーへ送ったように。",
"_____no_output_____"
]
],
[
[
"model.fix_precision().share(alice, bob, crypto_provider=crypto_provider)",
"_____no_output_____"
]
],
[
[
"このテスト関数は暗号化されたデータを使ってのテストができる関数です。モデルのウェイト、入力データ、推論結果、そして正解データは全て暗号化されています。\n\nですが、構文はピュアなPyTorchとほとんど同じですね。\n\n唯一サーバー側で複合化するのは最終的な精度のスコアだけです。スコアは推論結果を評価するために必要です。",
"_____no_output_____"
]
],
[
[
"def test(args, model, test_loader):\n model.eval()\n n_correct_priv = 0\n n_total = 0\n with torch.no_grad():\n for data, target in test_loader[:n_test_batches]:\n output = model(data)\n pred = output.argmax(dim=1) \n n_correct_priv += pred.eq(target.view_as(pred)).sum()\n n_total += args.test_batch_size\n # このテスト関数は暗号化されたデータでの評価を行えます。モデルのウェイト(パラメータ)、入力データ、推論結果、そ\n # して正解ラベルと全てが暗号されています。\n \n # しかしながら、みなさんお気づきの通り、ごくごく一般的なPyTorchのテストスクリプトとほとんど同じです。\n \n # 唯一複合化しているのは、200アイテムのバッチ事に計算している精度確認のためのすこだけです。\n # この数字を見ることで学習されたモデルの性能が良いのか悪いのか評価できます。\n \n n_correct = n_correct_priv.copy().get().float_precision().long().item()\n \n print('Test set: Accuracy: {}/{} ({:.0f}%)'.format(\n n_correct, n_total,\n 100. * n_correct / n_total))\n",
"_____no_output_____"
],
[
"test(args, model, private_test_loader)",
"_____no_output_____"
]
],
[
[
"ジャジャーン!今回は暗号化されたデータを使っての推論処理に関する一通りのプロセスを学習しました。モデルのウェイトはクライアント側からは見えませんし、クライアントの入力データや推論結果もサーバー側からは見えません。\n\nパフォーマンスに関してですが、1枚の画像の分類に掛かる時間は**0.1秒以下**です。私のノートブック(2.7 GHz Intel Core i7, 16GB RAM)でざっと**33ミリ秒**といったところでしょうか。ですが、今回のチュートリアルでは全てのワーカーが実際には私のマシン上にいるため、通信に時間が掛かっていません。実際の環境でそれぞれのワーカーが別々の場所に存在する場合は、ワーカー間の通信速度に大きく影響を受けます。\n",
"_____no_output_____"
],
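[
"# A rough, optional timing check (not in the original tutorial): measure how long one encrypted batch\n# takes on your machine; the numbers quoted above will differ depending on hardware and, in a real\n# deployment, on the network latency between workers.\nimport time\n\ndata, target = private_test_loader[0]\nstart = time.time()\nwith torch.no_grad():\n    output = model(data)\nprint(\"one encrypted batch took {:.3f} s\".format(time.time() - start))",
"_____no_output_____"
],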
[
"## Conclusion\n\n今回のチュートリアルでは、PyTorchとPySyftを使うことで、暗号化技術の専門家でなくても、機密データを使った、実践的、かつセキュアなディープラーニングが簡単に実行できることを学びました。\n\n本トピックについてはより多くの事例が追加されていく予定です。畳み込み層を使ったニューラルネットワークや、他のライブラリとのパフォーマンス比較や、外部にある機密データを扱ってのトレーニングなどなどです。お楽しみに。\n\nもし、このチュートリアルを気に入って、プライバシーに配慮した非中央集権的なAI技術や付随する(データやモデルの)サプライチェーンにご興味があって、プロジェクトに参加したいと思われるなら、以下の方法で可能です。\n### PySyftのGitHubレポジトリにスターをつける\n\n一番簡単に貢献できる方法はこのGitHubのレポジトリにスターを付けていただくことです。スターが増えると露出が増え、より多くのデベロッパーにこのクールな技術の事を知って貰えます。\n\n- [Star PySyft](https://github.com/OpenMined/PySyft)\n\n### Slackに入る\n\n最新の開発状況のトラッキングする一番良い方法はSlackに入ることです。\n下記フォームから入る事ができます。\n[http://slack.openmined.org](http://slack.openmined.org)\n\n### コードプロジェクトに参加する\n\nコミュニティに貢献する一番良い方法はソースコードのコントリビューターになることです。PySyftのGitHubへアクセスしてIssueのページを開き、\"Projects\"で検索してみてください。参加し得るプロジェクトの状況を把握することができます。また、\"good first issue\"とマークされているIssueを探す事でミニプロジェクトを探すこともできます。\n\n- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)\n- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)\n\n### 寄付\n\nもし、ソースコードで貢献できるほどの時間は取れないけど、是非何かサポートしたいという場合は、寄付をしていただくことも可能です。寄附金の全ては、ハッカソンやミートアップの開催といった、コミュニティ運営経費として利用されます。\n\n[OpenMined's Open Collective Page](https://opencollective.com/openmined)\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
ec95c8e0d079962a3896b4d92cba4f1ad68a7f1a | 126,210 | ipynb | Jupyter Notebook | MLGame/games/arkanoid/K-Means KNN Tutorial.ipynb | Liuian/1092_INTRODUCTION-TO-MACHINE-LEARNING-AND-ITS-APPLICATION-TO-GAMING | f4a58d0d9f5832a77a4a86352e084065dc7bae50 | [
"MIT"
] | null | null | null | MLGame/games/arkanoid/K-Means KNN Tutorial.ipynb | Liuian/1092_INTRODUCTION-TO-MACHINE-LEARNING-AND-ITS-APPLICATION-TO-GAMING | f4a58d0d9f5832a77a4a86352e084065dc7bae50 | [
"MIT"
] | null | null | null | MLGame/games/arkanoid/K-Means KNN Tutorial.ipynb | Liuian/1092_INTRODUCTION-TO-MACHINE-LEARNING-AND-ITS-APPLICATION-TO-GAMING | f4a58d0d9f5832a77a4a86352e084065dc7bae50 | [
"MIT"
] | null | null | null | 200.971338 | 69,247 | 0.63487 | [
[
[
"# K-means\nhttps://scikit-learn.org/stable/\n<img src=\"https://mofanpy.com/static/results/sklearn/2_1_1.png\">\n\n## K-means是一種分群方法,為非監督式學習\n\n### 1. 設定n群\n### 2. K-means隨機給予n個群心\n### 3. 每個點用距離公式計算並分類給最近的群\n### 4. 用每一群的點重新計算群心\n### 5. 重複3、4步驟直到收斂",
"_____no_output_____"
]
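,
[
"# A minimal usage sketch (added for illustration) that mirrors the five steps above on a few\n# hand-made points; the real analysis on generated data follows below.\nimport numpy as np\nfrom sklearn import cluster\n\ntoy_points = np.array([[1, 1], [1, 2], [8, 8], [9, 8], [5, 1], [6, 2]])\nkmeans = cluster.KMeans(n_clusters=3).fit(toy_points)  # steps 2-5 happen inside fit()\nprint(kmeans.cluster_centers_)  # the final cluster centers\nprint(kmeans.labels_)           # which cluster each point was assigned to",
"_____no_output_____"
]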
],
[
[
"%matplotlib inline\n\nimport random\nimport numpy as np\nfrom sklearn import cluster, metrics\nimport matplotlib.pyplot as plt\n\nfeature = np.array([2, 2])\nfor i in range(3000):\n if i%3 == 0:\n x = 3 + random.normalvariate(0, 1.2)\n y = 3 + random.normalvariate(0, 1.2)\n feature = np.vstack((feature, [x, y]))\n plt.scatter(x, y , color='b', s=2)\n elif i%3 == 1:\n x = 7 + random.normalvariate(0, 1)\n y = 7 + random.normalvariate(0, 1)\n feature = np.vstack((feature, [x, y]))\n plt.scatter(x, y , color='r', s=2)\n else:\n x = 8 + random.normalvariate(0, 0.7)\n y = 2 + random.normalvariate(0, 0.7)\n feature = np.vstack((feature, [x, y]))\n plt.scatter(x, y , color='g', s=2)\nfeature = feature[1:]\n\nplt.xlim(0, 10)\nplt.ylim(0, 10)\nplt.show()",
"_____no_output_____"
],
[
"feature",
"_____no_output_____"
]
],
[
[
"### K-means官方文件\nhttps://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html",
"_____no_output_____"
]
],
[
[
"# 迴圈\nsilhouette_avgs = []\nks = range(2, 7)\nfor k in ks:\n kmeans_fit = cluster.KMeans(n_clusters = k).fit(feature)\n cluster_labels = kmeans_fit.labels_\n silhouette_avg = metrics.silhouette_score(feature, cluster_labels) # -1 ~ 1\n silhouette_avgs.append(silhouette_avg)\n\n# 作圖並印出 k = 2 到 10 的績效\nplt.bar(ks, silhouette_avgs)\nplt.show()\nprint(silhouette_avgs)",
"_____no_output_____"
],
[
"print(cluster_labels)",
"[3 2 0 ... 1 5 0]\n"
],
[
"from IPython.display import HTML\nHTML('<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/0DGtyMBOZ-c\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>')\n# 出處: https://chih-sheng-huang821.medium.com/%E6%A9%9F%E5%99%A8%E5%AD%B8%E7%BF%92-%E9%9B%86%E7%BE%A4%E5%88%86%E6%9E%90-k-means-clustering-e608a7fe1b43",
"/opt/anaconda3/lib/python3.7/site-packages/IPython/core/display.py:701: UserWarning: Consider using IPython.display.IFrame instead\n warnings.warn(\"Consider using IPython.display.IFrame instead\")\n"
]
],
[
[
"# KNN(k nearest neighbors)\n## KNN可以做分類或回歸,為監督式學習\n### 1. 設定k值\n### 2. 計算距離公式找出k個最相近的特徵\n### 3. 分類: k個特徵投票、回歸: 平均k個特徵\n<img src=\"https://ww2.mathworks.cn/matlabcentral/mlc-downloads/downloads/03faee64-e85e-4ea0-a2b4-e5964949e2d1/d99b9a4d-618c-45f0-86d1-388bdf852c1d/images/screenshot.gif\">",
"_____no_output_____"
],
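[
"# A small illustrative example (not part of the original data-collection flow) of the three KNN\n# steps above, using scikit-learn's KNeighborsClassifier on a few hand-made 2D points.\nimport numpy as np\nfrom sklearn.neighbors import KNeighborsClassifier\n\nX_toy = np.array([[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]])\ny_toy = np.array([0, 0, 0, 1, 1, 1])\n\nknn = KNeighborsClassifier(n_neighbors=3)  # step 1: choose k\nknn.fit(X_toy, y_toy)                      # store the labelled samples\nprint(knn.predict([[2, 2], [7, 8]]))       # steps 2-3: the nearest neighbours vote",
"_____no_output_____"
],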
[
"### 蒐集資料\npython MLGame.py -i ml_play_template.py -f 200 -r arkanoid NORMAL 3",
"_____no_output_____"
]
],
[
[
"import pickle\nimport numpy as np\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.model_selection import train_test_split, cross_val_score\nfrom sklearn.metrics import classification_report, confusion_matrix\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.model_selection import StratifiedShuffleSplit\n\n#試取資料\nfile = open(\"/Users/peggy/Documents/109-2(2-2)/Introduction to machine learning and its application to gaming/MLGame/games/arkanoid/log/third_EASY1_1.pickle\", \"rb\")\ndata = pickle.load(file)\nfile.close()\ntype(data['ml'])",
"_____no_output_____"
],
[
"game_info = data['ml']['scene_info']\ngame_command = data['ml']['command']\nprint(game_info)\nprint(game_command)",
"[{'frame': 0, 'status': 'GAME_ALIVE', 'ball': (93, 395), 'platform': (75, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 1, 'status': 'GAME_ALIVE', 'ball': (93, 395), 'platform': (75, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 2, 'status': 'GAME_ALIVE', 'ball': (86, 388), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 3, 'status': 'GAME_ALIVE', 'ball': (79, 381), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 4, 'status': 'GAME_ALIVE', 'ball': (72, 374), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 5, 'status': 'GAME_ALIVE', 'ball': (65, 367), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 6, 'status': 'GAME_ALIVE', 'ball': (58, 360), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 7, 'status': 'GAME_ALIVE', 'ball': (51, 353), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 8, 'status': 'GAME_ALIVE', 'ball': (44, 346), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 9, 'status': 'GAME_ALIVE', 'ball': (37, 339), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 10, 'status': 'GAME_ALIVE', 'ball': (30, 332), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 11, 'status': 'GAME_ALIVE', 'ball': (23, 325), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 12, 'status': 'GAME_ALIVE', 'ball': (16, 318), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 13, 'status': 'GAME_ALIVE', 'ball': (9, 311), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 14, 'status': 'GAME_ALIVE', 'ball': (2, 304), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 15, 'status': 'GAME_ALIVE', 'ball': (0, 297), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 16, 'status': 'GAME_ALIVE', 'ball': (7, 290), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 17, 'status': 'GAME_ALIVE', 'ball': (14, 283), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 18, 'status': 'GAME_ALIVE', 'ball': (21, 276), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 19, 'status': 'GAME_ALIVE', 'ball': (28, 269), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 20, 'status': 'GAME_ALIVE', 'ball': (35, 262), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 21, 'status': 'GAME_ALIVE', 'ball': (42, 255), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), 
(110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 22, 'status': 'GAME_ALIVE', 'ball': (49, 248), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 23, 'status': 'GAME_ALIVE', 'ball': (56, 241), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 24, 'status': 'GAME_ALIVE', 'ball': (63, 234), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 25, 'status': 'GAME_ALIVE', 'ball': (70, 227), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 26, 'status': 'GAME_ALIVE', 'ball': (77, 220), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 27, 'status': 'GAME_ALIVE', 'ball': (84, 213), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 28, 'status': 'GAME_ALIVE', 'ball': (91, 206), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 29, 'status': 'GAME_ALIVE', 'ball': (98, 199), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 30, 'status': 'GAME_ALIVE', 'ball': (105, 192), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 31, 'status': 'GAME_ALIVE', 'ball': (112, 185), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 32, 'status': 'GAME_ALIVE', 'ball': (119, 178), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 33, 'status': 'GAME_ALIVE', 'ball': (126, 171), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 34, 'status': 'GAME_ALIVE', 'ball': (133, 164), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 35, 'status': 'GAME_ALIVE', 'ball': (140, 157), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 36, 'status': 'GAME_ALIVE', 'ball': (147, 150), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 37, 'status': 'GAME_ALIVE', 'ball': (154, 143), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 38, 'status': 'GAME_ALIVE', 'ball': (161, 136), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 39, 'status': 'GAME_ALIVE', 'ball': (168, 129), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 40, 'status': 'GAME_ALIVE', 'ball': (175, 122), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 41, 'status': 'GAME_ALIVE', 'ball': (182, 115), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 42, 'status': 'GAME_ALIVE', 'ball': (189, 108), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 43, 'status': 'GAME_ALIVE', 'ball': (195, 
101), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 44, 'status': 'GAME_ALIVE', 'ball': (188, 94), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 45, 'status': 'GAME_ALIVE', 'ball': (181, 87), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 46, 'status': 'GAME_ALIVE', 'ball': (174, 80), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 47, 'status': 'GAME_ALIVE', 'ball': (167, 73), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 48, 'status': 'GAME_ALIVE', 'ball': (160, 66), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50), (135, 50)], 'hard_bricks': []}, {'frame': 49, 'status': 'GAME_ALIVE', 'ball': (153, 60), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 50, 'status': 'GAME_ALIVE', 'ball': (146, 67), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 51, 'status': 'GAME_ALIVE', 'ball': (139, 74), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 52, 'status': 'GAME_ALIVE', 'ball': (132, 81), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 53, 'status': 'GAME_ALIVE', 'ball': (125, 88), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 54, 'status': 'GAME_ALIVE', 'ball': (118, 95), 'platform': (75, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 55, 'status': 'GAME_ALIVE', 'ball': (111, 102), 'platform': (80, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 56, 'status': 'GAME_ALIVE', 'ball': (104, 109), 'platform': (85, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 57, 'status': 'GAME_ALIVE', 'ball': (97, 116), 'platform': (90, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 58, 'status': 'GAME_ALIVE', 'ball': (90, 123), 'platform': (95, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 59, 'status': 'GAME_ALIVE', 'ball': (83, 130), 'platform': (100, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 60, 'status': 'GAME_ALIVE', 'ball': (76, 137), 'platform': (105, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 61, 'status': 'GAME_ALIVE', 'ball': (69, 144), 'platform': (110, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 62, 'status': 'GAME_ALIVE', 'ball': (62, 151), 'platform': (115, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 63, 'status': 'GAME_ALIVE', 'ball': (55, 158), 'platform': (120, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 64, 'status': 'GAME_ALIVE', 'ball': (48, 165), 'platform': (125, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 65, 'status': 'GAME_ALIVE', 'ball': (41, 172), 'platform': (130, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 
66, 'status': 'GAME_ALIVE', 'ball': (34, 179), 'platform': (135, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 67, 'status': 'GAME_ALIVE', 'ball': (27, 186), 'platform': (140, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 68, 'status': 'GAME_ALIVE', 'ball': (20, 193), 'platform': (145, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 69, 'status': 'GAME_ALIVE', 'ball': (13, 200), 'platform': (150, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 70, 'status': 'GAME_ALIVE', 'ball': (6, 207), 'platform': (155, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 71, 'status': 'GAME_ALIVE', 'ball': (0, 214), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 72, 'status': 'GAME_ALIVE', 'ball': (7, 221), 'platform': (155, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 73, 'status': 'GAME_ALIVE', 'ball': (14, 228), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 74, 'status': 'GAME_ALIVE', 'ball': (21, 235), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 75, 'status': 'GAME_ALIVE', 'ball': (28, 242), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 76, 'status': 'GAME_ALIVE', 'ball': (35, 249), 'platform': (155, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 77, 'status': 'GAME_ALIVE', 'ball': (42, 256), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 78, 'status': 'GAME_ALIVE', 'ball': (49, 263), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 79, 'status': 'GAME_ALIVE', 'ball': (56, 270), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 80, 'status': 'GAME_ALIVE', 'ball': (63, 277), 'platform': (155, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 81, 'status': 'GAME_ALIVE', 'ball': (70, 284), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 82, 'status': 'GAME_ALIVE', 'ball': (77, 291), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 83, 'status': 'GAME_ALIVE', 'ball': (84, 298), 'platform': (155, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 84, 'status': 'GAME_ALIVE', 'ball': (91, 305), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 85, 'status': 'GAME_ALIVE', 'ball': (98, 312), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 86, 'status': 'GAME_ALIVE', 'ball': (105, 319), 'platform': (155, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 87, 'status': 'GAME_ALIVE', 'ball': (112, 326), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 88, 'status': 'GAME_ALIVE', 'ball': (119, 333), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 89, 
'status': 'GAME_ALIVE', 'ball': (126, 340), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 90, 'status': 'GAME_ALIVE', 'ball': (133, 347), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 91, 'status': 'GAME_ALIVE', 'ball': (140, 354), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 92, 'status': 'GAME_ALIVE', 'ball': (147, 361), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 93, 'status': 'GAME_ALIVE', 'ball': (154, 368), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 94, 'status': 'GAME_ALIVE', 'ball': (161, 375), 'platform': (155, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 95, 'status': 'GAME_ALIVE', 'ball': (168, 382), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 96, 'status': 'GAME_ALIVE', 'ball': (175, 389), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 97, 'status': 'GAME_ALIVE', 'ball': (182, 395), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 98, 'status': 'GAME_ALIVE', 'ball': (189, 388), 'platform': (160, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 99, 'status': 'GAME_ALIVE', 'ball': (195, 381), 'platform': (155, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 100, 'status': 'GAME_ALIVE', 'ball': (188, 374), 'platform': (150, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 101, 'status': 'GAME_ALIVE', 'ball': (181, 367), 'platform': (145, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 102, 'status': 'GAME_ALIVE', 'ball': (174, 360), 'platform': (140, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 103, 'status': 'GAME_ALIVE', 'ball': (167, 353), 'platform': (135, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 104, 'status': 'GAME_ALIVE', 'ball': (160, 346), 'platform': (130, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 105, 'status': 'GAME_ALIVE', 'ball': (153, 339), 'platform': (125, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 106, 'status': 'GAME_ALIVE', 'ball': (146, 332), 'platform': (120, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 107, 'status': 'GAME_ALIVE', 'ball': (139, 325), 'platform': (115, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 108, 'status': 'GAME_ALIVE', 'ball': (132, 318), 'platform': (110, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 109, 'status': 'GAME_ALIVE', 'ball': (125, 311), 'platform': (105, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 110, 'status': 'GAME_ALIVE', 'ball': (118, 304), 'platform': (100, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 111, 'status': 'GAME_ALIVE', 'ball': (111, 297), 'platform': (95, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 
'hard_bricks': []}, {'frame': 112, 'status': 'GAME_ALIVE', 'ball': (104, 290), 'platform': (90, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 113, 'status': 'GAME_ALIVE', 'ball': (97, 283), 'platform': (85, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 114, 'status': 'GAME_ALIVE', 'ball': (90, 276), 'platform': (80, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 115, 'status': 'GAME_ALIVE', 'ball': (83, 269), 'platform': (75, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 116, 'status': 'GAME_ALIVE', 'ball': (76, 262), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 117, 'status': 'GAME_ALIVE', 'ball': (69, 255), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 118, 'status': 'GAME_ALIVE', 'ball': (62, 248), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 119, 'status': 'GAME_ALIVE', 'ball': (55, 241), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 120, 'status': 'GAME_ALIVE', 'ball': (48, 234), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 121, 'status': 'GAME_ALIVE', 'ball': (41, 227), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 122, 'status': 'GAME_ALIVE', 'ball': (34, 220), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 123, 'status': 'GAME_ALIVE', 'ball': (27, 213), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 124, 'status': 'GAME_ALIVE', 'ball': (20, 206), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 125, 'status': 'GAME_ALIVE', 'ball': (13, 199), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 126, 'status': 'GAME_ALIVE', 'ball': (6, 192), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 127, 'status': 'GAME_ALIVE', 'ball': (0, 185), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 128, 'status': 'GAME_ALIVE', 'ball': (7, 178), 'platform': (75, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 129, 'status': 'GAME_ALIVE', 'ball': (14, 171), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 130, 'status': 'GAME_ALIVE', 'ball': (21, 164), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 131, 'status': 'GAME_ALIVE', 'ball': (28, 157), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 132, 'status': 'GAME_ALIVE', 'ball': (35, 150), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 133, 'status': 'GAME_ALIVE', 'ball': (42, 143), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 134, 'status': 'GAME_ALIVE', 'ball': (49, 136), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': 
[]}, {'frame': 135, 'status': 'GAME_ALIVE', 'ball': (56, 129), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 136, 'status': 'GAME_ALIVE', 'ball': (63, 122), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 137, 'status': 'GAME_ALIVE', 'ball': (70, 115), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 138, 'status': 'GAME_ALIVE', 'ball': (77, 108), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 139, 'status': 'GAME_ALIVE', 'ball': (84, 101), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 140, 'status': 'GAME_ALIVE', 'ball': (91, 94), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 141, 'status': 'GAME_ALIVE', 'ball': (98, 87), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 142, 'status': 'GAME_ALIVE', 'ball': (105, 80), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 143, 'status': 'GAME_ALIVE', 'ball': (112, 73), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 144, 'status': 'GAME_ALIVE', 'ball': (119, 66), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50), (110, 50)], 'hard_bricks': []}, {'frame': 145, 'status': 'GAME_ALIVE', 'ball': (126, 60), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 146, 'status': 'GAME_ALIVE', 'ball': (133, 67), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 147, 'status': 'GAME_ALIVE', 'ball': (140, 74), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 148, 'status': 'GAME_ALIVE', 'ball': (147, 81), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 149, 'status': 'GAME_ALIVE', 'ball': (154, 88), 'platform': (45, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 150, 'status': 'GAME_ALIVE', 'ball': (161, 95), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 151, 'status': 'GAME_ALIVE', 'ball': (168, 102), 'platform': (45, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 152, 'status': 'GAME_ALIVE', 'ball': (175, 109), 'platform': (40, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 153, 'status': 'GAME_ALIVE', 'ball': (182, 116), 'platform': (45, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 154, 'status': 'GAME_ALIVE', 'ball': (189, 123), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 155, 'status': 'GAME_ALIVE', 'ball': (195, 130), 'platform': (45, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 156, 'status': 'GAME_ALIVE', 'ball': (188, 137), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 157, 'status': 'GAME_ALIVE', 'ball': (181, 144), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 158, 'status': 'GAME_ALIVE', 'ball': (174, 151), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, 
{'frame': 159, 'status': 'GAME_ALIVE', 'ball': (167, 158), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 160, 'status': 'GAME_ALIVE', 'ball': (160, 165), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 161, 'status': 'GAME_ALIVE', 'ball': (153, 172), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 162, 'status': 'GAME_ALIVE', 'ball': (146, 179), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 163, 'status': 'GAME_ALIVE', 'ball': (139, 186), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 164, 'status': 'GAME_ALIVE', 'ball': (132, 193), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 165, 'status': 'GAME_ALIVE', 'ball': (125, 200), 'platform': (45, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 166, 'status': 'GAME_ALIVE', 'ball': (118, 207), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 167, 'status': 'GAME_ALIVE', 'ball': (111, 214), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 168, 'status': 'GAME_ALIVE', 'ball': (104, 221), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 169, 'status': 'GAME_ALIVE', 'ball': (97, 228), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 170, 'status': 'GAME_ALIVE', 'ball': (90, 235), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 171, 'status': 'GAME_ALIVE', 'ball': (83, 242), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 172, 'status': 'GAME_ALIVE', 'ball': (76, 249), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 173, 'status': 'GAME_ALIVE', 'ball': (69, 256), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 174, 'status': 'GAME_ALIVE', 'ball': (62, 263), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 175, 'status': 'GAME_ALIVE', 'ball': (55, 270), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 176, 'status': 'GAME_ALIVE', 'ball': (48, 277), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 177, 'status': 'GAME_ALIVE', 'ball': (41, 284), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 178, 'status': 'GAME_ALIVE', 'ball': (34, 291), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 179, 'status': 'GAME_ALIVE', 'ball': (27, 298), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 180, 'status': 'GAME_ALIVE', 'ball': (20, 305), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 181, 'status': 'GAME_ALIVE', 'ball': (13, 312), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 182, 'status': 'GAME_ALIVE', 'ball': (6, 319), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 183, 'status': 'GAME_ALIVE', 'ball': (0, 326), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), 
(85, 50)], 'hard_bricks': []}, {'frame': 184, 'status': 'GAME_ALIVE', 'ball': (7, 333), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 185, 'status': 'GAME_ALIVE', 'ball': (14, 340), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 186, 'status': 'GAME_ALIVE', 'ball': (21, 347), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 187, 'status': 'GAME_ALIVE', 'ball': (28, 354), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 188, 'status': 'GAME_ALIVE', 'ball': (35, 361), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 189, 'status': 'GAME_ALIVE', 'ball': (42, 368), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 190, 'status': 'GAME_ALIVE', 'ball': (49, 375), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 191, 'status': 'GAME_ALIVE', 'ball': (56, 382), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 192, 'status': 'GAME_ALIVE', 'ball': (63, 389), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 193, 'status': 'GAME_ALIVE', 'ball': (70, 395), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 194, 'status': 'GAME_ALIVE', 'ball': (77, 388), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 195, 'status': 'GAME_ALIVE', 'ball': (84, 381), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 196, 'status': 'GAME_ALIVE', 'ball': (91, 374), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 197, 'status': 'GAME_ALIVE', 'ball': (98, 367), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 198, 'status': 'GAME_ALIVE', 'ball': (105, 360), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 199, 'status': 'GAME_ALIVE', 'ball': (112, 353), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 200, 'status': 'GAME_ALIVE', 'ball': (119, 346), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 201, 'status': 'GAME_ALIVE', 'ball': (126, 339), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 202, 'status': 'GAME_ALIVE', 'ball': (133, 332), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 203, 'status': 'GAME_ALIVE', 'ball': (140, 325), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 204, 'status': 'GAME_ALIVE', 'ball': (147, 318), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 205, 'status': 'GAME_ALIVE', 'ball': (154, 311), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 206, 'status': 'GAME_ALIVE', 'ball': (161, 304), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 207, 'status': 'GAME_ALIVE', 'ball': (168, 297), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 208, 'status': 'GAME_ALIVE', 'ball': (175, 290), 'platform': (50, 400), 
'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 209, 'status': 'GAME_ALIVE', 'ball': (182, 283), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 210, 'status': 'GAME_ALIVE', 'ball': (189, 276), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 211, 'status': 'GAME_ALIVE', 'ball': (195, 269), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 212, 'status': 'GAME_ALIVE', 'ball': (188, 262), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 213, 'status': 'GAME_ALIVE', 'ball': (181, 255), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 214, 'status': 'GAME_ALIVE', 'ball': (174, 248), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 215, 'status': 'GAME_ALIVE', 'ball': (167, 241), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 216, 'status': 'GAME_ALIVE', 'ball': (160, 234), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 217, 'status': 'GAME_ALIVE', 'ball': (153, 227), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 218, 'status': 'GAME_ALIVE', 'ball': (146, 220), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 219, 'status': 'GAME_ALIVE', 'ball': (139, 213), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 220, 'status': 'GAME_ALIVE', 'ball': (132, 206), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 221, 'status': 'GAME_ALIVE', 'ball': (125, 199), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 222, 'status': 'GAME_ALIVE', 'ball': (118, 192), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 223, 'status': 'GAME_ALIVE', 'ball': (111, 185), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 224, 'status': 'GAME_ALIVE', 'ball': (104, 178), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 225, 'status': 'GAME_ALIVE', 'ball': (97, 171), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 226, 'status': 'GAME_ALIVE', 'ball': (90, 164), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 227, 'status': 'GAME_ALIVE', 'ball': (83, 157), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 228, 'status': 'GAME_ALIVE', 'ball': (76, 150), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 229, 'status': 'GAME_ALIVE', 'ball': (69, 143), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 230, 'status': 'GAME_ALIVE', 'ball': (62, 136), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 231, 'status': 'GAME_ALIVE', 'ball': (55, 129), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 232, 'status': 'GAME_ALIVE', 'ball': (48, 122), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 233, 'status': 'GAME_ALIVE', 
'ball': (41, 115), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 234, 'status': 'GAME_ALIVE', 'ball': (34, 108), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 235, 'status': 'GAME_ALIVE', 'ball': (27, 101), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 236, 'status': 'GAME_ALIVE', 'ball': (20, 94), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 237, 'status': 'GAME_ALIVE', 'ball': (13, 87), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 238, 'status': 'GAME_ALIVE', 'ball': (6, 80), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 239, 'status': 'GAME_ALIVE', 'ball': (0, 73), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 240, 'status': 'GAME_ALIVE', 'ball': (7, 66), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 241, 'status': 'GAME_ALIVE', 'ball': (14, 59), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 242, 'status': 'GAME_ALIVE', 'ball': (21, 52), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 243, 'status': 'GAME_ALIVE', 'ball': (28, 45), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 244, 'status': 'GAME_ALIVE', 'ball': (35, 38), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 245, 'status': 'GAME_ALIVE', 'ball': (42, 31), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 246, 'status': 'GAME_ALIVE', 'ball': (49, 24), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 247, 'status': 'GAME_ALIVE', 'ball': (56, 17), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 248, 'status': 'GAME_ALIVE', 'ball': (63, 10), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 249, 'status': 'GAME_ALIVE', 'ball': (70, 3), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 250, 'status': 'GAME_ALIVE', 'ball': (77, 0), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 251, 'status': 'GAME_ALIVE', 'ball': (84, 7), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 252, 'status': 'GAME_ALIVE', 'ball': (91, 14), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 253, 'status': 'GAME_ALIVE', 'ball': (98, 21), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 254, 'status': 'GAME_ALIVE', 'ball': (105, 28), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 255, 'status': 'GAME_ALIVE', 'ball': (112, 35), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 256, 'status': 'GAME_ALIVE', 'ball': (119, 42), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 257, 'status': 'GAME_ALIVE', 'ball': (126, 49), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 258, 'status': 'GAME_ALIVE', 
'ball': (133, 56), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 259, 'status': 'GAME_ALIVE', 'ball': (140, 63), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 260, 'status': 'GAME_ALIVE', 'ball': (147, 70), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 261, 'status': 'GAME_ALIVE', 'ball': (154, 77), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 262, 'status': 'GAME_ALIVE', 'ball': (161, 84), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 263, 'status': 'GAME_ALIVE', 'ball': (168, 91), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 264, 'status': 'GAME_ALIVE', 'ball': (175, 98), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 265, 'status': 'GAME_ALIVE', 'ball': (182, 105), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 266, 'status': 'GAME_ALIVE', 'ball': (189, 112), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 267, 'status': 'GAME_ALIVE', 'ball': (195, 119), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 268, 'status': 'GAME_ALIVE', 'ball': (188, 126), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 269, 'status': 'GAME_ALIVE', 'ball': (181, 133), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 270, 'status': 'GAME_ALIVE', 'ball': (174, 140), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 271, 'status': 'GAME_ALIVE', 'ball': (167, 147), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 272, 'status': 'GAME_ALIVE', 'ball': (160, 154), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 273, 'status': 'GAME_ALIVE', 'ball': (153, 161), 'platform': (75, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 274, 'status': 'GAME_ALIVE', 'ball': (146, 168), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 275, 'status': 'GAME_ALIVE', 'ball': (139, 175), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 276, 'status': 'GAME_ALIVE', 'ball': (132, 182), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 277, 'status': 'GAME_ALIVE', 'ball': (125, 189), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 278, 'status': 'GAME_ALIVE', 'ball': (118, 196), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 279, 'status': 'GAME_ALIVE', 'ball': (111, 203), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 280, 'status': 'GAME_ALIVE', 'ball': (104, 210), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 281, 'status': 'GAME_ALIVE', 'ball': (97, 217), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 282, 'status': 'GAME_ALIVE', 'ball': (90, 224), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, 
{'frame': 283, 'status': 'GAME_ALIVE', 'ball': (83, 231), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 284, 'status': 'GAME_ALIVE', 'ball': (76, 238), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 285, 'status': 'GAME_ALIVE', 'ball': (69, 245), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 286, 'status': 'GAME_ALIVE', 'ball': (62, 252), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 287, 'status': 'GAME_ALIVE', 'ball': (55, 259), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 288, 'status': 'GAME_ALIVE', 'ball': (48, 266), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 289, 'status': 'GAME_ALIVE', 'ball': (41, 273), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 290, 'status': 'GAME_ALIVE', 'ball': (34, 280), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 291, 'status': 'GAME_ALIVE', 'ball': (27, 287), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 292, 'status': 'GAME_ALIVE', 'ball': (20, 294), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 293, 'status': 'GAME_ALIVE', 'ball': (13, 301), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 294, 'status': 'GAME_ALIVE', 'ball': (6, 308), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 295, 'status': 'GAME_ALIVE', 'ball': (0, 315), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 296, 'status': 'GAME_ALIVE', 'ball': (7, 322), 'platform': (75, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 297, 'status': 'GAME_ALIVE', 'ball': (14, 329), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 298, 'status': 'GAME_ALIVE', 'ball': (21, 336), 'platform': (75, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 299, 'status': 'GAME_ALIVE', 'ball': (28, 343), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 300, 'status': 'GAME_ALIVE', 'ball': (35, 350), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 301, 'status': 'GAME_ALIVE', 'ball': (42, 357), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 302, 'status': 'GAME_ALIVE', 'ball': (49, 364), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 303, 'status': 'GAME_ALIVE', 'ball': (56, 371), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 304, 'status': 'GAME_ALIVE', 'ball': (63, 378), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 305, 'status': 'GAME_ALIVE', 'ball': (70, 385), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 306, 'status': 'GAME_ALIVE', 'ball': (77, 392), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 307, 'status': 'GAME_ALIVE', 'ball': (84, 395), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 
'hard_bricks': []}, {'frame': 308, 'status': 'GAME_ALIVE', 'ball': (91, 388), 'platform': (75, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 309, 'status': 'GAME_ALIVE', 'ball': (98, 381), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 310, 'status': 'GAME_ALIVE', 'ball': (105, 374), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 311, 'status': 'GAME_ALIVE', 'ball': (112, 367), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 312, 'status': 'GAME_ALIVE', 'ball': (119, 360), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 313, 'status': 'GAME_ALIVE', 'ball': (126, 353), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 314, 'status': 'GAME_ALIVE', 'ball': (133, 346), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 315, 'status': 'GAME_ALIVE', 'ball': (140, 339), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 316, 'status': 'GAME_ALIVE', 'ball': (147, 332), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 317, 'status': 'GAME_ALIVE', 'ball': (154, 325), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 318, 'status': 'GAME_ALIVE', 'ball': (161, 318), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 319, 'status': 'GAME_ALIVE', 'ball': (168, 311), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 320, 'status': 'GAME_ALIVE', 'ball': (175, 304), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 321, 'status': 'GAME_ALIVE', 'ball': (182, 297), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 322, 'status': 'GAME_ALIVE', 'ball': (189, 290), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 323, 'status': 'GAME_ALIVE', 'ball': (195, 283), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 324, 'status': 'GAME_ALIVE', 'ball': (188, 276), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 325, 'status': 'GAME_ALIVE', 'ball': (181, 269), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 326, 'status': 'GAME_ALIVE', 'ball': (174, 262), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 327, 'status': 'GAME_ALIVE', 'ball': (167, 255), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 328, 'status': 'GAME_ALIVE', 'ball': (160, 248), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 329, 'status': 'GAME_ALIVE', 'ball': (153, 241), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 330, 'status': 'GAME_ALIVE', 'ball': (146, 234), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 331, 'status': 'GAME_ALIVE', 'ball': (139, 227), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 332, 'status': 'GAME_ALIVE', 'ball': (132, 220), 'platform': (55, 
400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 333, 'status': 'GAME_ALIVE', 'ball': (125, 213), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 334, 'status': 'GAME_ALIVE', 'ball': (118, 206), 'platform': (50, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 335, 'status': 'GAME_ALIVE', 'ball': (111, 199), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 336, 'status': 'GAME_ALIVE', 'ball': (104, 192), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 337, 'status': 'GAME_ALIVE', 'ball': (97, 185), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 338, 'status': 'GAME_ALIVE', 'ball': (90, 178), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 339, 'status': 'GAME_ALIVE', 'ball': (83, 171), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 340, 'status': 'GAME_ALIVE', 'ball': (76, 164), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 341, 'status': 'GAME_ALIVE', 'ball': (69, 157), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 342, 'status': 'GAME_ALIVE', 'ball': (62, 150), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 343, 'status': 'GAME_ALIVE', 'ball': (55, 143), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 344, 'status': 'GAME_ALIVE', 'ball': (48, 136), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 345, 'status': 'GAME_ALIVE', 'ball': (41, 129), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 346, 'status': 'GAME_ALIVE', 'ball': (34, 122), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 347, 'status': 'GAME_ALIVE', 'ball': (27, 115), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 348, 'status': 'GAME_ALIVE', 'ball': (20, 108), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 349, 'status': 'GAME_ALIVE', 'ball': (13, 101), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 350, 'status': 'GAME_ALIVE', 'ball': (6, 94), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 351, 'status': 'GAME_ALIVE', 'ball': (0, 87), 'platform': (55, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 352, 'status': 'GAME_ALIVE', 'ball': (7, 80), 'platform': (60, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 353, 'status': 'GAME_ALIVE', 'ball': (14, 73), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 354, 'status': 'GAME_ALIVE', 'ball': (21, 66), 'platform': (70, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 355, 'status': 'GAME_ALIVE', 'ball': (28, 59), 'platform': (65, 400), 'bricks': [(35, 50), (60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 356, 'status': 'GAME_ALIVE', 'ball': (30, 52), 'platform': (70, 400), 'bricks': [(60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 357, 'status': 'GAME_ALIVE', 'ball': (23, 45), 'platform': 
(75, 400), 'bricks': [(60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 358, 'status': 'GAME_ALIVE', 'ball': (16, 38), 'platform': (70, 400), 'bricks': [(60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 359, 'status': 'GAME_ALIVE', 'ball': (9, 31), 'platform': (65, 400), 'bricks': [(60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 360, 'status': 'GAME_ALIVE', 'ball': (2, 24), 'platform': (60, 400), 'bricks': [(60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 361, 'status': 'GAME_ALIVE', 'ball': (0, 17), 'platform': (65, 400), 'bricks': [(60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 362, 'status': 'GAME_ALIVE', 'ball': (7, 10), 'platform': (70, 400), 'bricks': [(60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 363, 'status': 'GAME_ALIVE', 'ball': (14, 3), 'platform': (65, 400), 'bricks': [(60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 364, 'status': 'GAME_ALIVE', 'ball': (21, 0), 'platform': (60, 400), 'bricks': [(60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 365, 'status': 'GAME_ALIVE', 'ball': (28, 7), 'platform': (55, 400), 'bricks': [(60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 366, 'status': 'GAME_ALIVE', 'ball': (35, 14), 'platform': (50, 400), 'bricks': [(60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 367, 'status': 'GAME_ALIVE', 'ball': (42, 21), 'platform': (45, 400), 'bricks': [(60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 368, 'status': 'GAME_ALIVE', 'ball': (49, 28), 'platform': (40, 400), 'bricks': [(60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 369, 'status': 'GAME_ALIVE', 'ball': (56, 35), 'platform': (35, 400), 'bricks': [(60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 370, 'status': 'GAME_ALIVE', 'ball': (63, 42), 'platform': (30, 400), 'bricks': [(60, 50), (85, 50)], 'hard_bricks': []}, {'frame': 371, 'status': 'GAME_ALIVE', 'ball': (70, 45), 'platform': (25, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 372, 'status': 'GAME_ALIVE', 'ball': (77, 38), 'platform': (20, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 373, 'status': 'GAME_ALIVE', 'ball': (84, 31), 'platform': (25, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 374, 'status': 'GAME_ALIVE', 'ball': (91, 24), 'platform': (30, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 375, 'status': 'GAME_ALIVE', 'ball': (98, 17), 'platform': (35, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 376, 'status': 'GAME_ALIVE', 'ball': (105, 10), 'platform': (40, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 377, 'status': 'GAME_ALIVE', 'ball': (112, 3), 'platform': (45, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 378, 'status': 'GAME_ALIVE', 'ball': (119, 0), 'platform': (50, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 379, 'status': 'GAME_ALIVE', 'ball': (126, 7), 'platform': (50, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 380, 'status': 'GAME_ALIVE', 'ball': (133, 14), 'platform': (55, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 381, 'status': 'GAME_ALIVE', 'ball': (140, 21), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 382, 'status': 'GAME_ALIVE', 'ball': (147, 28), 'platform': (65, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 383, 'status': 'GAME_ALIVE', 'ball': (154, 35), 'platform': (70, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 384, 'status': 'GAME_ALIVE', 'ball': (161, 42), 'platform': (75, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 385, 'status': 'GAME_ALIVE', 'ball': (168, 49), 
'platform': (80, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 386, 'status': 'GAME_ALIVE', 'ball': (175, 56), 'platform': (85, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 387, 'status': 'GAME_ALIVE', 'ball': (182, 63), 'platform': (90, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 388, 'status': 'GAME_ALIVE', 'ball': (189, 70), 'platform': (90, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 389, 'status': 'GAME_ALIVE', 'ball': (195, 77), 'platform': (95, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 390, 'status': 'GAME_ALIVE', 'ball': (188, 84), 'platform': (100, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 391, 'status': 'GAME_ALIVE', 'ball': (181, 91), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 392, 'status': 'GAME_ALIVE', 'ball': (174, 98), 'platform': (100, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 393, 'status': 'GAME_ALIVE', 'ball': (167, 105), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 394, 'status': 'GAME_ALIVE', 'ball': (160, 112), 'platform': (100, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 395, 'status': 'GAME_ALIVE', 'ball': (153, 119), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 396, 'status': 'GAME_ALIVE', 'ball': (146, 126), 'platform': (100, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 397, 'status': 'GAME_ALIVE', 'ball': (139, 133), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 398, 'status': 'GAME_ALIVE', 'ball': (132, 140), 'platform': (110, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 399, 'status': 'GAME_ALIVE', 'ball': (125, 147), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 400, 'status': 'GAME_ALIVE', 'ball': (118, 154), 'platform': (100, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 401, 'status': 'GAME_ALIVE', 'ball': (111, 161), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 402, 'status': 'GAME_ALIVE', 'ball': (104, 168), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 403, 'status': 'GAME_ALIVE', 'ball': (97, 175), 'platform': (110, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 404, 'status': 'GAME_ALIVE', 'ball': (90, 182), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 405, 'status': 'GAME_ALIVE', 'ball': (83, 189), 'platform': (110, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 406, 'status': 'GAME_ALIVE', 'ball': (76, 196), 'platform': (110, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 407, 'status': 'GAME_ALIVE', 'ball': (69, 203), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 408, 'status': 'GAME_ALIVE', 'ball': (62, 210), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 409, 'status': 'GAME_ALIVE', 'ball': (55, 217), 'platform': (110, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 410, 'status': 'GAME_ALIVE', 'ball': (48, 224), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 411, 'status': 'GAME_ALIVE', 'ball': (41, 231), 'platform': (110, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 412, 'status': 'GAME_ALIVE', 'ball': (34, 238), 'platform': (115, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 413, 'status': 'GAME_ALIVE', 'ball': (27, 245), 'platform': (110, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 
414, 'status': 'GAME_ALIVE', 'ball': (20, 252), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 415, 'status': 'GAME_ALIVE', 'ball': (13, 259), 'platform': (110, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 416, 'status': 'GAME_ALIVE', 'ball': (6, 266), 'platform': (115, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 417, 'status': 'GAME_ALIVE', 'ball': (0, 273), 'platform': (110, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 418, 'status': 'GAME_ALIVE', 'ball': (7, 280), 'platform': (115, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 419, 'status': 'GAME_ALIVE', 'ball': (14, 287), 'platform': (120, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 420, 'status': 'GAME_ALIVE', 'ball': (21, 294), 'platform': (115, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 421, 'status': 'GAME_ALIVE', 'ball': (28, 301), 'platform': (110, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 422, 'status': 'GAME_ALIVE', 'ball': (35, 308), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 423, 'status': 'GAME_ALIVE', 'ball': (42, 315), 'platform': (110, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 424, 'status': 'GAME_ALIVE', 'ball': (49, 322), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 425, 'status': 'GAME_ALIVE', 'ball': (56, 329), 'platform': (110, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 426, 'status': 'GAME_ALIVE', 'ball': (63, 336), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 427, 'status': 'GAME_ALIVE', 'ball': (70, 343), 'platform': (110, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 428, 'status': 'GAME_ALIVE', 'ball': (77, 350), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 429, 'status': 'GAME_ALIVE', 'ball': (84, 357), 'platform': (100, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 430, 'status': 'GAME_ALIVE', 'ball': (91, 364), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 431, 'status': 'GAME_ALIVE', 'ball': (98, 371), 'platform': (100, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 432, 'status': 'GAME_ALIVE', 'ball': (105, 378), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 433, 'status': 'GAME_ALIVE', 'ball': (112, 385), 'platform': (110, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 434, 'status': 'GAME_ALIVE', 'ball': (119, 392), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 435, 'status': 'GAME_ALIVE', 'ball': (126, 395), 'platform': (110, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 436, 'status': 'GAME_ALIVE', 'ball': (133, 388), 'platform': (105, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 437, 'status': 'GAME_ALIVE', 'ball': (140, 381), 'platform': (100, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 438, 'status': 'GAME_ALIVE', 'ball': (147, 374), 'platform': (95, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 439, 'status': 'GAME_ALIVE', 'ball': (154, 367), 'platform': (90, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 440, 'status': 'GAME_ALIVE', 'ball': (161, 360), 'platform': (85, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 441, 'status': 'GAME_ALIVE', 'ball': (168, 353), 'platform': (80, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 442, 'status': 'GAME_ALIVE', 'ball': (175, 346), 'platform': (75, 400), 
'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 443, 'status': 'GAME_ALIVE', 'ball': (182, 339), 'platform': (70, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 444, 'status': 'GAME_ALIVE', 'ball': (189, 332), 'platform': (65, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 445, 'status': 'GAME_ALIVE', 'ball': (195, 325), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 446, 'status': 'GAME_ALIVE', 'ball': (188, 318), 'platform': (65, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 447, 'status': 'GAME_ALIVE', 'ball': (181, 311), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 448, 'status': 'GAME_ALIVE', 'ball': (174, 304), 'platform': (65, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 449, 'status': 'GAME_ALIVE', 'ball': (167, 297), 'platform': (70, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 450, 'status': 'GAME_ALIVE', 'ball': (160, 290), 'platform': (65, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 451, 'status': 'GAME_ALIVE', 'ball': (153, 283), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 452, 'status': 'GAME_ALIVE', 'ball': (146, 276), 'platform': (65, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 453, 'status': 'GAME_ALIVE', 'ball': (139, 269), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 454, 'status': 'GAME_ALIVE', 'ball': (132, 262), 'platform': (55, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 455, 'status': 'GAME_ALIVE', 'ball': (125, 255), 'platform': (50, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 456, 'status': 'GAME_ALIVE', 'ball': (118, 248), 'platform': (55, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 457, 'status': 'GAME_ALIVE', 'ball': (111, 241), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 458, 'status': 'GAME_ALIVE', 'ball': (104, 234), 'platform': (65, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 459, 'status': 'GAME_ALIVE', 'ball': (97, 227), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 460, 'status': 'GAME_ALIVE', 'ball': (90, 220), 'platform': (55, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 461, 'status': 'GAME_ALIVE', 'ball': (83, 213), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 462, 'status': 'GAME_ALIVE', 'ball': (76, 206), 'platform': (65, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 463, 'status': 'GAME_ALIVE', 'ball': (69, 199), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 464, 'status': 'GAME_ALIVE', 'ball': (62, 192), 'platform': (65, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 465, 'status': 'GAME_ALIVE', 'ball': (55, 185), 'platform': (70, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 466, 'status': 'GAME_ALIVE', 'ball': (48, 178), 'platform': (65, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 467, 'status': 'GAME_ALIVE', 'ball': (41, 171), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 468, 'status': 'GAME_ALIVE', 'ball': (34, 164), 'platform': (55, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 469, 'status': 'GAME_ALIVE', 'ball': (27, 157), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 470, 'status': 'GAME_ALIVE', 'ball': (20, 150), 'platform': (55, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 471, 'status': 'GAME_ALIVE', 'ball': (13, 
143), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 472, 'status': 'GAME_ALIVE', 'ball': (6, 136), 'platform': (55, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 473, 'status': 'GAME_ALIVE', 'ball': (0, 129), 'platform': (50, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 474, 'status': 'GAME_ALIVE', 'ball': (7, 122), 'platform': (55, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 475, 'status': 'GAME_ALIVE', 'ball': (14, 115), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 476, 'status': 'GAME_ALIVE', 'ball': (21, 108), 'platform': (55, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 477, 'status': 'GAME_ALIVE', 'ball': (28, 101), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 478, 'status': 'GAME_ALIVE', 'ball': (35, 94), 'platform': (55, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 479, 'status': 'GAME_ALIVE', 'ball': (42, 87), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 480, 'status': 'GAME_ALIVE', 'ball': (49, 80), 'platform': (65, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 481, 'status': 'GAME_ALIVE', 'ball': (56, 73), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 482, 'status': 'GAME_ALIVE', 'ball': (63, 66), 'platform': (55, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 483, 'status': 'GAME_ALIVE', 'ball': (70, 59), 'platform': (60, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 484, 'status': 'GAME_ALIVE', 'ball': (77, 52), 'platform': (55, 400), 'bricks': [(85, 50)], 'hard_bricks': []}, {'frame': 485, 'status': 'GAME_PASS', 'ball': (80, 45), 'platform': (60, 400), 'bricks': [], 'hard_bricks': []}]\n['SERVE_TO_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'NONE', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'NONE', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'NONE', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'NONE', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 
'MOVE_RIGHT', 'NONE', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'NONE', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'NONE', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'NONE', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'NONE', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'NONE', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'NONE', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'NONE', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'NONE', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'NONE', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'NONE', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'NONE', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'NONE', 'MOVE_RIGHT', 'NONE', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'NONE', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'NONE', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'NONE', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'NONE', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'NONE', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'NONE', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'NONE', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'NONE', 'MOVE_RIGHT', 
'MOVE_LEFT', 'MOVE_RIGHT', 'NONE', 'MOVE_LEFT', 'NONE', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_LEFT', 'MOVE_RIGHT', 'MOVE_LEFT', 'MOVE_RIGHT', None]\n"
],
[
"\"\"\"\nfor i in range(2, 103):#EASY1\n path = \"/Users/peggy/Downloads/log/\" + str(i) + \"mo.pickle\"\n file = open(path, \"rb\")\n data = pickle.load(file)\n game_info = game_info + data['ml']['scene_info']\n game_command = game_command + data['ml']['command']\n file.close()\n\"\"\"\nfor i in range(2, 6):#EASY1\n path = \"/Users/peggy/Documents/109-2(2-2)/Introduction to machine learning and its application to gaming/MLGame/games/arkanoid/log/third_EASY1_\" + str(i) + \".pickle\"\n file = open(path, \"rb\")\n data = pickle.load(file)\n game_info = game_info + data['ml']['scene_info']\n game_command = game_command + data['ml']['command']\n file.close()\nfor i in range(1, 6):#EASY2\n path = \"/Users/peggy/Documents/109-2(2-2)/Introduction to machine learning and its application to gaming/MLGame/games/arkanoid/log/third_EASY2_\" + str(i) + \".pickle\"\n file = open(path, \"rb\")\n data = pickle.load(file)\n game_info = game_info + data['ml']['scene_info']\n game_command = game_command + data['ml']['command']\n file.close()\nfor i in range(1, 6):#EASY3\n path = \"/Users/peggy/Documents/109-2(2-2)/Introduction to machine learning and its application to gaming/MLGame/games/arkanoid/log/third_EASY3_\" + str(i) + \".pickle\"\n file = open(path, \"rb\")\n data = pickle.load(file)\n game_info = game_info + data['ml']['scene_info']\n game_command = game_command + data['ml']['command']\n file.close()\nfor i in range(1, 11):#NORMAL1\n path = \"/Users/peggy/Documents/109-2(2-2)/Introduction to machine learning and its application to gaming/MLGame/games/arkanoid/log/third_NORM1_\" + str(i) + \".pickle\"\n file = open(path, \"rb\")\n data = pickle.load(file)\n game_info = game_info + data['ml']['scene_info']\n game_command = game_command + data['ml']['command']\n file.close()\nfor i in range(1, 11):#NORMAL2\n path = \"/Users/peggy/Documents/109-2(2-2)/Introduction to machine learning and its application to gaming/MLGame/games/arkanoid/log/third_NORM2_\" + str(i) + \".pickle\"\n file = open(path, \"rb\")\n data = pickle.load(file)\n game_info = game_info + data['ml']['scene_info']\n game_command = game_command + data['ml']['command']\n file.close()\nfor i in range(1, 11):#NORMAL3\n path = \"/Users/peggy/Documents/109-2(2-2)/Introduction to machine learning and its application to gaming/MLGame/games/arkanoid/log/third_NORM3_\" + str(i) + \".pickle\"\n file = open(path, \"rb\")\n data = pickle.load(file)\n game_info = game_info + data['ml']['scene_info']\n game_command = game_command + data['ml']['command']\n file.close()\nfor i in range(1, 41):#NORMAL5\n path = \"/Users/peggy/Documents/109-2(2-2)/Introduction to machine learning and its application to gaming/MLGame/games/arkanoid/log/third_NORM5_\" + str(i) + \".pickle\"\n file = open(path, \"rb\")\n data = pickle.load(file)\n game_info = game_info + data['ml']['scene_info']\n game_command = game_command + data['ml']['command']\n file.close()\n \nprint(len(game_info))\nprint(len(game_command))",
"141901\n141901\n"
]
],
[
[
"### 特徵整理",
"_____no_output_____"
]
],
[
[
"g = game_info[1]\n#gprev = game_info[i - 1]\n#x_velo = g['ball'][0] - gprev['ball'][0]\n#y_velo = g['ball'][1] - gprev['ball'][1]\n \n#feature = np.array([g['ball'][0], g['ball'][1], g['platform'][0] , 0, 0, 100])#x軸速率,沒有y軸 球上升還下降 板子往左還往右還不動\nfeature = np.array([g['ball'][0], g['ball'][1], g['platform'][0] +20 ,0,0,100])#mos code\nprint(feature)\n\nprint(game_command[1])\ngame_command[1] = 0",
"[ 93 395 95 0 0 100]\nMOVE_LEFT\n"
],
[
"\"\"\"\nfor i in range(2, len(game_info) - 1):\n g = game_info[i]#des_x\n gprev = game_info[i - 1]\n x_velo = g['ball'][0] - gprev['ball'][0]\n y_velo = g['ball'][1] - gprev['ball'][1]\n plat_velo = g['platform'][0] - gprev['platform'][0]\n \n if x_velo == 7 or x_velo == -7:\n des_x = 0\n if g['ball'][1]>gprev['ball'][1]: #下降中\n if g['ball'][0]>gprev['ball'][0]: #正在往右\n des_x=(400-g['ball'][1])+g['ball'][0] #預測球落下的位置 是球x座標加上(運動方向)球與盤子的距離\n else: #正在往左\n des_x=g['ball'][0]-(400-g['ball'][1]) #同上\n if g['ball'][1]<gprev['ball'][1]: #正在往上\n des_x= 80 #不預測位置\n #假設球是以(+-10, +- 7)的角度動的話\n else:\n if g['ball'][1]>gprev['ball'][1]: #下降中\n if g['ball'][0]>gprev['ball'][0]: #正在往右\n des_x = (400-g['ball'][1]) * 10 / 7 + g['ball'][0] #預測球落下的位置\n else: #正在往左\n des_x=g['ball'][0] - (400-g['ball'][1]) * 10 / 7 #同上\n if g['ball'][1]<gprev['ball'][1]: #正在往上\n des_x=100\n \n while des_x>200 or des_x<0:\n if des_x>200:\n des_x=(200-(des_x-200))\n else:\n des_x=-des_x\n \n feature = np.vstack((feature, [g['ball'][0], g['ball'][1], g['platform'][0], x_velo, y_velo, des_x]))#要加其他feature 可加板子跟球是不是同向\n if game_command[i] == \"NONE\": game_command[i] = 0\n elif game_command[i] == \"MOVE_LEFT\": game_command[i] = 1\n else: game_command[i] = 2\n \nanswer = np.array(game_command[1:-1])#answer跟板子有關\n\nprint(feature)\nprint(feature.shape)\nprint(answer)\nprint(answer.shape)\n\"\"\"\nfor i in range(2, len(game_info) - 1):\n g = game_info[i]\n g_last = game_info[i-1]\n \n des_vx = g['ball'][0] - g_last['ball'][0]\n des_vy = g['ball'][1] - g_last['ball'][1]\n plat_velo = g['platform'][0] - g_last['platform'][0]\n \n if des_vy > 0:\n if des_vx > 0:\n des_x = (400 - g['ball'][1]) + g['ball'][0]\n else:\n des_x= g['ball'][0] - (400 - g['ball'][1])\n if des_vy < 0:\n des_x = 100\n \n while des_x > 200 or des_x < 0:\n if des_x > 200:\n des_x = 200 - (des_x - 200)\n else: \n des_x = -des_x\n \n feature = np.vstack((feature, [g['ball'][0], g['ball'][1], g['platform'][0] + 20, des_vx, des_vy,des_x, plat_velo]))#改這行\\n\",\n if game_command[i] == \"NONE\": game_command[i] = 0\n elif game_command[i] == \"MOVE_LEFT\": game_command[i] = 1\n else: game_command[i] = 2 \n \nanswer = np.array(game_command[1:-1])\n \nprint(feature)\nprint(feature.shape)\nprint(answer)\nprint(answer.shape)",
"[[ 93 395 95 0 0 100]\n [ 86 388 90 -7 -7 100]\n [ 79 381 85 -7 -7 100]\n ...\n [181 385 150 -7 7 166]\n [174 392 145 -7 7 166]\n [170 399 150 -4 7 169]]\n(141899, 6)\n[0 1 1 ... 1 2 2]\n(141899,)\n"
]
],
[
[
"### KNN官方文件\nhttps://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html\n### 交叉驗證\nhttps://chih-sheng-huang821.medium.com/%E4%BA%A4%E5%8F%89%E9%A9%97%E8%AD%89-cross-validation-cv-3b2c714b18db",
"_____no_output_____"
]
],
[
[
"#資料劃分\nx_train, x_test, y_train, y_test = train_test_split(feature, answer, test_size=0.3, random_state=9)#從前面丟feature近來\n#參數區間\nparam_grid = {'n_neighbors':[1, 2, 3]}#板子 左 右 不動\n#交叉驗證 \ncv = StratifiedShuffleSplit(n_splits=2, test_size=0.3, random_state=12)\ngrid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=cv, verbose=10, n_jobs=-1) #n_jobs為平行運算的數量\ngrid.fit(x_train, y_train)\ngrid_predictions = grid.predict(x_test)\n\n#儲存\nfile = open('model_mosKNN_ianFILE.pickle', 'wb')\npickle.dump(grid, file)\nfile.close()",
"Fitting 2 folds for each of 3 candidates, totalling 6 fits\n"
]
],
[
[
"### f1-score\nhttps://medium.com/nlp-tsupei/precision-recall-f1-score%E7%B0%A1%E5%96%AE%E4%BB%8B%E7%B4%B9-f87baa82a47",
"_____no_output_____"
]
],
[
[
"#最佳參數\nprint(grid.best_params_)\n#預測結果\n#print(grid_predictions)\n#混淆矩陣\nprint(confusion_matrix(y_test, grid_predictions))\n#分類結果\nprint(classification_report(y_test, grid_predictions))",
"{'n_neighbors': 3}\n[[ 56 547 508]\n [ 679 13521 6515]\n [ 697 6355 13692]]\n precision recall f1-score support\n\n 0 0.04 0.05 0.04 1111\n 1 0.66 0.65 0.66 20715\n 2 0.66 0.66 0.66 20744\n\n accuracy 0.64 42570\n macro avg 0.45 0.45 0.45 42570\nweighted avg 0.65 0.64 0.64 42570\n\n"
]
],
[
[
"### 執行遊戲\npython MLGame.py -i knn.py -f 50 arkanoid NORMAL 3",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
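The record above (an MLGame Arkanoid notebook) trains a `KNeighborsClassifier` via `GridSearchCV` on logged game frames, pickles it, and then launches the game with `python MLGame.py -i knn.py -f 50 arkanoid NORMAL 3` without showing `knn.py` itself. A minimal sketch of such an agent follows; the `MLPlay` interface, the pickle filename, and the 6-column feature layout are assumptions inferred from the training cells and must match the installed MLGame version and the model that was actually saved.

```python
# knn.py -- hedged sketch of an MLGame agent that replays the pickled classifier.
import pickle
import numpy as np


class MLPlay:  # class name / method signatures assumed; check the MLGame version in use
    def __init__(self, *args, **kwargs):
        with open("model_mosKNN_ianFILE.pickle", "rb") as f:
            self.model = pickle.load(f)   # the GridSearchCV object saved in the notebook
        self.prev_ball = None

    def update(self, scene_info):
        if scene_info["status"] in ("GAME_OVER", "GAME_PASS"):
            return "RESET"
        if self.prev_ball is None:        # first usable frame: serve the ball
            self.prev_ball = scene_info["ball"]
            return "SERVE_TO_LEFT"

        bx, by = scene_info["ball"]
        vx = bx - self.prev_ball[0]
        vy = by - self.prev_ball[1]
        self.prev_ball = (bx, by)

        # Same falling-point estimate as the feature-engineering cell.
        if vy > 0:
            des_x = bx + (400 - by) if vx > 0 else bx - (400 - by)
        else:
            des_x = 100
        while des_x > 200 or des_x < 0:   # reflect off the side walls
            des_x = 400 - des_x if des_x > 200 else -des_x

        # Feature width must match whatever the model was fitted on.
        feature = np.array([[bx, by, scene_info["platform"][0] + 20, vx, vy, des_x]])
        move = int(self.model.predict(feature)[0])
        return {0: "NONE", 1: "MOVE_LEFT", 2: "MOVE_RIGHT"}[move]

    def reset(self):
        self.prev_ball = None
```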
ec95d6d6cbbb42a044fae643a1aecfe9c74ffb1c | 394,473 | ipynb | Jupyter Notebook | week_01/Seaborn.ipynb | xhguo86/spiced_academy_backup- | 7b65a94d0a03149bb9fc71e35a799074b4412925 | [
"MIT"
] | null | null | null | week_01/Seaborn.ipynb | xhguo86/spiced_academy_backup- | 7b65a94d0a03149bb9fc71e35a799074b4412925 | [
"MIT"
] | null | null | null | week_01/Seaborn.ipynb | xhguo86/spiced_academy_backup- | 7b65a94d0a03149bb9fc71e35a799074b4412925 | [
"MIT"
] | null | null | null | 1,280.756494 | 241,604 | 0.954395 | [
[
[
"import pandas as pd\nimport seaborn as sns\nfrom matplotlib import pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"df=pd.read_csv(\"pokemon.csv\",index_col=0)",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"sns.boxplot(data=df.iloc[:,4:8], orient=\"h\")\nplt.savefig('sns_boxplot.png', dpi=150)",
"_____no_output_____"
],
[
"sns.countplot(x=\"Type 1\",data=df,palette=\"rainbow\")\nplt.xticks(rotation=-45)\nplt.savefig('sns_countplot.png', dpi=150)",
"_____no_output_____"
],
[
"sns.kdeplot(df.Attack,df.Defense,shade=False,kde=True)\nsns.scatterplot(df['Attack'],df['Defense'], size=0.03, legend=False)\nplt.savefig('sns_kdeplot.png', dpi=150)",
"/home/kristian/anaconda3/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval\n/home/kristian/anaconda3/lib/python3.6/site-packages/matplotlib/contour.py:960: UserWarning: The following kwargs were not used by contour: 'kde'\n s)\n"
],
[
"sns.heatmap(df.corr(),annot = True)\nplt.savefig('sns_heatmap.png', dpi=150)",
"_____no_output_____"
],
[
"data = df[['Attack', 'Defense', 'Speed', 'HP', 'Legendary']]\nsns.pairplot(data, hue='Legendary', diag_kind=\"kde\", kind=\"scatter\", palette=\"husl\")\nplt.savefig('sns_pairplot.png', dpi=150)",
"/home/kristian/anaconda3/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval\n/home/kristian/anaconda3/lib/python3.6/site-packages/statsmodels/nonparametric/kde.py:488: RuntimeWarning: invalid value encountered in true_divide\n binned = fast_linbin(X, a, b, gridsize) / (delta * nobs)\n/home/kristian/anaconda3/lib/python3.6/site-packages/statsmodels/nonparametric/kdetools.py:34: RuntimeWarning: invalid value encountered in double_scalars\n FAC1 = 2*(np.pi*bw/RANGE)**2\n/home/kristian/anaconda3/lib/python3.6/site-packages/numpy/core/fromnumeric.py:83: RuntimeWarning: invalid value encountered in reduce\n return ufunc.reduce(obj, axis, dtype, out, **passkwargs)\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
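In the Seaborn record above, `sns.kdeplot(df.Attack, df.Defense, shade=False, kde=True)` passes an unsupported `kde=` keyword (which falls through to matplotlib's `contour`, hence the "kwargs were not used by contour" warning in the cell output) and relies on the old positional signature. With seaborn 0.11+ the same bivariate density-plus-scatter figure is written with keyword arguments; the column names below assume the same `pokemon.csv`.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("pokemon.csv", index_col=0)

# Keyword-based API (seaborn >= 0.11); `fill` replaces the deprecated `shade`.
sns.kdeplot(data=df, x="Attack", y="Defense", fill=False)
sns.scatterplot(data=df, x="Attack", y="Defense", s=10, legend=False)
plt.savefig("sns_kdeplot_modern.png", dpi=150)
```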
ec95d750da053fbf5fa4212e6e6f658230ddbdc5 | 7,285 | ipynb | Jupyter Notebook | Notebooks/.ipynb_checkpoints/Index-checkpoint.ipynb | WavTiRep/wtest | 95ca225c4b5b9a16a898195705f6d1953b12a9ac | [
"MIT-0"
] | 90 | 2016-09-19T04:16:26.000Z | 2021-09-16T12:56:35.000Z | Notebooks/.ipynb_checkpoints/Index-checkpoint.ipynb | WavTiRep/wtest | 95ca225c4b5b9a16a898195705f6d1953b12a9ac | [
"MIT-0"
] | null | null | null | Notebooks/.ipynb_checkpoints/Index-checkpoint.ipynb | WavTiRep/wtest | 95ca225c4b5b9a16a898195705f6d1953b12a9ac | [
"MIT-0"
] | 37 | 2016-10-05T08:42:42.000Z | 2021-09-23T22:38:16.000Z | 26.587591 | 149 | 0.5849 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ec95dc3f7ecfad2957a0fbea10ced2a5854a1195 | 116,087 | ipynb | Jupyter Notebook | matplotlib_examples/scatter_plots.ipynb | zamaoxiaodao/cs109-content | 7dda6fa5f331094897e657b843c476036f21154b | [
"MIT"
] | 1,428 | 2015-01-01T21:47:45.000Z | 2022-03-26T11:32:48.000Z | matplotlib_examples/scatter_plots.ipynb | mohitgujarathi14/content | caffc21c8f7c758c1884852ed023d29dccea063f | [
"MIT"
] | 5 | 2016-01-11T15:07:12.000Z | 2021-11-04T02:07:11.000Z | matplotlib_examples/scatter_plots.ipynb | mohitgujarathi14/content | caffc21c8f7c758c1884852ed023d29dccea063f | [
"MIT"
] | 1,695 | 2015-01-01T16:48:37.000Z | 2022-03-12T13:23:20.000Z | 866.320896 | 72,631 | 0.937719 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ec95e63b4410627d88189da59b09842c59f3563e | 10,615 | ipynb | Jupyter Notebook | notebooks/test/db_check.ipynb | maragraziani/multitask_adversarial | 149a6becd225b0d8c498e5c69cf1d5a47ea37ab6 | [
"MIT"
] | 1 | 2022-01-31T02:26:55.000Z | 2022-01-31T02:26:55.000Z | notebooks/test/db_check.ipynb | maragraziani/multitask_adversarial | 149a6becd225b0d8c498e5c69cf1d5a47ea37ab6 | [
"MIT"
] | 5 | 2021-08-23T09:25:16.000Z | 2022-03-12T01:00:50.000Z | notebooks/test/db_check.ipynb | maragraziani/multitask_adversarial | 149a6becd225b0d8c498e5c69cf1d5a47ea37ab6 | [
"MIT"
] | null | null | null | 40.670498 | 141 | 0.585681 | [
[
[
"## Loading OS libraries to configure server preferences\nimport os\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\nimport warnings\nwarnings.filterwarnings(\"ignore\")\nimport setproctitle\nSERVER_NAME = 'ultrafast'\nEXPERIMENT_TYPE='test_baseline'\nimport time\nimport sys\nimport shutil\n## Adding PROCESS_UC1 utilities\nsys.path.append('../../lib/TASK_2_UC1/')\nfrom models import *\nfrom util import otsu_thresholding\nfrom extract_xml import *\nfrom functions import * \nsys.path.append('../../lib/')\nfrom mlta import *\nimport math\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.metrics import roc_curve, auc\n\nconfig = tf.ConfigProto()\nconfig.gpu_options.allow_growth = True\nconfig.gpu_options.visible_device_list = '0'\nkeras.backend.set_session(tf.Session(config=config))\n\nverbose=1 \n\ncam16 = hd.File('/home/mara/adversarialMICCAI/data/ultrafast/cam16_500/patches.hdf5', 'r', libver='latest', swmr=True)\nall500 = hd.File('/home/mara/adversarialMICCAI/data/ultrafast/all500/patches.hdf5', 'r', libver='latest', swmr=True)\nextra17 = hd.File('/home/mara/adversarialMICCAI/data/ultrafast/extra17/patches.hdf5', 'r', libver='latest', swmr=True)\ntumor_extra17=hd.File('/home/mara/adversarialMICCAI/data/ultrafast/1129-1155/patches.hdf5', 'r', libver='latest', swmr=True)\ntest2 = hd.File('/mnt/nas2/results/IntermediateResults/Camelyon/ultrafast/test_data2/patches.hdf5', 'r', libver='latest', swmr=True)\npannuke= hd.File('/mnt/nas2/results/IntermediateResults/Camelyon/pannuke/patches_fix.hdf5', 'r', libver='latest', swmr=True)\n\nglobal data\ndata={'cam16':cam16,'all500':all500,'extra17':extra17, 'tumor_extra17':tumor_extra17, 'test_data2': test2, 'pannuke':pannuke}\nglobal concept_db\nconcept_db = hd.File('/mnt/nas2/results/IntermediateResults/Mara/MICCAI2020/MELBA_normalized_concepts_fix.hd', 'r')\n# Note: nuclei_concepts not supported yet\nglobal nuclei_concepts\nnuclei_concepts=hd.File('/mnt/nas2/results/IntermediateResults/Mara/MICCAI2020/normalized_nuclei_concepts_db_new_try_def.hdf5','r')\n\n#SYSTEM CONFIGS \nCONFIG_FILE = 'doc/config.cfg'\nCOLOR = True\nBATCH_SIZE = 32\n\nseed=1\nprint seed\n\n# SET PROCESS TITLE\nsetproctitle.setproctitle('{}'.format(EXPERIMENT_TYPE))\n\n# SET SEED\nnp.random.seed(seed)\ntf.set_random_seed(seed)\n\n# DATA SPLIT CSVs \ntrain_csv=open('/mnt/nas2/results/IntermediateResults/Camelyon/train_shuffle.csv', 'r') # How is the encoding of .csv files ?\nval_csv=open('/mnt/nas2/results/IntermediateResults/Camelyon/val_shuffle.csv', 'r')\ntest_csv=open('/mnt/nas2/results/IntermediateResults/Camelyon/test_shuffle.csv', 'r')\ntrain_list=train_csv.readlines()\nval_list=val_csv.readlines()\ntest_list=test_csv.readlines()\ntest2_csv = open('/mnt/nas2/results/IntermediateResults/Camelyon/test2_shuffle.csv', 'r')\ntest2_list=test2_csv.readlines()\ntest2_csv.close()\ntrain_csv.close()\nval_csv.close()\ntest_csv.close()\n#data_csv=open('/mnt/nas2/results/IntermediateResults/Camelyon/data_shuffle.csv', 'r')\n#data_csv=open('./data/train.csv', 'r')\ndata_csv=open('/mnt/nas2/results/IntermediateResults/Camelyon/pannuke/pannuke_train_shuffled.csv', 'r')\ndata_list=data_csv.readlines()\ndata_csv.close()\n\n# STAIN NORMALIZATION\ndef get_normalizer(patch, save_folder=''):\n normalizer = ReinhardNormalizer()\n normalizer.fit(patch)\n np.save('{}/normalizer'.format(save_folder),normalizer)\n np.save('{}/normalizing_patch'.format(save_folder), patch)\n print('Normalisers saved to disk.')\n return normalizer\n\ndef normalize_patch(patch, normalizer):\n return 
np.float64(normalizer.transform(np.uint8(patch)))",
"_____no_output_____"
],
[
"# LOAD DATA NORMALIZER\nglobal normalizer\ndb_name, entry_path, patch_no = get_keys(data_list[0])\nnormalization_reference_patch = data[db_name][entry_path][patch_no]\nnormalizer = get_normalizer(normalization_reference_patch, save_folder='./')\n# Retrieve Concept Measures\ndef get_concept_measure(db_name, entry_path, patch_no, measure_type=''):\n ### note: The measures in the file should have been scaled beforehand\n # to have zero mean and unit std\n if db_name=='pannuke':\n #import pdb; pdb.set_trace()\n try:\n cm=concept_db[entry_path+' /'+measure_type][0]\n #print 'pannuke ', cm\n return cm\n except:\n print \"[ERR]: {}, {}, {}, {}\".format(db_name, entry_path, patch_no, measure_type)\n print entry_path+' /'+measure_type\n return 1.\n else:\n try: \n cm=concept_db[db_name+'/'+entry_path+'/'+str(patch_no)+'/'+measure_type][0]\n #print 'other ', cm\n return cm\n except:\n print \"[ERR]: {}, {}, {}, {}\".format(db_name, entry_path, patch_no, measure_type)\n #error_log.write('[get_concept_measure] {}, {}, {}, {}'.format(db_name, entry_path, patch_no, measure_type))\n return 1.\ndef get_segmented_concept_measure(db_name, entry_path, patch_no, measure_type=''):\n ### note: The measures in the file should have been scaled beforehand\n # to have zero mean and unit std\n try:\n cm = nuclei_concepts[db_name+'/'+entry_path+'/'+str(patch_no)+'/'+measure_type][0]\n except:\n #error_log.write('[get_segmented_concept_measure] {}, {}, {}, {}'.format(db_name, entry_path, patch_no, measure_type))\n print \"[ERROR] Issue retreiving concept measure for {}, {}, {}, {}\".format(db_name, entry_path, patch_no, measure_type)\n return 1.\n\n# BATCH GENERATORS\ndef get_batch_data(patch_list, batch_size=32):\n num_samples=len(patch_list)\n while True:\n offset = 0\n for offset in range(0,num_samples, batch_size):\n batch_x = []\n batch_y = []\n batch_contrast=[]\n batch_samples=patch_list[offset:offset+batch_size]\n for line in batch_samples[:(num_samples//batch_size)*batch_size]:\n db_name, entry_path, patch_no = get_keys(line)\n patch=data[db_name][entry_path][patch_no]\n patch=normalize_patch(patch, normalizer)\n patch=keras.applications.inception_v3.preprocess_input(patch) \n label = get_class(line, entry_path) \n batch_x.append(patch)\n batch_y.append(label)\n # ONES\n #batch_ones.append(1.)\n # NOISE\n #batch_noise.append(np.random.normal(0.))\n # CONCEPT = contrast\n batch_contrast.append(get_concept_measure(db_name, entry_path, patch_no, measure_type='norm_contrast'))\n # CONCEPT = domain\n #batch_domain.append(get_domain(db_name, entry_path))\n # CONCEPT = nuclei area\n #batch_n_area.append(get_segmented_concept_measure(db_name, entry_path, patch_no, measure_type='area'))\n #batch_contrast.append(get_segmented_concept_measure(db_name, entry_path, patch_no, measure_type='area'))\n # CONCEPT = nuclei counts\n #batch_n_count.append(get_segmented_concept_measure(db_name, entry_path, patch_no, measure_type='count'))\n #batch_contrast.append(get_segmented_concept_measure(db_name, entry_path, patch_no, measure_type='count'))\n #batch_domain=keras.utils.to_categorical(batch_domain, num_classes=6)\n batch_x=np.asarray(batch_x, dtype=np.float32)\n batch_y=np.asarray(batch_y, dtype=np.float32)\n batch_cm=np.asarray(batch_contrast, dtype=np.float32) #ones(len(batch_y), dtype=np.float32)\n #batch_cm=np.ones(len(batch_y), dtype=np.float32)\n yield [batch_x, batch_y, batch_cm], None\n ",
"_____no_output_____"
],
[
"train_generator=get_batch_data(data_list, batch_size=BATCH_SIZE)",
"_____no_output_____"
],
[
"data_list[:10]",
"_____no_output_____"
],
[
"[x,y,cm],_=train_generator.next()",
"_____no_output_____"
],
[
"concept_db['extra17']",
"_____no_output_____"
],
[
"cm",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
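The record above streams histopathology patches out of HDF5 files with a hand-rolled infinite generator (`get_batch_data`). A `keras.utils.Sequence` is the more robust idiom for the same job (length-aware and safe with shuffling/multiprocessing); the sketch below mirrors only the image/label part with a hypothetical key layout, leaving out the record's stain normalization and concept measures for brevity.

```python
import h5py
import numpy as np
from tensorflow import keras


class PatchSequence(keras.utils.Sequence):
    """Index-based batch loader over an HDF5 patch database (hypothetical layout)."""

    def __init__(self, h5_path, entries, labels, batch_size=32):
        self.h5 = h5py.File(h5_path, "r")   # e.g. '.../patches.hdf5'
        self.entries = entries              # list of (entry_path, patch_no) pairs
        self.labels = np.asarray(labels, dtype=np.float32)
        self.batch_size = batch_size

    def __len__(self):
        return len(self.entries) // self.batch_size

    def __getitem__(self, idx):
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        patches = [self.h5[path][no] for path, no in self.entries[sl]]
        x = keras.applications.inception_v3.preprocess_input(
            np.asarray(patches, dtype=np.float32))
        return x, self.labels[sl]
```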
ec95ebba5464425671f8c1842fe52688d0ebeaba | 4,480 | ipynb | Jupyter Notebook | 31-problem-begin_types_marks_and_encoding_channels.ipynb | hanisaf/advanced-data-management-and-analytics-spring2021 | 35178f14b942f2accbcfcbaa5a27e134a9a9f96b | [
"MIT"
] | 6 | 2021-01-21T17:53:34.000Z | 2021-04-20T17:37:50.000Z | 31-problem-begin_types_marks_and_encoding_channels.ipynb | hanisaf/advanced-data-management-and-analytics-spring2021 | 35178f14b942f2accbcfcbaa5a27e134a9a9f96b | [
"MIT"
] | null | null | null | 31-problem-begin_types_marks_and_encoding_channels.ipynb | hanisaf/advanced-data-management-and-analytics-spring2021 | 35178f14b942f2accbcfcbaa5a27e134a9a9f96b | [
"MIT"
] | 13 | 2021-01-20T16:11:55.000Z | 2021-04-28T21:38:07.000Z | 27.654321 | 155 | 0.413839 | [
[
[
"import pandas as pd\nimport altair as alt",
"_____no_output_____"
],
[
"data = pd.read_excel('data/eastmank.xlsx')\ndata.Year = data.Year.apply(str)",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
]
],
[
[
"- Create a line chart on which the x is Year and y is Act-Revenue\n- For better presentation label Year as T (temporal type)\n- Plot another line of the same chart that shows the Real-Revenue, make this line red to differentiate between it and the previous line\n- Let us visualize this difference between real and actual revenue with an area mark. Encode x as year, y as real revenue and y2 as actual revenue\n- Let us visualize it again with a bar chart where x is the year, y is the real revenue and y2 is the actual revenue\n- Use color to differentiate bars based on if real revenue is smaller than actual revenue\n- Add a tooltip showing the actual difference between real revenue and actual revenue",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
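The last markdown cell of the record above lists its Altair exercises in prose only. A hedged sketch of the first few steps (actual-revenue line by year, an overlaid red real-revenue line, and an area mark spanning the gap) is below; it assumes `data/eastmank.xlsx` really contains `Year`, `Act-Revenue`, and `Real-Revenue` columns, as the setup cells suggest.

```python
import altair as alt
import pandas as pd

data = pd.read_excel("data/eastmank.xlsx")
data["Year"] = data["Year"].astype(str)

base = alt.Chart(data).encode(x="Year:T")                     # Year as temporal

actual = base.mark_line().encode(y="Act-Revenue:Q")
real = base.mark_line(color="red").encode(y="Real-Revenue:Q")
gap = base.mark_area(opacity=0.3).encode(y="Real-Revenue:Q", y2="Act-Revenue")

gap + actual + real                                           # layered chart
```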
ec95ede9c3750f6c0974dd8b00b003f3f642ce92 | 5,576 | ipynb | Jupyter Notebook | Day2_materials/notebooks/exploring_attpc_data.ipynb | davis9ja/MachineLearningMSU | 01e700f3be7a57fc44e214c4260ad377e3898575 | [
"CC0-1.0"
] | null | null | null | Day2_materials/notebooks/exploring_attpc_data.ipynb | davis9ja/MachineLearningMSU | 01e700f3be7a57fc44e214c4260ad377e3898575 | [
"CC0-1.0"
] | null | null | null | Day2_materials/notebooks/exploring_attpc_data.ipynb | davis9ja/MachineLearningMSU | 01e700f3be7a57fc44e214c4260ad377e3898575 | [
"CC0-1.0"
] | null | null | null | 31.502825 | 215 | 0.58967 | [
[
[
"# Further Exploration of AT-TPC Data\n\nNow you will have the opportunity to further explore the Argon 46 data from the AT-TPC. This will be a much more open-ended opportunity for you to play with the data and try new things.\n\nBefore getting started, make sure you are using a GPU-enabled runtime in Google Colab. Go to \"Runtime\" $\\rightarrow$ \"Change runtime type\", then make sure \"GPU\" is selected for \"Hardware accelerator\".",
"_____no_output_____"
],
[
"## Setup\n\nThis is where you can import any Python libraries that you may want to use.",
"_____no_output_____"
]
],
[
[
"import os\n\nimport tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport h5py\n\n# This is simply an alias for convenience\nlayers = tf.keras.layers\n\n# Prevent TensorFlow from showing us deprecation warnings\ntf.logging.set_verbosity(tf.logging.ERROR)",
"_____no_output_____"
]
],
[
[
"We also define some utility functions that will be helpful.",
"_____no_output_____"
]
],
[
[
"def get_attpc_class(label):\n \"\"\"Gets the class name for a given label.\n \n Arguments:\n label (int): The integer target label.\n \n Returns:\n The name of the class that corresponds to the given label.\n \"\"\"\n return ['proton', 'carbon', 'junk'][label]\n\ndef load_attpc_data():\n \"\"\"Loads in the AT-TPC data.\n \n Returns:\n A tuple of the form ((real_features, real_targets), (simulated_features, simulated_targets))\n \"\"\"\n simulated_data_origin = 'https://github.com/CompPhysics/MachineLearningMSU/raw/master/Day2_materials/data/simulated-attpc-events.h5'\n real_data_origin = 'https://github.com/CompPhysics/MachineLearningMSU/raw/master/Day2_materials/data/real-attpc-events.h5'\n \n simulated_path = tf.keras.utils.get_file('simulated-attpc-data.h5', origin=simulated_data_origin)\n real_path = tf.keras.utils.get_file('real-attpc-data.h5', origin=real_data_origin)\n \n with h5py.File(simulated_path, 'r') as h5:\n simulated_features = h5['features'][:]\n simulated_targets = h5['targets'][:]\n \n with h5py.File(real_path, 'r') as h5:\n real_features = h5['features'][:]\n real_targets = h5['targets'][:]\n \n return (real_features, real_targets), (simulated_features, simulated_targets)",
"_____no_output_____"
]
],
[
[
"## Loading the AT-TPC data\n\nWe load in the real and simulated AT-TPC data below.",
"_____no_output_____"
]
],
[
[
"(real_features, real_targets), (simulated_features, simulated_targets) = load_attpc_data()",
"_____no_output_____"
]
],
[
[
"If running this notebook on Google Colab, you will not be able to fit all 50,000 simulated events in RAM after they have been normalized. Run the cell below to use only 10,000.",
"_____no_output_____"
]
],
[
[
"sim_features = sim_features[:10000]\nsim_targets = sim_targets[:10000]",
"_____no_output_____"
]
],
[
[
"## How to proceed\n\nWe have provided you with the data, and now you can do with it as you wish. Below is a list of suggestions for things you can try, and you can work off of the CNN notebook from the earlier lecture.\n\n * Try to improve the results of the transfer learning problem from earlier.\n * Perform hyperparameter tuning.\n * Train on more of the simulated data.\n * Experiment with different network architectures (add dropout, change hidden layers, etc).\n * Rather than freezing the convolutional base of the VGG16 model, fine-tune the convolutional layers by training the entire network.\n * Consider a different learning task: train on real data and test on real data (this should get very good results).\n * Build and train a CNN from scratch.\n \nModel your workflow on the previous CNN notebook. Good luck!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
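The "How to proceed" cell in the record above suggests freezing or fine-tuning a VGG16 base but leaves the model definition to the reader. A minimal sketch of a frozen-base transfer-learning classifier in `tf.keras` follows; the 128×128×3 input shape is an assumption (the AT-TPC events would first have to be rendered or resized to 3-channel images), while the three classes come from the record's `get_attpc_class` helper.

```python
import tensorflow as tf


def build_vgg16_classifier(input_shape=(128, 128, 3), n_classes=3, freeze_base=True):
    base = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = not freeze_base        # False => plain transfer learning

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Fine-tuning variant: pass freeze_base=False and keep the small learning rate.
model = build_vgg16_classifier(freeze_base=True)
model.summary()
```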
ec95f6865262d3d46a78a2929b1fcb84e4923b42 | 139,514 | ipynb | Jupyter Notebook | src/tutorial_01_fashion_mnist.ipynb | fumihachi94/docker-tensorflow | ba5028c75c7295b9421ca1db19f46da119aabbd9 | [
"Apache-2.0"
] | null | null | null | src/tutorial_01_fashion_mnist.ipynb | fumihachi94/docker-tensorflow | ba5028c75c7295b9421ca1db19f46da119aabbd9 | [
"Apache-2.0"
] | null | null | null | src/tutorial_01_fashion_mnist.ipynb | fumihachi94/docker-tensorflow | ba5028c75c7295b9421ca1db19f46da119aabbd9 | [
"Apache-2.0"
] | null | null | null | 265.235741 | 57,332 | 0.914009 | [
[
[
"# Tutorial#1 : Fashion MNIST の画像分類\n\n公式チュートリアルの内容になります。\\\n[はじめてのニューラルネットワーク:分類問題の初歩 | TensorFlow Core](https://www.tensorflow.org/tutorials/keras/classification?hl=ja)",
"_____no_output_____"
]
],
[
[
"%%html\n<style>table {float:left}</style>\n<!-- 表を左寄せにするためのコマンド -->",
"_____no_output_____"
],
[
"!pip install -q --upgrade pip\n!pip install -q pillow",
"_____no_output_____"
],
[
"import tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom PIL import Image",
"_____no_output_____"
],
[
"print('tensorflow ver.', tf.__version__)",
"tensorflow ver. 2.1.0\n"
],
[
"fashion_mnist = tf.keras.datasets.fashion_mnist\n(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()",
"Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz\n32768/29515 [=================================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz\n26427392/26421880 [==============================] - 2s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz\n8192/5148 [===============================================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz\n4423680/4422102 [==============================] - 0s 0us/step\n"
],
[
"print('----訓練画像のプロパティ----')\nprint('クラス :', type(train_images))\nprint('データサイズ:', train_images.shape)\nprint('データ型 :', train_images.dtype)\nprint('データ範囲 :', train_images.min(), '-', train_images.max())\nprint('ラベル範囲 :', train_labels.min(), '-', train_labels.max())\nu, count = np.unique(train_labels, return_counts=True)\nprint('データラベル:', u)\nprint('ラベル頻度 :', count)\nprint('\\n----評価画像のプロパティ----')\nprint('データサイズ:', test_images.shape)",
"----訓練画像のプロパティ----\nクラス : <class 'numpy.ndarray'>\nデータサイズ: (60000, 28, 28)\nデータ型 : uint8\nデータ範囲 : 0 - 255\nラベル範囲 : 0 - 9\nデータラベル: [0 1 2 3 4 5 6 7 8 9]\nラベル頻度 : [6000 6000 6000 6000 6000 6000 6000 6000 6000 6000]\n\n----評価画像のプロパティ----\nデータサイズ: (10000, 28, 28)\n"
],
[
"class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', \n 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']",
"_____no_output_____"
]
],
[
[
"Fashion MNIST のデータラベル一覧\n\n|Label|\tClass|\n|-:-|-:-|\n|0|\tT-shirt/top|\n|1|\tTrouser|\n|2|\tPullover|\n|3|\tDress|\n|4|\tCoat|\n|5|\tSandal|\n|6|\tShirt|\n|7|\tSneaker|\n|8|\tBag|\n|9|\tAnkle boot|",
"_____no_output_____"
],
[
"どんな画像が入っているか確認",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(10,10))\nfor i in range(25):\n plt.subplot(5,5,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(train_images[i], cmap=plt.cm.binary)\n plt.xlabel(class_names[train_labels[i]])\nplt.show()",
"_____no_output_____"
]
],
[
[
"# 入力データの正規化・確認データの確保\n\n- 入力画像の画素を0-1で正規化する\n- 5000枚を学習訓練中の評価に利用するデータに用いる",
"_____no_output_____"
]
],
[
[
"train_images, test_images = train_images / 255.0, test_images / 255.0\ntrain_images, valid_images = np.split(train_images, [55000])\ntrain_labels, valid_labels = np.split(train_labels, [55000])",
"_____no_output_____"
]
],
[
[
"# モデルの構築\n- Flatten : 一次元配列に変換\n\n\n- Dense:全結合層、活性化関数を指定\n\n\n- Dropout: dropout率を指定。訓練の間に要素の20%のニューロンがランダムにドロップアウトされることを表す。\n\n※2つめのDense層では、合計が1になる10個の確率配列を返す",
"_____no_output_____"
]
],
[
[
"model = tf.keras.models.Sequential([\n tf.keras.layers.Flatten(input_shape=(28,28)),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(10, activation='softmax')\n], name='tf_tutorial_model')\n\nmodel.summary()",
"Model: \"tf_tutorial_model\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nflatten (Flatten) (None, 784) 0 \n_________________________________________________________________\ndense (Dense) (None, 128) 100480 \n_________________________________________________________________\ndropout (Dropout) (None, 128) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 10) 1290 \n=================================================================\nTotal params: 101,770\nTrainable params: 101,770\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"# モデルのコンパイル\n\n- `optimizer` (オプティマイザ): モデルが見ているデータと、損失関数の値から、どのようにモデルを更新するかを決定\n \n \n- `loss` (損失関数): 損失関数を指定。損失関数とは訓練中にモデルがどれくらい正確かを測定するもので、この値を最小化するようにしてパラメータを学習させる\n \n \n- `metrics` (メトリクス): 訓練とテストのステップを監視するのに使用される。`accuracy`の場合、画像が正しく分類された比率を使用する。",
"_____no_output_____"
]
],
[
[
"model.compile(optimizer='adam', \n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"# モデルの訓練・評価",
"_____no_output_____"
]
],
[
[
"fit = model.fit(train_images, train_labels, epochs=5, verbose=1, validation_data=(valid_images, valid_labels))\n\ntest_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)\n\nprint('\\nTest accuracy:', test_acc)",
"Train on 55000 samples, validate on 5000 samples\nEpoch 1/5\n55000/55000 [==============================] - 2s 40us/sample - loss: 0.2786 - accuracy: 0.8956 - val_loss: 0.3117 - val_accuracy: 0.8908\nEpoch 2/5\n55000/55000 [==============================] - 2s 41us/sample - loss: 0.2726 - accuracy: 0.8979 - val_loss: 0.3115 - val_accuracy: 0.8904\nEpoch 3/5\n55000/55000 [==============================] - 2s 40us/sample - loss: 0.2666 - accuracy: 0.9006 - val_loss: 0.3282 - val_accuracy: 0.8836\nEpoch 4/5\n55000/55000 [==============================] - 2s 42us/sample - loss: 0.2599 - accuracy: 0.9025 - val_loss: 0.3221 - val_accuracy: 0.8834\nEpoch 5/5\n55000/55000 [==============================] - 2s 38us/sample - loss: 0.2588 - accuracy: 0.9029 - val_loss: 0.3221 - val_accuracy: 0.8840\n10000/10000 - 0s - loss: 0.3360 - accuracy: 0.8813\n\nTest accuracy: 0.8813\n"
]
],
[
[
"# 画像の分類予測\n\nモデルの訓練が終了したので、このモデルを利用して画像の分類予測が行えるようになりました。",
"_____no_output_____"
]
],
[
[
"predictions = model.predict(test_images)",
"_____no_output_____"
],
[
"print('Prediction of each labels on test image #1')\nplt.figure(figsize=(2,2))\nplt.imshow(test_images[0], cmap=plt.cm.binary)\nplt.xlabel('test image #1 : ' + class_names[test_labels[0]])\nplt.show()\n\nplt.figure()\nplt.bar(class_names, predictions[0])\nplt.xticks(rotation=90)\nplt.show()\n\nfor i in range(10):\n print(class_names[i], ':', predictions[0][i])",
"Prediction of each labels on test image #1\n"
],
[
"def plot_image(i, predictions_array, true_label, img):\n predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n\n plt.imshow(img, cmap=plt.cm.binary)\n\n predicted_label = np.argmax(predictions_array)\n if predicted_label == true_label:\n color = 'blue'\n else:\n color = 'red'\n\n plt.xlabel(\"{} {:2.0f}% ({})\".format(class_names[predicted_label],\n 100*np.max(predictions_array),\n class_names[true_label]),\n color=color)\n\ndef plot_value_array(i, predictions_array, true_label):\n predictions_array, true_label = predictions_array[i], true_label[i]\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n thisplot = plt.bar(range(10), predictions_array, color=\"#777777\")\n plt.ylim([0, 1]) \n predicted_label = np.argmax(predictions_array)\n\n thisplot[predicted_label].set_color('red')\n thisplot[true_label].set_color('blue')",
"_____no_output_____"
],
[
"# X個のテスト画像、予測されたラベル、正解ラベルを表示します。\n# 正しい予測は青で、間違った予測は赤で表示しています。\nnum_rows = 5\nnum_cols = 3\nnum_images = num_rows*num_cols\nplt.figure(figsize=(2*2*num_cols, 2*num_rows))\nfor i in range(num_images):\n plt.subplot(num_rows, 2*num_cols, 2*i+1)\n plot_image(i, predictions, test_labels, test_images)\n plt.subplot(num_rows, 2*num_cols, 2*i+2)\n plot_value_array(i, predictions, test_labels)\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
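The Fashion-MNIST tutorial in the record above stops at batch predictions; the usual follow-up in the same tutorial series is classifying a single image, which needs an explicit batch axis because `model.predict` expects a batch. A short continuation cell, reusing `model`, `test_images`, and `class_names` from the record:

```python
import numpy as np

img = test_images[0]                      # shape (28, 28)
img_batch = np.expand_dims(img, 0)        # shape (1, 28, 28) -- batch of one

single_pred = model.predict(img_batch)    # shape (1, 10): one probability per class
label = int(np.argmax(single_pred[0]))
print(class_names[label], float(np.max(single_pred[0])))
```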
ec960a83d81814bfd774035dece6d054ed3ab1a0 | 112,592 | ipynb | Jupyter Notebook | notebooks/11_nonlinear_features.ipynb | raphaelvallat/yasa | 4d23501a77e16d878779250706f16df4f5eb6296 | [
"BSD-3-Clause"
] | 187 | 2019-02-02T06:57:05.000Z | 2022-03-28T17:42:18.000Z | notebooks/11_nonlinear_features.ipynb | raphaelvallat/yasa | 4d23501a77e16d878779250706f16df4f5eb6296 | [
"BSD-3-Clause"
] | 51 | 2019-05-27T08:51:24.000Z | 2022-03-17T20:17:18.000Z | notebooks/11_nonlinear_features.ipynb | raphaelvallat/yasa | 4d23501a77e16d878779250706f16df4f5eb6296 | [
"BSD-3-Clause"
] | 48 | 2019-03-12T11:49:40.000Z | 2022-03-20T17:32:41.000Z | 202.868468 | 70,104 | 0.889308 | [
[
[
"# Non-linear features\n\nThis notebook demonstrates how to use YASA to calculate epoch-per-epoch non-linear features of a full-night single-channel EEG recording.\n\nPlease make sure to install the latest version of YASA first by typing the following line in your terminal or command prompt:\n\n`pip install --upgrade yasa`\n\nIn addition, you will also need to install the [AntroPy](https://github.com/raphaelvallat/antropy) package: `pip install --upgrade antropy`",
"_____no_output_____"
]
],
[
[
"import yasa\nimport numpy as np\nimport pandas as pd\nimport antropy as ant\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(font_scale=1.2)\n\n# Load EEG data\nf = np.load('data_full_6hrs_100Hz_Cz+Fz+Pz.npz')\ndata, ch_names = f['data'], f['chan']\nsf = 100.\ntimes = np.arange(data.size) / sf\n\n# Keep only Cz\ndata = data[0, :]\nprint(data.shape, np.round(data[0:5], 3))",
"/Users/raphael/.pyenv/versions/3.8.3/lib/python3.8/site-packages/outdated/utils.py:14: OutdatedCheckFailedWarning: Failed to check for latest version of package.\nSet the environment variable OUTDATED_RAISE_EXCEPTION=1 for a full traceback.\nSet the environment variable OUTDATED_IGNORE=1 to disable these warnings.\n return warn(\n"
],
[
"# Load the hypnogram data\nhypno = np.loadtxt('data_full_6hrs_100Hz_hypno_30s.txt').astype(int)\nprint(hypno.shape, 'Unique values =', np.unique(hypno))",
"(720,) Unique values = [0 1 2 3 4]\n"
],
[
"# Convert the EEG data to 30-sec data\ntimes, data_win = yasa.sliding_window(data, sf, window=30)\n\n# Convert times to minutes\ntimes /= 60\n\ndata_win.shape",
"_____no_output_____"
]
],
[
[
"## Calculate non-linear features",
"_____no_output_____"
]
],
[
[
"from numpy import apply_along_axis as apply\n\ndf_feat = {\n # Entropy\n 'perm_entropy': apply(ant.perm_entropy, axis=1, arr=data_win, normalize=True),\n 'svd_entropy': apply(ant.svd_entropy, 1, data_win, normalize=True),\n 'sample_entropy': apply(ant.sample_entropy, 1, data_win),\n # Fractal dimension\n 'dfa': apply(ant.detrended_fluctuation, 1, data_win),\n 'petrosian': apply(ant.petrosian_fd, 1, data_win),\n 'katz': apply(ant.katz_fd, 1, data_win),\n 'higuchi': apply(ant.higuchi_fd, 1, data_win),\n}\n\ndf_feat = pd.DataFrame(df_feat)\ndf_feat.head()",
"_____no_output_____"
],
[
"def lziv(x):\n \"\"\"Binarize the EEG signal and calculate the Lempel-Ziv complexity.\n \"\"\"\n return ant.lziv_complexity(x > x.mean(), normalize=True)\n\ndf_feat['lziv'] = apply(lziv, 1, data_win)",
"_____no_output_____"
]
],
[
[
"## Add classic spectral power",
"_____no_output_____"
]
],
[
[
"from scipy.signal import welch\nfreqs, psd = welch(data_win, sf, nperseg=int(4 * sf))\nbp = yasa.bandpower_from_psd_ndarray(psd, freqs)\nbp = pd.DataFrame(bp.T, columns=['delta', 'theta', 'alpha', 'sigma', 'beta', 'gamma'])\ndf_feat = pd.concat([df_feat, bp], axis=1)\ndf_feat.head()",
"_____no_output_____"
],
[
"# Ratio of spectral power\n# df_feat.eval('dt = delta / theta', inplace=True)\n# df_feat.eval('db = delta / beta', inplace=True)\n# df_feat.eval('at = alpha / theta', inplace=True)",
"_____no_output_____"
]
],
[
[
"## Find best features for sleep stage classification",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_selection import f_classif\n\n# Extract sorted F-values\nfvals = pd.Series(f_classif(X=df_feat, y=hypno)[0], \n index=df_feat.columns\n ).sort_values()\n\n# Plot best features\nplt.figure(figsize=(6, 6))\nsns.barplot(y=fvals.index, x=fvals, palette='RdYlGn')\nplt.xlabel('F-values')\nplt.xticks(rotation=20);",
"_____no_output_____"
],
[
"# Plot hypnogram and higuchi\nfig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 6), sharex=True)\n\nhypno = pd.Series(hypno).map({-1: -1, 0: 0, 1: 2, 2: 3, 3: 4, 4: 1}).values\nhypno_rem = np.ma.masked_not_equal(hypno, 1)\n\n# Plot the hypnogram\nax1.step(times, -1 * hypno, color='k', lw=1.5)\nax1.step(times, -1 * hypno_rem, color='r', lw=2.5)\nax1.set_yticks([0, -1, -2, -3, -4])\nax1.set_yticklabels(['W', 'R', 'N1', 'N2', 'N3'])\nax1.set_ylim(-4.5, 0.5)\nax1.set_ylabel('Sleep stage')\n\n# Plot the non-linear feature\nax2.plot(times, df_feat['higuchi'])\nax2.set_ylabel('Higuchi Fractal Dimension')\nax2.set_xlabel('Time [minutes]')\n\nax2.set_xlim(0, times[-1]);",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
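The YASA record above ranks the non-linear features by ANOVA F-value against the hypnogram but never closes the loop with an actual sleep-stage classifier. A quick hedged follow-up, reusing the record's `df_feat` and `hypno` arrays and assuming scikit-learn is available (note that plain K-fold over epochs of a single night is optimistic, since neighbouring epochs are highly correlated):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# One row of df_feat per 30-s epoch; hypno holds the integer stage labels.
clf = RandomForestClassifier(n_estimators=200, random_state=42, n_jobs=-1)
scores = cross_val_score(clf, df_feat.values, hypno, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```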